LineageOS is an operating system for everyone: from the average user to the advanced developer. While users have a giant playground in their hands with many customization options, we also want to make LineageOS a fun place for developers. The standards for official builds help assure developers that their app will not end up in a bad state because of inappropriate Android API changes or broken hardware support, but this is not enough for us; we’re announcing some new APIs that will allow your apps to do more when they’re running on a LineageOS-powered device.
The Lineage platform SDK (LineageSDK for short) is a powerful resource that allows us to both keep our features out of the core Android frameworks (for better security and easier bringup processes) and expose some extra functionality to app developers.
We’ll have to wait and see if developers are willing to add some code to their Android applications for the features in this SDK.
a commitment to bring even 1 new model of phone into the list of supported devices each month…
Nigh on impossible to commit to such a thing.
Vendors are trying their very best to prevent this from happening.
Plus with each model your support gets stretched that much more thin.
What I want ROMs to do is provide an LTS version of Android that keeps getting security updates.
Heck, that’s something I might pay for. I had to retire an older device because there were vulnerabilities out there, and it was never going to get another update. So I’m limiting myself to Apple/Pixel phones. Kind of bummed that I had a Nexus 6 that is EOL for security updates. It was still a great phone; it didn’t need Oreo, just security updates.
Edit:
Also, I really, really don’t care about this SDK. Nothing in it is useful to me as a dev or as a user for my apps. YMMV, but I hate all of the focus on “style” that ROMs have had. Android’s style is fine; its security is not. Focus on non-trivial matters.
Edited 2018-03-22 17:42 UTC
Over here I was stuck on Android 5.1.1 Lollipop, and I flashed LineageOS onto my Samsung J3. Works like a charm. (Android version 7)
With Project Treble, things should get easier. You might even find LineageOS working out of the box, unless your phone has ‘hardware specialties’.
Therefore, don’t buy phones that shipped with an Android version below 8 anymore!
Ah yes, the paperware Treble. So touted, and yet so nonexistent.
While I would agree that Google made a huge f***up with Android, Treble is a push in the right direction. Take a look at this:
https://www.youtube.com/watch?v=hFGgSpgpI5M
Nice: with billions in hand and thousands of coders, some of them almost free thanks to GSoC, Google took 12 years to bring Project Treble to life. Must have been quite a challenge. Now you can toss your perfectly capable and functional pre-8.0 devices into the trash.
Hi,
I think they’re spending most of those billions trying to implement Fuchsia (so that they abandon Android and Treble, and make everyone buy new phones by not providing security updates for Android 8).
– Brendan
Clever point of view… I hope they learned their lesson and Fuchsia will be done well from the start. If it ever turns out they “forgot” something in the process, I don’t think I would trust them anymore.
Hi,
In all seriousness, I do suspect that Fuchsia is intended to fix various problems Google has had with the Linux kernel, including the lack of a stable driver interface in Linux, which was the reason for much of the difficulty manufacturers had with providing updates in older Android (and is most of the reason for Treble too). Switching to a microkernel with stable driver interfaces (Fuchsia) would make it far easier to update individual pieces, while also reducing the security concerns of having closed-source/proprietary drivers, because they would no longer be in kernel space.
– Brendan
Again, this monolithic vs. microkernel debate. I have always been convinced that the latter is inherently better, for reasons that are obvious now that it has passed the test of life. Security and stability (micro) always win over a slight 5% performance increase (monolithic) “because no bags around meat”. Strange that in real life things get more secure using “bags”, yet computers supposedly shouldn’t use them. Linus has never been a visionary on this topic.
Except Linus works with people from across more industries than you probably do, so he has to deal with many performance requirements where that 5% is too much.
Hell, apparently companies like Facebook can save millions of dollars from a 1% improvement in string optimization. Imagine how much it would cost even more important organizations if they were slapped with a 5% performance degradation.
That performance decrease is a myth or bad programming. Self-modifying code is no voodoo.
Furthermore, sticking to C – while duplicating C++ features in C – has been rather silly. This has been done for historical reasons, of course. C++ compilers used to be utter shit and barely portable. Times have changed.
Now all C++ needs is a decent portable definition of function argument passing, returns, register usage and you’re good to go…
PS.: Don’t expect anything ‘mind-boggling’ from Google. They have proven without a doubt that they can’t reflect upon their design decisions as well as others can. They will change their direction after three steps, as usual. Then they will move sideways, just to head back again…
‘Google’ means brainless chaos. Even the word itself is only in existence, because they failed the spelling bee.
The fastest microkernels still fall behind in performance. That is not a myth. Maybe the performance gap is not as big as it used to be, but my point is that it is still expensive at the large scales Linux is used at, like in HPC.
And self-modifying code and microkernels are not related.
kwan_e,
Not sure what was meant by _LC_ in parent post, but techniques such as those used by singularity OS (which I came up with independently years ago!) make it possible for an OS to enforce isolation without switching the hardware address space, so there doesn’t need to be any context switching overhead at all. Such a microkernel can be built with even less overhead than a typical monolithic kernel that uses slow syscall context switching.
I think there’s innovation to be had in areas like this, but I suspect the incumbent technologies will win because so much money has been poured into them and existing systems are considered good enough. The path of least resistance is to just keep going despite all the known flaws.
But doesn’t the Singularity model require managed languages, thereby negating all the performance gains anyway?
kwan_e,
Well, there are different types of managed languages. The reputation for being slow comes from dynamic/run-time enforcement and additionally the tendency to use unpredictable garbage collection algorithms that scan the heap rather than explicitly freeing objects. In that case your point is valid, but such limitations aren’t strictly shared by all managed languages. Languages like Rust achieve the managed semantics at compile time by statically proving that the code cannot violate the constraints. So at least in principle, a managed language using compile-time verification doesn’t have to be slower at runtime than an unmanaged one.
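To make the compile-time point concrete, here is a minimal Rust sketch (names like `consume` are my own, just for illustration): ownership moves guarantee the allocation is freed exactly once, and any use-after-move is a build error rather than a runtime check or a GC pass.

```rust
// Takes ownership of `msg`; the heap buffer is freed here
// deterministically -- no garbage collector, no heap scan.
fn consume(msg: String) -> usize {
    msg.len()
} // `msg` is dropped at this closing brace; proven safe statically

fn main() {
    let s = String::from("kernel message");

    // Ownership moves into `consume` at this call.
    let n = consume(s);
    println!("len = {}", n); // prints "len = 14"

    // Any later use of `s` is rejected at *compile time*:
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
}
```

The enforcement cost is paid entirely by the compiler, which is the sense in which such a language need not be slower at runtime.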
Alas, C is the de facto programming language for most systems programming, and I concede it’s highly unlikely any language will displace C as the industry standard for systems programming. We have too much vested in legacy platforms.
But back to your earlier point about meltdown and spectre, no amount of compile time verification can help if the flaw is with the processor design itself. But at least in most cases, the ring-based protection still serves to slow down those attacks.
Passing compile-time verification means nothing if a processor flaw invalidates it. You’ll still need a fallback protection mechanism at runtime.
I think there are a few things being mixed together here, which don’t – necessarily – interdepend.
For one thing, you can run services on a microkernel in kernel space, if you wish to. You would thereby eliminate some of the context-switching penalties and such. This can be done – and for certain servers it might even make sense.
Likewise, you can have a monolithic kernel, which runs services and drivers in user space.
While microkernels typically try to avoid running stuff in kernel space, and Linux (the monolith we know) typically runs all of its shit in kernel space, this is not a necessity of either micro- or monolithic kernels.
When it comes to ‘superior’ performance, this is often attributed to monolithic kernels because they can ‘put the code directly into the kernel’, whereas microkernels have to take more of an ‘extension approach’ – which comes with a certain overhead, of course. Here, self-modifying code can help.
To break it down into simple terms, take the example of being able to choose between various governors/schedulers. For you to be able to switch between them, there has to be some sort of ‘inquiry’. The system has to check which scheduler has been set, and then run that code. It might do so millions of times each second. Therefore this check – be it only a tiny bit of code – is a penalty. This penalty can be eliminated with self-modifying code. Before, your code looks like a tree: every time it reaches the scheduler’s code, it has to decide which branch to take. With self-modifying code the branch becomes part of the trunk. The code runs faster…
This is a very simple (silly) example. It can be applied to a more complex ‘plug-in architecture’ of course. This would eliminate the theoretical advantage of a monolithic kernel.
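Self-modifying code itself can’t be shown in a short portable snippet, but the per-call ‘inquiry’ it removes can. A hypothetical sketch in Rust (the `Scheduler` trait and the `Fifo`/`Lifo` types are my own invention): dynamic dispatch pays the indirection on every call, while static dispatch via monomorphization bakes the chosen scheduler into the generated code, which is the same effect the self-modifying approach achieves at runtime.

```rust
// The "inquiry": which scheduler is currently installed?
trait Scheduler {
    fn pick_next(&self, ready: &[u32]) -> Option<u32>;
}

struct Fifo;
impl Scheduler for Fifo {
    fn pick_next(&self, ready: &[u32]) -> Option<u32> {
        ready.first().copied() // oldest task first
    }
}

struct Lifo;
impl Scheduler for Lifo {
    fn pick_next(&self, ready: &[u32]) -> Option<u32> {
        ready.last().copied() // newest task first
    }
}

// Dynamic dispatch: an indirect call through a vtable on every invocation.
fn run_dynamic(s: &dyn Scheduler, ready: &[u32]) -> Option<u32> {
    s.pick_next(ready)
}

// Static dispatch: monomorphization generates a dedicated copy per
// scheduler type, so the call is direct -- the branch joins the trunk.
fn run_static<S: Scheduler>(s: &S, ready: &[u32]) -> Option<u32> {
    s.pick_next(ready)
}

fn main() {
    let ready = [3, 1, 2];
    println!("fifo: {:?}", run_dynamic(&Fifo, &ready)); // fifo: Some(3)
    println!("lifo: {:?}", run_static(&Lifo, &ready));  // lifo: Some(2)
}
```

The trade-off is the usual one: the static version is decided at build time, whereas self-modifying code (or runtime patching, as the Linux kernel does with alternatives/static keys) keeps the switch available at runtime without the per-call check.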
In practice, Linux being a monolithic kernel has more to do with politics than with performance. The ones involved in kernel development keep everything in their own hands. With a microkernel, drivers could be developed independently from the kernel. Nobody would even have to bother with the kernel anymore. Your file systems – and everything else – would no longer depend on the kernel version. Think about it. This is political, nothing else. A microkernel would be beneficial for both users and developers – though not necessarily for those involved in kernel development.
Keep in mind: provided the interfaces are stable.
Not necessarily. You can have a ‘version’ flag and keep supporting old interfaces (though the bridge code might bring a bit of overhead to the oldies).
But I can see your point. It certainly helps to make up your mind once you have enough information. *winking@Google*
_LC_,
There certainly are a lot of politics involved. And arguably it sometimes creates hardships for users; for example, the difficulty that end users face in updating devices like Android phones on their own, independently from the manufacturer, is a direct consequence of the monolithic kernel used by Linux.
However I don’t think it’s entirely accurate to say it’s just about politics either, the conventional memory protection overhead imposed by hardware was quite significant, especially decades ago when these operating systems were written. On today’s CPUs it’s not as bad, but originally the benefits of memory protection had to be balanced with the performance costs of several hundred cycles.
I actually like the mainframe approach where everything gets processed in batches such that the overhead gets divided by a large number of requests included in a batch. That kind of design is both easy to program for and scales exceptionally well. Such a design can work fantastically with a microkernel too. *nix software typically uses a very different model where numerous fine grained syscalls are called in quick succession to set/get state etc, but unfortunately this kind of software design is the worst case scenario for a microkernel. For better or worse, microkernels are going to be judged on how well they execute software written for monolithic kernels rather than how well they can handle software especially written for them. This is not fair, but it is a big part of the uphill struggle faced by alternative platforms.
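The amortization argument above can be sketched with a toy cost model (the constants are made up for illustration, not measurements): a fixed crossing overhead per kernel entry, a small marginal cost per item, and the batch divides the fixed part across many requests.

```rust
// Hypothetical cost model, in arbitrary "cycles".
const CROSSING_COST: u64 = 100; // fixed overhead per kernel crossing
const ITEM_COST: u64 = 1;       // marginal cost per request

// Fine-grained *nix style: one crossing for every single request.
fn cost_fine_grained(n: u64) -> u64 {
    n * (CROSSING_COST + ITEM_COST)
}

// Batched mainframe style: one crossing amortized over a whole batch.
fn cost_batched(n: u64, batch: u64) -> u64 {
    let crossings = (n + batch - 1) / batch; // ceil(n / batch)
    crossings * CROSSING_COST + n * ITEM_COST
}

fn main() {
    let n = 10_000;
    println!("fine-grained: {}", cost_fine_grained(n)); // 1010000
    println!("batched x64:  {}", cost_batched(n, 64));  // 25700
}
```

With these (invented) numbers the batched path is roughly 40x cheaper, which is the point: software written around fine-grained syscalls forces a microkernel to pay the crossing cost at its worst.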
Well, Facebook is a special case. They might get millions from string optimization (for obvious reasons), but they should invest more in security to avoid those “strings” getting “leaked”, which renders any optimization useless.
Except, the “leaks” have nothing to do with the low level technology, but with high level business decisions. Microkernels would not prevent the mining of public user data at all.
Yeah, I know these aren’t correlated, but if they were really into string optimization, they would go the FPGA way on the technical side.
Not really sure what FPGAs can do for string optimization. Most of the optimizations are about (avoiding) memory allocation, not calculation.
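The allocation-avoidance point can be illustrated with a tiny Rust sketch (the helper name `concat_reserved` is my own): reserving the final size up front turns several possible reallocations into a single allocation.

```rust
// Hypothetical helper: join words into one String with a single
// up-front allocation instead of growing (and reallocating) as we go.
fn concat_reserved(words: &[&str]) -> String {
    let total: usize = words.iter().map(|w| w.len()).sum();
    let mut out = String::with_capacity(total); // one allocation
    for w in words {
        out.push_str(w); // never reallocates: capacity was reserved
    }
    out
}

fn main() {
    // Naive version: push_str onto an empty String may reallocate
    // several times as the buffer grows.
    let mut naive = String::new();
    for w in ["a", "bb", "ccc"] {
        naive.push_str(w);
    }

    let reserved = concat_reserved(&["a", "bb", "ccc"]);
    assert_eq!(naive, reserved);
    println!("{}", reserved); // prints "abbccc"
}
```

This is the kind of optimization that shows up at scale precisely because it is about memory traffic, not arithmetic, which is why it is unclear what a compute-oriented FPGA would buy here.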
Code point and code page manipulations are really resource-consuming. A hard-wired FPGA might help a lot in that field, to say nothing of the memory accesses.
Why are they resource consuming? Because they require reallocation during processing?