Following the feature-rich release in August, Genode’s developers took the opportunity with the new version 16.11 to work on long-standing architectural topics, most prominently the low-level interplay between parent and child components. Besides this low-level work, the release features much-improved virtual-networking capabilities: originally introduced in the previous version, Genode’s network-routing mechanism has become more versatile and easier to use. Further topics include added support for smart cards, kernel improvements of the NOVA hypervisor, and a virtual file system for generating time-based passcodes.
The efficient interaction between user-level components is one of the most important aspects of microkernel-based systems like Genode. The design space for this interplay is huge and there is no widely accepted consensus about the “right” way. The options include message passing between independent threads, the migration of threads between address spaces, shared memory, and various flavours of asynchronous communication.
When the Genode project originally emerged from the L4 community, it was somewhat preoccupied with the idea that synchronous IPC is the best way to go. After all, the sole reliance on unbuffered synchronous IPC was widely regarded as the key to L4’s excellent performance. Over the years, however, the mindset of the Genode developers shifted away from this position. Whereas synchronous IPC was found to be a perfect match for some use cases, it needlessly complicated others. It turns out that any IPC mechanism is ultimately a trade-off between low latency, throughput, simplicity, and scalability. Finding a single sweet spot that fits all parts of an operating system equally well seems futile. Given this realization and countless experiments, Genode’s inter-component protocols were gradually shaped towards a combination of synchronous IPC where low-latency remote procedure calls are desired, asynchronous notifications, and shared memory. That said, Genode’s most fundamental inter-component communication protocol – the interplay between parent and child components to establish communication sessions between clients and servers – had remained unchanged since the very first version. The current release reconsiders the architectural decisions made in the early days and applies Genode’s modern design principles to these low-level protocols. The release documentation contrasts the original design, which was based solely on synchronous IPC, with the new approach. Even though the new version overcomes long-standing limitations of the original design, at first glance it gives the impression of being more complicated and more expensive in terms of the number of context switches. Interestingly, however, the change has no measurable effect on the performance of even the most dynamic system scenarios. The apparent reason is that parent-child interactions make up a minuscule part of the overall execution time in real-world scenarios.
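To make that combination a bit more tangible, the following single-process C++ sketch mimics the control flow only: a cheap blocking call for a small, latency-critical request, an asynchronous completion signal for a bulk operation, and a shared buffer for the bulk data itself. All names are made up for illustration; real Genode components reside in separate protection domains and use the kernel’s IPC and signalling primitives rather than plain threads and condition variables.

    // Single-process stand-in for the pattern described above. A condition
    // variable plays the role of an asynchronous notification and a plain
    // vector stands in for a shared dataspace; the names are illustrative only.

    #include <condition_variable>
    #include <cstdio>
    #include <cstring>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct Shared_buffer {                  /* stand-in for a shared dataspace */
        std::vector<char> bytes = std::vector<char>(4096);
    };

    struct Server {
        Shared_buffer          &buf;
        std::mutex              mtx;
        std::condition_variable done;       /* stand-in for an async signal */
        bool                    finished = false;

        /* synchronous "RPC": cheap, answers a small request immediately */
        size_t capacity() const { return buf.bytes.size(); }

        /* bulk operation: fills the shared buffer, then signals completion */
        std::thread start_bulk_job() {
            return std::thread([this] {
                std::memset(buf.bytes.data(), 'x', buf.bytes.size());
                std::lock_guard<std::mutex> guard(mtx);
                finished = true;
                done.notify_one();
            });
        }
    };

    int main()
    {
        Shared_buffer buf;
        Server server { buf };

        /* small request: a blocking call keeps the latency minimal */
        std::printf("capacity: %zu\n", server.capacity());

        /* bulk request: kick it off, then block on the completion signal */
        std::thread worker = server.start_bulk_job();
        {
            std::unique_lock<std::mutex> lock(server.mtx);
            server.done.wait(lock, [&] { return server.finished; });
        }
        std::printf("bulk data ready, first byte: %c\n", buf.bytes[0]);
        worker.join();
    }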
Even though the architectural work mentioned above is fundamental to the Genode system as a whole, it is barely visible to users of the framework. With respect to user-visible changes, the most prominent improvement is the vastly improved infrastructure for virtual networking, which is covered in great detail in the release documentation. Further topics are the added support for using smart cards, a new VFS plugin for generating time-based passcodes, and updated versions of VirtualBox 4 and 5 running on top of NOVA. Speaking of NOVA, the release improves this kernel in several respects, in particular by adding support for asynchronous map operations. Each of these topics is covered in more depth in the release documentation.
This is really interesting, but I never see any comments on the Genode links.
Does anybody here use it for anything?
I’m not using it for anything yet, but I’d like to try an experiment with using it for a VirtualBox host. I would love to have a secure OS for that!
I will also play around a little bit with the build process, to see what’s involved in that.
If I succeed in any of this, or discover anything useful, I will write it up for OSAlert.
Good to see the progress Genode is making.
The asynchronous IPC will make many things easier to implement from the application developer’s point of view and hopefully boost the userland.
But I was hoping that this release would bring Xen as a base for Genode, as the roadmap suggested…
I only tried Genode on top of Linux so far – Xen would have made it easy for me and a lot of other people to switch to a real microkernel-based scenario.
@Norman: is Xen still in the pipeline?
Thanks for your feedback!
Xen support is still planned for the near future.
IME most programmers prefer synchronous interfaces, even when they aren’t suitable to the task. Overheads of emulating asynchronous over synchronous interfaces are often in the noise too (but can make some things less efficient and/or complicated).
Genode is a special case as it supports several different microkernel models and so can’t be optimized for one model – compare to e.g. QNX where higher level layers can assume lower layers use synchronous IPC with optional asynchronous notifications.
Asynchronous interfaces require a different kind of thinking. Even more so if you use things like promise/future.
But I think more and more developers are getting used to these kinds of things. And once you have gone this way, you don’t want to look back.
cybergorf,
I’ve been building async interfaces for a long time now and I’m applying them to more and more use cases. Unfortunately it’s still kind of awkward on Linux because many syscalls and libraries are only available in a synchronous form. Wrapping a synchronous function call inside an asynchronous API requires threads and synchronization primitives that almost completely defeat the benefits of asynchronous IO.
Very frequently my code needs to do something basic like a DNS query, but the hostname resolver blocks, so if I don’t want to re-implement the system libs, I’m forced to marshal events between a wrapper thread and the AIO thread, which adds complexity and decreases efficiency… I don’t know if Genode has this problem, but how I wish the Linux kernel and userspace libs would get an overhaul to make AIO interfaces work consistently across the board.
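For what it’s worth, the workaround tends to end up looking something like this rough sketch (names and structure are just illustrative, not from any particular library): the blocking getaddrinfo() runs on a helper thread and hands its result back to a poll()-based loop through a pipe.

    #include <netdb.h>
    #include <poll.h>
    #include <unistd.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <thread>

    int main()
    {
        int wakeup[2];
        if (pipe(wakeup) != 0) return 1;   /* the event loop watches wakeup[0] */

        std::thread resolver([&] {
            addrinfo hints { }, *res = nullptr;
            hints.ai_family   = AF_UNSPEC;
            hints.ai_socktype = SOCK_STREAM;

            char host[NI_MAXHOST] = "lookup failed";

            /* the blocking call that has no asynchronous counterpart in libc */
            if (getaddrinfo("example.org", "80", &hints, &res) == 0 && res) {
                getnameinfo(res->ai_addr, res->ai_addrlen, host, sizeof(host),
                            nullptr, 0, NI_NUMERICHOST);
                freeaddrinfo(res);
            }

            /* hand the result to the event loop through the pipe */
            (void)write(wakeup[1], host, std::strlen(host) + 1);
        });

        /* the "event loop": poll the pipe alongside any other descriptors */
        pollfd pfd { wakeup[0], POLLIN, 0 };
        while (poll(&pfd, 1, -1) < 0 && errno == EINTR) { }

        char host[NI_MAXHOST] = { };
        (void)read(wakeup[0], host, sizeof(host) - 1);
        std::printf("resolved: %s\n", host);

        resolver.join();
        close(wakeup[0]);
        close(wakeup[1]);
    }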
Have a look at zeromq and nanomsg (if you haven’t already):
http://zeromq.org
http://nanomsg.org
cybergorf,
Thanks for the links.
They don’t seem to solve my specific problems, but they are neat nevertheless. I’ve built my own AIO library from the ground up. Linux has made some progress with AIO but regrettably it is still incomplete for some pretty basic use cases like file IO. Until this gets fixed at the kernel level, user-space programs are forced to use threads rather than AIO if they can’t afford to block on file operations. This is why the POSIX AIO implementation was forced to use threads under the hood on Linux.
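To illustrate what I mean, here is a minimal POSIX AIO read: the interface is asynchronous on paper, but glibc services buffered file requests like this one with a pool of worker threads behind the scenes (link with -lrt on glibc).

    #include <aio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[256] = { };

        aiocb cb { };                    /* request descriptor */
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf) - 1;
        cb.aio_offset = 0;

        if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

        /* the caller is free to do other work while the read is pending */

        const aiocb *list[1] = { &cb };
        aio_suspend(list, 1, nullptr);   /* block until the request completes */

        int err = aio_error(&cb);
        if (err == 0)
            std::printf("read %zd bytes\n", aio_return(&cb));
        else
            std::fprintf(stderr, "aio failed: %s\n", std::strerror(err));

        close(fd);
        return 0;
    }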
I didn’t see any mention of the package management work, or the move towards a cross-microkernel binary format per architecture.
The departure from the original road map is mentioned in the introduction of the release documentation. In short, we prioritized the architectural work over the originally planned features. The package management in particular will considerably scale up the workloads of Genode. With the architectural improvements in place, we feel much more confident about going forward in this direction.
The cross-kernel binary compatibility of dynamically linked binaries is actually already in place (since the previous release). It just happens not to be prominently visible yet in the current workflows because most binaries are still statically linked. I am currently (quite literally) changing this: https://github.com/genodelabs/genode/issues/2184
Do you no longer provide Live CDs with Genode?
The last one is from 2010 – https://genode.org/download/live-cds