Evidence has just submitted a new version of the SCHED_DEADLINE real-time CPU scheduler to the LKML. The project is essentially a new scheduling policy (implemented inside its own scheduling class) that introduces deadline scheduling for Linux tasks, and it is being developed by Evidence in the context of the EU-funded project ACTORS. This version takes into account comments from Linux kernel developers, and it also introduces a first draft implementation of deadline inheritance.
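For a rough idea of what the policy offers in practice, here is a minimal sketch of how a periodic task might declare its parameters through the sched_setattr() syscall that SCHED_DEADLINE eventually gained in mainline; the exact syscall and ABI in this early patch series may have differed, and the runtime/deadline/period values below are purely illustrative. No glibc wrapper exists, so the structure and call are declared by hand.

```c
/* Hedged sketch: requesting SCHED_DEADLINE for the current task via
 * sched_setattr(). Requires kernel headers that define SYS_sched_setattr
 * (mainline 3.14+); the numbers are illustrative only. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SCHED_DEADLINE 6

struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;       /* used by SCHED_OTHER */
    uint32_t sched_priority;   /* used by SCHED_FIFO / SCHED_RR */
    uint64_t sched_runtime;    /* ns of CPU budget per period */
    uint64_t sched_deadline;   /* ns, relative deadline */
    uint64_t sched_period;     /* ns, activation period */
};

int main(void)
{
    struct sched_attr attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  =  10 * 1000 * 1000,   /* 10 ms budget  */
        .sched_deadline =  30 * 1000 * 1000,   /* 30 ms deadline */
        .sched_period   = 100 * 1000 * 1000,   /* 100 ms period  */
    };

    /* pid 0 means "this task"; flags are currently unused. */
    if (syscall(SYS_sched_setattr, 0, &attr, 0) < 0) {
        perror("sched_setattr");
        return 1;
    }

    /* ... periodic real-time work would run here ... */
    return 0;
}
```

Run with sufficient privileges, the task is then scheduled by the deadline class according to its declared budget and period rather than by a fixed priority.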
Just what Linux needs, yet another scheduler. Oh well.
As far as I care, they could add not just one but a few hundred more scheduling classes, if at least one of them lets Linux catch up with recent real-time OS research. It has been years since I read publications about deadline-monotonic scheduling for HARD real-time systems, and it wasn't "new" stuff even at the time...
Hard real-time tasks don't just require timely activation and the shortest schedule-in latencies; they need to be able to rely on the system calls they invoke taking deterministic time to execute. Without this, they cannot guarantee temporally correct behaviour (which is the essence of real-time computing), no matter the scheduling classes.
So the point is: what does Linux do to ensure that code paths for I/O operations, enumeration of processes or files in a directory, or *whatever*, run in constant, or at least upper-bounded, time?
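Not something from the original thread, but a minimal sketch of how one might probe this question empirically: time repeated invocations of a cheap syscall and record the worst case observed. Measuring of course proves no upper bound exists, which is exactly the concern raised above; the syscall and iteration count are arbitrary choices.

```c
/* Hedged sketch: measure the worst observed round-trip time of getpid()
 * invoked directly via syscall() (bypassing glibc's PID cache). This only
 * observes latencies; it does not establish any guaranteed bound. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    struct timespec a, b;
    long worst_ns = 0;

    for (int i = 0; i < 100000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &a);
        syscall(SYS_getpid);
        clock_gettime(CLOCK_MONOTONIC, &b);

        long ns = (b.tv_sec - a.tv_sec) * 1000000000L
                + (b.tv_nsec - a.tv_nsec);
        if (ns > worst_ns)
            worst_ns = ns;
    }

    printf("worst observed getpid() round trip: %ld ns\n", worst_ns);
    return 0;
}
```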
You do know that the real-time code in Linux mainline mostly comes in chunks and pieces from existing real-time Linux projects? It's not like Linux doesn't already do this; it just takes time to make it more generic and suitable for general consumption.
Yes, I know. But I'm not interested in a particular variant or branch or distribution, so when I said "what does Linux do to..." I actually meant "what is being done in the Linux ecosystem to...".
I didn't make myself clear, my fault.
So you'll be able to point me to a paper in which the "syscall execution time" problem is analysed and solutions are detailed, which is what I asked earlier.
It would be nice to define "general consumption".
In reality, hard real time belongs to critical systems devoted to specific tasks, for which a specific software solution is holistically tailored and deployed.
That HRT is needed for MP3s and movies to play without stuttering is a common misconception. The most renowned multimedia-oriented desktop operating system (the BeOS) was not a real-time OS, and many multimedia applications are implemented as normal event loops (calculating deltas between the current and a previous timestamp to, e.g., work out how far the animation must advance to appear fluid, rather than relying on nominal system-call execution times).
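A minimal sketch of that delta-timestamp technique (the frame rate, velocity, and loop length are illustrative values, not from the comment): each iteration advances the animation by the measured elapsed time, so the result stays fluid even when an individual iteration is delayed.

```c
/* Hedged sketch of an event loop that paces animation from timestamp
 * deltas rather than from any real-time guarantee on syscall latency. */
#include <stdio.h>
#include <time.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double prev = now_seconds();
    double position = 0.0;            /* some animated quantity */
    const double velocity = 120.0;    /* units per second */

    for (int frame = 0; frame < 300; frame++) {
        double now   = now_seconds();
        double delta = now - prev;    /* wall-clock time since last frame */
        prev = now;

        /* Advance by elapsed time, so a late frame simply moves further. */
        position += velocity * delta;
        printf("frame %3d  dt=%.4fs  pos=%.1f\n", frame, delta, position);

        struct timespec pause = { .tv_sec = 0, .tv_nsec = 16 * 1000 * 1000 };
        nanosleep(&pause, NULL);      /* ~60 Hz nominal pacing */
    }
    return 0;
}
```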
Desktop users don't actually need HRT, and they don't need to be able to run tasks at higher priority than the kernel itself (which is mostly what a soft real-time kernel does). What they need is responsiveness, and responsiveness can be achieved by other means: prioritizing interactive tasks at the I/O-operation level (not just at the scheduling level), and/or optimising local IPC (for instance, between applications and the graphics server) with mechanisms more efficient than sockets (LRPC or doors, which other OSes have had for years but Linux lacks, having been born as a server OS), or even basic functionality like handoff scheduling (needed to shorten IPC latency and make it more deterministic, and which Linux AFAIK doesn't have yet).
By general consumption I just mean that the people working on mainline don't mind maintaining it: it can be added to mainline, and less code is needed for 'fringe' cases, like certain kinds of embedded devices.