This is a situation where I need the help of you OSAlert readers who are experienced with low-level development on ARM, SPARC, PowerPC, MIPS, and other hardware architectures we have on computers nowadays. The issue is that I'm currently designing the part of my hobby kernel which takes care of interrupts. Although I mostly work on x86 at the moment, I'd like to keep this code portable to other hardware architectures in the future. To do that, I have to know how interrupt handling works on as many hardware architectures as possible.
What I'm interested in is every step which OS developers have control over and/or must take into account between the moment when some piece of hardware connected to the computer's motherboard emits an interrupt signal and the moment when the CPU is done switching to the interrupt handling code. As an example, on x86, the legacy PIC would fit in that category, and the interrupt/exception vector mechanism would also fit.
If there’s no standard way to do something, please mention the way(s) it’s *usually* done.
The interrupt handling model which I’m currently checking for portability is:
- Hardware emits an interrupt signal
- On its way, the signal optionally encounters one or more PICs, which either stop it or direct it to one of the CPU's interrupt pins (this means that PIC drivers are necessary on architectures which have one, but that they may be written separately)
- The CPU reacts to the signal on its interrupt pin by fetching a "vector", notably containing the memory address of the interrupt handler associated with that pin, doing some arch-specific tricks, and executing code at said address
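To make this a bit more concrete, here is a minimal sketch in C of the kind of portable abstraction I have in mind for those three steps. Every name in it (pic_driver, arch_install_vector, irq_attach) is invented for illustration; it's a sketch of the idea, not a finished design.

#include <stdbool.h>

/* Step 2: an optional interrupt controller driver, supplied per architecture. */
struct pic_driver {
    void (*mask)(unsigned irq);            /* stop the signal                  */
    void (*unmask)(unsigned irq);          /* let it through                   */
    void (*eoi)(unsigned irq);             /* acknowledge it, if required      */
    unsigned (*to_vector)(unsigned irq);   /* where it ends up at the CPU      */
};

/* Step 3: the arch-specific layer installs a handler for a CPU vector. */
typedef void (*irq_handler_t)(unsigned vector, void *context);
void arch_install_vector(unsigned vector, irq_handler_t handler, void *context);

/* Portable core: route a device IRQ through the (optional) PIC to a handler. */
static const struct pic_driver *pic;   /* NULL on PIC-less architectures */

bool irq_attach(unsigned irq, irq_handler_t handler, void *context)
{
    unsigned vector = pic ? pic->to_vector(irq) : irq;
    arch_install_vector(vector, handler, context);
    if (pic)
        pic->unmask(irq);
    return true;
}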
Thanks in advance for your answers!
I remember on the Apple II, the keyboard would write the current key at a predetermined address, and the “OS” would poll that address to know keydown events and such.
Not sure if it’s useful to you though
I certainly don't plan to go that far in hardware support. However, it was interesting to read, because it shows that there was a time when interrupts didn't exist, or at least weren't as widespread as they are now, and it makes me wonder why they chose this polling-based solution, which intuitively sounds less efficient.
Concerns about context switch cost or HW complexity, maybe? I know that Apple went very far in reducing hardware complexity in their early days, the floppy drive of the Apple II being a good example of that too.
Hehe, I don't know the specifics, but remember: it was a 1 MHz CPU with 40 KB or so of memory.
Take the design of a simple game: init, then loop over (read keyboard address, calculate action, render new state). On an old computer the challenge was to fit the game logic in that "calculate action" step so that it would not take too long. Those were also the days when the display refresh rate would be in sync with the CPU clock and you had a fixed number of instructions you could run until the new image was shown (Amstrad?).
Nowadays, the loop would run thousands of times per second and the keyboard polling is a waste at that speed.
Those were also the days when the manufacturer would give you detailed manuals with program examples. I remember the Apple II had all its chips mounted on sockets, and they were replaceable.
By the way, I wanted to verify my claims. I didn't find the specifics, but I found this cool Apple II doc archive: http://mirrors.apple2.org.za/Apple%20II%20Documentation~*~@…
I like how simple the design is. IMO, the tendency is to forget why a particular mechanism was introduced and to add another layer of indirection instead of adapting that mechanism to the new needs. Computer history is the solution.
Gonna attract some foes by saying this but…
Are you really so nostalgic about CLI? Moreover, polling-based systems would probably have terrible power consumption, which in our age would be unacceptable…
It was a smart solution to that time's problems, but that does not mean it is a smart solution to today's problems, which is what we'd need.
Nah, I agree. I believe I acknowledged in my previous post that polling is no longer suitable, but probably not very clearly. It's just that I think it's good to question past decisions and not always blindly build on top of them. Your approach to building an OS is a testament that you are in the same camp.
I know, now… At the time I wrote that comment, I should have turned off my computer for an hour or two; my brain wasn't working properly anymore.
The only thing which remains not perfectly clear after a night of sleep is this:
Are you talking about how I build mine in particular, or about the idealized (and I hope improved) version of this building process which I preach in the "Hobby OS-deving" series, as opposed to the "tutorial" approach?
No worries, that's the curse of writing an OSnews article, so many comments to answer :-p For the last bit, I just meant that by writing an OS you're revisiting past decisions in some way.
If you haven't found this yet, I found it very helpful when tinkering with my (long since abandoned) hobby OS:
http://preview.tinyurl.com/6gsts9w
Indeed, it sounds much more helpful than the ARM manual as far as understanding the ARM interrupt model is concerned. Thanks!
Thank you for the useful link!!! Very good.
Bet they have the knowledge and the wiki to provide you with more information than we, lurkers and trolls here at OSAlert, ever will…
As far as the 680x0 is concerned, interrupt handling is so trivial and efficient that it's scary nobody copied that.
Kochise
Not sure about that. Off the top of my head, I'd say that on OSdev most people are experienced with x86, because it's simply the best architecture for OS hobbyists. I have also noticed a few people who have some knowledge of ARM on occasion, though, so maybe you're right that I could be surprised by the variety of answers.
On OSAlert, we have many people who have worked on computers at a low level as part of their job, which means that I can learn about a wide range of hardware and architectures, so I don’t think that’s such a bad choice…
I'm thinking about expanding OSdev's wiki article on interrupts with that knowledge afterwards, though.
EDIT: Sounds like I have to add at least an "interrupt priority level" to my model, as it seems to be a very widespread hardware feature.
Please do! This kind of information about non-x86 architectures can be like gold dust sometimes; if you run out of steam, you could always just post a bibliography of the sources you found somewhere.
Almost everything about the 680x0, architecture-wise, is clean and efficient. It really was a joy to write assembler for this architecture. Shame that x86 became the preeminent standard in our time. Out of all the great architectures out there, MIPS, SPARC, PPC, Alpha, 680x0, PA-RISC, we chose freaking x86?
Oh well, at least ARM seems to be picking up speed, another nice processor from an assembler standpoint.
That's the problem with closed source software: it is tied to the architecture it was published on, and only the original vendor can change it, which they won't do unless the architecture has enough users… Look what happened to IA64: despite Intel and HP pushing quite hard, very little closed source software ever got ported to it… Linux runs well on it, but that's not enough for most people.
no, it isn’t
think about this: in the early 80’s, when the AT platform was headed to become the de facto industrial standard but wasn’t yet, who were its predominant users?
businesses, thus managers or more often employees – who could type the letters their boss dictated on wordperfect rather than on a typewriter, and calculate the mortgage using lotus 123 rather than with pen and paper – ie. the kind of person who just wants/needs a tool to be productive with, wants that tool to just work for as long as possible with as little hassle as possible, and could only be bothered with installing it themselves if the installation process was quick and not too error-prone
thus, people who would have gotten wordperfect or Lotus 123 as a package and installed it from binary, even if it had been open source
thinking that just because it was proprietary software, it forced people to use the PC, is a deluded perspective that ignores the fact that other platforms were either too costly (risc workstations), too niche, deemed inadequate for business use by many (amiga), single sourced or simply not pushed by IBM, or lacked needed applications (neither closed nor open sourced) – so they simply were not an option for office (and many home) users of the time
because most people don't really care about the underlying architecture, as long as deployed sw runs and will run in the future… but it really is a detail that doesn't touch them as much as those who make that sw – and that's not them
I know… I like SPARC myself, as it's about as open as it gets and is a fairly clean and well documented design.
However… there are VERY few resources on OS programming for SPARC unless you feel confident with the Linux or BSD sources… :C
I'd like to get this working… but it's probably horribly out of date with current gcc and assemblers.
http://www.cheesecake.org/sac/os-supp/os-1.html
You should then take a look at Hitachi's SuperH architecture, such as the SH-2 found in the Sega Saturn or the SH-4 in the Sega Dreamcast. Your dreams will be enlightened :p
Kochise
When writing a driver for VAX/VMS, there were a few rules to follow.
The interrupt elevated the IPL (interrupt priority level) above 0.
You handled the immediate stuff at that elevated IPL (1-15) in as few instructions as possible.
Then you dropped the IPL down to 0 so that other devices could interrupt when they needed to.
It was all quite straightforward.
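Roughly, in C-like pseudocode (all of the names here are invented; only the shape of the rules above is real):

#include <stdint.h>

struct device {
    volatile uint32_t status_reg;   /* hypothetical device registers */
    volatile uint32_t ack_reg;
};

void lower_ipl(int new_ipl);                              /* provided by the kernel */
void process_status(struct device *dev, uint32_t status); /* the non-urgent work    */

void device_isr(struct device *dev)
{
    /* We enter at the device's elevated IPL (somewhere in 1-15). */
    uint32_t status = dev->status_reg;   /* do the urgent part in as few    */
    dev->ack_reg = status;               /* instructions as possible        */

    lower_ipl(0);                        /* drop IPL to 0 so other devices  */
                                         /* can interrupt when they need to */
    process_status(dev, status);         /* then do the longer part         */
}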
Heh, I’ve just read the VAX-11 documentation following the advice of someone else… Indeed, the IPL system sounded like an interesting way of prioritizing interrupts.
However, I wonder if it was as safe as nowadays' schemes where you disable interrupts altogether until the basic interrupt handler which saves the CPU's state is done. Wouldn't there be a risk of smashing register values or trashing the stack if an interrupt occurs while another interrupt is being processed?
Or is it the case that the VAX fully saves the CPU state on the stack all by itself (kind of like hardware multitasking)? I thought it only saved the PC and the PSL…
VAX indeed only saves PC and PSL. There may also be a stack change depending upon what the processor was doing. That’s enough; the instruction set is powerful enough to allow the ISRs to manage saving the remainder of the context in a re-entrant manner.
A VAX kernel is essentially a primitive RTOS for which scheduling is managed by the interrupt system.
The interrupt hardware includes a mechanism for software to request interrupts, the Software Interrupt Request Register. Setting a bit in the SIRR requests an interrupt at the associated IPL.
The fundamental rules for managing IPL in the kernel are:
- You can always raise IPL. Since you have the CPU, you are the highest-priority task running.
- If you have raised IPL, you can lower it to your original IPL without additional effort.
- If you need to lower IPL below your original IPL, you have to make other arrangements. This is usually done by pushing a "fork block" onto a queue and requesting a software interrupt.
If these rules are followed, the hardware can guarantee that the processor is always executing at the highest-priority IPL.
A fork block consists of a queue entry that contains a minimal processor context; typically, only a handful of register values. The ISR for interrupts generated by the SIRR saves that handful of registers, then grabs a fork block from its queue, loads the handful of registers from it, and calls the function indicated by the fork block. For VMS, the SIRR ISR saves only R0 through R5, and pulls R3 and R4 from the fork block (R5 is used to hold the address of the fork block). If the called function needs more registers than those saved, it must save them itself.
Since the VAX processor has interlocked queue instructions, it takes astonishingly few instructions for an ISR to fire off a fork block.
Operating system resources are serialized by requiring they be manipulated at a specified IPL; for example, memory is allocated at IPL 8. An ISR operating at a higher IPL must queue a fork block to be able to lower IPL far enough to allocate memory.
This originally got extended to multiprocessors by using spinlocks to ensure that only one processor was running at a given IPL. Processors can also interrupt each other using doorbell interrupts. Since then, the spinlock scheme has become finer-grained.
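To make the fork mechanism concrete, here's a very rough sketch. This is illustrative C, not actual VMS code, and every name in it (fork_block, fork_defer, sirr_request, the FORK_IPL value) is invented:

#include <stddef.h>
#include <stdint.h>

struct fork_block {
    struct fork_block *next;                   /* queue linkage                 */
    void (*routine)(struct fork_block *self);  /* runs later, at the fork IPL   */
    uintptr_t ctx[2];                          /* the "handful" of saved state  */
};

struct fork_queue;                             /* opaque; the VAX has interlocked
                                                  queue instructions for this   */

void interlocked_insert_tail(struct fork_queue *q, struct fork_block *fb);
struct fork_block *interlocked_remove_head(struct fork_queue *q);
void sirr_request(int ipl);                    /* set a bit in the SIRR         */

#define FORK_IPL 8   /* hypothetical value, low enough to e.g. allocate memory */

/* Called from a device ISR at high IPL: queue the block, request a software
 * interrupt at the lower fork IPL, and return quickly. */
void fork_defer(struct fork_queue *q, struct fork_block *fb)
{
    interlocked_insert_tail(q, fb);
    sirr_request(FORK_IPL);
}

/* The software-interrupt ISR at FORK_IPL drains the queue. */
void fork_dispatcher(struct fork_queue *q)
{
    struct fork_block *fb;
    while ((fb = interlocked_remove_head(q)) != NULL)
        fb->routine(fb);
}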
Thanks for your OS articles, they’re very interesting.
This may not be much help, but I've been working with the PIC32. These are MIPS32 M4K-based chips. Here's some info for a fairly standard chip.
http://ww1.microchip.com/downloads/en/DeviceDoc/DS-61108E.pdf
http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en532432…
In my somewhat limited experience, the basic idea of interrupt handling is pretty universal across platforms. An interrupt comes in, the CPU consults the vector table and jumps to the appropriate address/service routine. Variation comes into play with regard to how the table is accessed, how and what sort of masking can take place, etc. I don't anticipate it would be too difficult to abstract that away.
You'd basically hook a couple of function calls like:
void program_isv(int interrupt_number, void (*service_routine)(void));
int mask_interrupts(int interrupt_mask);
etc., making them implementation-specific. But while I've worked on embedded systems, I've never written my own kernel (from scratch), so take that with a significant grain of salt.
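For example, a driver might use those hypothetical hooks like this (UART_IRQ_NUMBER is made up, and I'm assuming mask_interrupts() returns the previous mask):

#define UART_IRQ_NUMBER 4   /* hypothetical interrupt number */

static void uart_isr(void)
{
    /* read the UART status, push received bytes into a ring buffer, ... */
}

void uart_init(void)
{
    int old = mask_interrupts(~0);                    /* mask everything briefly  */
    program_isv(UART_IRQ_NUMBER, uart_isr);           /* hook our service routine */
    mask_interrupts(old & ~(1 << UART_IRQ_NUMBER));   /* unmask just the UART     */
}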
I think some of our earlier commenters are too kind to the 68k line. It was a good architecture, make no mistake, but among other things the instruction set was too bloated. It was partly because of CISC processors like the 68k that we saw such a movement toward RISC processors like ARM, culminating in excessively small instruction sets like the ridiculous PIC. But anyway. Such are my opinions.
So that matches my model in the article
OTOH, looking at all these manuals I've started to collect, there is one notion which I might like to keep instead of abstracting it away: interrupt priority.
As said earlier, in the VAX family, high-priority interrupts could preempt the interrupt handlers of lower-priority interrupts. It also seems to work this way on the MIPS architecture.
Even without going that far, many other architectures seem to have some mechanism for prioritizing some interrupts above others. ARM chips have their IRQ and FIQ pins, where FIQ interrupts are prioritized over IRQ interrupts in case of simultaneous events, x86 takes the lowest interrupt vector in such cases if I remember correctly, and so on.
Maybe this priority information could be useful later (one could imagine e.g. starting higher-priority threads to handle higher-priority interrupts). I don’t think I should abstract it out right now.
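Something as simple as carrying an abstract priority in the portable interrupt descriptor might be enough. A quick sketch of what I mean (names invented, nothing final):

struct irq_desc {
    unsigned number;     /* arch-neutral interrupt number */
    unsigned priority;   /* abstract priority, 0 = lowest */
    void (*handler)(struct irq_desc *self);
};

/* Each port would map the abstract priority to whatever the hardware offers:
 * a VAX-style IPL, ARM's FIQ vs IRQ choice, an x86 vector number, ... */
void arch_apply_priority(const struct irq_desc *desc);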
Well, bloat is a relative thing: the 68k ISA is quite lean compared to the VAX ISA.
And amusingly, most RISC architectures now have a variable-length instruction set variant: ARM Thumb, MIPS, etc.
An earlier poster glossed Apple’s Infinite Loop; Mac OS 1.3 through 9 ran on cooperative multitasking rather than interrupts, passing control from waiting task to waiting task with a pass through input buffer polling rather efficiently. Input on the Apple II required polling because the keyboard cable was 16 lines wide and thus permitted detection of 30 or so simultaneous key-down cases; surprisingly WoW wasn’t released for the platform in 1987. You could take advantage of the built-in decoder and get more rudimentary keycodes out, but for ctrl-openapple-left-alt-4 in VisiCalc….
I suppose the alternate to interrupts, which you should certainly look up Atmel code while you’re considering breadth of field, is the exception. Oak processors carried those orthogonally (see BYTE archives,) and by picking a language like E or Occam you can architect cores with those orthogonally. Occam was used in ST Semi’s Transputers so a hypercube didn’t have to have interrupt handling all over the place.
You haven’t considered SHARC or H.2xx stream processing, possibly. There are CCD/video-only/vector processors to consider; not least Qualcomm and software radio (or sideband, or IBOC) systems. Bluetooth has an ‘architecture’ for interrupts meant to keep power use down…then there’s Zigbee…
So does not compiling for Apple II mean you won't have the IIgs as an available target? It certainly has something in complexity only matched later by VMware or Xen graphics acceleration pass-through. It helps that it can be implemented multiply in a low-power Xilinx chip (for example). Perhaps you were going to start inclusion at a dual Xeon of some sort and go up in scale instead?
You can consider PSoC architectures if you like; that would allow some pretty complex triggering. Just go from that to the latest ISSCC news on analog computing and see what you want to make of interrupt architecture.
There are some standards (Federal USA, EU, RUS, ITU-T) you might consider as well in considering PLC interrupt structure (see also: StuxNet) and security/safety of remote operation. They don’t always mask H8, Sharc, or 8xxx CPUs or PLCs with trigger modes.
There’s no reason not to include governments in your study; why not an OS for the Duma? Putin breakpoints? Vatican runtime jails? The Ivory Coast could use a little patchkit before their cocoa rots on the dock.
Oh, myself? I'm writing an OS targeting personal computers whose screens can display a relatively large amount of information (with current paradigms, 8 inches would be a minimum), which include a pointing device and some means of inputting text, and which are at least as powerful as a modern low-end netbook.
Now that desktops and laptops have matured and we have tried a lot of things in that area, some experiments being more successful than others, I think it's time to go back to a clean slate and create a modern, powerful and consistent implementation of the now well-defined desktop OS concept, without all the layers of cruft and rotting legacy that have slowly accumulated in the existing ones.
My first hardware target will be the PC-compatible desktop/laptop, for obvious reasons. Then I'd like to have a look at tablets, if they become a bit more friendly towards OS developers. And then… Who knows what the next big iterations of the personal computing concept will be? ARM laptops? "Smart" glasses with an OLED HUD and an eye-tracking-based UI? I'd like to stay open to future innovations.
Sounds like Be, Inc.’s line of thinking.
Yeah… I have quite a bit of sympathy for the BeOS project, although I don’t like some parts of the implementation.
There's a difference, though: as a hobbyist, I don't have to run a profitable business, and I don't care if my project fails or remains a niche OS forever (which, let's face it, it is likely to do). Be didn't have that chance; they were one of those crazy companies who think that research without external funding can be a reliable source of financial profit.
> The interrupt handling model which I’m currently
> checking for portability is:
I can tell you where your model is not universally portable. I cannot name specific architectures that use different models, because my experience comes from FPGA-based circuits, where you would change even the hardware parts as you see fit.
> Hardware emits an interrupt signal
The signal can be edge-triggered or level-triggered. This is usually of little concern for an OS developer, because either way the interrupt is captured such that an interrupt that occurs while the CPU cannot handle it immediately isn’t lost.
> On its way, the signal optionally encounters one or
> more PICs which either stop it or direct it to one
> of the CPU’s interrupt pins (this means that PIC
> drivers are necessary on architectures which have
> one, but that they may be written separately)
External modules such as the PICs from PC mainboards are indeed optional, but if present, they might perform arbitrary functions: masking out unwanted interrupts; re-mapping to different IRQ numbers; even mapping two interrupts to the same number. Such modules would also be specialized to the hardware interrupt protocol (which again is of little interest to the OS developer).
> The CPU reacts to the signal on its interrupt pin by fetching a “vector”, noticeable containing the memory address of the interrupt handler associated to that pin, doing some arch-specific tricks, and executing code at said address
Some CPUs use an address table as you describe. Some others use a fixed address for all interrupts and let the software determine what caused the interrupt. Others use fixed addresses that depend on the IRQ number. Others use a few fixed addresses where most interrupts share one table entry and a few performance critical ones use another.
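As an illustration, on such CPUs the common entry point might look roughly like this; read_interrupt_cause_register() is a made-up stand-in for whatever the real hardware provides, and treating the lowest set bit as the highest priority is just an arbitrary convention for the sketch:

#include <stdint.h>

#define NUM_IRQS 32
static void (*handlers[NUM_IRQS])(unsigned irq);

uint32_t read_interrupt_cause_register(void);   /* hypothetical */

static unsigned lowest_set_bit(uint32_t x)      /* x must be non-zero */
{
    unsigned i = 0;
    while ((x & 1u) == 0) {
        x >>= 1;
        i++;
    }
    return i;
}

/* Single fixed entry point: software works out which interrupt(s) fired. */
void common_interrupt_entry(void)
{
    uint32_t pending = read_interrupt_cause_register();

    while (pending != 0) {
        unsigned irq = lowest_set_bit(pending);
        pending &= pending - 1;                 /* clear that bit */
        if (handlers[irq])
            handlers[irq](irq);
    }
}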
CPUs may also allow masking interrupts internally at different scopes, e.g. globally and per IRQ number. CPUs differ in the way interrupts are automatically masked when an interrupt occurs: some mask all interrupts on interrupt entry, others define a priority and only mask lower-or-equal-priority interrupts.
x86 specifically defines a lengthy procedure that sets up a process environment (not only an entry address) entirely in hardware before software handling even begins; this is called an "interrupt gate" IIRC. Remember that x86 defines a whole OS in microcode, although that is rarely used anymore.
Hi,
If you can handle a complex/messy architecture like x86, then it shouldn’t be too hard to adapt that to something simpler.
For the following I use the term “OS” as a generic term meaning “thing that controls IRQs”. Depending on your OS you can replace it with “kernel” or “device manager” or whatever suits.
Here’s a list of things you might or might not have overlooked for x86:
1) Interrupts are prioritised, either with the method used by the PIC chips, or by the "vector number" based method used by the APICs. Some OSs bypass these priority mechanism(s) (mask the IRQ, send the "End Of Interrupt" to allow lower-priority IRQs to interrupt, then handle the interrupt and unmask it when you're done) but this has slightly higher overhead (especially for the PIC) and is rarely used unless the OS has its own priority scheme (e.g. relies on the scheduler's thread priorities for deciding when the IRQ will actually be handled). In any case, from a driver developer's perspective it's the same – receive the interrupt, handle the interrupt, then tell the OS you've finished. The OS abstracts the underlying details.
2) For PICs, there’s an extra “routing” layer added between things like the PCI host controller and the PIC chips that allows the OS and/or firmware to change the way PCI IRQs are mapped to PIC inputs. This can be used to (for e.g.) reduce the amount of IRQ sharing in some cases.
3) When APICs are used, the OS decides which interrupt vector (and therefore which priority) each IRQ uses. In multi-CPU systems the OS can also decide which CPU (fixed delivery) receives the IRQ, or can decide the IRQ should be handled by the lowest priority CPU within a certain group of CPUs or within the group of all CPUs (send to lowest priority, logical delivery).
4) For larger systems, there’s the concept of IRQ load balancing. If a CPU is sleeping (to save power or reduce heat) you don’t really want to wake it up to handle an IRQ when other CPUs are already running. If a CPU is running a very high priority task (e.g. real-time) then it’s better if other CPUs handle the IRQs instead. This is where “send to lowest priority, logical delivery” gets tricky (although it would also be possible to dynamically reprogram the APIC/s for the purpose of managing IRQ load).
5) For ccNUMA (recent Intel and AMD with “multi-socket” motherboards) it’s nice to get the IRQs delivered to a CPU that is “close” to the device itself; as this reduces a little overhead (e.g. bandwidth/latency across the Quickpath or Hypertransport links).
6) For PCI, interrupts are level-triggered and the same IRQ (from the CPU's perspective) can be used by multiple different devices. The end result of this is that the OS has to maintain a list of drivers that are interested in each IRQ, and ask each driver in the list if its corresponding device was responsible for causing the IRQ. This complicates things a little. On a single-CPU system (where you're asking one driver at a time if their device was responsible), you'd want to arrange those lists in order of which devices most frequently cause each IRQ. For example, if an IRQ occurs you might ask an ethernet card driver if it was responsible, and if that driver says the ethernet card was responsible for the IRQ then you wouldn't bother asking any other device drivers (to avoid unnecessary overhead). On a multi-CPU system it may be better (for latency) to ask multiple drivers (on different CPUs) at the same time. For example, with 2 CPUs you'd ask the two most likely drivers, and if neither of them were responsible for the IRQ you ask the next 2 drivers in the list, etc.
7) For APICs (especially in larger systems), there’s no real limit to the number of IO APICs that may be present, and no fixed number of inputs that each IO APIC may have. You might have one IO APIC with 24 inputs, or a pair of IO APICs with 32 inputs each, or a set of four IO APICs with 16 inputs in the first 2 and 32 inputs in the second pair, or… During boot you have to detect the number of IO APICs and the number of IO APIC inputs on each.
8) For modern PCI (e.g. PCI Express) there's something called "Message Signalled Interrupts". The idea is that the device uses the PCI bus to send the interrupt directly, without using an interrupt controller input. This helps to avoid interrupt sharing caused by a lack of interrupt controller inputs and/or a lack of PCI interrupt lines.
9) The older "xAPIC" was only capable of handling a maximum of 255 "agents" (CPUs and IO APICs). For "x2APIC" this "255 agents" limit was raised dramatically (the new limit is something like 16 million agents). To cope with this, the IO APICs gained an interrupt redirection capability.
10) There’s a maximum of 256 IVT/IDT entries, 32 of them are reserved for exceptions. For multi-CPU/APICs, at least 1 IVT/IDT entry should be used for spurious IRQs, and some more are needed for IPIs. That leaves a maximum of about 220 IVT/IDT entries. This doesn’t necessarily have to be a global limit, and could even be considered a “per CPU” limit. For example, a computer with 4 CPUs could have a different IDT for each CPU and could handle up to 880 separate IRQ sources with no IRQ sharing at all. Of course this is likely to be overkill; however if you think about “huge” servers; and understand what I said in (5) about ccNUMA, what I said in (8) about MSI, and understand the APIC’s interrupt priority scheme; then you might (or might not) see the need for using a separate IVT/IDT for each NUMA domain.
11) An OS may need to be able to dynamically reconfigure “all of the above” while it is running (e.g. long after boot), in response to hardware changes caused by (for e.g.) hot-plug PCI devices, changing load, power management, etc.
12) For APICs (in multi-CPU systems), as far as the interrupt acceptance logic goes there’s little difference between an IRQ and an IPI (Inter Processor Interrupt). IPIs are used by the OS on one CPU to ask other CPU/s to do “something” (for a common example, read about “multi-CPU TLB shootdown” in your favourite Intel/AMD manual). These IPIs take part in the APIC’s interrupt priority scheme.
13) There’s also “special” types of IRQs and special types of IPIs. Examples include NMI (mostly used for watchdog timers and/or kernel-level profiling I guess); and the local APIC’s IRQs used for the local APIC’s timer, thermal status, performance monitoring, etc. Some of these special interrupts don’t take part in the APIC’s interrupt priority scheme (NMI, INIT IPI) while some do (most of them).
This is all just for 80x86 alone. If your OS is flexible enough to support most/all of this, then other architectures are likely to be easy to support. About the only thing I can imagine being different enough to be interesting is only having a single IRQ and a "where from" register (rather than multiple IRQs and a table like the IVT/IDT). However, you can easily emulate a table if the architecture has none, and you can easily emulate an "interrupt number" if the architecture does have a table; so it doesn't matter much either way.
– Brendan
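To illustrate Brendan's last point about emulating in either direction, here is a rough sketch (hypothetical C, made-up names throughout):

/* Case 1: the architecture has a single entry point plus a "where from"
 * register, so the OS emulates a table in software. */
static void (*vector_table[64])(unsigned n);

unsigned read_where_from_register(void);   /* hypothetical */

void single_entry_isr(void)
{
    unsigned n = read_where_from_register();   /* which source interrupted */
    if (vector_table[n])
        vector_table[n](n);                    /* emulated table lookup    */
}

/* Case 2: the architecture has a hardware vector table, so the OS emulates
 * an "interrupt number" by pointing every slot at a tiny stub that passes
 * its own number to one common handler. */
void common_handler(unsigned n);

#define DEFINE_STUB(n) static void stub_##n(void) { common_handler(n); }
DEFINE_STUB(0)
DEFINE_STUB(1)
/* ... one stub per vector, usually generated by a macro or an assembler loop,
 * and installed into the hardware table at boot. */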
Nope. Multiple devices can interrupt simultaneously. If you do it this way, you have to pay the interrupt overhead twice if that occurs (the first time when you stopped asking at the Ethernet driver, but the interrupt was still asserted; the second time as soon as the Ethernet driver handled its interrupt).
Hi,
I’m used to micro-kernels, where drivers run in user-space and the IRQ overhead itself (and therefore the potential risk of a second IRQ) is nothing compared to the cost of a (potentially unnecessary) task switch.
For monolithic kernels where performance is considered far more important than protection/security; the fastest way is to trash the kernel’s code with random data as soon as any IRQ occurs. It’s guaranteed to be faster than waiting for any number of arbitrary drivers to trash the kernel…
– Brendan
What happens if two devices request an interrupt on the same IRQ at the same time? I imagine it goes something like this:
– ISR takes interrupt, finds first driver attached to this interrupt, and sends a message to it. Interrupt is disabled until driver gets a chance to run.
– Microkernel eventually schedules driver, which processes interrupt request and re-enables interrupts from the slot.
– ISR takes interrupt from second device, which has been plaintively waving its hand the whole time, finds appropriate driver, and sends a message to it. Interrupt is disabled until driver gets a chance to run.
– Microkernel eventually schedules driver, which processes interrupt request and re-enables interrupts from slot.
Have you been looking at my code?
Hi,
The case you started with is 2 devices generating the same IRQ at the same time, where the first driver's device is one of them, with "early bail-out":
– ISR sends message to first driver
– first driver handles message, tells kernel “it was me”
– kernel does EOI, APIC/PIC sends second IRQ
– ISR sends message to first driver
– first driver handles message, tells kernel “not me”
– ISR sends message to second driver
– second driver handles message, tells kernel “it was me”
– kernel does EOI
That’s probably 3 task switches. Without “early bail-out” you get this:
– ISR sends message to first driver
– first driver handles message, tells kernel “it was me”
– ISR sends message to second driver
– second driver handles message, tells kernel “it was me”
– kernel does EOI
That’s 2 task switches, which is better for this specific case. What if the first device generates an IRQ again before this is finished? In that case you get this (with early bail-out):
– ISR sends message to first driver
– first driver handles message, tells kernel “it was me”
– kernel does EOI, APIC/PIC sends second IRQ
– ISR sends message to first driver
– first driver handles message, tells kernel “not me”
– ISR sends message to second driver
– second driver handles message, tells kernel “it was me”
– kernel does EOI, APIC/PIC sends third IRQ
– ISR sends message to first driver
– first driver handles message, tells kernel “it was me”
– kernel does EOI
Now there’s 4 task switches. Without early bail-out:
– ISR sends message to first driver
– first driver handles message, tells kernel “it was me”
– ISR sends message to second driver
– second driver handles message, tells kernel “it was me”
– kernel does EOI, APIC/PIC sends third IRQ
– ISR sends message to first driver
– first driver handles message, tells kernel “it was me”
– ISR sends message to second driver
– second driver handles message, tells kernel “not me”
– kernel does EOI
That’s also 4 task switches, so early bail-out didn’t hurt but didn’t help either.
What if there’s only one IRQ? Here’s how that goes with early bail-out:
– ISR sends message to first driver
– first driver handles message, tells kernel “it was me”
– kernel does EOI
That’s 1 task switch for early bail-out. Without early bail-out:
– ISR sends message to first driver
– first driver handles message, tells kernel “it was me”
– ISR sends message to second driver
– second driver handles message, tells kernel “not me”
– kernel does EOI
That’s 2 task switches – almost twice as bad as “with early bail-out”.
What if there’s only one IRQ, but it’s for the second device and not the first? I won’t go through that, but it’s 2 task switches in both cases.
Now, how likely are each of these cases? The most common case is the “one IRQ to first device” case, where early bail-out is twice as good. You can guarantee that this is more likely than the “one IRQ to second device” case (by sorting the order drivers are told about the IRQ).
The interesting part is trying to figure out under which conditions early bail-out causes a "net performance loss". The cases where early bail-out doesn't help or hurt don't matter much, which leaves us with 2 cases to consider. If the "2 IRQs at the same time" case (with 3 task switches instead of 2) happens as often as the "1 IRQ for the first device" case (with 1 task switch instead of 2), then you get "3 + 1 vs. 2 + 2", or 4 task switches vs. 4 task switches. That's the break-even point.
Basically, early bail-out only causes a net performance loss when the “2 IRQs at the same time” case is more common than the “one IRQ for first device” case. That’s fairly unlikely.
– Brendan
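A condensed sketch of the shared-IRQ dispatch with early bail-out that's being discussed here, written in a monolithic-kernel style (hypothetical names; in Brendan's micro-kernel the handler call would really be a message to a user-space driver and an answer back):

#include <stdbool.h>
#include <stddef.h>

struct shared_irq_entry {
    struct shared_irq_entry *next;
    bool (*handler)(void *dev);   /* returns true for "it was me"          */
    void *dev;
    unsigned long hits;           /* used to keep the busiest device first */
};

void send_eoi(unsigned irq);      /* hypothetical PIC/APIC End Of Interrupt */

void dispatch_shared_irq(struct shared_irq_entry *list, unsigned irq)
{
    for (struct shared_irq_entry *e = list; e != NULL; e = e->next) {
        if (e->handler(e->dev)) { /* ask drivers in "most likely first" order */
            e->hits++;
            break;                /* early bail-out: stop at the first claimer;
                                     a still-asserted line simply fires again
                                     after the EOI                            */
        }
    }
    send_eoi(irq);
}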
*Arg*, by no means should one try to emulate the weird and crippled x86 IRQ architecture. Most SoCs provide a much simpler scheme, giving one interrupt number/vector per external interrupt (PIC view).
Well, if you want this, maybe you should review the source code and docs of NetBSD… it runs on many platforms, it's something already done, and the code may have some comments or docs for you to check… That's what I would do.
I'd also suggest looking at the code of the various public RTOS implementations.
Nearly all SoCs with Power(PC) or ARM cores (including the Cortex ones) use some kind of PIC today, though the cores themselves only have two or three interrupt lines.
At the time ARM started out, the PICs weren't very sophisticated, but today most ARM SoCs do some kind of vectorization, with sometimes weird approaches where the PIC injects op-codes on the bus (Marvell PXA).
But IMHO, when it comes to (RT)OS support, HW vectorization is not really needed, as the OS has to maintain the interrupt service threads and keep track of them anyway.
If hard real-time is not needed, the model where the OS does everything at the HW level and only sends messages/signals to an ISR seems interesting, esp. on ARM/PPC SoCs with a >400 MHz CPU clock.
But anyway, interrupt handling is the essential part of an (RT)OS and should be the first to be designed. And hiding it in a HAL certainly makes an OS easier to port to another CPU [even if the OS is written in assembly :-)].