A few weeks ago, an interesting question cropped up: How fast is a PS/2 keyboard? That is to say, how quickly can it send scan codes (bytes) to the keyboard controller?
One might also ask, does it really matter? Sure enough, it does. As it turns out, the Borland Turbo Pascal 6.0 run-time, and probably a few related versions, handle keyboard input in a rather unorthodox way. The run-time installs its own INT 9/IRQ 1 handler (keyboard interrupt) which reads port 60h (keyboard data) and then chains to the original INT 9 handler, which reads port 60h again, expecting to read the same value.
That is a completely crazy approach, unless there is a solid guarantee that the keyboard can’t send a new byte of data before port 60h is read the second time. The two reads are done more or less back to back, with interrupts disabled, so not much time can elapse between the two. But there is still some window during which the keyboard might send further data. So, how quickly can a keyboard do that?
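The double-read scheme can be sketched as a toy model, with the PS/2 data port mocked as a plain variable so it runs anywhere. All names here are illustrative, not Borland’s actual code:

```python
# Toy model of the double-read scheme: the run-time reads port 60h,
# then chains to the original INT 9 handler, which reads it again
# and expects the same value.

port_60h = 0x1C  # pretend make code sitting in the keyboard data port

def read_port_60h():
    """Stand-in for IN AL, 60h."""
    return port_60h

def turbo_pascal_int9():
    """Borland-style handler: read the port, then chain to the old INT 9."""
    runtime_scan = read_port_60h()   # first read, by the run-time
    bios_scan = read_port_60h()      # second read, by the chained BIOS handler
    return runtime_scan, bios_scan

# Works only if nothing replaces the byte between the two reads:
a, b = turbo_pascal_int9()
assert a == b

# If a new byte could arrive in between, the chained handler would
# see different data than the run-time did:
runtime_scan = read_port_60h()
port_60h = 0x9C                      # hypothetical break code arrives early
assert runtime_scan != read_port_60h()
```

The whole debate below is about whether that in-between arrival can ever actually happen on real hardware.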
I love these questions.
I guess it depends on whether the keyboard has its own key bounce detector onboard or if the computer has to do it.
Debouncing is done in the keyboard before sending the key up/down code.
(as hinted in the article comments)
This kind of trick makes emulation and virtualisation more difficult.
And Turbo Pascal 6.0 was released in 1990! Intel was already selling the i486.
Such a hack could be excusable for 8088 IBM PC software, but by that time, expecting any kind of timing guarantees on computers able to multitask and run VMs was ludicrous.
No 1990s PC was able to run VMs; they were far too slow and didn’t have the necessary virtualisation capabilities (other than running DOS VMs). And timing guarantees were fine when pertaining to hardware.
No early-90’s PC, at least… you’d not be running VMs on a 486. But in fact, it was the 90’s when virtualisation started to appear – e.g. the first VMware release was in 1999…
So very late 90s. Probably needed a very high-end P6 or the like.
The 486 didn’t support that kind of virtualization; it requires split (“Harvard”) instruction/data caches among other things. The 486 used a unified cache.
Actually, I think most were still using the DIN AT keyboard connector / the one previous to PS/2…
No 486 system used virtualization. It wouldn’t be a problem with this solution if it did anyway, that would be a bug in the virtual host.
Multitasking doesn’t matter. This is an interrupt routine that disables interrupts until the proper (chained) handler has finished.
I don’t know why you call this a hack given that it’s the proper design for the problem at hand.
Megol,
Speaking as someone who’s used those tools on more recent computers to run legacy software, we would experience keyboard bugs with Borland tools, and now we know why. Haha.
Relying on arbitrary hardware timing characteristics to function is what our old friend Neolander would have called an ostrich algorithm:
https://en.wikipedia.org/wiki/Ostrich_algorithm
Some early software/games would rely on unchanging hardware performance and consequently are unusable on modern systems. IMHO this was amateurish back then (I was guilty of hard coding timing assumptions when I was learning to program). But there was less hardware variety back then, so it sometimes could pass as commercial software. This is obviously bad practice today.
Very interesting article BTW!
Edited 2018-08-07 14:31 UTC
> Megol,
> Speaking as someone who’s used those tools on more recent computers to run legacy software, we would experience keyboard bugs with Borland tools, and now we know why. Haha.
> Relying on arbitrary hardware timing characteristics to function is what our old friend Neolander would have called an ostrich algorithm:
> https://en.wikipedia.org/wiki/Ostrich_algorithm
But this doesn’t do that.
Worst case scenario would be a 4.77 MHz 8088 with a PS/2 interface attached.
Worst case timing would be from the keyboard buffer read in the interrupt routine to the read of the keyboard buffer in the chained routine. Will that interval ever exceed the minimum time between keyboard interrupts? Nope.
This is basic real-time stuff.
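A rough sanity check of that worst case, using assumed round numbers (an 11-bit PS/2 frame, a fast ~16.7 kHz keyboard clock, and a generous 200-cycle gap between the two port reads on a 4.77 MHz 8088; these are illustrative figures, not measurements):

```python
# Minimum time a PS/2 keyboard needs to clock in another byte,
# versus the window between the two port-60h reads.

FRAME_BITS = 11              # start + 8 data + parity + stop
CLOCK_HZ = 16_700            # fast end of the PS/2 clock range
min_gap_us = FRAME_BITS / CLOCK_HZ * 1e6    # time between bytes, in µs

CPU_HZ = 4.77e6              # 4.77 MHz 8088
CYCLES_BETWEEN_READS = 200   # generous; the reads are nearly back to back
window_us = CYCLES_BETWEEN_READS / CPU_HZ * 1e6  # vulnerable window, in µs

# The vulnerable window is an order of magnitude smaller than the
# minimum inter-byte time of a real PS/2 keyboard.
assert window_us < min_gap_us / 10
```

Under these assumptions the window is roughly 42 µs against a minimum inter-byte gap of roughly 660 µs, which is the margin Megol is arguing from.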
> Some early software/games would rely on unchanging hardware performance and consequently are unusable on modern systems. IMHO this was amateurish back then (I was guilty of hard coding timing assumptions when I was learning to program). But there was less hardware variety back then, so it sometimes could pass as commercial software. This is obviously bad practice today.
These aren’t hard-coded timing assumptions, so that’s not relevant here. If things get faster the routine still works, and the timing can’t get worse in hardware!
If one claims to emulate hardware and doesn’t actually do it, well, the problem isn’t in the original code.
Edit: quotes in bold as the ****** comment system doesn’t accept quote tags.
Edited 2018-08-07 14:46 UTC
Megol,
Re-read the part I quoted. I’m taking it at face value, but the hardware might not wait for the software to read the same value twice.
Edited 2018-08-07 15:34 UTC
I was sitting in front of an F5 BIG-IP back in the 1999/2000 timeframe. At the time, it was BSDI as the base operating system (they’ve since moved to Linux). It was being slammed by web requests and was frozen, because someone on live TV said “go to our website” and kabooom.
It exhibited a behavior I’ve not seen before or since. I would type something on the keyboard (old PS/2 interface), and on the VGA screen it would take several seconds for it to appear. I’ve always seen overwhelmed systems at least echo my typing back when on the VGA console. But this one didn’t. I wonder if it’s related.
tony,
I don’t know anything about that specific computer system, but I would guess it has to do with the screen interaction code not being interrupt driven.
When the screen is updated from within the keyboard interrupt handler, it ought to update immediately regardless of system activity. Technically, code executing “cli” would temporarily inhibit all system interrupt handlers, but interrupts don’t get disabled for a prolonged period in a normal application/OS setting, even on a busy system.
However, in applications that don’t use interrupt handlers and process screen interactions outside of interrupts, the keystrokes will sit in a buffer doing nothing until the application polls for them.
On a related note I believe many operating systems handle the mouse pointer in an interrupt to minimize mouse pointer latency even during high system load.
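A toy timeline of the two designs contrasted above, with purely illustrative tick values: a handler that echoes from inside the keyboard interrupt shows the character at arrival time, while a design that only buffers it waits for the application’s next poll.

```python
# Model: a keystroke arrives at tick 3, but the busy application
# only gets around to polling its input buffer at tick 50.
from collections import deque

buffer = deque()
echo_time_interrupt = None

def irq_handler(key, now):
    """Interrupt-driven design: buffer the key AND echo it immediately."""
    global echo_time_interrupt
    buffer.append(key)
    echo_time_interrupt = now       # character appears on screen at once

arrival = 3         # keystroke arrives at tick 3
next_poll = 50      # the overloaded app only polls at tick 50

irq_handler('a', arrival)

# Polled design: the character only appears when the app reads the buffer.
echo_time_polled = next_poll
buffer.popleft()

assert echo_time_interrupt == arrival       # no visible lag
assert echo_time_polled - arrival == 47     # long lag under load
```

This would be consistent with tony’s frozen BIG-IP: if the console echo path ran outside the interrupt handler, an overwhelmed system would show exactly that multi-second typing delay.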
Borland’s solution isn’t a hack – it’s a proper design that works. There are no timing differences that matter: faster hardware will work, and the slowest hardware possible will work. It works.
So you say some buggy software will fail to handle this case correctly? Sucks, but the fault is in that software that doesn’t handle things correctly. And handling it correctly isn’t exactly hard – real emulators do much worse things than emulating ~1msec signals.
Edit: excessive and removed.
Edited 2018-08-08 12:41 UTC
Megol,
To me, these two paragraphs contradict each other, since Borland’s own software breaks on some modern hardware/controllers.
Expecting the same value twice from port IO creates a timing race condition that would not exist if you only read it once. Perhaps they assumed the race would be fairly safe on the hardware they had then, but it has the potential to introduce fragility with modern controllers (USB/Bluetooth/etc.) that might deliver key sequences immediately as they are read via port IO, without adding the PS/2’s inter-character delays. “Normal” keyboard handlers are ready to handle the next character as soon as they have read the last one. Borland’s handler, on the other hand, can’t, because of its unique requirement to read the same input character twice.
If you want, I can budge and meet you somewhere in the middle: Borland’s approach worked back when hardware was more homogeneous and everyone’s computer used identical controllers, but they made assumptions that could break with new hardware.