“There is a sub-genre of historical fiction one could loosely call ‘what-ifs’. The computer industry also has a number of such obvious what if scenarios. What if Bill Gates and Paul Allen hadn’t lucked into the IBM contract for DOS, which was the basis for Microsoft’s eventual hegemony? What if the UNIX operating system hadn’t split into several minor variants, but gone on to become what Linux later became, only a decade or two before? What if BeOS had gone on to succeed on the desktop…?”
I usually call that subset of historical fiction ‘alternate history,’ such as you see here:
http://www.othertimelines.com/
Interestingly, one of the timelines I worked on there, MacOS/PC, came true several years later… I should work on more operating-system-based alternate timelines; they’re fun to research and speculate on.
Thank you for posting this. I love alternate history, and I can’t believe I haven’t stumbled upon this site sooner. Speculating about what could happen or what might have happened is what makes IT fascinating to me.
There’s really no telling what might have happened had BeOS succeeded. I think it’s a good thing Apple didn’t pay out to Be after all, since their most likely course of action would have been killing BeOS. It would have been interesting to see what Mac OS X would be like based on BeOS instead of NeXT. But it seems hard to believe that the OpenBeOS/Haiku team would have come together back then had Apple been the one to discontinue BeOS, so Haiku probably wouldn’t exist today.
On the other hand, if BeOS had succeeded, it most likely would still be proprietary now (and Haiku wouldn’t exist, since BeOS would still be around). However, Be GPLing large portions of it isn’t out of the question, since they started GPLing Tracker and such around the time they went out of business. I could see the BeOS R9 of today being up-to-date, with support for x86-64, multiple users, and of course more than 1 GB of RAM, and it would really shine now that multi-core processors are in. I could also see it eating a large chunk of market share from both Windows and Linux, but not to the point of rising above both of them. Perhaps Linux and BeOS would have similar-sized market shares.
With more competition for Windows (and Linux itself!), perhaps Linux and friends would have improved vastly in the performance and ease-of-use departments and stolen more market share from Windows by now. I don’t think the scheduler drama would have started, because Linux would have adopted something to keep its performance on par with BeOS on desktops. (Maybe they would have had two schedulers, one for servers, the other for workstations?) A lightning-fast, elegant window manager and desktop environment for Linux based on (Open)Tracker, and the Linux kernel adding support for BFS, aren’t implausible either.
On the downside, the BeOS would no doubt be showing its age now, meaning it would have a lot of old, backwards-compatibility crud just like the other operating systems. Perhaps not as much, and I think BeOS would still be infamously fast, but it wouldn’t still have that just-built-from-scratch edge.
MS hadn’t lucked into the IBM contract for DOS? I think that one is fairly easy to answer… we’d be complaining about “Steve Jobs” and “Apple” as being the evil empire instead of “Bill Gates” and “Microsoft”.
Make no mistake, the only reason why we all complain about Windows now is that MS beat Apple to the punch on a few key business decisions in the mid 80s and early 90s. Jobs certainly wouldn’t have become the “wonder-boy” he is depicted as today.
Or alternatively the IBM-compatible PC would still be the standard, but we’d have been running versions of CP/M and GEM for the last 25 years, with Digital Research sustaining the software empire that they looked to be building back in the late ’70s.
Even if IBM’s product hadn’t come to dominate the computer market, I don’t think Apple could have taken their place, providing the computer on almost every desk. Apple would never have allowed Mac clones to be produced, and I doubt they would have made particularly low-cost computers. I definitely don’t think there’s much chance that they’d have licensed their OS to other companies. A lot of the success of the IBM PC platform (and Microsoft’s OS) is down to cheap PC clones; without anything comparable, I can’t see Apple dominating the market.
Apple would probably have a much larger market share, but nothing like the 95%+ that the IBM PC has today. Instead we’d probably still have companies like Commodore and Acorn competing for a slice of the computer market, just like back in the 80s.
Compatibility might be more of an issue without one software company being so ubiquitous, but maybe the extra competition and innovation would have made up for that…
AMIIIGAAAAAAAAAAAAA
MS was only lucky enough to gain a monopoly on the OS for the open platform the PC was (and still is).
Apple has always worked with its own proprietary hardware.
While I work on BeOS daily (as well as 2 other much bigger OSes), I always hated the ridiculous claims made for the BeOS file system. In the Hacker book and in this article there is mention of TB-size disk drives and 260 GB files given the 64-bit address space.
In practice my stock R5 disk formatter can only install 4 measly 32 GB partitions on a drive no bigger than 130 GB. I usually put NTFS on a really big partition and let BeOS have the leftovers.
I would dearly like to have seen a portable open source Tracker running on top of Windows & Linux perhaps even more so than Haiku, and with a number of stupid issues fixed.
On the Zeta front, I wonder whether any of the Zeta-only apps might now get ported back to BeOS, or whether they got locked into functions that Haiku won’t have for some time.
There are two issues that control the capacity to make use of large drives, that I strongly suspect you don’t know enough about, transputer_guy.
1. This one is definitely a bug: the non-Dano mkbfs had a silly, purely arbitrary limitation on partition size.
2. As of the time Be, Inc. went out of business, IIRC no PC BIOS implementations supported anything beyond that (once again arbitrary) limit that afflicts IDE and all the older addressing schemes that describe sectors by cylinder, head, and sector. To support larger drives at the time, you would have needed a SCSI boot device. That’s one of the reasons I bought a machine with built-in SCSI controllers that BeOS supported, though I never needed to expand my storage space soon enough to take advantage of the (I’m guessing) >130 GB drive size support.
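(For what it’s worth, that ~130 GB ceiling lines up with the 28-bit LBA limit of the ATA/BIOS interfaces of the day; this is just my back-of-the-envelope arithmetic, assuming 512-byte sectors.)

```cpp
// Rough arithmetic behind the ~130 GB ceiling: 28-bit LBA, 512-byte sectors.
#include <cstdint>
#include <cstdio>

int main() {
	const uint64_t sectors = 1ULL << 28;           // 2^28 addressable sectors
	const uint64_t bytes   = sectors * 512;        // 512 bytes per sector
	std::printf("%llu bytes = %.1f GB (decimal) = %.0f GiB\n",
	            (unsigned long long)bytes,
	            bytes / 1e9,                       // ~137.4 GB
	            bytes / (1024.0 * 1024 * 1024));   // 128 GiB
	return 0;
}
```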
With a more modern BIOS, I do believe BeOS can format partitions on IDE/PATA drives >130 GB (I don’t currently have that up-to-date a BIOS on my main machine, though), though you may have a hard time finding a Be, Inc.-created mkbfs that formats larger than that. However, there’s Haiku… my BeOS 5.03 installation already has some small parts transplanted from Haiku, which let me use a video resolution BeOS was never designed for. So if not now (I’ve not checked, and don’t have a real need to: how much space can I *really* use up in BeOS with as little as is available for it?) then soon, you should be able to take the parts you want from Haiku and do a transplant into a BeOS installation, at least in some cases.
The BFS filesystem, even though it has some real quirks that limit maximum usable file size when things start getting too fragmented, can support those huge files and filesystems. There is definitely room for improvement, though, no doubt about that. ZFS sounds like it’ll be well-suited, functionality-wise, to allow existing BeOS/Haiku applications to do whatever they did, transparently, once it gets ported to Haiku.
“The BFS filesystem, even though it has some real quirks”
Like how it handles many files on a partition horribly badly?
Try untarring something with a lot of files, say the X.org source tree, on a BFS drive and watch it slow to a crawl. Then try to remove all the files. It’ll take forever (like 30 minutes at least).
I understand this is because BeOS was “the media OS” (seriously, that’s one dumb catch phrase) and that it’s optimized for large files but the performance hit is nevertheless quite pathetic.
Heh. Format your partitions without attribute or index support and there you go. It will be a lot faster, but you will lose extended attributes, ultra-fast index-based searches, live queries and so on. It is a matter of trade-offs.
That said, the biggest problem was the fixed size file system cache, which was addressed in Haiku with the integration of the VM and file cache.
I’d be glad to hear that BeOS could indeed format >130GB at least on huge SCSI drives and it was just a bug that never got fixed since IDE drives were so tiny back then. I did use SCSI too, way back when.
Even with no Be development, my stock R5 has also improved considerably over the years; I now have twin-head support and crazy resolutions far beyond what’s good for the eyes (thanks to Rudolf). I also finally got SATA working with disk speed comparable to W2K, and even 1 GB of RAM works. Some of these I never would have expected for a “dead” OS. I’ll pray for better USB2 and larger HD format support.
While I am fairly critical of BeOS (I still use it the most), I am far more critical of the other two monsters I use. They ought to deliver so much more than they do, given the development resource advantages they have, and I have little faith they will ever feel as right as BeOS does, at least for me.
So is there a ZFS port in the planning?
Hmmmmm? The mkbfs bug *WAS* fixed. Just get hold of a copy of mkbfs for Dano or Zeta’s mkbfs. In fact, even Haiku’s mkbfs can be used to create bigger partitions.
I’ll be looking for that right away, thanks.
As pointed out by someone else, that was a bug in mkbfs and not a limitation of BFS itself.
“I always hated the ridiculous claims made for the BeOS file system. In the Hacker book and in this article there is mention of TB-size disk drives and 260 GB files given the 64-bit address space.
In practice my stock R5 disk formatter can only install 4 measly 32 GB partitions on a drive no bigger than 130 GB. I usually put NTFS on a really big partition and let BeOS have the leftovers.”
BeOS was developed at a time when such large hard drives were very hard to come by. This was not a well supported (by PC makers, BIOS, etc) scenario.
The BFS claims are based on the on-paper limits of its 64-bit on-disk structures, but this is a company that went out of business in 2001, before >30 GB hard drives were prevalent like they are today. I suspect that if Be Inc. were still in business they would have long since addressed any issues with the formatting of larger disks, and as such your gripes would not be relevant.
I have often wondered about this myself, and I posted a message to that effect on here a month ago.
http://www.osnews.com/permalink.php?news_id=18321&comment_id=258135
I still think that it’s an incredible shame that things turned out the way they did. Back then, Microsoft seemed totally invincible (with Windows being pretty much the only OS that most users would ever consider using). Also, the possibility of owning a home computer with any kind of SMP was just a pipe dream.
Yet today here we are with significant numbers of computer users becoming increasingly disillusioned with Vista (and Microsoft in general) and wondering what alternative OS they can use instead on their everyday 64-bit Core 2 Duos (or even quad-core gaming rigs, etc.).
Be was a fantastic OS (even way back then), so after 10 further years of development, and with the computer landscape being the way it is at the moment, this really could have been the tipping point for Be.
I liked BeOS enough back in its day to actually purchase a copy of R5 and Gobe Productive (the office suite for BeOS). I agree it was a very very fast OS and everything was very smooth and it did have some cool features.
However, I still see a lot of people going on about how it was so much better because of its “modern” design and its lack of “backwards compatibility crap”, thereby making it faster than other current OS’s.
My question is – how much compatibility “crap” in linux actually has an impact on performance? I’m willing to bet very, very little. People go on about how unix is built on 30-year-old technology, and this is true, but I fail to see how this is a bad thing. Linux is built to open standards, and is a solid and mature (and MODERN) desktop OS. It’s a lot more advanced now than what it was back in the early ’90s.
I strongly believe that the only reason BeOS was so fast, was because it HARDLY DID ANYTHING. Yes it could play games and do all sorts, but it wasn’t running half of the services available to a standard windows or linux desktop today. Try installing windows 95 or 98 (or even 3.11) on your current PC and it too will run blazingly fast…because it has nowhere near the complexity and advanced features of today’s OS’s. BeOS on the other hand did have some very advanced stuff, but I still think, had it been capable of everything linux is capable of now, it would be at least comparable in performance.
Linux doesn’t suffer in performance because of its 30-year-old unix foundation or ideas. Just about everything the linux kernel actually does (i.e. every tick of CPU time it uses) is actually needed to do what it’s doing… it’s all relevant to a modern OS.
You can lighten linux up by removing all the extra functionality – and we have examples of this in the embedded linux space… but I’d personally prefer to keep all the features it has. It won’t please everyone, but it can come damn close.
I actually switched to linux after using BeOS, because linux felt more professional and more rock-solid. This is only my personal opinion; I’m not suggesting anyone else do the same.
The speed of BeOS had absolutely nothing to do with what it ran, but with how things were run under BeOS. Try and see how many threads a typical application from the Windows 9x era had, and how many threads a trivial GUI BeOS application has.
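(For illustration, here’s a minimal from-memory sketch using the standard BApplication/BWindow classes; the window title and signature string are made up. Even this do-nothing program runs at least two threads: Run() occupies the main thread, and every BWindow that gets shown spins up its own message-loop thread.)

```cpp
// Minimal BeOS/Haiku-style app: the point is the threading model, not the UI.
#include <Application.h>
#include <Rect.h>
#include <Window.h>

class DemoWindow : public BWindow {
public:
	DemoWindow()
		: BWindow(BRect(100, 100, 400, 300), "Demo", B_TITLED_WINDOW, 0)
	{
	}

	virtual bool QuitRequested()
	{
		// Closing the only window quits the whole app.
		be_app->PostMessage(B_QUIT_REQUESTED);
		return true;
	}
};

class DemoApp : public BApplication {
public:
	DemoApp() : BApplication("application/x-vnd.example-demo") {}

	virtual void ReadyToRun()
	{
		// Show() starts the window's own message-loop thread; from here on
		// the app thread and the window thread run concurrently.
		(new DemoWindow())->Show();
	}
};

int main()
{
	DemoApp app;
	app.Run();  // the BApplication message loop occupies the main thread
	return 0;
}
```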
And that was the main complaint about the Be API: that it was hard to program to, and that porting stuff from other platforms was a nightmare because of all the implicit threading (and accompanying locking) issues.
I guess I’ve been blessed with the ability to think in parallel, and never found it at all difficult to work with threads or other parallel constructs. I even find it impossible to understand how people can grok object-oriented programming but not parallel interaction.
And of course, I’m always bewildered by the scheduling efforts in the Linux world. I find it incredible that they’ve been sitting on a scheduler design that was actually described 12 years ago in an ACM award-winning Ph.D. thesis by a researcher named Carl Waldspurger, who is currently a VMware employee. Not only that, but this very scheduler (or part of the ideas contained there) was implemented in Linux before, in 1999, by a German (then-)undergraduate college student.
5 years of false starts since the original O(1) scheduler. I have no idea why Ingo Molnár decided to ignore life outside LKML and Red Hat cubicles, but the Linux top dogs should really, really try to re-embrace the academic world.
I won’t spoil the fun of finding it out for yourselves; just google Waldspurger’s name and citations to his thesis.
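(For the curious, here’s a toy sketch of the core idea, stride scheduling, as I understand it from the thesis; the task names and ticket counts are made up, and a real kernel obviously has to deal with far more than this.)

```cpp
// Toy stride scheduler: each task holds "tickets" (its CPU share); the task
// with the smallest "pass" value runs next and is then charged its stride.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Task {
	std::string name;
	uint64_t tickets;  // relative share of the CPU
	uint64_t stride;   // STRIDE1 / tickets
	uint64_t pass;     // virtual time this task has been charged so far
};

static const uint64_t STRIDE1 = 1 << 20;  // large scaling constant

int main()
{
	std::vector<Task> tasks = {
		{"editor",   300, STRIDE1 / 300, 0},
		{"compiler", 100, STRIDE1 / 100, 0},
		{"player",   600, STRIDE1 / 600, 0},
	};

	// Simulate 12 time slices: always dispatch the task with the lowest pass.
	// Over time each task gets the CPU in proportion to its tickets (3:1:6 here).
	for (int slice = 0; slice < 12; ++slice) {
		Task* next = &tasks[0];
		for (Task& t : tasks)
			if (t.pass < next->pass)
				next = &t;
		std::cout << "slice " << slice << ": " << next->name << "\n";
		next->pass += next->stride;  // charge the task for the slice it used
	}
	return 0;
}
```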
I’m grateful we had BeOS for a while, if only so that we experienced the “now *this* is how it’s done” factor. Being a non-free OS though, I think it was destined for failure in the long-term.
As for a new and free OS to rise up and take its place, I’m not holding my breath.
Back when Be was trying to get its OS accepted and Apple was fighting Microsoft, they were at a huge disadvantage. Today, less so. What changed? Today, the web browser is the single most important application on a personal computer, and there are two very good open source rendering engines (Gecko and WebKit) available to be ported to any platform.
This has changed the game a lot and it really is giving alternative operating systems a chance. It doesn’t make it easy, but it does make it easier.
Web browsers, yes… but you’re forgetting all the crapware that comes with them (Flash, .NET, WiMP…) that real websites don’t use but other, abusive, “so-called” websites use in place of standards.
There are of course open source alternatives, but neither gnash nor the WMV support in ffmpeg is totally working.
It would have been awesome for BeOS to succeed, because it had the multimedia OS kernel that I’ve recently read people posting that *nix needs.
I liked BeOS on my PowerComputing machine because it showed what a laggard Mac OS was on the same machine.
Then, I needed to print something or scan something or do something that required a peripheral device and I booted back into Mac OS, so I could accomplish something other than looking at the pretty windows.
Had Microsoft not signed an agreement with IBM, IBM would have returned to Digital Research, explained things, and got the signatures they failed to get the first time.
Since Gary Kildall and his wife didn’t really have the ambition to conquer society, we’d probably have had several (compatible) operating systems like some extension of the 1980s and Microsoft would probably still be pushing their BASIC interpreter.
You know, way back in the days of the Atari ST and its GEM/TOS tucked neatly inside a mere 192 KB of ROM.
Think about it… resolution is all about video memory, not disk space. Bit depth (# of colors) is all about video memory, not disk space. Sure it was single-tasking, but look at what others accomplished when Atari wouldn’t. Sure, it used a 720 KB floppy, but would making it use a 1.44 MB floppy have added any more code to the ROM?
All in all, if companies coded the way Atari (and 3rd parties) did back in the ’80s/’90s, we’d have computers that didn’t need all the multi-trillion-polygon-pumping video cards just to display some neat “Aero effects” in Vista. We wouldn’t NEED a 3 GHz CPU to run the latest version of an OS. We wouldn’t need hundreds (nay, thousands!) of megabytes of RAM to run an OS. And we sure as blazes wouldn’t need disk drives in the 100 GB+ range.
Wouldn’t it be nice to be able to get back to the simplicity and quality of the 80’s? When coders were respected for really pushing the limits of hardware instead of expecting the hardware to increase, just to run the app or OS?
It takes people willing to code straight to the metal. Hand-tuned Assembly. Code to every CPU feature. Squeeze out every cycle of CPU processing power you can. Save every byte. Fine tune every single bit.
Do we REALLY need a Pentium-class computer and an AGP card to view web pages fast or play games at decent frame rates? I think not. In fact, I’d love to prove that theory… but I doubt anyone would be willing to take me up on that challenge.
I propose that a 486DX-66 and a decent 2-4 MB PCI video card, if coded efficiently enough and tightly enough, could display today’s web sites with acceptable performance. Couple it with a really efficiency-tuned OS (I propose working with Minix 3, if it’s got a web browser; not sure). Get rid of all the legacy code. Get rid of all the unnecessary drivers.
And tweak the (!!!) outta that OS, til you can’t squeeze another CPU cycle out of it nor get any more bandwidth outta the video card, come hell or high water.
People are finding ways to make old outdated 8-bit computers like the C64 and Apple II do web browsing. Computers that are so limited by today’s standards, it’s a frickin’ miracle they can even do it!
Why can’t we take a nice 486DX-66 with a 2-4 MB PCI video card and an efficiency-tweaked OS and prove just how much you can REALLY get out of old x86 hardware, if you really tried?
All it takes is some determination. And a willingness to push the envelope of what others think “just can’t be done”.
In fact, if anyone is interested, I will pay $500 to see this goal met. Just for the sake of seeing this actually accomplished. It’s not that much money, but part of the value gained is in saying… “We actually did it!”
Luposian
As someone who was also around back then, I think that you are, to some degree, looking at the world through rose-colored glasses. Don’t forget that we also were running 640×480, if we were lucky. 300,000 pixels. Most likely 8-bit. Or 300K. 1280×1024 is 1,300,000 pixels. At 32-bit, that is 5,200K. Oops. Just squished the performance of that 2 MB video card. Oh, and try transferring 5 MB of rendered data across that 33 MHz PCI bus. Oops.
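(Spelling that arithmetic out, assuming 1 byte per pixel then and 4 bytes per pixel now:)

```cpp
// Framebuffer size then vs. now, at 8 bpp and 32 bpp respectively.
#include <cstdio>

int main()
{
	const double kib  = 1024.0;
	const double then = 640.0 * 480 * 1 / kib;    // 8 bpp  -> 1 byte/pixel
	const double now  = 1280.0 * 1024 * 4 / kib;  // 32 bpp -> 4 bytes/pixel
	std::printf("640x480x8:    %.0f KiB\n", then);          // 300 KiB
	std::printf("1280x1024x32: %.0f KiB (%.1fx more)\n",    // 5120 KiB, ~17x
	            now, now / then);
	return 0;
}
```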
But, let’s keep going… 32 bit icons take 4 times the disk space and 4 times the rendering. Text is now (unlike then) sub-pixel precise, scalable and anti-aliased. MUCH nicer than the bitmapped fonts from days of yore.
Let’s move onto gaming. Let’s ignore 2D games – I will agree that they can be done without effort on (pretty much) any machine. PocketPC machines, these days, can run 2D games. But for 3D, that is another story. Now, the OS and the coding of same have no effect on the number of polygons that you need. That is purely a function of how round and realistic you want your image to be. How nice you want the effects to be. Smoothness of the explosions, etc. Back 15 years ago, we put up with an awful lot because it looked sort of 3d and it was better than anything that we had ever seen before. Now, we want photo realistic. That’s just *NOT* going to happen on that 486. Never did. Not for $500, not for $500,000.
Finally, let’s talk about web browsers. You said it yourself: “Computers that are so limited by today’s standards, it’s a frickin’ miracle they can even do it!” The issue isn’t the OS or the web browser. The issue is the standards that said web browser has to implement. HTML, JavaScript and CSS were never designed to be what people are using them for. Not even close. How is it that the DIV tag, originally designed for a division of text within the document, became the AJAX container class of choice? JavaScript is a “real” interpreted language that has to live entirely within the browser with complete access to the entire DOM tree. That means that no matter how much better an organization or data structure you THINK you can come up with, you are still held to what the W3C came up with 8 years ago.
I will not say that today’s OS designers, especially in the Windows camp, are doing the best possible work. Nor will I say there isn’t optimization work still to be done in Mozilla and (especially!) IE. But I would not choose to go back to ancient hardware. Nor do I think that we should spend tens of thousands of hours hand-optimizing for processors that aren’t made anymore. It makes no sense at all. Should we write the fastest, most efficient code possible within the constraints of what makes sense? Sure. But hand-coding asm is a VERY hard skill, and getting harder with the newer processors. It isn’t, generally, worthwhile to do so. It is better to write the code, profile it and optimize the pieces that slow you down instead of trying to perfect every piece.
You don’t check the mayo on your sandwich with a micrometer to make sure that it is uniform. In the same way, you don’t need to hand optimize every line of code. You write, measure, improve, test.
I find it interesting that you’re adding values and figures I never even stated. Displaying a web page decently does not mean you HAVE to run at 1280x1024x32.
Let’s assume a bare minimum of 640x480x8 (256 colors), forgetting about whether you have to scroll around for 6 weeks to see the entire page.
Let’s not care about all the fancy glitz of today’s pages with their Flash animations and whatnot. Let’s stick to just straight, 100% standardized HTML. No special Microsoft code whatevers and no 3rd party junk. We’re only interested in viewing pages the way they *should* be viewed, not the way they ARE viewed. We’re going after “Content”, not “Complex”.
Using a Text-based browser would simplify things even further, but that’s a little too primitive for today, I think. You want SOME modern conveniences, like scroll wheel control and scroll bars, etc.
We’re talking about Web browsing. Not gaming. The goal is to see if you can get around a web page of today, with just a 486DX-66 and a 2-4 MB PCI (not AGP) video card, smoothly, without all the “frame-skipping” (where you can see the page being updated as you scroll up or down).
I would like to see if an old (nay, ancient!) computer, by today’s standards, can be used to browse a modern web page smoothly with that amount of hardware. Nothing more. Nothing less.
Luposian
I so HATE 640×480 pages, even on a 640×480 screen! Keeping it the OLD way would mean no YouTube, which I love and where I have found so many videos of old bands I have been looking for for years. No Jobs live videos! So much that makes life easy for me, like Google Maps with satellite view, would not be around.
I do agree that some of today’s coding could be a bit tighter, but let’s not go back to the beginning! I have come too far to go back!!
Even on a 1280 display, the YouTube video is still only a quarter of my screen. Is there a way to blast the video content area to fill the screen?
Same thing on most news sites, the text area is pretty much only a half or even a third of the screen width. The advertisers own the rest of the monitor.
If my TV or my books did that to me, I’d throw them in the trash.
Yes, if you go to YouTube, the bottom-right control on the video will make the video full-screen. If the native res of the video is quite small, and your monitor is quite large, it will look like rubbish, but generally it looks pretty decent at 1280×1024.
A lot of websites are designed for 1024×768 or smaller still, but as large widescreen monitors become more popular and widespread, that might change.
If you have a 30″ monitor at native 2560×1600, well you are going to have issues no matter what you do.
Most LCDs run standard at higher than 640×480. They look really bad if you run them at non-native resolutions.
Now, as far as web pages, where do you draw the line? CSS? JavaScript? Or are you talking about plain HTML? Do you want to be able to use Google Maps? SSL? I agree that a lot of the technologies out there are spurious, but the baby shouldn’t suffer for the bathwater.
OK, so you can build an embedded web browser, hypothetically, on a processor that you can’t buy anymore, with a video card that you can’t buy anymore, that can’t view some percentage of today’s web pages. Help me understand why you would want to do this?
Again – I *GET* why you want tighter coding and performance. I was one of those ASM optimizing geeks. But sometimes the good old days are better left back then.
“Most LCDs run standard at higher than 640×480. They look really bad if you run them at non-native resolutions.”
Did I say anything about using LCDs? You keep jumping to different ‘brick walls’ (problems) that aren’t even there! Do you use an LCD monitor with a C64 running some nano version of TCP/IP and a micro text-based Web browser? No.
I am looking to simply prove if what I have never yet seen done CAN actually BE done. Nothing more. Nothing less. With the most minimal amount of hardware possible. No 1280x1024x32 resolution. No LCD monitors. No glitzy glam graphics whatevers or anything like that.
“Now, as far as web pages, where do you draw the line? CSS? JavaScript? Or are you talking about plain HTML? Do you want to be able to use Google Maps? SSL? I agree that a lot of the technologies out there are spurious, but the baby shouldn’t suffer for the bathwater.”
Have you ever tried to view Web pages using a 486DX-66 with a 2-4 MB video card? It ain’t pretty. It stutters. It “page flips” (where you SEE the page updating as you’re scrolling up or down).
All I’m proposing is getting down as low as possible (as “featureless” as possible), to see where the threshold lies. When we attain that, we simply add a feature and try again. We see just how far we can go before web browsing becomes “annoying”, because the pages either take too long to load and/or you can see the pages “updating” as you scroll.
I think it’s all about making use of every byte of system memory, video RAM, CPU feature, and graphics card feature you possibly can. Make full use of every bit of bandwidth you possibly can.
“OK, so you can build an embedded web browser, hypothetically, on a processor that you can’t buy anymore, with a video card that you can’t buy anymore, that can’t view some percentage of today’s web pages. Help me understand why you would want to do this?”
What part about “people making a C64/Apple II, etc., able to web browse” don’t you understand? THAT’S why I want to do it… to see if you CAN do it! This is about making an old, outdated, POS-by-today’s-standards computer usable for browsing the web… smoothly. We’re not just trying to web browse… that’s already possible on a 486… just use Windows 95. I’m talking about web browsing SMOOTHLY, like a modern system can. I’m talking about web browsing QUICKLY, like a modern system can.
And it’s a known fact the *OS*, itself, can be a major contributor to the problem/solution. How do I know this?
I have a 233MHz Apple G3 AIO (“Molar”). I install MacOS 9.2.2 and the latest version of the “iCab” web browser. Pages sit and stall and take *forever* to load/display.
I install MacOS X 10.2.8 and Safari (this is the first version of MacOS X I consider actually the BEGINNING of its line; everything else before 10.2 was “beta test”, in my opinion). And web pages take seconds to load/display. Exact same hardware… but a different OS and web browser (hmm… maybe I should try the MacOS X version of iCab for a true test). And the OS/web browser take up more RAM and disk space, no less.
“Again – I *GET* why you want tighter coding and performance. I was one of those ASM optimizing geeks. But sometimes the good old days are better left back then.”
There were no “good old days” as far as the PC was concerned. There was simply “usable” and “more usable” as time progressed. From Windows 3.1 to 95, to 95 w/ USB support (950b), to 98, to 98SE, to Me (oops, big “two steps forward, one step back” blunder there…), to XP, etc.
CPUs went from 4.77 MHz to 8 MHz, and on up, but CPU technology is still rooted in the XT days and the “640K limit”, to my knowledge. Everything is just a vastly faster 8088 with more features. This can be tested by trying to run DOS 3.3 or earlier. Or Minix 1.7, which ran on XTs. Or even some of those ancient EGA games that, if you try running them, will play so fast you’re dead before you even blink.
I simply want to see… IF we really tried, as best we could, whether it’s possible to do some of the modern stuff we take for granted now on hardware we thought was incapable of it, because we simply allowed technology’s advances to take up the slack of lazier programming.
And programming got lazier, because we no longer HAD to code tightly, to save disk space or get the most out of a CPU or to fit into small amounts of RAM, etc. We found out, early on, that technology was advancing so fast, we could slack off on the coding quality and the application would run about as fast, on newer hardware.
I simply want to see, if we went back to “the good old days” of really tight, quality coding, whether we might not find out just how much is still to be discovered, speed-wise, in some really old hardware we’ve long since written off… because of laziness!
People are doing it with old 8-bit computers for no other reason than to accomplish the “impossible”. A 486 is so far beyond the power of a 6502 or similar 8-bit CPU, it’s not even funny. So why, again, are we not trying to do the supposedly “impossible”?
Luposian
I so hear you Luposian
You could implement your idealized virtual PC on top of any usable OS or even DOS and code to the basic events and file system. There isn’t any real need to call on anything in the host OS that isn’t also available in some form on every other platform and is truly standard & open.
Build a desktop environment you want to use and deliver a much smaller, concise API to its apps. You’ve still got your host OS for the bloatware. I’m sure 95% of the platform could be plain, portable C/C++ and plug into other portable open source libs, and by default all your apps would be platform-neutral. This might sound like reinventing the JVM or .NET, though.
I would forget about asm; any modern compiler will do better. Asm might let you write smaller “small” apps, but it requires an order of magnitude more attention to detail and is probably only useful in codecs and very low-level stuff.
I tried BeOS and thought it was great. Nice and fast. It could have a fighting chance today if one of the console makers used it. With virtualization you could still run another OS to use apps that it doesn’t support, so who needs backwards compatibility???
About a third of the way through the bottom YouTube window at http://www.bitsofnews.com/content/view/5945/ there is a very nice demo of “Turning Pages” under BeOS. Synchronistically, the back page of the September 2007 MSDN Magazine is the {End Bracket} column titled “Turning the Pages with WPF”. This can be found online at
http://msdn.microsoft.com/msdnmag/issues/07/09/EndBracket/
Following the links to the British Library, what ended up in my favorites was
http://www.bl.uk/ttp2/ttp1.html
You will likely need to install the 30 MB .NET 3.0 before running it. Turning the pages of Leonardo da Vinci’s “Codex Arundel” IS cool – but the BeOS demo seemed way smoother, as one might expect from their blank-piece-of-paper system.