“So what should the OS vendors be doing? I can think of five things that operating system developers should look at having part of the OS.” Read Brad Wardell’s editorial here.
Can’t say that I really agree with all of the points…
1. Seamless distributed Computing….
In theory, it sounds pretty nice. In reality, the latency between systems is prohibitive for farming out threads over the network, just based on distance and speed of light issues. You get into ‘ms’ instead of ‘us’/’ns’ timings. Also, this only really makes sense in programs that are highly parallel, with sufficient work to be done on each execution thread to justify the additional latency.
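To put rough numbers on that, here's a back-of-the-envelope sketch (in Python, with purely illustrative figures, not measurements) of when shipping a task to another box could pay off:

```python
# Back-of-the-envelope: when is farming a task out over the network worth it?
# All figures below are illustrative assumptions, not measurements.

LAN_RTT_S = 0.5e-3        # ~0.5 ms round trip on a typical LAN ('ms' territory)
LOCAL_DISPATCH_S = 5e-6   # ~5 us to hand work to a local thread ('us' territory)

def worth_farming_out(task_seconds: float) -> bool:
    """Only ship a task if its compute time dwarfs the extra network latency."""
    overhead = LAN_RTT_S - LOCAL_DISPATCH_S
    return task_seconds > 100 * overhead   # demand ~100x the added overhead

for t in (1e-4, 1e-2, 1.0):               # 0.1 ms, 10 ms, 1 s of work
    print(f"{t:>8.4f} s task -> farm out? {worth_farming_out(t)}")
```

With those assumptions, only tasks in the tens-of-milliseconds range and up are worth shipping, which is exactly the "highly parallel, sufficient work per thread" caveat above.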
2. Seamless distributed file systems
This really seems like a mess waiting to happen. I guess I’m too much of a perfectionist, and too orderly, to like “Spotlight”, let alone a system that could require me to need it. Also, when you distribute the file system in such a manner, you run the risk of partial availability of data based on system availability. This also introduces another layer of complexity, which increases the challenges of systems administration.
3. Global User Management
This is not the responsibility of the OS. A global user management system would be a service, and the OS should be able to interact with one as part of a standards-based authentication system.
4. Universal Environments
This is really just an extension of #3, with added support for preferences. The file location stuff is really a migration away from locally attached storage to network-attached storage or storage area networks (NAS/SAN).
5. Component Based Systems
DING DING DING!!! We have a winner!!
This is EXACTLY what the OS should provide: extensive frameworks for developing applications. This includes access to resources (memory/disk/CPU), drivers, and sound & video frameworks. It also includes authentication frameworks, file system abstractions, and grid computing frameworks.
Examples….
Authentication Frameworks – This could include components for Active Directory, Kerberos, local authorization, LDAP tie-ins, etc. It could also include the “Global User Management” component. The end result is a layer of abstraction for the user: they don’t need to know where the authentication is taking place.
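As a rough illustration of what such an abstraction layer could look like (a minimal sketch; the backend names and interface here are hypothetical, not any real OS API):

```python
# Hypothetical sketch of a pluggable authentication framework.
# The application calls login() and never learns *where* the check happened.

from abc import ABC, abstractmethod

class AuthBackend(ABC):
    @abstractmethod
    def authenticate(self, username: str, password: str) -> bool: ...

class LocalBackend(AuthBackend):
    """Toy local-account check; real code would verify hashed passwords."""
    def __init__(self, users: dict):
        self._users = users
    def authenticate(self, username, password):
        return self._users.get(username) == password

class DirectoryBackend(AuthBackend):
    """Stand-in for an LDAP/Active Directory/Kerberos component."""
    def __init__(self, server_uri: str):
        self.server_uri = server_uri
    def authenticate(self, username, password):
        return False  # a real component would bind against the directory here

def login(backends, username, password) -> bool:
    # Try each registered component in turn; the caller doesn't care which wins.
    return any(b.authenticate(username, password) for b in backends)

backends = [LocalBackend({"kelson": "hunter2"}),
            DirectoryBackend("ldap://directory.example")]
print(login(backends, "kelson", "hunter2"))   # True, via the local component
```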
File System Abstractions – This really plays in with NAS and SAN, providing file storage resources on the network. The user does not need to know if the data is locally stored or sitting on a remote server.
The OS should provide the components to make these features possible, but should not be required to be the sole provider of the features. In fact, the UI should not be part of the “OS”, but should utilize the components provided by the OS to provide the user management/interface service.
Yes, I understand a lot of this is how things work today. It just needs to become more seamless.
– Kelson
Now you might be thinking, “Well if you think these ideas are so great, why don’t you do them?” The answer is, only the OS vendor can, as a practical matter, do this. If a third party makes these things and it’s successful, it’s only a matter of time (probably one version) before the OS vendor puts in one of these on their own, wiping out the “low hanging fruit” part of the market. As soon as some third party, for instance, put out a really good distributed computing product that “did it right” and started to make good business that targets consumers (DWL: Armtech is not a consumer product and isn’t what I’m talking about), you could be assured that the next version of the OS would have some basic implementation of this put in. And the OS vendor’s fans would chime in, “That should be part of the OS anyway!” In short, there’s no business case for a third party to invest the money to develop these things because the pay-off isn’t there.
Wow.
This needs to be required reading for anyone dealing with Microsoft. Next time they or one of their front groups start shilling about “innovation”, someone needs to counter with a link to this article to demonstrate the chilling effect that Microsoft’s (and to a lesser extent, Apple’s) monopoly position has on third-party developers. Users, lawmakers, judges, and corporate purchasing managers need to understand that Microsoft’s monopoly and underhanded tactics have destroyed independent developers’ incentive to innovate. Their desktop computing experience could suck far less if Microsoft was required to document *all* of its API calls, and prevented from using the massive profits from Office upgrades to undercut third-party products.
The closed source vendors should be looking at Linux. It has managed to keep up on a tiny fraction of the resources. It does this through closed-source.
This is obviously a typo.
***
Interesting ideas, but when I’m not at my computer, my computer is turned off. Global identification or not, I can’t actually get at my files. So to work, we’d need some sort of remote network store that’s always online and can be accessed from anywhere. With that in mind, it starts to sound more and more like a much larger version of the SunRay idea (or even Multics, if you want to go back that far). Maybe I’m one of the “privacy nuts”, but I really don’t like the idea of my data being under someone else’s control.
Plan 9. ’Nuff said.
I don’t know about other aspects of distributed computing, but I’ve always wondered why machines can’t share memory for swap purposes. Obviously, this only works on small completely trusted networks (wired), but it really seems like a good idea.
Average seek time on a modern IDE disk is probably around 9 ms or so. For all practical purposes when comparing to disks, memory latency is instantaneous (yeah, it takes some amount of time, but nowhere near the ms range). So, if one machine runs out of memory and needs to swap, it would make more sense to use available memory on another trusted machine. I just checked, and pinging another machine on my network takes about .2-.3 ms. That’s a lot better than disk latency. On a gigabit network, the transfer rate between memory on different machines is higher than disk access on the local machine too. Why doesn’t someone do this?
Oh, and don’t respond with security issues. I know the issues, but there are situations where all the machines on the same wired, local network are trusted.
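For what it’s worth, the arithmetic does seem to favor remote RAM. Here is a quick sketch using ballpark figures like the ones above (treat it as an estimate, not a benchmark):

```python
# Rough comparison: swapping a 4 KiB page in from local disk vs. from a
# trusted peer's RAM over gigabit Ethernet. Ballpark assumptions only.

PAGE_BYTES = 4096
DISK_SEEK_S = 9e-3            # ~9 ms average seek, 2005-era IDE disk
LAN_RTT_S = 0.25e-3           # ~0.25 ms ping on a quiet wired LAN
GBE_BYTES_PER_S = 1e9 / 8     # gigabit Ethernet, ignoring framing overhead

disk_latency = DISK_SEEK_S                                # seek dominates
net_latency = LAN_RTT_S + PAGE_BYTES / GBE_BYTES_PER_S    # RTT + transfer

print(f"local disk : {disk_latency * 1e3:.2f} ms per page")   # 9.00 ms
print(f"remote RAM : {net_latency * 1e3:.2f} ms per page")    # ~0.28 ms
```

That is roughly a 30x win per page, before counting protocol overhead on the remote side.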
Because, not counting all the other things, the user shell is completely replaceable by another user-level process, just as it was under 16-bit Windows 3.1. It’s true that things that rely on certain aspects of the expected shell may not work anymore, but the reality is that Windows does NOT absolutely require Windows Explorer to run as the user shell. In fact, Windows allows you to replace the logon module completely with your own logon module: been there, done that. So, his point there is only valid inasmuch as it comes to the payoff, in that people have something “good enough” for most cases already built in, even though it isn’t perfect, and Microsoft likely would swallow up someone’s third-party idea for that.
It’s funny that, a paragraph after talking about how naive everyone was in the 90s, he says the OS should fling my data all over the globe. And then I should walk up to an untrusted public computer and type in the only two things needed to give someone complete access to all that far-flung data.
Would this be a great system? Sure, in a lab. Would I trust it to work outside a lab? Hell no. Even with pervasive encryption, someone can still snoop the processor bus and grab the data you’re processing.
In fact, this article just described Linux + LDAP + NFS + OpenMosix. See:
1. Seamless distributed Computing….
OpenMosix makes Linux processes migrate transparently across the Linux machines on the network.
2. Seamless distributed file systems
NFS does network file systems, doesn’t it? If you prefer something more advanced, you can use Coda, OpenAFS, InterMezzo, or others.
3. Global User Management
OpenLDAP?
4. Universal Environments
Within a local network you can do this with NIS + NFS, or LDAP + NFS, or other schemes.
5. Component Based Systems
Linux (and the *BSDs) are the kings of modularity, because they are developed in pieces by many different people. There is no proprietary system with total modularity like Linux.
>For instance, in Windows I get 5 choices on how to view a folder (icon, tile, thumbnails, details, and list). Third parties should be able to extend this (I can think of a dozen ways I might want to display data in a folder).<
Well, that’s been possible since Windows 95, I think. Just write a shell extension. By providing your own IShellView implementation you can present the data just like you want. E.g. this is done by Windows itself: the filmstrip view for folders containing pictures is such a shell extension.
> Because, not counting all the other things, the user shell is completely replaceable by another user-level process
And you obviously have no clue who Brad Wardell is. He is the guy whose product, Stardock’s ObjectDesktop, replaces the user shell on Windows.
All you people are summing up loose parts that each perform a certain (small) part of his wishes. However, his point is: it isn’t integrated as a whole.
“Linux (and the *BSDs) are the kings of modularity, because they are developed in pieces by many different people. There is no proprietary system with total modularity like Linux.”
QNX. It has far better modularity than any Linux system will ever have. Why? Replacing a system component (drivers, filesystems, whatever) takes no reboot, unlike Linux. A microkernel is far superior to a monolithic kernel when it comes to modularity.
I more or less agree with Bill Allen. He didn’t mention KDE, though…
> 5. Component Based Systems
KDE. KParts/DCOP/etc. mean that you can offload a lot of the work of doing things to OS pieces designed for it… and of course, the underlying *nix system will have components like ImageMagick available, etc.
I can’t imagine anything much more component-based than KDE over *nix.
On the other hand, I remember the Amiga well… which KDE/*nix Distributors could learn some things from.
“Well, that’s been possible since Windows 95, I think. Just write a shell extension. By providing your own IShellView implementation you can present the data just like you want. E.g. this is done by Windows itself: the filmstrip view for folders containing pictures is such a shell extension.”
Feel free to point out some extensions that third parties have actually made.
We’ve tried to add additional views, such as a tree view, and it’s non-trivial and flaky. It’s possible to create additional IShellViews, but getting one to display as another view item throughout the OS requires immense work.
But I’m prepared to be educated: if you can point me to a site that has a third-party shell extension for an additional folder view that works in file dialogs, folders, etc., let me know.
To be fair to Microsoft, WinFS has the potential to become this.
To be fair to BeOS, BeFS is this right now. It works perfectly and is more intuitive than anything I could ever imagine being brought up by Microsoft.
Furthermore, there is no need to be fair to Microsoft, as they are not fair to their customers and competitors.
I can’t imagine anything much more component-based than KDE over *nix.
Except KDE is not an operating system. It’s only a desktop; the author here is focusing on the operating system as a whole, not just one part of it.
“1. Seamless distributed Computing….
In theory, it sounds pretty nice. In reality, the latency between systems is prohibitive for farming out threads over the network, just based on distance and speed of light issues. You get into ‘ms’ instead of ‘us’/’ns’ timings. Also, this only really makes sense in programs that are highly parallel, with sufficient work to be done on each execution thread to justify the additional latency.”
Pro audio is already taking advantage of this, most notably with Apple and its Logic 7 app, which can use nodes to enhance plug-in capability. There’s also an app for WinXP that does this.
#5 Component Based operating systems
Not really sure why he included OS X in this section. Darwin is free for anybody to download and tweak. He didn’t really give any examples of what he meant for OS X. Apple does more or less lock you out of just about everything above the core OS, but the access is there.
I more or less agree with Bill Allen. He didn’t mention KDE, though…
> 5. Component Based Systems
KDE. KParts/DCOP/etc. mean that you can offload a lot of the work of doing things to OS pieces designed for it… and of course, the underlying *nix system will have components like ImageMagick available, etc.
I can’t imagine anything much more component-based than KDE over *nix.
Yes! fish:// in KDE rocks. Have ssh running on your machine? Go to another machine and open fish://[email protected] in Konqueror, and you can browse, stream, copy, delete, and generally use your files. It’s pretty cool, and there are many more components that can be used virtually anywhere in KDE apps.
Okay, you may be right. But I wouldn’t be surprised if it’s possible, just undocumented. Take XP’s “Common File and Folder Tasks” bar, for example. According to Microsoft this one can’t be extended by shell extensions, but NSELib (www.whirlingdervishes.com) allows you to do exactly that. NSELib uses undocumented shell interfaces and SFVM_* messages to do this.
It should make coffee too.
I can’t say I was very impressed with this analysis. Some of his ideas are good, but they are already being implemented, and the others have good reasons why they either shouldn’t be done or are very difficult to do.
1. Yes, grid computing is a good idea. This is why both Apple and IBM are contributing to the Globus toolkit project, and why OS X 10.4 includes a grid computation client. Admittedly we aren’t up to where he wants us to be yet; however, it takes time to get the architecture right when you need to defend against unscrupulous people just stealing your processor cycles, but without requiring specific user setup (like the OS X client does).
Besides, the truth is home users really benefit quite little from grid computing. Most home user tasks are I/O bound, and in particular spend a lot of time writing things to the screen. As anyone who has run tasks remotely in X11 will tell you, it is usually a lot faster to run all your tasks locally than to farm them out to remote machines (hacker tasks like recompiling kernels don’t count).
2. Seamless distributed file system, huh? Alright, so you take your fancy new computer out of the box and set it up. Now that you have a reliable computer, you can start writing your novel without worrying about the old disk drive fritzing out on you. Oh shit, that seamless distributed file system was keeping the data on your old computer’s hard drive after it seamlessly created that distributed file system.
Of course, we might seek to use redundancy to prevent a critical failure like this. Now things suddenly get hard if you want to keep the various redundant storage parts synchronized in case of a crash. Not to mention the increased disk usage for the redundant data.
This does not even address the security concerns. I bet most people have some files on their computer they don’t want anyone on their local network to access. Alright, you say, we will use permissions on files, but then the question naturally arises: who is root for the local network? If there is no root, what happens if you make your friend an account on your local network, he saves a big video file, and forgets his password? It would seem that no one else can even see this file exists, much less delete it. This isn’t even considering how you prevent someone on the local network from running a fake file system client and looking through all the data.
In short, both security and efficiency require that one leave local data on the local machine unless the owner of the machine explicitly joins a distributed file system. To do otherwise risks data loss and a huge security hole. But this is just asking for a nice front end to any of the many network file systems out there. While this would be nice, it is hardly the revolution he wants.
3, 4) Can anyone say huge gaping security hole? Apparently, by default, one has complete access to your home computer based on your online account. Viruses, worms, and trojans could spread even faster, with more opportunity, by piggybacking on people’s desktops (opening your home spreadsheet on the remote computer presumably bypasses the virus scanner on the email server). In short, it appears this makes one’s entire digitized life accessible through one password/service.
No major business nor government customer would tolerate such a feature. The first time someone’s password is guessed and embarrassing secrets are revealed, or malicious data changes are discovered, the public will panic. Also, if the user profile is held on this server, what happens when the server is down, or you want to log on to another machine in your local network and the internet connection isn’t working?
Windows already allows one to do this inside an organization. You don’t think it was a technical problem or a pang of conscience that stopped them, do you?
5) Many of you seemed to like this idea, though mostly I think this is because you gave him too much credit in his word choice. I do not think the author is suggesting a micro-kernel and component-based architecture in this sense (though I have some doubts whether this is really a better scheme anyway), but rather he is suggesting that the user shell and default widgets be component-based.
Well, if the author hasn’t noticed, theming has become a disgustingly common craze in the Linux world, and to a lesser extent in Windows. While a little bit of theming is okay, the extensive component system he wants is not in the interests of software developers or users.
First of all, security is once again a concern. This time not in terms of actual privilege escalation or attacks, but from spyware and adware. Imagine adware that replaced your default double-click with something which made you go through an ad.
Secondly, providing a consistent UI to everyone who uses your OS is both important for user familiarity and a sales point. OS X deliberately offers only two themes to keep a consistent look and feel, and while one doesn’t have to go this far, having everyone’s machines do mostly the same thing when you double-click makes things a lot easier for most people and companies.
What about protected memory? That strikes me as a pretty basic thing I expect my operating system to have.
I don’t think these five technologies are it. Linux can do almost all of this stuff, but no one gets excited by it, because Linux does not do it seamlessly, and also because a lot of these technologies are risky in today’s world. But mainly, this is not stuff to get excited about. It’s like saying, hey, look, we can build a spaceship to the moon (already done that). These are mainly the pipedreams of yesteryear. We need the future, not the past.
On a gigabit network, the transfer rate between memory on different machines is higher than disk access on the local machine too. Why doesn’t someone do this?
Many reasons. One, the latency measurements of an ICMP packet are different from those of real data packets. Memory pages are usually 4K, 8K, or larger depending on the architecture. What is the latency of sending 10 or 20 4K pages across the network?
Then comes the question of cost-effectiveness. You need an entire machine with something like a RAM disk to be the swap partition for the first machine. What happens if the machine with the RAM disk needs to swap? Which would be cheaper: a hard disk, or a dedicated full-blown machine to save a few milliseconds (if at all possible, see above)? Wouldn’t it be cheaper to buy more RAM, or a bigger machine with more RAM?
If you didn’t use a RAM disk, the kernel on the other machine would need to make pages available to the first machine, again potentially inducing swapping of its own.
5. Component Based Systems
DING DING DING!!! We have a winner!!
This is EXACTLY what the OS should provide: extensive frameworks for developing applications. This includes access to resources (memory/disk/CPU), drivers, and sound & video frameworks. It also includes authentication frameworks, file system abstractions, and grid computing frameworks.
This reminds me of GNU/Hurd.
Enough said as well.
Hmm… component-based is the only one I’d feel a desperate need for… of course, as an ex-Amiga owner, I once had just that (when your Exec is more OO than your GUI…)
Of course, component-based at the OS level pretty much means micro-kernel. You can do it with a monolithic kernel, but it’d be a poor imitation; good enough, perhaps (and for server systems, monolithic probably makes more sense… desktops would be better served by a micro-kernel, though…)
He knows Windows very very well and has for about a decade. Go search on his name if you don’t believe it.
LOL! Doesn’t know Windows??? He’s BRAD WARDELL. Go look him up!
“QNX. It has far better modularity than any Linux system will ever have. Why? Replacing a system component (drivers, filesystems, whatever) takes no reboot, unlike Linux. A microkernel is far superior to a monolithic kernel when it comes to modularity.”
Nothing against QNX… it’s great for what it is, and I’d have no problem suggesting it for small-platform embedded systems.
As for Linux, unless you’re replacing the kernel itself, and as long as the part you want replaced is a module, Linux doesn’t need a reboot either.
There was an experimental set of patches that allowed the kernel itself to be swapped out, though there wasn’t much of a practical benefit to that, and so it hasn’t ended up in the main branch.
1. Maybe locally, but then with a special northbridge connection so that you can hook motherboards together and they act like one computer. Or maybe take the PCI-E bus and use those 16x ports, which are now aimed at graphics boards, as slots for add-on CPUs? Every add-on would most likely have to have its own RAM, so that you can throw an entire task to that CPU without having to worry about shared memory bandwidth.
2, 3, 4. Sorry, but I would rather use an external drive hooked up to a USB or FireWire port that I can bring with me to different machines. This drive would then contain my user files and desktop environment (as it most likely can be just another app anyway), preferably able to dump the state of said environment to disk, so that I can just freeze my work environment in one place and bring the drive over to a different one and resume without having to save anything. Combine it with a smartcard that holds your unique ID code and you have a system where you hook up the drive, slot the card, and you’re present. Basically the card’s ID, combined with a code from you or something (stored encrypted on the card), IDs you to any system you try to log into.
5. I’m all for it, but then it would need to go all the way down to the kernel, and be fully & freely documented for both user and developer, so that they can replace any component.
And a general comment at the end: I don’t like automatic tools when it comes to stuff like sharing access over the net, and when manually activated it should turn on only the most basic access unless the user/admin takes time to change that. Anything else is a security nightmare that will make the problems of Windows and MS Office look like a joke.
2. Seamless distributed file system, huh? Alright, so you take your fancy new computer out of the box and set it up. Now that you have a reliable computer, you can start writing your novel without worrying about the old disk drive fritzing out on you. Oh shit, that seamless distributed file system was keeping the data on your old computer’s hard drive after it seamlessly created that distributed file system.
It would never do this. By default it would keep a file local. When copying it to another machine, it would keep a copy on the local machine as well. Of course, it would have a number of settings that the user can control to pick a certain behaviour.
Of course, we might seek to use redundancy to prevent a critical failure like this. Now things suddenly get hard if you want to keep the various redundant storage parts synchronized in case of a crash. Not to mention the increased disk usage for the redundant data.
Disk usage is not an issue, since disks have large capacities and are cheap. If you have a number of large files (such as movies), you could always mark them as local-only and disable copying between machines to preserve disk space.
Things do not get hard. Synchronization was worked out a long time ago for databases, files, etc. There are all kinds of synchronization algorithms implemented and working today. How else do Google and other large sites keep their data distributed across a number of machines?
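For a flavor of the simplest possible scheme, here is a toy last-writer-wins reconciliation (real systems use far more careful machinery, such as version vectors; nothing here is drawn from any particular product):

```python
# Toy last-writer-wins synchronization between two replicas of a file.
# Real distributed systems use version vectors etc.; this is only a sketch.

from dataclasses import dataclass

@dataclass
class Replica:
    data: bytes
    version: int = 0   # logical clock, bumped on every local write

def write(replica: Replica, data: bytes) -> None:
    replica.data = data
    replica.version += 1

def reconcile(a: Replica, b: Replica) -> None:
    """When the machines see each other again, the higher version wins."""
    winner = a if a.version >= b.version else b
    a.data = b.data = winner.data
    a.version = b.version = winner.version

home, laptop = Replica(b"draft"), Replica(b"draft")
write(laptop, b"chapter one")    # edited offline on the laptop
reconcile(home, laptop)
assert home.data == b"chapter one"
```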
This does not even address the security concerns. I bet most people have some files on their computer they don’t want anyone on their local network to access. Alright, you say, we will use permissions on files, but then the question naturally arises: who is root for the local network? If there is no root, what happens if you make your friend an account on your local network, he saves a big video file, and forgets his password? It would seem that no one else can even see this file exists, much less delete it. This isn’t even considering how you prevent someone on the local network from running a fake file system client and looking through all the data.
Security is an issue for everything today. We already have to deal with it for software, e-mail, etc. A DFS is no different. Of course, a DFS has to include solid security logic, otherwise it would cause more harm than good. As with everything, there are a number of details, again applicable to all software, not just a DFS.
Considering that networks are already everywhere and computers are already connected, it is too late to complain about security. Whether you use FTP, NFS, or a distributed file system, the same security issues are present. The difference is that a DFS could possibly automate some security operations by configuring permissions, user keys, etc.
In short, both security and efficiency require that one leave local data on the local machine unless the owner of the machine explicitly joins a distributed file system. To do otherwise risks data loss and a huge security hole. But this is just asking for a nice front end to any of the many network file systems out there. While this would be nice, it is hardly the revolution he wants.
That is exactly the point. Us geeks can deal with NFS, but the majority of users cannot. Besides, NFS is a low-level layer. It doesn’t allow me to bring up the document with the title XYZ; it only knows paths and file names.
What he is saying is that we should have a layer akin to NFS built in, so that we don’t have to bother with NFS setup and config, which is too difficult for most users. Number one, most users don’t have the skills to deal with low-level stuff such as FTP and NFS. Number two, most of the time people look for files based on their metadata, such as title, author, album, etc., not file names and paths.
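A tiny sketch of that metadata idea (here the “title” is naively faked as a file’s first line; a real layer would use proper metadata extractors):

```python
# Minimal sketch of metadata-based lookup: find a document by its "title"
# instead of its path. The title here is naively taken from the first line.

import os

def build_index(root: str) -> dict:
    index = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    title = f.readline().strip()
            except OSError:
                continue           # unreadable file: skip it
            if title:
                index[title] = path
    return index

# Usage:
#   index = build_index(os.path.expanduser("~/Documents"))
#   print(index.get("XYZ"))   # the path of the document titled "XYZ"
```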
http://www.dekksoft.com/index.html
1. Maybe locally, but then with a special northbridge connection so that you can hook motherboards together and they act like one computer. Or maybe take the PCI-E bus and use those 16x ports, which are now aimed at graphics boards, as slots for add-on CPUs? Every add-on would most likely have to have its own RAM, so that you can throw an entire task to that CPU without having to worry about shared memory bandwidth.
You mean like the Sun PCI cards? Sun has been doing that exact thing for years now. They have a PCI card with memory and an x86 CPU that you can use to get a Windows or x86-based environment on a Sun box.
1. Maybe locally, but then with a special northbridge connection so that you can hook motherboards together and they act like one computer. Or maybe take the PCI-E bus and use those 16x ports, which are now aimed at graphics boards, as slots for add-on CPUs? Every add-on would most likely have to have its own RAM, so that you can throw an entire task to that CPU without having to worry about shared memory bandwidth.
You can do that with RISC PCs. They have two slots for CPUs: you can fit an ARM7 card (that’s just the CPU), a StrongARM card which takes extra RAM, or an i486/586 card.
A bunch of computers of old let you put in a second CPU of a differing type (e.g. a Zilog Z80 and an 8086, or a 6502 and an 8088). That rarely happens anymore.
man insmod
There is quite a bit you CAN do. Unfortunately, very few people do it. If they did, we would live in a world where the Explorer sidebar didn’t suck so hard.
I agree with #4:
Points 1, 2, 3, and 4 of the original article were tentatively answered by the Plan 9 project.
The Plan 9 documentation is very enlightening reading, even nowadays.
Those ideas would be beneficial if imported into other, more current projects (DragonFly?).
The other strong point of Plan 9 is the eradication of any problems with cross-compilation and makefiles.
He mentions all of these things that Linux/*nix systems have had for years as if no one has them.
But when you point this out to him, as one guy did (the second post on that site, I think), he starts yelling about how user-unfriendly Linux/*nix is. It’s obvious he’s not looked at it in a while, or he expects a program to pick and probe the computer to death so he doesn’t have to use his brain (though is this what we need? Is the world a bunch of brainless people? Sadly, I’m starting to think yes.)
“And so in the past few years the two major OS vendors, Microsoft and Apple have largely taken on the role of tossing in features into the OS that third parties had already provided or that the other had managed to come up with on its own. And then after that the Linux vendors then try to mimic that”
I believe it was MS that mimicked KDE’s ability to do window themes and styles. Oh, and not to mention all the other third-party things other people were doing, though I believe that was also mentioned later at the end.
“Your comment almost makes me inspired enough to write “Why Linux advocates don’t get it”. When I say features part of the OS, I mean part of the OS, features that an end user can make use of.”
My comment to this comment is: please do. I REALLY!!! want to know what I’m not getting.
This comment is really funny! It’s made as a reply to the previous comment I mentioned, the one I took the above quote out of.
“Your exactly right, it would already been used today if people weren’t not so selfish and obsessed over their money.”
So if people were not so interested in an OS that THEY can customize and help develop, MS or Apple would have had the cool stuff this free OS can do?
Overall this article should be renamed to “5 things Windows Should have.”
I don’t like most of these ideas, especially the ones relating to networked files.
I want an OS to be FAST and efficient. I don’t want or need these bells and whistles.
Did you even read the article? Linux does not have any of the features he mentions. Plan 9 could probably do #1, and yes, KDE could partially do #5, and only partially, because even KDE has its limits; there is no such thing as KDE drivers, for example. It’s only a top-level DE, nothing more.
As for your claim that: “I believe it was MS that mimicked KDE’s ability to do window themes and styles. Oh, and not to mention all the other third-party things other people were doing.”
1) This guy is the “other third-party things other people were doing”.
2) There were window themes/styles/skins long before KDE.
And I could turn the tables and point out that KDE GUI stuff like SuperKaramba is a blatant ripoff of Samurize, with the subtle difference that Samurize actually has a usable configuration editor, is a lot faster, and is much less resource-hungry than SuperKaramba.
So please don’t try to convince people that Linux/KDE/whatever is the second coming. Linux has its merits, but not those.
Component Based Systems
Does that include things such as an HTML text rendering engine or a multimedia (audio/video) framework? ^_-
I think the approach you are proposing for #2, 3, 4 is better than Wardell’s. It’s far more secure, and could be combined with some identity-checking system, like fingerprint recognition.
… places the focus on the user and not on the OS or the programs that the OS is running.
For instance… the user opens app1, then app2, and then app3. The user changes the focus to app1 and starts typing some content. When app2 and app3 finish loading, in the current crop of OSes, the focus is brutally removed from app1 and given to app2 and then app3.
The ideal OS would instead keep the focus on app1 and simply “mark” app2 and app3 as “ready to go”.
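Something like this, say (a minimal sketch of such a focus policy; the window manager API here is made up):

```python
# Hypothetical focus policy: a newly ready window never steals focus
# while another window is active; it just flags itself "ready to go".

class Window:
    def __init__(self, name):
        self.name = name
        self.demands_attention = False

class WindowManager:
    def __init__(self):
        self.focused = None

    def on_window_ready(self, win):
        if self.focused is None:
            self.focused = win            # nothing active: take focus
        else:
            win.demands_attention = True  # user is busy: just mark it

wm = WindowManager()
app1, app2 = Window("app1"), Window("app2")
wm.on_window_ready(app1)   # app1 gets focus
wm.on_window_ready(app2)   # focus stays on app1; app2 is only flagged
assert wm.focused is app1 and app2.demands_attention
```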
Cheers…
@raptor,helf:
Kinda, yes, but rather than using a special connector, one would use the normal PCI-E bus, as I think it has the bandwidth to handle it. This way you can either run SLI if you’re into 3D for some reason, or use one of the slots to increase the computing power of the box generally.
There was also some talk about a PCI-E connector, similar to USB in scope I guess, that (with the correct support in chipsets) could allow for chaining together different motherboards. The OS should see them as one big computer, without the need for a separate OS running on the different motherboards (like one does today with clusters). The resources would have to be weighted, though, so that higher-priority tasks were always closer to the OS and the user.
@drfelip:
Not a problem, see the part about the smartcard. This holds a key that needs to be present to access any encrypted content on the drive, and at the same time it also carries a “password” that needs to be entered by the user, either via a keyboard or as output from some biometric reader. The string is stored like passwords are stored, with a one-way cipher, so in theory to find out what the password is you have to test all the combos; and if it’s based on a biometric reader, that password can be very long! If the key isn’t entered, then the encryption key can’t be accessed, and therefore any encrypted content on the drive can’t be decrypted. Security in layers. And to log into a remote system, the smartcard, and maybe some certificates stored in the encrypted section, would need to be present. I.e., the more items a person has to steal or copy to gain access, the less likely it is that it will happen.
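The one-way storage part is standard stuff; here is a small sketch using PBKDF2 from Python’s standard library (the card hardware itself is out of scope, and the strings are just placeholders):

```python
# Sketch of the "one-way cipher" password storage described above.
# Only the salt and digest are stored; the password itself never is.

import hashlib, hmac, os

def store(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

salt, digest = store("a very long biometric-derived string")
assert verify("a very long biometric-derived string", salt, digest)
assert not verify("wrong guess", salt, digest)
```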
Power cuts… I don’t want to reinstall my OS, but I do want to continue working where I was a few seconds ago. We’re now in 2005… why is this still not possible?
Get a UPS; then you have about 30 minutes to save your workload.
Still, I wonder what would happen in the world if every stick of RAM had a flash element on it with capacity equal to the RAM itself, so that you could do a state dump to it at a moment’s notice. Combine that with a battery-powered system that would perform the dump the moment a power failure was detected, and you basically have what you’re asking for.
Maybe it’s not necessary to combine flash + batteries; 1-2 minutes of battery should be enough to save all RAM to disk, like some notebooks do in hibernation mode.
Hey Hobgoblin, I see this as a good way to do #2, 3, 4. Is it possible that somebody is developing something similar? Call me naive or idealistic, but I think it’s good to encourage and spread the word about good projects.
I can’t say I know of any, though I have seen some university papers on the subject. But people seem too focused on networking ability, be it using a VPN or not, rather than physically moving the stuff about. Even with a VPN, physically moving it is better, as a VPN can be hard to set up correctly even if you have access to whatever firewall or similar you end up behind, and hell if you don’t. And accessing resources remotely without a VPN over the internet is asking for them to be hijacked.
I have a small plan to build a test system for this kind of setup using Linux, that is, if I can get the money and nail some problems down (like how to handle the login; Linux uses a central password file, while this requires that the passwords are spread all over)…