In an almost indiscernible and confusing article filled with scientific terms that most of us cringe to hear, it is described how in October of 2008 scientists successfully stored and retrieved data on the nucleus of an atom, if only for two short-lived seconds. With this new type of storage, a traditional bit can be both zero and one at the same time, but to understand just how that is possible you will have to translate the article linked above into plain English. After two seconds the data reads back with only about 90% accuracy, and the storage is obviously impermanent, so there are many kinks to work out before atomic storage actually serves a purpose. But give these scientists a couple of decades, and in theory we may one day have nuclear drives the size of today's USB drives (or MicroSD cards, or why not even specks of dust?) that can hold hundreds of terabytes, even pentabytes, of information.
The more “certain” the data on an atom, the more energetic it will be. That is a very large stumbling block.
Ah… but if you use redundancy and poll, say, 100 atoms, all of which had the same bit written to them, the energy level needed for a high likelihood of correctness is reduced.
Well, not necessarily. That would depend on the relation of energy to certainty. It might be more ‘efficient’ to just use one atom with high energy than a whole bunch with very low levels.
If we were talking about atoms in isolation, then you would be correct. We are, however, talking about atoms in a collection that is considered a “Storage Device”.
Considering the need to supply an equal amount of energy to every atom that has data written to it, then as you store more and more information, employing more and more atoms at lower energy levels to ensure the certainty of the data becomes a benefit.
Perhaps what we will end up with is a system designed around the assumption of a full drive of data at a 99.9999% certainty level: the designers decide how much energy they want to peak at and extrapolate how many atom clusters it will take to achieve that. The device can then store information at lower energy levels on the electrons in the atoms until it hits a break-even point, at which point it begins writing redundant data to the rest of the cluster to maintain that certainty level without pulling any more energy from the source.
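As a rough sketch of that trade-off (the energy-to-certainty curve below is invented purely for illustration, and simple majority voting across a cluster of redundant atoms stands in for whatever the real read-out scheme would be):

```python
# Hypothetical illustration only: the energy-to-certainty model is invented, not
# taken from the article; it just shows the redundancy-vs-energy trade-off above.
from math import comb

def per_atom_certainty(energy):
    # Invented model: more energy spent per atom -> a single read is more likely correct.
    return 1.0 - 0.5 * (0.5 ** energy)      # 50% at zero energy, approaching 100%

def cluster_certainty(p, n_atoms):
    # Probability that a majority of n_atoms (odd) read back the bit that was written.
    needed = n_atoms // 2 + 1
    return sum(comb(n_atoms, k) * p**k * (1 - p)**(n_atoms - k)
               for k in range(needed, n_atoms + 1))

TARGET = 0.999999                            # the 99.9999% certainty level from above
for energy in (1, 2, 4, 8):
    p = per_atom_certainty(energy)
    for n in range(1, 402, 2):               # odd cluster sizes only
        if cluster_certainty(p, n) >= TARGET:
            print(f"per-atom energy {energy}: certainty {p:.3f}, "
                  f"atoms needed {n}, total energy {energy * n}")
            break
```

Depending on how certainty actually scales with energy, either a few hot atoms or a large cluster of cool ones wins, which is exactly the break-even point being described.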
And you thought data loss was a problem at the moment.
Reminds me of an old episode of “Maude” with Bea Arthur. It was Mrs. Naugatuck’s (the housekeeper’s) first day on the job and she found a jar that was just “full of some old dust”, so she flushed it down the toilet. She mentioned it to Maude, who replied, in that marvelously dry tone that only Bea Arthur can summon, “That was my 2nd husband”.
Superpositioning exponentially expands the storage capabilities of a quantum data bit or “qubit.” Whereas a byte of classical data, made up of three bits, can represent only one of the eight possible combinations of 0s and 1s, a quantum equivalent (sometimes called a qubyte) can represent all eight combinations at once. Furthermore, thanks to another quantum property called “entanglement,” operations on all eight combinations can be performed simultaneously.
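To put “all eight combinations at once” in concrete terms, here is a minimal sketch (plain Python with NumPy, nothing to do with the silicon hardware in the article): a classical 3-bit value is exactly one of 8 possibilities, while a 3-qubit state is a list of 8 amplitudes whose squared magnitudes give the measurement probabilities.

```python
# Toy illustration of the quoted paragraph, not a model of the silicon-29 device.
import numpy as np

# A classical 3-bit register holds exactly one of 8 values at a time.
classical = 0b101           # the single value 5

# A 3-qubit register is described by 8 complex amplitudes, one per basis state.
# An equal superposition assigns weight to all 8 combinations at once.
qubits = np.full(8, 1 / np.sqrt(8), dtype=complex)

# Measuring collapses it to one basis state, with these probabilities:
probabilities = np.abs(qubits) ** 2
print(probabilities)        # [0.125 0.125 ... 0.125] -- all 8 outcomes equally likely
```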
Oh joy… don’t we have enough trouble with parallel computing without dealing with “parallel bitting”?
Two seconds of data durability is not nearly as bad as you'd think, as long as you're not overly concerned with it being non-volatile. If the power requirements aren't too bad, it can even be battery-backed.
Here's the thing: dynamic RAM needs to be refreshed every few milliseconds (the exact interval has shifted somewhat with process shrinks and other minor changes), and we've been getting along perfectly fine with that mild limitation in terms of data storage reliability. Sure, it would be nice not to pay the time and power overhead of refreshing every so often, but that is the trade-off for the physical space that that amount of data storage takes in a system: static RAM, while faster and not nearly as fragile, requires a much larger amount of chip real estate and costs a lot more to manufacture.
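A back-of-the-envelope sketch of that refresh idea, with the 2-second retention figure plugged in and everything else (names, intervals, the cell-rewrite function) invented for illustration:

```python
# Hypothetical refresh loop; the function names and intervals are placeholders,
# not anything specified by the article.
import time

RETENTION_SECONDS = 2.0                      # roughly how long a cell holds its value
REFRESH_INTERVAL = RETENTION_SECONDS / 4     # rewrite well before the data decays

def rewrite_cell(address, value):
    """Placeholder: re-write a decaying cell with the value we still trust."""
    pass

def refresh(memory, cycles=10):
    # memory: dict mapping cell address -> last known good value
    for _ in range(cycles):
        for address, value in memory.items():
            rewrite_cell(address, value)
        time.sleep(REFRESH_INTERVAL)

refresh({0x00: 1, 0x01: 0}, cycles=2)        # keeps two toy cells alive for ~1 second
```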
I agree that 2 s data durability isn't a big issue if it can be refreshed, but I noticed that the electron reading was correct only 90% of the time, so you'd also need to build in quite a few redundancy mechanisms to get better reliability.
Yes, 90% correct is 100% wrong.
You should look at how much we rely on DVDs and CDs, how error-prone all that digital storage is at the low level, and investigate just how it is that we get such reliable long-term digital storage out of it. Hard drives, too, aren't nearly as perfect as you'd like to think: it's not an insurmountable problem to take a 10% failure rate and make the end result 100% (or 99.99999999999%) correct, but it won't be quite as compact as something that doesn't need error detection/correction methods.
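For a taste of how that's done, here is a minimal single-error-correcting Hamming(7,4) sketch; real CDs, DVDs, and drives layer much stronger codes (Reed-Solomon and friends) plus retries on top of this basic idea:

```python
# Minimal single-error-correcting Hamming(7,4) sketch: spend three extra parity
# bits per four data bits so that one raw read error per codeword can be fixed.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword, possibly with one flipped bit -> corrected 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # checks positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 means no single-bit error detected
    if error_pos:
        c[error_pos - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[5] ^= 1                      # simulate one raw read error
print(hamming74_decode(codeword))     # [1, 0, 1, 1] -- the original data
```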
The 2 s data durability is neither here nor there, as this is just a test system and the scientists working on the project fully expect to hold the data longer as R&D progresses (a possible data durability of years was one suggested projection).
Also, the 10% fallout was due to impure silicon-29 crystals (the electrons in the non-silicon-29 atoms “wobbling” the electrons in the silicon-29 atoms, to be a little more precise). Again, as R&D progresses this will become less of an issue.
try ‘petabytes’…
Yes, I was also thinking that we've moved far beyond pentabytes; I believe you could make that sort of storage at your kitchen counter out of a thimbleful of discrete transistors.
And I thought we'd be getting rid of the need for silicon… Now we need it purer and more isotopically uniform.
That’s what’s called Appropriate for the Generally Knowledgeable Individual Working in the Field or a Related Field, which is fine if we’re talking about a scientific publication (which we can suppose, given the page it’s published on).
I found that perfectly clear and rather interesting.
“Indiscernible” means that one cannot discern it: it cannot be clearly seen. The page was as visible as any other. Thus the article or page cannot be “indiscernible”.
Its point was clearly there to be discerned, too. Transfer of spin state from electrons, where it is very volatile, to nuclei, where it is (relatively) stable. In terms of quantum events, 2sec is an *extremely* long time. It’s a very long time in terms of, say, dynamic RAM refresh, too.
Confusing? Only if you don’t know basic quantum mechanics.
I think the editor has judged this piece unfairly.
Agreed.
And it's a pity the article was summarised in such a way, as I suspect it has put a number of OSAlert readers off going to the source (going by some of the comments criticizing the test figures even though they are just early test figures rather than theoretical limits).
Uh….most people don’t, you know.
In fact, IMHO, the best approach is altering the bonding angles of molecules. The trick here is that without some force acting to cause a change, the state remains valid forever.
Another trick would be to combine both methods, storing a radical charge spike in the nucleus of an atom, with the bonding angle of the molecules acting as an immediate ECC function. Of course, we need to find out which molecule(s) will act the way we want (hint: 1 hydrogen with 1 deuterium (a hydrogen isotope with a neutron)).
The bonding would require, or could hold, a free radical electron to help stabilize the molecule (or it can be done with pressure). The bonding angle of a two-atom molecule is not very easy to determine; adding another atom makes it easy (like, say, heavy water (HO + deuterium)). With this, an extreme bonding angle would show that all bits are “on,” and a very shallow angle would say they are “off.” The ranges will be more or less exact, within probability. To overcome probability's limits, you can test the residual charge in the nucleus, while the device has power and while that charge is still significant enough to be measured.
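Purely as an illustration of that two-step readout (wide angle reads as on, shallow angle as off, with the residual nuclear charge as a tie-breaker), with every threshold and number invented for the sketch:

```python
# Invented numbers throughout: this only illustrates "extreme angle = on, shallow
# angle = off, fall back to a secondary measurement when the reading is ambiguous".

ON_THRESHOLD = 108.0     # degrees; angles above this read as 1
OFF_THRESHOLD = 96.0     # degrees; angles below this read as 0

def read_bit(bond_angle_deg, residual_charge):
    if bond_angle_deg >= ON_THRESHOLD:
        return 1
    if bond_angle_deg <= OFF_THRESHOLD:
        return 0
    # Ambiguous zone: use the residual nuclear charge as a tie-breaker,
    # while the device still has power and the charge is measurable.
    return 1 if residual_charge > 0.5 else 0

print(read_bit(112.3, residual_charge=0.9))  # -> 1
print(read_bit(101.7, residual_charge=0.2))  # -> 0 (resolved by the fallback)
```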
A proper configuration for stability would naturally require a perfect substrate, but also an active layer above the storage layer to keep the free radical electrons tunneling back into their correct places. I think magnets could be used for this, but you would need to be able to create holes and tunnels in the magnets so that the radical electron can travel to the correct place and be accelerated properly.
This will happen, in one form or another, in the next 10 years or less (double that for time to market). All I can say is that I *LOVE* nanotechnology! Heck, it ain't like we are actually doing anything that different from what we are already doing: we could use the rather simple bonding-angle method and see amazing storage abilities.
I can't wait for the NextBigThing(TM) after all this, though: imagine something like a field-constrained proton with a powerful electron attraction. Prevent one of those bits from being on, and you can calculate rather accurately (probability clouds) which position is missing. Not sure how long that would hold up, though; I don't have the equipment needed to test such a thing.
–The loon