First theorized in the 1970s as the fourth basic circuit element, the memristor has finally been given a practical implementation at HP Labs. If manufacturing can be scaled up, memristor technology could become the new standard for computer memory — memory that combines the speed of DRAM, the persistence of Flash, and the bit density of hard drives. In addition, memristors can work as analog as well as digital devices, and hold promise as a basis for building neural networks.
Look at the number of patents covering every possible application of memristors. It’s sick. No one has ever created a practical, working device, yet those greedy bastards have patented every aspect of it.
And this is wrong how, exactly? The work was done in a corporate research environment, with well-paid people doing the work, with expensive equipment. Should they not protect their investment? If not, why would they do the work?
Any university would have done the same, FWIW. IP is a huge potential revenue source.
Patents on mechanical/electronic inventions are good. It takes a disgusting amount of time, energy, and equipment to research that stuff. Companies and universities need to recoup those expenses or they’d never be able to afford the research in the first place.
These are nothing at all like software patents. Aside from software being an algorithm, software changes at a far more rapid pace than electronics, and software requires far less investment to innovate with.
This is why I’ve never gotten on the bandwagon with the “anti-patent” movements that far too many programmers push – almost all of whom, I’d imagine, lack any business or economic sense whatsoever – and instead work with the “anti-software-patent” movements. They’re entirely different things.
I can agree that software moves much more quickly, and I can also agree that a lot of software ideas are relatively trivial.
What I cannot agree with is that software patents (those covering any algorithm of significant complexity) take “far less investment” than many of the other things that are patented. The real problem is that most of what gets patented in software is not actually earth-shattering or important for advancing the field: the patents are ways to lock things down, even when the merit of the idea is small compared to what went into it. At the same time, there are many things of great practical value that require quite a bit of investment to perfect.

What a lot of people do their best to ignore is this: very few people become competent enough in the field to create anything considered “revolutionary” without a great deal of experience, education, and self-directed deep thought and research. That’s why what’s available as open source software is so amazing: people expend a huge amount of their own time and energy just to get to the point where they’re competent, and then spend even more to go on and build something like that.
I would, however, definitely support a shorter duration for software patents: if you can’t find a use for a software patent in less time than it takes for most manufactured things, chances are it isn’t worth anything anyway, unless you’ve managed to create quantum computing algorithms long before hardware that can run them becomes feasible. That’s a fairly uncommon type of advancement in software, though.
If this is true and really is a new basic component of electronics, then it will have a huge impact. In particular, the new option of using flux instead of voltage will change how circuits are calculated and designed. I’m not knowledgeable enough to understand all the implications, but I gather this will have positive consequences for many types of devices.
With a device that has the density of a hard drive (hundreds of gigabytes or even terabytes) and the speed of DRAM, we could potentially assign some space as working memory (what we now call RAM) and the rest as “storage”.
Think of this:
For normal everyday use, we “only” dedicate 4 or 8 GB as “RAM”.
Today we’re going to work with HD video, so we use 32 GB of free space as “RAM” and the rest of the device as storage.
Or, even better, the “storage” is fast enough that we don’t need “RAM” in the middle at all.
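To make the repartitioning idea concrete, here’s a toy Python sketch; it’s purely hypothetical, and the class and all sizes are made up for illustration:

```python
# Purely hypothetical sketch: one persistent memristor pool split between
# "RAM" and "storage" roles. The class and all sizes are invented.

class UnifiedMemory:
    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.ram_gb = 0

    def set_ram(self, gb):
        # Dedicate part of the pool as working memory; the rest is storage.
        if gb > self.total_gb:
            raise ValueError("not enough capacity")
        self.ram_gb = gb

    @property
    def storage_gb(self):
        return self.total_gb - self.ram_gb

pool = UnifiedMemory(total_gb=1024)  # say, a 1 TB memristor device
pool.set_ram(8)                      # everyday use: 8 GB as "RAM"
print(pool.ram_gb, pool.storage_gb)  # -> 8 1016
pool.set_ram(32)                     # HD video day: 32 GB as "RAM"
print(pool.ram_gb, pool.storage_gb)  # -> 32 992
```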
If this proves out, it is certainly likely to eliminate the RAM/disk dichotomy as we know it. However, I do think we will still have something like it for removable storage, and possibly expanding storage.
I also hope it proves to be as fast as SRAM, not just DRAM — if so, we could rewrite the rules of CPU caches entirely. They would probably still exist to reduce memory contention across CPU cores, but they would no longer be needed to bridge the widening gap between processor and main-memory performance.
Good call. Even better than eliminating the RAM/disk dichotomy would be eliminating the RAM/disk/cache trichotomy (if that’s a word). It’s impressive how we can immediately think of awesome applications for something like this, but I’m sure it will take quite some time for this technology’s “killer app” to come along. When it does, though, I’d imagine it will cause huge changes throughout all of technology; just consider how important the switch from vacuum tubes to transistors was.
Adding to what james_parker wrote, it seems there would be no point in holding on to the RAM/HDD dichotomy at all with this new technology, since apparently there isn’t much point in it even now, if one believes the Varnish HTTP accelerator website:
http://varnish.projects.linpro.no/wiki/ArchitectNotes
(since I definitely can’t be described as CS-anything, I can’t really judge; but what’s behind this link sounds at least reasonable to me)
BTW, nice to know that HP actually is still also a research company (I thought they’d become just another Dell some time ago…)
While it may be as fast as DRAM (DDR? DDR2?), there are still considerably faster types of memory I’d love to see fill the role of RAM, e.g. the L1/L2 cache memory, which can deliver 20 GB/s or better.
I would love to see the following setup:
(All memory persistent)
64-core CPU (w/ thread splitting and per-CPU rates)
Each core:
1 GB L1 @ 1 PB/s, fully associative, versioning
50 GB L2 @ 500 TB/s+, fully associative, versioning
Global:
500 GB L3 @ 50 TB/s+, versioning, flex-partitioned
RAM:
10 TB @ 1 TB/s+
Ultra-high-capacity storage:
18 PB @ 500 GB/s+, solid-state, w/ an integrated interface-speed 10 GB cache (say, 1 TB/s).
The tiering is for cost concerns, not to mention size.
This system would be very hard for Microsoft to slow down, though I am certain they would find a way.
But… I mean… Haiku would certainly be an instant-on OS, even with no special work done.
Windows Vista might take 100 ms or so, enough to be human-noticeable (albeit tolerable).
Heck, the CPU cores wouldn’t even need to be that fast.
Ahh… just imagine the games we could write!
It MIGHT even run SETI so fast that we’d end up waiting for the data to come from the telescopes every 10 seconds or so (that would be sweet).
hmm… I seriously think I need to sleep more than four hours a day, but then I can’t program for $@!#
I think the impact on electronics design may be a lot greater. For example, if this really is as good as it sounds, it would mean better processors as well, because they could generate less heat.
Heat is one of the biggest obstacles to scaling up further right now.
Yes: goodbye transistor-based procs, hello memristor-based procs.
“50 GB L2 cache”
Led me to do a very rough thought experiment. A Core 2 processor at a 65 nm process with 2 (might be 4) MB of cache is 143 square mm. Let’s say the cache is half of it, or about 70 square mm.
If 50 GB ~= 50,000 MB, that’s 2 MB * 25,000 to get 50 GB of cache. At a 65 nm process, that’d be 70 square mm * 25,000, or about 1,750,000 square mm.
That’d be a square about 1,300 mm, or 1.3 meters, per side; each processor die would be about the size of a small table. With 64 of them, that’s a processor more than 10 meters per side. I guess if you arranged them in a stack you might end up with something the size of a small car.
I suppose that’s only at 65 nm. Maybe with super gamma ray lithography, or by pushing individual atoms around, the size could come down to something a person could carry easily.
I’m sure my math is off somewhere, but I hope my mistakes don’t invalidate the orders of magnitude the MB -> GB jump involves.
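For what it’s worth, the arithmetic above checks out; here it is as a quick Python back-of-envelope (the 65 nm Core 2 figures are the ones quoted in the comment, and everything is rough):

```python
# Back-of-envelope check of the cache-area estimate above.

cache_mb = 2             # MB of cache on the die
cache_area_mm2 = 70.0    # assume cache is ~half of the 143 mm^2 die

scale = 50_000 / cache_mb            # 2 MB -> 50 GB is a 25,000x scale-up
area_mm2 = cache_area_mm2 * scale    # 1,750,000 mm^2
side_m = (area_mm2 ** 0.5) / 1000    # ~1.32 m per side
print(f"one die: {area_mm2:,.0f} mm^2, {side_m:.2f} m per side")

side_64_m = ((area_mm2 * 64) ** 0.5) / 1000
print(f"64 dies tiled: {side_64_m:.1f} m per side")  # ~10.6 m
```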
“Not to mention size concerns” Indeed!
Bear in mind that current cache memory is SRAM, which because of its design takes up a lot more space than more conventional DRAM-type designs. We don’t really know what memristor-based RAM will look like at this point, but based on some of the preliminary whitepapers it is capable of *very* high data density, so it should be able to pack considerably more storage into the same space than SRAM can.
Thanks for the information. I vaguely recall reading about the differences between DRAM and SRAM complexity years ago, but couldn’t remember any details. I’m not a circuits person at all, just a basic EE course in college.
As I understand it, the short story is that SRAM uses a bunch of transistors in a special configuration to hold its state stably, whereas DRAM uses one transistor and a capacitor. DRAM takes less space as a consequence, but needs periodic refreshes to recharge the capacitor so it doesn’t lose its state. The more complex transistor arrangement SRAM uses takes up more space per bit than DRAM does, hence your 2 MB cache on a die taking up half the die, as opposed to the amount of space 2 MB takes on your 1–2 GB DIMMs. A hypothetical memristor-based RAM design could potentially be noticeably smaller than even DRAM, and without needing the refresh cycle. We’ll see what comes of it in practice, but the potential is there.
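As a toy illustration of that refresh requirement (the leakage time constant below is invented; only the ~64 ms refresh interval is a typical real-world figure):

```python
import math

# Toy illustration of why DRAM needs its refresh cycle: the cell
# capacitor leaks, so the stored voltage decays and must periodically
# be rewritten before the bit is lost.

V_written = 1.0      # volts written to the cell capacitor
tau_s = 0.2          # hypothetical leakage time constant, seconds
refresh_s = 0.064    # typical DRAM refresh interval, ~64 ms

v_now = V_written * math.exp(-refresh_s / tau_s)
print(f"after {refresh_s * 1000:.0f} ms the cell holds {v_now:.2f} V; "
      f"refresh rewrites it to {V_written} V")
```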
In all fairness, technically you only need three atoms for the layers, so density can get very high, if we can sense the resistance of the titanium atom at that scale.
They can pack 100 Gbits into a die the same size as the highest-density Flash die ever, which holds only 16 Gbits. That’s over six times more in the same area. That’s remarkable. Also worth noting is that these memristors get ever more efficient the smaller they get, whereas transistors get worse and worse. That’s a huge thing, really; one can only wonder how much this will change the electronics world from now on. It’ll take several years before memristors start appearing widely in consumer products, but we’ll likely see hundreds of terabytes in a single storage product, ever faster processors, and who knows what more. I for one am really excited!
But this leads me to think a little… If one can craft a storage medium that holds its contents even after being shut down, yet has the speed of DRAM, it would make sense to use it both for storing your files and as RAM. But that would render the regular PC architecture obsolete: PCs are built around separate RAM and storage, not a combined interface. So what will happen? Will some new architecture rise from this development and eventually replace the PC, or will the industry keep dragging its legs and maintain backwards compatibility at the cost of the new possibilities?
They will drag their legs. Some obscure startup will come along and create something killer; no one but a few geeks will buy it (me), and then it will die. And in 30 years, the industry will *slowly* shift to it.
It would actually change almost nothing about the computer architecture we’re accustomed to today: you could effectively think of it as running everything out of a huge RAM disk. Other than (perhaps) a small driver change and a minor change in how system storage is referred to, nothing at a level higher than that storage driver or the kernel would be any the wiser. All user-space applications would see (if they bother to look) is an incredibly fast system on which it’s very hard to run out of address space and RAM. But don’t you worry, some bloated OS and application combination will find a way to run you out.
They MUST rename it to flux capacitor!
Actually, we already have something like a flux capacitor. In circuit theory, “flux” (magnetic flux linkage) is the time integral of voltage, just as charge is the time integral of current; engineers often abbreviate magnetic flux as simply flux. The element that plays the capacitor’s role for flux, relating flux to current the way a capacitor relates charge to voltage, is the inductor.
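For reference, here are the four pairwise relations that Chua’s “fourth element” argument rests on, in standard circuit-theory notation:

```latex
% The four circuit variables: voltage v, current i, charge q = \int i\,dt,
% and flux linkage \varphi = \int v\,dt. Each basic element links one pair:
\begin{aligned}
  \text{resistor:}  \quad dv &= R\,di \\
  \text{capacitor:} \quad dq &= C\,dv \\
  \text{inductor:}  \quad d\varphi &= L\,di \\
  \text{memristor:} \quad d\varphi &= M\,dq
\end{aligned}
```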
…
shhhhh
Hi all,
Nice discussion.
Let’s say that the memristor is to electronics what the Higgs boson is to particle physics (the Holy Grail of the LHC experiment at CERN).
In 2001 I attended a plenary lecture by Leon Chua at a Circuit Theory and Design conference. The main subject was (as far as I remember) the possibilities that quantum devices would bring to computing power. The memristor will have some quantum explanation, I suspect… Anyway, what Chua was talking about were devices that would behave like switches yet consume no (or infinitesimal) power (the maths involved Hamiltonians and Lagrangians from physics; as far as I know, Chua has not published on this subject).
So, as dynamic power in CMOS is proportional to frequency * node capacitance * voltage^2, and is one of the current bottlenecks associated with Moore’s law, one nice question is: what is the energy balance of the memristor? (Besides its integration density for memory.)
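A quick illustration of that dynamic-power relation (the activity factor is folded into C here, and all numbers are illustrative, not taken from any real chip):

```python
# Dynamic power in CMOS: P = f * C * V^2

f = 3e9    # switching frequency, Hz
C = 1e-9   # effective switched capacitance, F
V = 1.2    # supply voltage, V

print(f"P ~ {f * C * V**2:.2f} W")          # ~4.32 W
print(f"at V/2: {f * C * (V/2)**2:.2f} W")  # quadratic in V: 4x less
```

The quadratic dependence on voltage is why a switch-like device with near-zero switching energy would matter so much.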
And will the memristor be used as a switch? In fact it behaves as a charge-controlled resistor (according to the Nature paper from the HP guys, linked from the Wikipedia article: http://dx.doi.org/10.1038%2Fnature06932 ), which means its resistance can be modulated and, thus, some kind of switch can be realized.
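For the curious, here is a minimal numerical sketch of the linear ion-drift model described in that Nature paper; the parameter values below are illustrative stand-ins, not the paper’s fitted numbers:

```python
import math

# Linear ion-drift memristor model (after Strukov et al., Nature 2008):
# resistance depends on the state variable w (width of the doped region),
# which moves as charge flows through the device.

R_on, R_off = 100.0, 16_000.0   # ohms: fully doped / fully undoped limits
D = 10e-9                        # film thickness, m
mu = 1e-14                       # dopant mobility, m^2/(V*s)
w = 0.1 * D                      # initial doped-region width

dt, T, freq = 1e-5, 1.0, 1.0     # time step, duration, drive frequency
M_min, M_max = float("inf"), 0.0

t = 0.0
while t < T:
    v = math.sin(2 * math.pi * freq * t)        # 1 V sinusoidal drive
    M = R_on * (w / D) + R_off * (1 - w / D)    # state-dependent resistance
    i = v / M
    w += mu * (R_on / D) * i * dt               # ion drift shifts the boundary
    w = min(max(w, 0.0), D)                     # boundary stays inside the film
    M_min, M_max = min(M_min, M), max(M_max, M)
    t += dt

print(f"memristance swung between {M_min:.0f} and {M_max:.0f} ohms")
```

Driving it with a sine wave makes the resistance swing over the cycle, which is exactly the modulation that a switch or a memory cell could exploit.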
Final note: Chua is a very respected scientist in the circuit theory and simulation area. He is (among other books) co-author, with Pen-Min Lin, of a book from 1975 (I think…) that is still known as “the Kama Sutra of circuit simulation”. Of course, someone who writes a Kama Sutra has earned the right to discover a new fundamental electronic device :-))
Regards,
JA
The biggest question for me is: does it eliminate the limited number of writes that a flash drive has?
OSAlert is not where EE chip guys hang out. Try physorg.com and sciencedaily.com or perhaps EETimes.com to get a glimpse of future technology.
Now, memristors may or may not be interesting (I’ll reserve judgement on that until I do more research), but I wouldn’t bank on such a huge paradigm shift occurring too soon. In the 60s we had tunnel diodes; in the 70s, magnetic bubbles. In the 80s some thought gallium arsenide would replace silicon because of its much higher mobility. None of these really panned out. It turns out silicon is to microchips what carbon is to life: it just works, and keeps on evolving.
I spent 30 years in chip design and saw silicon go through huge improvements, from 3 µm to far below 100 nm, and all of that was achieved through literally thousands of minute incremental improvements, usually applied one step at a time, with very few giant changes.
You know what is most exciting to me, even though I will never work on it? Nanotechnology, specifically graphene. I’ve just been reading up on it on Physorg and Science Daily. It seems graphene would allow devices fabricated on it to have 100 times the mobility of silicon. It could also likely replace the indium used in transparent conducting layers for LCDs and solar PV cells (indium runs out in 10 years, so a replacement is urgent).
It’s going to be a nano world, and I really wish I could be young again. In the classic movie The Graduate, the young man was told to go into “plastics”. Today that would be “nano” and “bio” in all their forms, for our information and energy future.
Some tidbits about memory design; Wikipedia probably has more content.
Memory cells (DRAM, EEPROM, and SRAM) are laid out in regular patterns by repeatedly stepping the cell image over and over in two dimensions.
The highest-density devices (DRAM, NAND/NOR EEPROM, Flash, etc.) all require about 4 to 8 squares of geometry to make a 3-terminal transistor (a 1T cell). The more features the transistor shares with a neighbor, the smaller the cell can be, but the more overhead there must be further away. The 6T SRAM cell is the most self-contained, but it is enormously larger than 8 squares (anywhere from 100 to 500 squares); in exchange it gives differential high-speed sensing, along with a fair amount of leakage over a large enough area.
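Putting those figures side by side, a trivial calculation (using the midpoints of the ranges just quoted):

```python
# Per-bit area comparison from the "squares of geometry" figures above:
# 4-8 squares for a 1T cell, 100-500 squares for a 6T SRAM cell.

one_t_squares = 6     # midpoint of 4-8: DRAM / Flash style 1T cell
six_t_squares = 300   # midpoint of 100-500: 6T SRAM cell

print(f"SRAM cell is ~{six_t_squares / one_t_squares:.0f}x larger "
      f"per bit than a 1T cell")   # -> ~50x
```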
Right now Flash has the highest density (witness 16 GB devices), but it is still limited by the ability to pattern vertical metal bit lines over word (row) lines. Every intersection gives one device in the serial NAND version, and for every 8 or so bits there is some overhead added to read that segment. The NOR version is somewhat less dense; it is really a 1-bit NAND, with the read overhead paid per bit. Apparent density can be increased by storing 2 or more bits per Flash cell, though at much lower performance.
DRAM is a little different in that the 3rd terminal goes to a vertical capacitor node, usually buried in some fashion; these cells are less dense than Flash, hence 1 GB DRAM chips.
Usually DRAM is understood to be slow and SRAM fast. But some DRAM can be quite fast if the overhead is increased somewhat: I saw IBM designs a decade ago that were 5 ns, and I think they can now do 2 ns cycles, while the commodity DRAMs that sit on PC DIMMs usually cycle in 60 ns. Conversely, some SRAM can be quite slow: when you build large memory arrays, the much longer interconnect limits how fast the array can be driven. DRAM can win because the cells are more densely packed, but the DRAM process is not really compatible with processor processes, so SRAM is used despite its limits. Putting DRAM onto a processor would allow bigger caches, but the process, and hence the clock speed, would be much slower.
DRAM is best used in large arrays for density and low power but can be used for fast L3 cache if desired in modified processes.
SRAM is really best used when lots of small memories are needed and the overhead needs to be limited. In FPGAs, SRAM gives you tremendous aggregate I/O bandwidth: for example, 500 independent SRAMs all doing something useful per clock. In processor designs the SRAMs are fewer but much larger; there’s only so much parallelism available.