Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.
The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel’s Xeon server platforms.
Are these supposed to speed up access to hard drives/solid state drives like the existing Optane SSDs, or can these be used as standalone storage? It’s a little unclear to me what advantages these would offer over regular drives.
The Ars article ( https://arstechnica.com/gadgets/2018/05/intel-finally-announces-ddr4… ) on the subject might help answer some of your questions. In short, memory is where Optane/3D Xpoint was ultimately targeted to be all along, having it in the form of disks was more of a stopgap. In the long term, it essentially promises to do away with the dichotomy of RAM and persistent storage that we currently have, since it offers the best of both worlds (RAM-like speeds + disk-like size and persistence). Truly realizing that will require a complete rethink of how OSes work though.
Linux already aggressively caches things in RAM and can use tmpfs for storage. You’d basically just need a boot floppy / SD card for the EFI loader and kernel.
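As a rough illustration (not from the article; a minimal sketch assuming a Linux system, root privileges, and an existing /mnt/scratch mount point), pinning general-purpose storage to RAM-backed tmpfs is a one-line mount(2) call:

```c
/* Minimal sketch: mount a RAM-backed tmpfs as general-purpose storage.
 * Assumes Linux, root privileges, and that /mnt/scratch already exists. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* "size=2G" caps the tmpfs; its pages live in RAM (or swap), not on disk. */
    if (mount("tmpfs", "/mnt/scratch", "tmpfs", 0, "size=2G") != 0) {
        perror("mount tmpfs");
        return 1;
    }
    printf("tmpfs mounted at /mnt/scratch\n");
    return 0;
}
```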
Windows has done the same since Vista. The issue is getting the OS not to flush RAM at reboot or on power loss, and having the ability to install the OS to RAM, so to speak, and run purely from RAM.
Well, that and the same concern as with image-based languages like Smalltalk:
When there’s no clear separation between persistent and volatile state, it’s easy to accidentally get yourself into a situation where you’ve messed things up, but your most recent available rollback point is quite far in the past.
Actually, a decent percentage of what would be needed is already present in the mainline Linux kernel. It already supports running directly from a memory-mapped persistent storage device, although that’s usually assumed to be memory mapped flash storage or an EEPROM. It also has support for direct access to files stored on such devices (though this depends on the particular filesystem). The only big things are not clearing memory on startup (which also needs support from the firmware) and sanely handling the fact that all text and data will always be in the same location in memory (which is a security nightmare).
On the filesystem side, something like https://en.wikipedia.org/wiki/AXFS with write support could work. You’d probably want to store kernel modules and device firmware on the same cold boot media as the kernel image itself (so, say, /lib/modules and /lib/firmware would be symlinks to /boot/modules and /boot/firmware) so that rebooting could at least be guaranteed to forcibly reload any drivers.
Upstream ext4 has support for it through the DAX subsystem, and I think F2FS and some others do too.
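For the curious, here's a minimal userspace sketch of what DAX "direct access" looks like, assuming a file on an ext4 filesystem mounted with -o dax on a persistent-memory device (the path and size are made up). With MAP_SYNC (Linux 4.15+) the mapping bypasses the page cache and fails up front if the filesystem can't actually provide DAX:

```c
/* Minimal DAX sketch: map a file on a dax-mounted ext4 filesystem and
 * write to persistent memory directly, bypassing the page cache.
 * Assumes Linux >= 4.15 for MAP_SYNC; the path and size are illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_SHARED_VALIDATE          /* provided by <linux/mman.h> on older glibc */
#define MAP_SHARED_VALIDATE 0x03
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000
#endif

int main(void)
{
    const size_t len = 4096;
    int fd = open("/mnt/pmem/example.dat", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

    /* MAP_SHARED_VALIDATE | MAP_SYNC refuses to map unless this really is DAX. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) { perror("mmap (is the fs mounted with -o dax?)"); return 1; }

    strcpy(p, "written straight to persistent memory");
    msync(p, len, MS_SYNC);   /* portable durability guarantee */

    munmap(p, len);
    close(fd);
    return 0;
}
```

The msync() at the end is the portable way to guarantee durability; on real persistent memory with MAP_SYNC, flushing the CPU caches would be enough.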
The driver reload at reboot would actually not be all that hard either; you'd just reset the execution state of the modules.
Realistically though, I somewhat doubt that there's going to be any push for purely Optane-based systems; it's just too useful to have volatile memory. Keeping some traditional RAM around for things that aren't supposed to be persistent, like driver execution state, encryption keys, firmware state, and stuff that would go in `/tmp`, is rather handy, as it means you don't have to manually clear the data on a system reset.
Not the same thing. Current systems can blur the line between memory and storage, but with Optane that line could be erased permanently.
The closest thing to this I've seen is in the embedded world, with custom operating systems that boot from a ROM chip and after that run entirely from RAM. It's really tough to compare apples to peaches, but it made ColdFires seem faster than Pentium 266s running off-brand DOS.
Think of the old 8-bit computers: you turned them on and they were ready.
Yes, another good analogy. So like those, but, uhm, faster!
I somewhat doubt that that will actually happen.
First, it’s not going to happen in at least the next 10 years, as Optane is patented, and therefore it’s going to be expensive for at least that long. Expensive hardware doesn’t become ubiquitous these days unless it’s a finished product.
Second, people said the same thing about NVDIMMs, and yet almost nobody uses them despite their having been around for quite some time now, largely because they too are rather expensive.
Third, and possibly most importantly, it’s surprisingly useful to have at least some volatile memory. Yes, Optane would (theoretically) eliminate the need for a page cache, dramatically improve the performance of memory-mapped files, and functionally make the distinctions between the ACPI S1-S3 power states irrelevant, but it would actually slow things down in some cases. In particular:
* There’s a lot of hardware where it just doesn’t make sense to keep driver state persistently because you can’t make the hardware-internal state completely non-volatile (think networking hardware for example). If all your memory is non-volatile, you need to in some way restart the driver from scratch any time you power the device on. Just having it lose state each time you power off is way more efficient than having to clear out old state.
* Storage of transient secrets is much easier to do safely when you don’t have to remember to delete them. The argument against writing encryption keys unprotected to existing persistent storage media applies here too (see the sketch after this list).
* It’s a lot more efficient to back volatile storage areas with volatile storage devices. It makes sense to have `/tmp` be in-memory only, not only because it’s faster, but because nothing has to clean it out when the system reboots.
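On the transient-secrets point, here's a minimal sketch of the usual mitigation, assuming glibc 2.25+ for explicit_bzero; the key is scrubbed as soon as it's no longer needed so it can't be scraped out of (possibly persistent) memory later:

```c
/* Minimal sketch: scrub a transient secret so it doesn't linger in
 * (possibly persistent) memory. Assumes glibc >= 2.25 for explicit_bzero. */
#include <string.h>

static void use_key(const unsigned char *key, size_t len)
{
    /* ... encrypt/decrypt with the key ... (placeholder) */
    (void)key; (void)len;
}

int main(void)
{
    unsigned char key[32] = { 0 };   /* would be filled from a KDF or key exchange */

    use_key(key, sizeof key);

    /* Unlike a plain memset, explicit_bzero is not optimised away. */
    explicit_bzero(key, sizeof key);
    return 0;
}
```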
You still need tons of state-management machinery to control the hardware, so removing one piece of it, like the page cache or process save/restore, doesn’t make any significant difference. Your CPU/RAM is not running alone; it communicates with the outside world, so “persistency” of RAM doesn’t help with anything, not even a “persistent” page cache: if you mount an external drive or network share, you’ll still need a page cache to handle I/O efficiently. Everything else connected to the CPU/RAM needs power state management. You can’t just unplug a GPU or NIC and continue with the existing internal state of the device driver, so it’s everything or nothing. Until every single device can keep its state in non-volatile memory, I don’t see any difference from volatile RAM.
When I was working on my ramdisk for Haiku-OS (not the one already in it), one thing I noticed was the long startup/shutdown time needed to copy the entire RAM image to/from the hard drive.
At 100 MB/s it still took 40 seconds to copy 4 GB of data, and while there are faster interfaces now, working memory has also grown; 16-64 GB is not uncommon nowadays.
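To put numbers on that (a back-of-the-envelope check, assuming the same sustained 100 MB/s): 16 GB would take roughly 160 seconds and 64 GB roughly 640 seconds, i.e. over ten minutes, just to copy the image one way; even at an optimistic 1 GB/s, 64 GB is still over a minute in each direction.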
This type of memory would in the long run remove all delays in shutdowns and startups.
Hi,
Ages ago I spent time trying to figure out how non-volatile main RAM would/could affect an “everything designed from scratch without any compatibility constraints” OS design.
It turns out that (for reliability) when the computer is booted you can’t assume the hardware hasn’t been changed (including being damaged while the computer was off), so you still need to re-initialise all the drivers. You also can’t assume that some other OS wasn’t booted while your OS wasn’t running and hasn’t corrupted everything in main memory (and there’s no “RAM partitions” standard that would let each OS have separate partitions like they do for hard disks/SSDs), so you can’t rely on it for truly persistent storage. The only major difference I could find is that all software that deals with sensitive data would need to explicitly overwrite that data, so that an attacker can’t scrape stuff (encryption keys, passwords, bank account details, etc.) out of the persistent RAM after the computer is turned off.
In other words, it’d help a little for file caches across reboots (if you add heavy check-summing to determine what was/wasn’t corrupted), and it could improve the speed of “hibernate” a little; but apart from that (if you care about reliability and security) it’s mostly just useless and annoying and won’t cause a radical difference in the way operating systems work.
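As a sketch of what that "heavy check-summing" might look like in practice (the header layout, magic value and function here are hypothetical; CRC-32 via zlib is used purely for illustration, link with -lz):

```c
/* Minimal sketch: only trust a persistent cache region after its checksum
 * verifies. The layout, magic value and struct are hypothetical. */
#include <stddef.h>
#include <stdint.h>
#include <zlib.h>

#define CACHE_MAGIC 0x50435348u   /* arbitrary marker for "looks initialised" */

struct cache_header {
    uint32_t magic;
    uint32_t payload_len;
    uint32_t payload_crc;         /* CRC-32 of the payload that follows */
};

/* Returns 1 if the region survived the reboot intact, 0 if it must be rebuilt. */
int cache_region_valid(const void *region, size_t region_len)
{
    const struct cache_header *h = region;

    if (region_len < sizeof *h || h->magic != CACHE_MAGIC)
        return 0;
    if (h->payload_len > region_len - sizeof *h)
        return 0;

    const unsigned char *payload = (const unsigned char *)(h + 1);
    uLong crc = crc32(0L, Z_NULL, 0);
    crc = crc32(crc, payload, h->payload_len);
    return crc == h->payload_crc;
}
```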
– Brendan
I am sure there were similar concerns migrating from punch cards and spools of tape. Like then, it will need additional hardware to support the new memory model, such as H/W-enabled partition equivalents, encryption and device state detection. Mobile will probably adapt first. Imagine the low power modes that could be achieved in between screen glances. Game consoles could benefit from zero-lag switching between games and no need for loads/saves. What if CPU registers and memory both became persistent: total continuity of state between power cycles. Massive elimination of headaches for embedded devices.
Hi,
It’ll either be treated exactly the same as volatile RAM (with a few tweaks to hibernation) or treated as “memory mapped SSD” (with a few tweaks to file IO). A few tweaks is not a radical change. Microsoft isn’t going to be bankrupted because all their existing software is obsolete. People won’t be burning older laptops in the streets. Application developers won’t have to return to University for extensive retraining.
Mostly it’ll be a lot of noise from marketing people followed by an almost inaudible whimper (as reality ruins the fanciful dreams of fools); then everything will continue the same as it is now.
Note that (from what I’ve heard) Intel will be targeting servers first (not mobile); primarily because Optane will give larger capacities (at a higher price) and reduce power consumption when there’s a huge amount of RAM (no need for “RAM refresh”). For this market the non-volatile nature of it is just an irrelevant side-effect (the servers are never turned off so there’s no practical difference between volatile and non-volatile).
– Brendan
Disagree. You’re assuming a specific scenario and then designing around that. A machine designed to make use of this may be able to detect hardware changes automatically (if existing udev-like technology doesn’t already do the job). See HP’s work on “The Machine”; that’s basically what this would allow.
https://www.labs.hpe.com/the-machine
https://community.hpe.com/t5/Behind-the-scenes-Labs/How-HPE-Persiste…
Hi,
HP’s “The Machine” is vapourware from about a decade ago. The last I heard they were trying to switch to Linux because they couldn’t actually get any of their hype to work; and after years of work the only thing they actually have is a shiny web page for suckers that daydream about things like flying cars and unicorns that shit chocolate bars.
Note that most of HP’s drivel is actually the same old “persistent objects” concept that dates back to Multics in the 1960s (if not earlier).
– Brendan
HPE “The Machine” is not vaporware and not drivel, it’s a research project.
There already exist prototypes that were shown last year.
https://www.zdnet.com/article/hpe-unveils-memory-driven-computing-pr…
The combination of persistent memory and a high-speed memory-semantic fabric (which is the genesis of Gen-Z) is being fleshed out.
https://genzconsortium.org/
In fact the Amiga already did it with its RAM disk system.
The only problem it had was that, at the time, there was no built-in memory that could survive a power down.
I know the Amiga booted fast when the OS was moved to the ramdisk.
At the time I made a 512K static memory add-on to expand the memory, but if I had mapped it to the start of memory and added a battery backup, I am sure that I would have ended up with a near instant-on computer.
Phantom OS (https://en.wikipedia.org/wiki/Phantom_OS, linking Wikipedia article since the main site is in Russian) may be of particular interest. For all intents and purposes, userspace there already operates as if all storage is persistent.
This puts me in mind of the RAD disk on the Amiga: a section of persistent RAM that one could copy the OS into. It would reboot more or less instantly and worked perfectly well (handy if you were doing something crashy). I’m not sure exactly how much any other OS would need to change to use this, but I look forward to it becoming a thing.
You can do execute in place, and instant OS and application suspend/resume. You can simplify the OS design by removing the disk cache. Ideally you would reserve a few GB to act as dedicated RAM, though not as much as in the traditional setup because you don’t need to copy stuff into RAM anymore, and let the FS handle the rest. The FS should allow the free space to be used as RAM as needed. I guess programs would need to be updated too, to avoid unneeded copying to RAM.
Where this can be sorta exciting is the rise of “image” based systems.
The one that comes immediately to mind is most incarnations of Smalltalk.
Here, you have an “image” that’s loaded up, managed and maintained, and then “saved” as a whole.
“Everything” is an object, specifically, everything is a pointer to something in memory, rather than a persisted, “dead” artifact on some storage somewhere.
Consider a simple example of having your entire email inbox in RAM. Not just the headers, the entire thing — headers, bodies, attachments.
Would this be a “good thing”? Eh, it wouldn’t be a “bad” thing.
But consider that instead of your emails being stored as text/HTML, the HTML is already parsed. The DOM already created. Consider, perhaps, that the original email no longer exists; all that remains is a processed artifact and its internal representation.
Using that representation, you could always “export” back to the original email. But once it’s been read from the email server, why keep the old model around any more?
The new model is more efficient: pre-parsed, with direct links to other mails in the thread and attachments already decoded from Base64, so less overall storage cost. The email indexes point straight into the messages and message bodies. All of the mainline processing is simply faster. Better response, lower CPU load, lower power requirements.
Computers spend an inordinate amount of time simply translating external formats into internal ones. With persistent RAM, that’s no longer necessary.
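As a toy illustration of "the internal representation is the stored representation" (a sketch only, using a plain mmap'd file as a stand-in for real persistent RAM; the path and struct layout are made up), the parsed object lives directly in the mapping, so there is no serialise/parse step on the way in or out:

```c
/* Toy sketch: keep the in-memory form of a record directly in a persistent
 * mapping, instead of parsing it from an external format at every start.
 * The file path and struct layout are hypothetical; real code would also
 * need versioning and checksums. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct message {
    char subject[64];
    long thread_id;      /* "pre-parsed" link to the rest of the thread */
    int  already_read;
};

int main(void)
{
    int fd = open("/mnt/pmem/inbox.bin", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, sizeof(struct message)) != 0) { perror("ftruncate"); return 1; }

    struct message *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (m == MAP_FAILED) { perror("mmap"); return 1; }

    if (m->subject[0] == '\0') {
        /* First run: build the internal form once. */
        snprintf(m->subject, sizeof m->subject, "Re: Optane DIMMs");
        m->thread_id = 42;
    } else {
        /* Every later run: the parsed object is simply already there. */
        printf("restored: %s (thread %ld)\n", m->subject, m->thread_id);
    }
    m->already_read = 1;

    msync(m, sizeof *m, MS_SYNC);   /* flush to the backing store */
    munmap(m, sizeof *m);
    close(fd);
    return 0;
}
```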
The down side, of course, is that you never start off from a clean slate. If you want empty memory, you get to make it yourself. Restarting the computer has to return the machine to a “known state”, and, ideally, it doesn’t wipe out “everything” when it does so.
Error recovery for many systems is either restarting the process or shutting down and restarting the machine. Many systems can ONLY be restarted this way. “Memory leaks” can now be “really bad”. Dangling pointers can have a permanent effect.
So, the concept of a pervasive persistent system has definite effects on overall system and application design.
The dream of persistent storage is much like that of the electric car…
Electric cars have been around far longer than gas-powered cars, but they always had a serious flaw relative to gas-powered vehicles: limited range and the difficulty of increasing that range economically. There have been numerous brief periods where a new, more advanced version of the idea created a buzz, but everyone quickly went back to internal combustion because it just kept getting better and cheaper.
Eventually in the 90s we got hybrids, and presently we are seeing a huge surge in pure electrics. We are finally at the point where the range differences are small enough to where they do not matter for many use cases, and economies of scale are building up to make them approach the affordability of gasoline powered cars. It is not quite there yet, but there seems to be an obviousness that electric is quickly becoming the norm rather than the exception.
Meanwhile, internal combustion advancements have mostly stagnated, there doesn’t seem to be much low hanging fruit left, and even though fuel prices are currently pretty low, the consensus seems to be that the days of cheap gas are numbered at this point.
Note – I’m ignoring environmental impact issues because it isn’t relevant to my point.
I think non-persistent memory is the internal combustion engine of computing. It’s fast, it’s cheap, and historically nothing else could realistically keep up with it. However, persistent storage of some kind is basically required in computing (you absolutely need it), so the hybrid approach was established early on. The flaw of persistent storage, regardless of technology, was that it was always orders of magnitude slower; nothing could approach the performance of volatile memory.
The hybrid approach has just gotten more and more complex over time, resulting in the technical debt we have now. Various persistent storage mechanisms (punch cards, tape, magnetic media, optical media, flash, hundreds of different kinds of block devices, etc.) being grafted on over the years and beaten into usability as backing stores. The ideal storage technology, the one that meshes best with modern computing, would just be DRAM that doesn’t lose its state when it loses power. Same or better latency and density, byte addressable, just with the ability to maintain storage reliably when the power goes out and the low costs of traditional block storage like magnetic. Systems with this view of storage are not new (i.e. object stores), Smalltalk did it, but there was always this interface skew between how the software wanted to work vs how the hardware actually worked.
That is what Optane promises, but is it good enough to displace DRAM for main memory? Is it the modern electric car? It’s not even close, to be honest…
For Optane to actually cause a full-on engineering shift, i.e. one where hardware and software engineers actually start designing their systems explicitly around it instead of just treating it like yet another technology to graft onto DRAM, it has to become far better than DRAM. We have to reach a point where the outlook is that it will only get faster and cheaper, and DRAM improvements have to start looking poor in comparison. That will take decades in my opinion; we are nowhere near that point…
There are definitely use cases for it. It makes sense as a sort of middle-tier persistent cache, and there are probably more interesting applications that will develop. It’s extremely cool technology, but I just don’t see it replacing DRAM any time soon.