It’s no secret that SSDs suffer from performance penalties when it comes to small random writes. Even though more modern SSDs try to solve some of these issues in hardware, software can also play a major role. Instead of resorting to tricks like keeping all writes in RAM and delaying them until shutdown, SanDisk claims it has a better option. At WinHEC yesterday, the company introduced its Extreme FFS, which it claims will improve write performance on SSDs by a factor of 100.
The problem, in a nutshell, is that NAND flash needs two operations to write to a non-empty location: the block must be erased before it can be written. Random writes are hit hardest, because each individual random write drags its own erase operation along with it.
To understand how Extreme FFS works, you need to know that most file system drivers and operating systems expect the storage medium to be addressable in terms of cylinders and sectors. Flash storage obviously doesn’t work this way – it is organised as a grid of pages and blocks instead. To bridge the gap, a translation layer sits between the driver and the medium, mapping file system locations onto physical flash locations.
Instead of the static map used today, Extreme FFS uses a dynamic one, allowing the controller on the NAND device and the software to work together to cluster related blocks for optimal performance. In addition, random writes are cached until they can be written to flash at the optimal time and location. Extreme FFS also includes a feature that ‘learns’ usage patterns and organises the SSD accordingly. Other features include garbage collection (actually erasing blocks that have been marked as free) and bad-block management.
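To make the idea more concrete, here is a minimal sketch of how a dynamic logical-to-physical map with write coalescing could look. This is purely illustrative, since SanDisk has not published its design; the sizes and names (PAGES_PER_BLOCK, BUFFER_DEPTH, ftl_write and so on) are invented for the example, and garbage collection of the stale pages is left out.

/* Toy flash translation layer: a dynamic logical-to-physical map plus
 * write coalescing.  Illustrative only; not SanDisk's actual design. */
#include <stdio.h>
#include <string.h>

#define PAGES_PER_BLOCK 64
#define NUM_BLOCKS      16
#define NUM_PAGES       (PAGES_PER_BLOCK * NUM_BLOCKS)
#define PAGE_SIZE       4096
#define BUFFER_DEPTH    8     /* coalesce this many random writes */

static int  l2p[NUM_PAGES];   /* logical page -> physical page, -1 = unmapped */
static char flash[NUM_PAGES][PAGE_SIZE];
static int  next_free_page;   /* next page in the current fresh block */

struct pending { int lpn; char data[PAGE_SIZE]; };
static struct pending buffer[BUFFER_DEPTH];
static int buffered;

/* Flush the buffered writes sequentially into fresh pages: one program
 * operation each, no per-write erase.  The old mappings simply go stale
 * and would be reclaimed later by garbage collection (not shown). */
static void flush_buffer(void)
{
    for (int i = 0; i < buffered; i++) {
        int ppn = next_free_page++;          /* assumes a fresh block is ready */
        memcpy(flash[ppn], buffer[i].data, PAGE_SIZE);
        l2p[buffer[i].lpn] = ppn;            /* remap the logical page */
    }
    buffered = 0;
}

/* A random write just lands in the buffer; the device acknowledges it
 * and decides later where it physically goes. */
static void ftl_write(int lpn, const char *data)
{
    buffer[buffered].lpn = lpn;
    memcpy(buffer[buffered].data, data, PAGE_SIZE);
    if (++buffered == BUFFER_DEPTH)
        flush_buffer();
}

int main(void)
{
    memset(l2p, -1, sizeof l2p);
    char page[PAGE_SIZE] = "some data";
    for (int lpn = 0; lpn < 20; lpn += 3)    /* scattered "random" writes */
        ftl_write(lpn, page);
    flush_buffer();
    printf("logical page 3 now lives in physical page %d\n", l2p[3]);
    return 0;
}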
Extreme FFS will appear on SanDisk devices in 2009.
Either users have to install a driver everywhere they go, or these devices have to pretend to be FAT-formatted.
As this is not developed by Microsoft, it will not be natively supported by Windows (see things like RW optical media formatted to act as “floppies” – no native support from the big gorilla of the market).
Like it or not, FAT has become the lingua franca of removable storage media. But it’s showing its age (even the FAT32 version is closing in on retirement, in computing lifetimes), so Microsoft has rolled out exFAT in the hope of cornering the market. And they probably will, sadly…
Sooo, when you want to access the files from your Linux computer on a Windows box, you have to install a driver to read ext3 (or whatever file system it may be)?
That’s the deal with new technology; why people complain about it I don’t know. You have to install a PDF reader when you have Windows, same with Java. People are just used to them being there these days. Back in my day, if you wanted to do something, you installed something to do it with. Kids these days are spoiled. Haha.
Yes, there are ext3 drivers out there.
But what about the proverbial “Aunt Tillie”?
The one who gets scared silly by a simple printer install?
The point is it shouldn’t have to be like this any more.
It’s all very good and well saying “people used to cope in my day”, but the fact of the matter is people aren’t using 286s and Windows 3.x any more. People expect their modern, bulky, multi-functional OS to be all-inclusive. They expect part of the hefty 1 GB install (or whatever size Windows demands these days) to contain all the tools required to read all the media they use from day to day.
To take your argument further: in my day people coped without GUIs, HDDs, CDs and the internet – however, I wouldn’t expect anyone to go back to the desktop BASIC days, nor would I say anyone complaining about a lack of internet or frustrated with their slow / unintuitive GUI was “spoilt”.
Oh, and your PDF analogy isn’t wholly accurate either, as you’re effectively comparing a floppy disk (removable storage) to a Word document (document format).
Who is to say you would have to install something everywhere you go? It’s perfectly plausible that MS could release something via automatic update and BAM, supported. It’s just that simple. The fact is, no truly new technology (that is, technology that differs enough from current standards) is going to work on systems that were developed before it was. The OEMs and OS developers have to issue patches, or make the driver downloadable or included with the new storage media.
I didn’t say you would have to – I’m only responding to the previous poster saying you shouldn’t have to, after he called users spoilt.
It’s largely because many people have unrealistic expectations of technology, due to a lack of understanding. The “I don’t understand it, so I will assume it works by magic” effect.
I don’t know the exact limit of FAT32 (I’ve only used a 400 GB disk with FAT so far), but the file size limit of 4 GB is a showstopper nonetheless, so they need something new. And as NTFS isn’t as free as FAT is, I guess it’s up to the hardware guys to make something new, probably with a FAT32 compatibility layer or so. I’m sure those guys will just make it work, like they have come through for us before.
IIRC, the issue is that once one goes beyond a specific size, the cluster size has to grow.
That means that if a file is smaller than the cluster, it will still take up the whole cluster, even if most of it is empty space.
Still, it may be that I’m jumping the gun; reading the right-hand table here:
http://en.wikipedia.org/wiki/File_Allocation_Table
FAT32 should be able to address drives all the way up to 2 TB.
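(For a sense of scale, and this figure is a typical default rather than anything from that table: a FAT32 volume near the 2 TB ceiling usually ends up with 32 KB clusters, so a 1 KB file still occupies a full 32 KB on disk.)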
No. The device will just look like a hard disk. Just an array of sectors, on which you write whatever file system you like.
The device will map your logical sector numbers to physical flash pages, using what is essentially a dynamic map.
Having not read the details of ExtremeFFS (it is probably patented) I theorize it operates by collecting together temporally close sector writes in a cache, and writing them all in one go to a fresh page. The garbage collection looks for stale or partially stale pages, queues any live data for writing in the next write batch, then cleans the (now) stale page ready for use in the free list.
Of course, this may all be wide of the mark, in which case sorry for the noise, but a SSD that required drivers over and above the link layer (SATA) would have a very limited market and just wouldn’t make sense.
I read the FS part as File System. And unless one file system can pretend to be something else, it’s at the very least exposed to the OS. And if so, it will need drivers, either built into the OS or installed afterwards.
Well, TrueFFS, of which this is an evolution, provides a block mapping, plus some FAT specific optimisation. It certainly didn’t provide a filesystem in the classic operating system sense.
I can’t wait to try this out. I’m looking forward to having a laptop that doesn’t burn my goolies every time I leave it on my lap for more than 2 minutes ;-).
So basically the only overhead of flash comes in when writing to a non-empty location. It needs to erase and then write… 2 operations.
If writing to an empty sector, there is only 1 operation (write).
So basically the way to optimize flash drives is
1. To make sure they write to empty locations
2. To cache writes and then write them at a later time
Is that understanding correct?
Now I’m a little skeptical about this 100x number. I mean, the most you could ever really improve is 2x by removing the extra erase cycle.
The caching aspect can basically be done on any kind of drive and improve its performance just based on the fact that RAM is faster than disk.
But I’m assuming they’re taking the overall system into account:
increase = (optimal FS with caching) / (raw write to disk).
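For what it’s worth, the 2x ceiling only holds if an erase were the same size and speed as a write. Ballpark figures for NAND of this generation (my numbers, not SanDisk’s): programming a 4 KB page takes a few hundred microseconds, while an erase takes a couple of milliseconds and covers a whole block of 64 to 128 pages. A naive controller that does a read, erase and rewrite of a 256 KB block for every 4 KB random write is therefore doing on the order of 64x the work, with the slow erase on top, so against that sort of baseline two orders of magnitude is not a crazy claim.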
Oh come on now. Stop being rational. Real-world throughput, from PCI onwards to the latest interfaces, has always fallen drastically short of the theoretical fanfare.
How dare you come in here and poop all over fantasy!
Barkeeper! Give this man a double!
The Register explained it nicely:
http://www.theregister.co.uk/2008/11/07/sandisk_extremeffs_dram_buf…
Short version:
They keep a cache of free pages to write to, and mark the old ones as dirty. Another thread then cleans the dirty pages in the background and puts them back in the free cache.
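Roughly, I imagine something like the sketch below. This is a guess at the mechanism, not the real ExtremeFFS code; the block counts, the 2 ms erase stand-in and the function names are all invented for illustration (build with cc -pthread).

/* Free-list / dirty-list sketch: the write path always grabs a pre-erased
 * block, and a background thread erases dirty blocks and recycles them.
 * Hypothetical code, not the actual ExtremeFFS implementation. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_BLOCKS 8

static int free_list[NUM_BLOCKS], free_count;
static int dirty_list[NUM_BLOCKS], dirty_count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  have_dirty = PTHREAD_COND_INITIALIZER;

/* Write path: take a fresh block, retire the old one as dirty. */
static int remap_write(int old_block)
{
    pthread_mutex_lock(&lock);
    int fresh = free_list[--free_count];     /* assumes the pool never runs dry */
    if (old_block >= 0)
        dirty_list[dirty_count++] = old_block;
    pthread_cond_signal(&have_dirty);
    pthread_mutex_unlock(&lock);
    return fresh;
}

/* Background cleaner: erase dirty blocks and put them back in the pool. */
static void *cleaner(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (dirty_count == 0)
            pthread_cond_wait(&have_dirty, &lock);
        int victim = dirty_list[--dirty_count];
        pthread_mutex_unlock(&lock);

        usleep(2000);                        /* stand-in for a ~2 ms block erase */

        pthread_mutex_lock(&lock);
        free_list[free_count++] = victim;    /* back into the free pool */
        pthread_mutex_unlock(&lock);
        printf("cleaner: block %d erased and recycled\n", victim);
    }
    return NULL;
}

int main(void)
{
    for (free_count = 0; free_count < NUM_BLOCKS; free_count++)
        free_list[free_count] = free_count;

    pthread_t t;
    pthread_create(&t, NULL, cleaner, NULL);

    int current = -1;
    for (int i = 0; i < 5; i++)              /* five logical overwrites */
        current = remap_write(current);

    sleep(1);                                /* let the cleaner catch up */
    return 0;
}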
Is it open?
Will it work in every OS I use? Or can it be ported to them?
Else there is just no use to it.
I don’t put my data on something I can’t control, period.
I’m sure they’ll be disappointed to lose your business.
Given that flash drives could one day replace HDDs, I’d say SanDisk have more to gain from growing the market generally than from trying to secure their own position in it.
If that is the case, then it would make sense to spread this technology about as much as possible, either through licensing or open-sourcing it.
If they try to sit on this, then it’s always going to be a minority tech. But they may see things differently.
I’m hoping that this means more craptastic tools like the stuff U3 throws at you. I love that stuff so much. The only way it could get better would be if it used up some more drive letters mounting pointless volumes!
That is why you run a handy little tool that wipes that crap off the memory key.
It shouldn’t be on there in the first place. I’m buying some external storage for my personal data. I don’t want somebody else’s data on it.
If I was interested in their crapware, I’d download it from their website.
Last week I purchased a Transcend SSD for my laptop. It’s silent but very, very slow, even using the legacy FAT32 filesystem. It took an afternoon to install Windows XP. If it weren’t for the silence, I would be using a good old WD Scorpio HDD.
This is just a stopgap solution that probably won’t really work for most people. Just like all the crazy memory-compressor software that used to be out there, which tried to give you more RAM with a bunch of software compression tricks. It kind of worked some of the time, and most of the time your system was slower and more unstable.
No, the real solution, as always, is newer technology on the hardware end. There are new kinds of flash memory coming out that are faster to read/write and can handle more write cycles. I foresee the day when people might not buy RAM for mid/lower-range systems, because the flash storage they use for their hard drives is fast enough to use a modest 30 GB of their 500 GB flash drive as RAM. This is similar to what’s already being done on a lot of mobile devices now.
And I think this would help a lot with suspend/resume support. Yeah, the hardware devices would need to be reinitialized, but your RAM image could be on your hard drive, so if you lose power, no big deal.
For a section of a flash device to be “empty,” does it just have to be zeroed? Or when a block’s been written to once, does it somehow end up forever-written (ie, it will always have to be written to twice from then on)? In other words, if you performed the following command:
dd if=/dev/zero of=/dev/sdx
…does that automatically clear the “written” flag of the entire device and make it so that all future writes are one write instead of two (ie, make it “new” again)? Well, until you’ve put it through some decent use, creating and deleting files, of course, which is when you’d end up with random data again as before.
I hadn’t thought about why flash is so slow from a user’s point of view, since I really can’t see through all the complexity involved in the OS, the PC and the flash technology used – and it also varies by OS, flash and controller vendor.
Now I put my chip design hat on and something horrible clicks.
The erase time in almost every non-volatile storage chip I have been familiar with over three decades has always been two or many more orders of magnitude greater than the write time, all the way back to the first EPROMs, which are really not that different from modern NAND flash.
Back in the ’70s and ’80s, erasing had to be performed on the entire chip in one go with an external UV lamp, and it could take several minutes or longer (maybe 20 minutes). All the UV lamp did was help the trapped electron charge in the floating gates sneak back through the silicon oxide interface, via a see-through quartz window in the package.
Jump forward a few years and the erasure was done internally by a high-voltage charge pump that still had to erase the whole chip. These pumps take many clock cycles to pump up the voltage needed for the erase cycle. Soon afterwards, it became possible to partition the chip into smaller blocks and route the high erase voltage to specific blocks. This still took several orders of magnitude longer than individual writes.
It is almost certainly the case that erasing any block takes on the order of a millisecond or more, based on the physics of the storage mechanism, even if the SSD or flash vendor says writes can occur at up to 100 MB/s. I will have to look up that value.
If every write includes an erase, then performance is going to be terrible for small data writes, so it makes sense to group writes into each block.
One solution is for the OS to perform background erases on all blocks not containing data, so that cleared blocks will be available in the future.
my 2c
I guess this is exactly what UBIFS (and I guess Extreme FFS too) is supposed to do: treat the whole flash as a huge log file registering all write operations, and run garbage collection in some background thread to reclaim outdated entries. The block map resides in RAM and is written to the flash device at unmount time, but it can always be rebuilt from the scattered logs (which contain enough metadata alongside the content) in case of power failure.
This fully embraces flash’s performance characteristics:
slow in-place writes + zero seek time.
And that is what they do.
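The rebuild-on-crash part is simpler than it sounds. Here is a toy version of the idea, loosely modelled on log-structured designs like UBIFS rather than on SanDisk’s actual on-flash format (the struct layout and sizes are invented): every page carries a little out-of-band metadata, and mounting after a power failure just means scanning it and keeping the newest copy of each logical block.

/* Toy "rebuild the map by scanning the log" example.  Every flash page
 * carries a little out-of-band metadata (logical block number plus a
 * sequence number); after a crash, the RAM map is recovered by scanning
 * the medium and keeping the newest copy of each logical block. */
#include <stdio.h>

#define NUM_PAGES   32
#define NUM_LOGICAL 8

struct page_meta {
    int  lbn;   /* which logical block this page holds, -1 if erased */
    long seq;   /* monotonically increasing write sequence number    */
};

static struct page_meta oob[NUM_PAGES];   /* per-page out-of-band area */

/* Scan the whole device and rebuild the logical-to-physical map. */
static void rebuild_map(int map[NUM_LOGICAL])
{
    long newest[NUM_LOGICAL];
    for (int i = 0; i < NUM_LOGICAL; i++) { map[i] = -1; newest[i] = -1; }

    for (int p = 0; p < NUM_PAGES; p++) {
        int lbn = oob[p].lbn;
        if (lbn < 0)
            continue;                     /* erased page, nothing logged */
        if (oob[p].seq > newest[lbn]) {   /* the later write wins        */
            newest[lbn] = oob[p].seq;
            map[lbn] = p;
        }
    }
}

int main(void)
{
    for (int p = 0; p < NUM_PAGES; p++)
        oob[p].lbn = -1;

    /* Pretend logical block 3 was written twice before the power failed. */
    oob[4]  = (struct page_meta){ .lbn = 3, .seq = 10 };
    oob[19] = (struct page_meta){ .lbn = 3, .seq = 42 };

    int map[NUM_LOGICAL];
    rebuild_map(map);
    printf("logical block 3 -> physical page %d\n", map[3]);   /* prints 19 */
    return 0;
}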
If it doesn’t work in Linux, I’ll pass on it…
From a detailed performance review linked from overclockers.com, the Intel SSD entries do not suffer from the random-write performance bug (it really is just a bug, in the SSD core logic).
Given this, the development of a new file system to deal with improperly designed hardware seems a bit… uninformed
However, even on normal disks, it is often better to group and “burst” writes (or writev[ector] them) to the disk, simply because it is more efficient (though it can lead to integrity issues, so journaling is a must).
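As a quick illustration of that grouping idea, here is a tiny example using the POSIX writev() call. It has nothing to do with LoonCAFE or ExtremeFFS, it is just plain userspace batching, and the file name is arbitrary:

/* Grouping several small writes into one burst with POSIX writev(). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("burst.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char a[] = "first record\n", b[] = "second record\n", c[] = "third record\n";
    struct iovec iov[3] = {
        { .iov_base = a, .iov_len = sizeof a - 1 },
        { .iov_base = b, .iov_len = sizeof b - 1 },
        { .iov_base = c, .iov_len = sizeof c - 1 },
    };

    ssize_t n = writev(fd, iov, 3);   /* one system call, one contiguous burst */
    if (n < 0) { perror("writev"); return 1; }

    printf("wrote %zd bytes in a single writev() call\n", n);
    close(fd);
    return 0;
}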
In any event, the biggest advantages can be seen when the application side is designed to work with the performance characteristics of the hardware on which it is running. For instance, my LoonCAFE (still unreleased) tries to determine the best write strategy in order to optimize file system accesses – especially for the read-ahead caches.
Oh well, so long as purely generic solutions are employed to address specific problems, I will reign as king.
–The loon
Well, the mismatched performance characteristics have to be overcome somewhere – be it in the microcontroller embedded in an SSD (the FAT-device route) or in an OS driver (the custom-FS route).
I guess SanDisk is trying to make a bold move to establish a standard facilitating the second route on consumer PCs. Moving the logic out of the devices would make them cheaper to produce and bring in some additional cash from IP royalties.