With the AM5 platform from AMD on the horizon, five major motherboard manufacturers have announced their flagship motherboards with the X670E chipset. Some of them are having fun with this generation’s multi-faceted step into “five”: AM5, PCIe Gen 5.0, DDR5, 5nm process, boost clocks over 5GHz, you get the drift. But do you know what every single announced motherboard has fewer than five of? PCI Express (PCIe) slots.
Other than a GPU and the occasional WiFi card, I haven’t really had any need for my expansion slots in a long time. I just don’t know of anything useful. I doubt they’ll actually go away any time soon though.
I’d say storage would be a good third reason to use PCIe. A RAID card with multiple NVMe slots would be the fastest storage option available. Not sure how many PCs would need storage faster than PCIe Gen 5 x4. Additionally, faster or additional networking, considering how long gigabit has had its hold on the PC market. Still waiting on affordable switches though.
Heck, forget even NVMe, having a PCIe card with multiple SATA slots is also damn useful. Super large SSDs still aren’t cheap, but 1TB SSDs are, so being able to just throw a PCIe SATA card in a system and flop some 1TB SSDs in there for games and scratch video editing? It’s quite nice to have.
Correct. I’m using a PCIe expansion card with 8 SATA ports, and never think about storage limitations.
SATA and SAS will eventually go away though… the replacement will probably be a PLX-like card with NVMe plugs. Having separate protocols for drives at this point is dumb.
Seagate is supposed to start selling NVMe HDDs late this year for data center use, and that will trickle down.
Sure, many many years from now. Won’t happen anytime soon.
Hardware raid is basically dead. It’s all done much better in software using erasure coding. Even high end SANs don’t use hardware raid controllers. Wendell did a good video[1] on it recently. Might surprise you how unreliable those hardware raid cards actually are in ensuring you read back what you wrote.
[1] – https://www.youtube.com/watch?v=l55GfAwa8RI&t=16s
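To make the erasure coding idea concrete, here’s a minimal single-parity sketch in Python (my own toy, not anything ZFS or a SAN actually runs); real erasure coding generalizes this to multiple parity blocks plus checksums:

# Single-parity sketch: the parity block is the XOR of the data blocks,
# so any one lost block can be rebuilt from the survivors.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0 ...", b"disk1 ...", b"disk2 ..."]  # hypothetical same-size data blocks
parity = xor_blocks(data)

# Pretend disk1 died: rebuild its block from the surviving data plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]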
cmdrlinux,
That’s a bit one-sided. While hardware raid has some cons, so does software raid. Hardware raid is more robust against drive failures that would leave the OS unable to boot. Also, the system bandwidth needed to do raid in software is higher than in hardware. Write caching can be a huge boon to performance in many kinds of workloads, like databases, that otherwise have to block on writes constantly. Raid level 6 is much better in hardware because hardware can implement write-back semantics that software cannot. This isn’t really the fault of software raid; rather, it is a limitation due to the fact that software has to be prepared for a system reset/shutdown at any clock cycle, whereas hardware can be designed to shut down gracefully using its own battery/super-capacitor to save its state and do a graceful shutdown independently of the OS and host.
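To illustrate the blocking-on-writes point, here’s a toy Python timing sketch (mine, not a real database benchmark) comparing buffered writes against writes that wait for fsync; the fsync cost is exactly what a battery/supercap-backed write-back cache hides from the host:

# Toy comparison of buffered writes vs. writes that block on durability (fsync).
import os, time, tempfile

fd, path = tempfile.mkstemp()

def timed_writes(count, durable):
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, b"x" * 4096)
        if durable:
            os.fsync(fd)  # block until the OS says the data is on stable storage
    return time.perf_counter() - start

print("buffered:", timed_writes(200, durable=False))
print("fsync'd :", timed_writes(200, durable=True))
os.close(fd)
os.unlink(path)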
I’m not trying to overlook the cons, like hardware being a potential point of failure, but I don’t feel the assertion “Hardware raid is basically dead” as a blanket statement is justified. The truth is that there are pros and cons for each approach, the weights of which can vary for different people.
You’re comparing software vs hardware RAID, that’s not the comparison I’m making. I definitely agree that hardware RAID vs software is a totally different discussion. It is hardware RAID vs erasure coding or ZFS. I’m not arguing software RAID over hardware RAID. I would really recommend checking out the Level1Techs video, it was pretty surprising for me.
Hardware RAID can be really good for speed but it does not provide the consistency guarantees of something like ZFS. If you *really* care about your data, do not rely on hardware RAID. Or at least make sure you mitigate the issues with things like constant backups and accept some potential data loss in certain scenarios.
cmdrlinux,
I agree ZFS has some awesome features. Most hardware RAIDs (and software raids like mdraid) are static in nature, which is limiting. They have to use physical volumes of the same size and are relatively cumbersome to upgrade. Those are cons compared to being able to add and remove drives to the set willy-nilly and have the system automatically redistribute data while maintaining raid redundancy, which more sophisticated raid software can handle. It would even be possible to have some data protected by raid-1 and other data use raid-0 or raid-6 without having to define static partitions for them.
Both hardware raid and software raid have advantages over the other, and it makes me wish we could have both. Dynamic raid is not intrinsically limited to software, but the problem is that hardware raids tend to be closed/proprietary, such that we can’t do much more with them out of the box. But if we had the ability to program the hardware raid, we really could have the best of both worlds. The raid controller could use programmable descriptor tables to know where to load/save data to, including redundancy. New drives could be added and even the raid level could be changed, all while maintaining data integrity.
Innovation is always going to favor openness, but alas it’s rare for hardware to be open.
I disagree with this generalization. ZFS integrity checking is technically redundant. This isn’t a bad thing necessarily, but it’s not an oversight either. Enterprise drives do similar integrity checks. Both ZFS and H/W raid will catch errors during parity scans and the raid will correct them automatically. Cheaper consumer drives might not be as safe.
Everyone should be doing regular backups IMHO. I do them daily.
cmdrlinux,
Another topic I am very interested in regarding storage is bcache.
https://en.wikipedia.org/wiki/Bcache
There are a couple things I don’t like about the way it works, but placing a slower raid 6 array behind an incredibly fast cache always seemed appealing to me.
I was always intrigued by this “acard”, which is a RAM disk with battery backup and a save-to-flash feature for when the power stops.
https://www.ebay.com/itm/155139780468
Unfortunately it’s very dated now and doesn’t even support SATA3. Something like this, but modernized with newer RAM, faster IO, ECC RAM, and raid support, would be amazing for databases. You’d have nearly instantaneous transactions! Even if the database doesn’t entirely fit on it, something like this with bcache and a raid6 array would be awesome.
BTW, most NVMe drives do something very similar to this internally, using SLC cache to give a temporary boost over the performance of the main flash storage. But it’s not as fast as RAM is.
The death of PCIe is because only up to 2 of the slots can be PCIe x8… the minimum you would need for, say, a quad 10GbE card etc… or an SSD RAID card.
It would be a major step up in AMD’s game if the chipset actually fully wired all the x16 slots to at least PCIe 3.0 x16 instead of just PCIe 5.0 x4… if you stick an older x8 card in there, it will perform like hot garbage.
A dual 25GbE card is only $200 these days.
There are video capture accelerators that are still practical to use via PCIe.
What if you actually wanted to install a Thunderbolt card or any number of other IO expansions… you just can’t with only x4 lanes in your 3rd slot.
Wow, talk about clickbait title. LOL
Home PCs are moving towards ITX as most I/O is now based on the chipset. So basically GPUs are going to be the last add-on boards that use PCIe in that space.
Buuuut, the trend in enterprise/datacenter is the opposite as PCIe lanes are going up not down.
So PCIe is not dying anytime soon. GPUs, storage and fast networking are going to need as many PCI lanes and slots as they can get on the pro/enterprise side of the spectrum. Which is where margins are right now.
The alternative is not there.
Want 10 Gigabit Ethernet? PCIe is pretty much your only option (unless you have one of the very few ITX motherboards).
Want additional storage (custom NAS)? PCIe SAS cards would be a must
Want to do machine learning? A “workstation” motherboard, with lots of PCIe lanes, and 4 full sized GPUs
USB could be there. But it is unfortunately not scalable.
Have too many devices on your hub? The wireless keyboard will drop keys. The camera will flicker. And you’ll wonder why. (Actual problems I am wrestling with now).
Anyway, PCIe is here to stay. At least for the immediate future.
sukru,
I had a project a few years ago where I wanted to hook up several USB webcams. I came to the realization that it would not be viable with the camera equipment I had. I wrongly assumed that I could plug a large number of USB 2 webcams (480mbps) into a USB 3 hub (5gbps). In fact, one USB 2 webcam was the max I could plug into the USB 3 hub. I learned that USB 3 hubs physically have legacy USB 2 hubs inside them, with their own dedicated USB 2 wires to the host.
It’s not just cameras, but flash drives, printers, WiFi cards, etc. Unless they are USB 3 capable, they will all share the same USB 2 bandwidth on a USB 3 hub. This limitation only started with USB 3; the same limitation did not apply to USB 1.0 and 1.1 devices on a USB 2.0 hub. I was very disappointed to learn this, but the USB spec confirms it.
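If anyone wants to verify this on their own machine, here’s a quick sketch assuming Linux and the standard sysfs layout; USB 2 devices will report 480 Mb/s even when plugged into a USB 3 hub:

# List each USB device's negotiated link speed (in Mb/s) from sysfs.
# Devices stuck on a hub's internal USB 2 segment report 480; USB 3 links report 5000+.
from pathlib import Path

for dev in sorted(Path("/sys/bus/usb/devices").iterdir()):
    speed_file = dev / "speed"
    if not speed_file.exists():
        continue  # interface entries don't carry a speed attribute
    speed = speed_file.read_text().strip()
    product = dev / "product"
    name = product.read_text().strip() if product.exists() else dev.name
    print(f"{dev.name:12} {speed:>6} Mb/s  {name}")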
Alfman,
As bad as the situation with USB 2 webcams is, I think you gave me some hope.
I’ll check out USB3 webcams (if they exist). It might help solve my problem.
Thanks!
The thing about SATA and SAS is that they are effectively dead ends… even data centers are moving towards entirely NVMe. Seagate has even demoed an NVMe HDD… I suspect next gen boards, AM6 or whatever, won’t have SATA at all.
Also cost per GB is low enough on SSDs you can build a reasonable sized NAS out of SSDs.
Anyway, in the future we’ll see servers and desktops with only NVMe cages… and no other option. And I mean the near future; I give it about 2 years (the first NVMe HDDs are supposed to ship late this year).
Yes, SATA is losing the war. However, we are not there yet. At least in terms of price/capacity HDDs are still the best choice, by a significant margin.
And enterprise NVMe interfaces use the u.2 / 2.5″ SATA form factor. Not always, but usually (chiclet is also making inroads).
Anyway, storage was one example. For one need or another, the motherboards will continue to have PCIe slots. Maybe not on the desktop (my primary is currently a NUC), but definitely for workstations and servers.
sukru,
Yet it is losing. I think a big reason for that is that SATA 3 is old now and hasn’t been updated to keep up with storage developments. That’s a big shortcoming and means that high end users are forced to look elsewhere. But if we had gotten a faster SATA 4 standard, I actually think it would be extremely popular. It is easier to scale with SATA drives than NVMe ones. And with many cases having a separate compartment for drives, it also helps solve the problem of NVMe introducing more heat around the motherboard.
Alfman,
Enterprise NVMe actually uses more traditional form factors.
There is u.2, that is exactly the same as SATA/SAS 2.5 drives (and cabling): https://www.supermicro.com/en/products/system/storage/1u/ssg-110p-ntr10 (and most of them can be converted into hybrid slots)
And of course there are “chiclets”:
https://www.supermicro.com/en/products/system/1u/1029/ssg-1029p-nel32r.cfm
This is more like the traditional m.2 placed in a hot plug enclosure.
In any case, I don’t think any server manufacturer would want to have motherboard-mounted storage. (Yes, I am sure there is 1 exception out there.)
sukru,
Ok. I’ve only used m.2 in consumer devices where it’s on the motherboard or PCIe riser. I haven’t built servers in a while and none of my desktop computers support u.2, even the high end ones.
I have an m.2 PCIe card that came with a MB, but unfortunately that’s not good enough; I’d need to buy an additional PCIe card to use u.2. I kind of wish m.2 and u.2 were interchangeable, but they’re not. Oh well.
Yeah, NVMe drives directly on the MB wasn’t thought out. Luckily, Oculink is the cable of the future, and it’s starting to become more common.
@Alfman
NVMe would have probably been the direction everyone went anyway. There’s no reason to have an intermediary with flash the way there is with platters, where the magic necessary to keep them running needs to be abstracted.
It makes more sense to directly address Flash and reduce the number of chips involved in storage.
Flatland_Spider,
I accept the value of a more efficient host interface, although I’d push back on the notion that this has anything at all to do with the storage medium (platters versus flash chips). It doesn’t really matter whether a device is an HDD or flash; all the link is really doing is reading and writing sectors. It’s the controllers on the device that figure out how to execute the r/w instructions. The instructions themselves are medium agnostic.
The biggest reason SATA can’t keep up is just the antiquated specs (550MB/s and a measly 32-command queue), both of which could be upgraded but never were. They found it preferable to restart with a new interface rather than upgrade the old, which is fair enough I guess. I just wish consumer motherboards would support u.2 instead of just the m.2 that most computers have today.
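Rough numbers for the gap, back-of-envelope line rates only (my own arithmetic, ignoring protocol overhead beyond the encoding):

# Back-of-envelope interface ceilings, ignoring everything but the link encoding.
sata3      = 6e9 * 8 / 10 / 8          # 6 Gb/s link, 8b/10b encoding  -> ~600 MB/s
pcie4_lane = 16e9 * 128 / 130 / 8      # 16 GT/s/lane, 128b/130b       -> ~1.97 GB/s
pcie4_x4   = 4 * pcie4_lane            # a typical consumer NVMe link  -> ~7.9 GB/s

print(f"SATA 3 ceiling: {sata3 / 1e6:.0f} MB/s")
print(f"PCIe 4.0 x4:    {pcie4_x4 / 1e9:.2f} GB/s")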
Also while the lack of a controller is often cited as a plus for connecting directly to the CPU, it isn’t all roses when you want to scale up. The switches on common motherboards are all or nothing. Even if a drive is idle most of the time it wastes the PCI lanes allocated to it. On my motherboard as soon as I plug in two m.2 drives my GPU permanently reduces to 8 lanes. There are PCIe cards to expand NVMe, but on my motherboard they end up on the southbridge. It’s impossible to efficiently route the available bandwidth to the active drives that need it because it’s a static configuration. This is a con of not having an intelligent host controller.
On servers with more dedicated lanes and dynamic PCI lane switching hardware (ie pex chips), these cons can be mitigated. But on most consumer hardware NVMe just doesn’t scale effectively.
https://www.techpowerup.com/img/12-03-02/pex8747_product_brief_v1.0_20oct10.pdf
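On Linux you can see the static allocation for yourself; this little sketch (assuming the usual sysfs link attributes) prints what each PCIe device actually negotiated, which is handy for catching a GPU that silently dropped to x8:

# Print the negotiated link width and speed for every PCIe device (Linux sysfs).
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    width = dev / "current_link_width"
    speed = dev / "current_link_speed"
    if not (width.exists() and speed.exists()):
        continue  # not every PCI function exposes link attributes
    print(f"{dev.name}  x{width.read_text().strip():>2}  {speed.read_text().strip()}")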
I use SATA drives in RAID-10 configuration on consumer hardware, but with NVMe the bandwidth to each drive becomes asymmetric. One rebuttal might be that NVMe specs are still faster than the SATA array. But it’s still annoying that there is asymmetry making the full performance of the NVMe drives unattainable. And being on the southbridge means that my performance will be further reduced if I’m using network and/or USB peripherals like a webcam.
Oh well, I guess that’s why you buy servers instead of consumer hardware, haha.
NVMe to SATA converters are starting to pop up. One NVMe slot can be converted into 5 SATA ports, which is more than the OEMs were giving people.
Flatland_Spider,
I’m not sure I follow what you mean by this? Both of my computers have 6 SATA connections on the motherboard (GIGABYTE Z370 AORUS ULTRA & GIGABYTE Z590 AORUS PRO).
Alfman,
Something like this:
https://www.amazon.com/Adapter-Internal-Non-RAID-Desktop-Support/dp/B09N8MCDTZ
But it is actually an m.2 SATA card, not NVMe to SATA. (For some reason people assume NVMe = m.2, but they are two separate things: one is a physical connector, the other is a protocol.)
Anyway,
I would highly recommend steering away from those. Good SATA chips are rare, and a bad chip is a cause of a lot of headaches and potential data loss.
sukru,
It looks like a SATA expansion card, only it plugs into m.2 for PCIe lanes instead of a normal PCIe slot. I’ve seen riser cables that even let you plug regular PCIe cards into m.2 slots.
I understand that NVMe is a protocol, like IDE and AHCI are for SATA.
Alfman,
Sorry, I did not mean you personally. Even the product listing mentions NVMe.
@Alfman
I haven’t looked at MBs in a while, and usually not consumer boards. 4 always seemed to be the standard number. Are 5-6 SATA ports standard now?
@sukru
Yeah, something like that. I haven’t tried any, and that looks nicer than the ones I usually see.
This one is cool. I’ve had good luck with Orico’s enclosures, so it might be worth $40 to test it out with FreeBSD and ZFS.
https://www.newegg.com/p/17Z-0003-00027?Description=nvme%20to%20sata%20adapter&cm_re=nvme_to%20sata%20adapter-_-9SIA1DSG3C7403-_-Product
That’s normal. Buying cheap SATA expansion cards has been a crapshoot from the beginning. I’ve rolled the dice on some cheap (sub-$100) HighPoint cards for my desktop in the past. $300 for a proper HBA/RAID card was a little outside the budget and overkill for my crappy desktop. XD
It’s honestly kind of sad. Cheap HBA cards are useful when I need JBOD more than anything else.
Flatland_Spider,
I don’t really know how common it is. I only check specs when I buy them. Speaking of which, I have buyer’s remorse over the Gigabyte Z590. All the NVMe m.2 storage comes at the expense of PCIe lanes for the PCIe slots. My fault for not checking, but all of my old computers, including those from Gigabyte, had at least two slots with dedicated PCIe lanes, even if it required splitting 16 lanes between them. So I didn’t expect them to make the PCIe situation worse. Yet the GIGABYTE Z590 only has one dedicated PCIe slot, and 8 of those lanes can be re-routed to extra m.2 cards, but that’s it. All the remaining PCIe slots are branched off the south bridge with only 4 lanes (and only PCIe v3 at that). I don’t know if it’s just Gigabyte or what, but I was quite displeased. It’s easy to repurpose PCIe slots for NVMe, but hard to repurpose m.2 slots for PCIe (such as a 10GbE adapter).
Although just now I did a search and found this interesting adapter.
https://www.tomshardware.com/news/innodisk-m2-2280-10gbe-adapter
Interesting but I’d rather use regular cards. Live and learn.
I’ve had similar experiences with eSATA cards. I believe they were ASMedia based cards. I bought them specifically because they supported port multipliers (ie an enclosure with several drive bays). It worked with one disk at a time, but any time I tried accessing drives concurrently it was buggy as heck! I can’t say for sure whether it was bad hardware or just bad Linux drivers.
I had better luck using cheap SATA port extension cables to connect eSATA devices, but the problem was that none of the SATA ports on my motherboards supported port multipliers. So it was limited to one disk per enclosure.
Total clickbait title.
As it is, I am waiting longer for more AM5 boards with more PCIe slots. The initial limited set of boards has been pretty disappointing. 2 are not enough; I need something with 3 x8 slots. Although the last of the 3 only needs to be wired as x4 for a 10 Gbps Ethernet adapter.
Of course it’s dying now that we have Thunderbolt over USB-C
That made no sense.
Thunderbolt is external PCIe, and laptops are the dominant form factor.
The discussion is about desktops.
Quit moving the goalposts.
Thunderbolt is not a replacement, it is a complement. On top of that, it is proprietary (Intel owns the IP), expensive to implement, and brings its share of problems.
@Arawn, Thunderbolt is now a royalty-free standard.
larkin,
There’s a reason we don’t typically see USB-C being used inside the case. While things like eGPUs and USB storage exist and can be used in place of m.2 drives and PCIe x16 GPUs, it’s usually at the cost of performance. The upcoming 80Gbps USB 4 is slower than today’s PCIe 5.0 x16 standard, to say nothing of the upcoming PCIe 6 standard.
https://www.theverge.com/2022/1/12/22879732/pcie-6-0-final-specification-bandwidth-speeds
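Rough numbers behind that comparison (raw line rates, my own arithmetic; real-world throughput is lower for both):

# Raw line rates: 80 Gb/s USB4 vs. PCIe 5.0 x16 (32 GT/s per lane, 128b/130b encoding).
usb4       = 80e9 / 8                  # ~10 GB/s
pcie5_lane = 32e9 * 128 / 130 / 8      # ~3.9 GB/s per lane
pcie5_x16  = 16 * pcie5_lane           # ~63 GB/s

print(f"USB4 80 Gb/s: {usb4 / 1e9:.1f} GB/s")
print(f"PCIe 5.0 x16: {pcie5_x16 / 1e9:.1f} GB/s")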
The performance advantages will continue to favor PCIe. Granted, maybe those speeds are overkill, but we shouldn’t overlook another benefit: PCIe cards are great to have for aftermarket upgrades, including the very USB-C ports that we’re talking about! My motherboard only has one high-speed USB-C port, but I can use a PCIe board to add more if I want to. This is why such slots exist IMHO. I for one appreciate this flexibility.
Another reason is the reduced reliability.
A good use is TV Tuners and HDMI-in cards (optimally combined with an HDCP stripper like HD Fury).
Also, high-quality audio cards.
Expansion cards are how my PC stays relevant. Need USB-C? Add a card.