At CES 2024, ASUS unveiled a new standard for motherboards, graphics cards, and cases. Called BTF (short for “Back To the Future”), it offers much cleaner cable management by moving power connectors to the back of the motherboard. More importantly, it fully ditches the ill-fated 12VHPWR plug in favor of a much tidier (and probably safer) 600W PCIe connector.
ASUS claims computers with BTF components are easier to assemble since all plugs and connectors are located at the back side of the motherboard tray without other components obstructing access to power, SATA, USB, IO, and other connectors. Therefore, “you won’t have to reach as far into the depth of your chassis to plug things in.” BTF should also make cable management much more elegant, resulting in a tidy, showcase-ready build.
Taras Buria at NeoWin
The interior of PCs effectively hasn’t changed since the ’80s, and it feels like it, too. Many of the connectors and plugs are unwieldy, in terrible places, hard to connect/disconnect, difficult to route, and so on. A lot more needs to be done than putting the connectors on the back of the motherboard and integrating GPU power delivery into the PCIe slot, but even baby steps like these are downright revolutionary in the conservative, change-averse, anti-user world of PC building.
I don’t say this very often, but basically, look at the last Intel Mac Pro. That’s what a modern PC should look and work like inside.
Now if they could only replace the ATX power cable connectors on the motherboards with edge connectors as well.
Thom, what you don’t understand is that designing a standard isn’t hard; what’s hard is getting cutthroat competitors to agree to collectively adopt it. The ATX platform as we know it achieved that (somehow), so let’s be grateful for what we have.
The above was meant to be a root-level comment btw.
kurkosdr,
+1000!
This is true in so many contexts. It’s not terribly hard to design something better, but getting it adopted between competitors without the power of a monopoly can be fruitless.
Obviously, the link to xkcd should follow:
https://xkcd.com/927/
“I don’t say this very often, but basically, look at the last Intel Mac Pro. That’s what a modern PC should look and work like inside.”
You mean using two PCIe power cables instead of the Nvidia garbage?
He’s talking about MPX cards… The reason nobody does it is that cables are much cheaper than beefing up PCBs to do the job.
Where things got dumb is when we started using bundles of cables to load-share current. That’s a massively bad idea, a potential fire hazard, and a point of failure in the voltage supply. If a GPU needs 400W, you put a dang 8 AWG superflex cable on it with Molex Mini-Fit Sr. connectors; it’s literally a solved problem.
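A quick back-of-the-envelope check of that claim, in Python. The ampacities are approximate chassis-wiring figures from common AWG charts (like the one linked further down this thread), not datasheet values, so treat them as illustrative only:

```python
# How much current does a 400 W GPU pull from the 12 V rail, and which
# single copper conductors could carry that on their own?
GPU_POWER_W = 400       # example draw from the comment above
RAIL_VOLTAGE_V = 12.0   # PCIe power is delivered at 12 V

current_a = GPU_POWER_W / RAIL_VOLTAGE_V
print(f"Required current: {current_a:.1f} A")  # ~33.3 A

# Approximate chassis-wiring ampacity per common AWG tables (illustrative):
ampacity_a = {"8 AWG": 73, "10 AWG": 55, "12 AWG": 41, "16 AWG": 22}
for gauge, rating in ampacity_a.items():
    verdict = "carries it alone" if rating >= current_a else "needs paralleling"
    print(f"{gauge}: ~{rating} A -> {verdict}")
```

A single 8 AWG run clears the ~33 A requirement with a wide margin, while thinner conductors of the kind used in PCIe cable bundles only get there by paralleling strands, which is exactly the load-sharing failure mode being criticized.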
cb88,
Maybe next-gen graphics cards can feature MC4 connectors, haha.
You’re right, though: safety-wise the current approach is foolish, and it’s already created fire hazards IRL.
Thom Holwerda,
Yeah. It was nice kit. Obviously it was way overpriced given the mediocre specs, but price shock aside, I like that form factor for expandability, and it was clean. It sucks that the new ARM Mac Pros are so limited, though, because it wastes the potential of that form factor. Their new Mac Pro is essentially a Mac mini in a huge case; totally pointless.
The current Mac Pro is a joke. In the WWDC presentation, they wanted us to believe the point of the Mac Pro is PCIe expandability, ignoring the fact that their Mac Studio can also take PCIe cards via a Thunderbolt enclosure (just make sure you use a powered one): https://appleinsider.com/articles/23/06/30/how-to-add-pcie-cards-to-just-about-any-mac
The real point of the Mac Pro used to be powerful discrete GPUs. A single AMD Radeon Pro W6900X still runs circles around the GPU in the M2 Ultra* despite being two years older, and you can fit up to two W6900X cards in the old Intel-based Mac Pro. It makes you wonder if the reason Apple waited so long to announce the Mac Pro is so that Moore’s Law would make the comparison with the GPUs available in the old Intel-based Mac Pros seem less pathetic.
* https://browser.geekbench.com/metal-benchmarks
kurkosdr,
I am in full agreement. Because of Apple’s go-it-alone approach, Apple users have been missing out on GPU advancements for the past several years. To an extent, Apple can mitigate its GPU shortcomings with accelerators for specific applications. But personally, as someone who’s extremely interested in GPGPU, I consider the x86 Mac Pro the last Apple computer that was useful for heavy-duty GPGPU applications.
Im(notso)ho, one is displaying a somewhat skewed set of priorities when one complains about cable management under an article that casually mentions a power consumption of 600W for video cards.
True… but also, the pursuit of increased graphical capabilities in gaming is _hard._ Let’s not just assume that two motivated competitors trading blows for over two decades are _completely_ incompetent. After all, when Intel threw literally _billions_ of dollars (and likely well over 5,000 engineer-years) at the problem to join the party, they still entered firmly in third place in the performance-per-watt standings.
Kuraegomon,
I agree. They suck up loads of power, but they deliver tons of horsepower. Is it necessary for a gaming machine? Probably not, unless you want ray tracing, which IMHO is neat but hardly a requirement for gaming. Most of the time you don’t notice it, and most competitive players don’t use it. For GPGPU compute, the extra horsepower is often helpful, though, because compute jobs finish much quicker.
I’ll be honest though, I hated that the GPGPU market was driven/consumed by bitcoin farms.
“With a VRM featuring 20+1+2 power stages, each rated to handle up to 90 amps, the ROG Maximus Z790 Hero BTF is ready for one of Intel’s top-shelf CPUs.”
Is that a backhanded compliment towards Intel?
600W is a lot to push through a PCB. No doubt it can be done, but surely this just adds extra resistance via additional connectors and traces where a cable would be more efficient.
Don’t get me wrong, I think there’s a lot that needs fixing in the physical architecture of PCs, but I don’t think this is quite the way to do it. Obviously you have the issue of getting others to adopt it, and then the added issue of incompatibility with those that don’t.
I really think that taking a step back from the “single board microcomputer” architecture and returning to a backplane-based system would make a great deal of sense. A passive backplane providing PCIe lanes and power, with modular boards providing all the other gubbins of a computer, would save both money and resources when building a computer.
It’s pretty crazy that we throw away universal components like audio, USB, and Ethernet chipsets every time we upgrade a motherboard, just because it’s all soldered onto the board. If all that was swapped out was a CPU/memory board, it could save a lot of cost and allow much more modular designs. And if the CPU/memory board acts as a “bus master”, it seems pretty logical that you could upgrade PCIe versions as well without ditching the entire backplane (though passive backplanes would be cheap enough anyway).
With a backplane-based design, power could be routed through a connection like this, solving cable management in one fell swoop as well. A backplane-based system could even be entirely cable-less.
The123king,
I don’t think it matters so much where the metal is (on a PCB or in a cable), but clearly high currents need more metal. At 600-1000 W draws, that’s ~50-83 A at 12 V. That’s a huge amount of current, yet the 8 AWG, 6 AWG, or even 4 AWG wires rated to handle such currents are hardly practical.
http://wiresizecalculator.net/wiresizechart.htm
Huge cables are unwieldy, and splitting the load across a number of smaller cables is a fire hazard waiting to happen. It would break existing standards, but maybe it’s time to move to 24 V power supplies and do voltage regulation from 24 V instead of 12 V (halving the current). Another idea is to go all-out with bus bars like those you’d find in a breaker panel; they could be made of aluminum instead of the more expensive copper.
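A minimal sketch of the arithmetic behind that suggestion (the wattages are the figures quoted above; the rest is just Ohm’s law):

```python
# Trade-off described above: at a fixed power draw, doubling the supply
# voltage halves the current the conductors must carry, and resistive
# loss (I^2 * R) in the same cable drops to roughly a quarter.
for power_w in (600, 1000):
    for voltage_v in (12.0, 24.0):
        current_a = power_w / voltage_v
        print(f"{power_w:5d} W @ {voltage_v:4.0f} V -> {current_a:5.1f} A")
```

At 24 V, the worst case falls from ~83 A to ~42 A, which is back in the range that ordinary connectors and wire gauges can handle.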
My thoughts as well.
I like that idea, but it faces the chicken-and-egg problem: without critical mass, it goes nowhere. And I’m sure you’re aware we’re generally moving in the opposite direction; manufacturers are taking an interest in making computers less serviceable.
I agree, and if we’re ever going to take up more sustainable practices, we need to look into things like this. But saving consumers money is antithetical to modern capitalism. Profits are king, and we can’t have consumers depriving corporations of future sales with easily reusable components.