For decades, my perception of USB was that of a technology both simple and reliable. You plug it in and it works. The first two iterations freed PCs from a badly fragmented connector world made of RJ-45 (Ethernet), DA-15 (Joystick), DE-9 (Serial), DIN (PS/2), and DB-25 (Parallel).
When USB 3.0 came out, USB-IF had the good idea to color code its ports. All you had to do was “check for blue” in the chain to get your 5 Gbit/s. Even better, type-C connectors were introduced around the same time. Not only was the world a faster place, now we could plug things in on the first try instead of the third.
Up to that point in time, it was a good tech stack. Yet in 2013 things started to become confusing.
USB and Thunderbolt have become incredibly complex, and it feels like a lot of this could’ve been avoided with a more sensible naming scheme and clearer, stricter specifications and labeling for cables.
It’s a tough gig for Devs to wade through this mess, and it’s made worse because most end users are oblivious to the meaning of the USB port colours.
We have NUCs in a lot of workstations, and all have the black, blue, and yellow ports to delineate functionality. Yet as regularly as clockwork I’ll find someone’s rechargeable mouse or keyboard is dead because they have the USB keyboard/mouse dongle in the yellow standby-power port and they plugged the charge cable into the blue USB 3.0 port overnight, or they complain about slow external drive performance and they have a USB 3.0 pocket drive plugged into USB 2.0. Yes, of course I know, they dismissed the warning message the first time they connected, but that’s end users.
Nobody ever overtly and publicly explained to the end users what black, blue and yellow ports mean, which I think in the end was one of the driving influences behind the industry connector change, that and increasing power demand!
Of course some say make all the ports USB 3+ and standby, but then that’s a hardware resource issue. Imagine what you’d do to the energy budget of the nearest skyscraper, and the base cost of the device. For me the biggest long-term issue is power; most of the new standards already have all the data functionality most people and applications need.
cpcf,
My graphics card also has a USB-C port with no distinct color, only a metallic shroud, and I have no clue what it supports.
My motherboard has blue, yellow, red type A and a black USB-C.
Here’s how they are labeled:
USB Type A, Blue = USB 3.0
USB Type A, Yellow = USB3.0 w/standby power
USB Type A, Red = USB3.1
USB Type C, Black = USB3.1
However I see a lot of information online with answers that completely disagree:
https://tech-fairy.com/what-is-the-meaning-of-the-different-usb-port-colors-blue-teal-blue-yellow-red-black/
https://www.quora.com/What-is-the-red-colored-USB-port-of-a-motherboard
https://www.quora.com/What-do-the-different-colors-in-a-USB-drive-mean
No wonder people are confused. I think it’s a bit hopeless, haha.
Ideally our plug-and-play standards would have a standardized method for firmware to not only enumerate all physical ports, fans, connections, etc., but also provide simple diagrams. Standardized OS utilities could then display the configuration to the user regardless of the system they’re running on. This is something individual hardware vendors may be doing in their own proprietary ways, but without a universal open standard it’s not nearly as reliable and useful for the masses.
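In the meantime, the closest you can get is scraping together what the OS already knows about each port. A rough sketch of the idea with pyusb (this assumes libusb is installed, and not every backend reports speed or port paths):

```python
import usb.core
import usb.util

SPEED = {usb.util.SPEED_LOW: "1.5 Mbit/s", usb.util.SPEED_FULL: "12 Mbit/s",
         usb.util.SPEED_HIGH: "480 Mbit/s", usb.util.SPEED_SUPER: "5 Gbit/s"}

# Walk every device the OS can see and print where it sits and how fast it is.
for dev in usb.core.find(find_all=True):
    port_path = ".".join(str(p) for p in (dev.port_numbers or ()))
    print(f"bus {dev.bus} port {port_path or '?'}: "
          f"{dev.idVendor:04x}:{dev.idProduct:04x} "
          f"negotiated at {SPEED.get(dev.speed, 'unknown speed')}")
```

It tells you what each device negotiated, but of course it can’t tell you what the physical port on the case is actually capable of, which is exactly the gap a real standard would fill.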
We have a “Multimedia PC Standard” which is over 20 years old, and everyone knew which audio port connector was which color back in the day:
https://en.wikipedia.org/wiki/PC_System_Design_Guide#Color-coding_scheme_for_connectors_and_ports
(green: front / stereo, pink: microphone, etc).
Nowadays, even those come in all black on many motherboards. We have completely forgotten how to sanely label ports on our devices.
Yes, there are color codes for USB, too. But many laptops and motherboards bother with them even less than with the audio ones:
https://juicedsystems.com/blogs/news/know-your-usb-a-practical-guide-to-the-universal-serial-bus#6
Me? I invested in a good type-C dock, and everything runs fine (most of the time). But that means spending an extra $100+ to have functionality that should have been standard in the first place.
On top of all the cabling concerns, there have been gripes around thunderbolt’s security for a while now.
Notice in the diagram “Required Intel VT-d based DMA protection”.
https://fabiensanglard.net/nousb/tb4_comp.jpg
The thunderbolt standard has the DMA controller existing at the peripheral rather than at the host, which means that unlike most peripheral buses, the DMA controllers in thunderbolt are not trustworthy. Thunderbolt ports have been successfully compromised multiple times over the years. Unfortunately this design vulnerability was never directly fixed, only mitigated using VT-d, and there are still some major caveats. While VT-d does prevent thunderbolt peripherals from accessing the full system bus, it can also pose barriers to efficient zero copy DMA. If the OS is forced to resort to bounce buffers to isolate untrusted devices, it’s going to bog the system down. It just seems like thunderbolt’s design has left us with a subpar compromise between performance and security. I also take issue with the thunderbolt standard implicitly requiring architecture specific mitigations such as VT-d. Intel may be ok with this because Intel could see an advantage in tying thunderbolt’s security to technologies it owns like VT-d. But it means that now thunderbolt’s security issues have to be solved with architecture specific mitigations or just be insecure, which is frustrating for everyone.
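For what it’s worth, on Linux you can at least check whether those mitigations are actually in effect on a given machine. A rough sketch reading the sysfs attributes the kernel exposes (the thunderbolt paths only exist on machines that have Thunderbolt, and iommu_dma_protection needs a reasonably recent kernel):

```python
from pathlib import Path

# Are IOMMU groups present? They only show up when VT-d/AMD-Vi is actually on.
groups = list(Path("/sys/kernel/iommu_groups").glob("*"))
print(f"IOMMU groups: {len(groups)}" + ("" if groups else " (IOMMU off or unsupported?)"))

# Per-domain Thunderbolt security level and the kernel's DMA-protection flag.
for domain in sorted(Path("/sys/bus/thunderbolt/devices").glob("domain*")):
    level = (domain / "security").read_text().strip()
    flag = domain / "iommu_dma_protection"
    dma = flag.read_text().strip() if flag.exists() else "n/a"
    print(f"{domain.name}: security={level} iommu_dma_protection={dma}")
```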
What could go wrong with an external PCIe port? XD
Thunderbolt is the latest in a line of laptop expansion ports which solve the same problem, and something like it would exist in some form or fashion regardless.
PCMCIA –> ExpressCard –> Thunderbolt
https://en.wikipedia.org/wiki/Personal_Computer_Memory_Card_International_Association
https://en.wikipedia.org/wiki/ExpressCard
Sure, but it’s an external device that could disappear at any time. Buffering the device seems pretty sane.
I also take issue with the thunderbolt standard implicitly requiring architecture specific mitigations such as VT-d.
It also works with AMD’s IOMMU implementation, I think.
It just happens the info comes from Intel who is pumping their trademarks. They probably call x86_64 Intel 64 instead of AMD64 too. XD
Flatland_Spider,
What happens when you spontaneously unplug a thunderbolt GPU/network card/SSD/etc while it’s in use? I don’t know how well this works actually.
If you isolate the cards via VT-d, you kill a lot of the performance that DMA was supposed to give you. It brings up something I’m not clear on, given that thunderbolt can connect external PCI slots with arbitrary cards, how exactly is VT-d used in this case? It would seem to me that some drivers wouldn’t be programmed & tested against VT-d because they were really written for “normal” PCI configurations and thunderbolt security wasn’t even a consideration. Does thunderbolt actually disable VT-d isolation in some cases? Do the drivers disable direct DMA operations? I’m really unclear about this but it seems like disabling VT-d isolation opens up Pandora’s box of security exploits (*) while forcing it to be enabled could break drivers that otherwise work.
* To clarify, it could be reasonable to disable VT-d on PCI cards you trust anyways. But how exactly would the OS differentiate between trustworthy peripherals and non-trustworthy ones? If the OS whitelists certain device classes considered to be trustworthy, then this would be trivial to exploit just by impersonating a whitelisted class. Just because a peripheral takes one physical shape (an untrustworthy flash drive) doesn’t mean it cannot impersonate another (NIC with zero copy DMA) in order to compromise the host.
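For what it’s worth, the way Linux ended up handling this isn’t by class at all: the bolt daemon whitelists individual devices by their unique ID, and nothing attaches to the PCIe side until it has been explicitly authorized. A rough sketch of peeking at that state straight from sysfs (Linux only, and only on machines that actually have Thunderbolt):

```python
from pathlib import Path

# List connected Thunderbolt devices and whether they've been authorized to
# attach to the PCIe fabric (0 = blocked, 1 = authorized, 2 = authorized+key).
for dev in sorted(Path("/sys/bus/thunderbolt/devices").glob("*-*")):
    auth = dev / "authorized"
    if not auth.exists():
        continue  # skip retimers and other entries without the attribute
    name = (dev / "device_name").read_text().strip() if (dev / "device_name").exists() else "?"
    uuid = (dev / "unique_id").read_text().strip() if (dev / "unique_id").exists() else "?"
    print(f"{dev.name}: {name} uuid={uuid} authorized={auth.read_text().strip()}")
```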
If anyone knows of a good reference for these details, I’d be interested in reading up on it.
@Alfman
Go and ask on Bruce Schneier’s blog in his usual Friday “squid” topic. The security issues of bad design and bubble-up attacks are pretty much a feature of computing as things stand. You won’t find more than academic papers or best-practice guidelines on the topic, nor a manual on it. It’s just too involved a topic.
As for “what happens when…” I have no idea because various components of the system whether hardware or software may not be geared or able to be geared to dealing with exceptions like a plug pull. Pretty much the entire system hasn’t been designed from first principles from the ground up as a clean sheet so problems and hack-arounds and exploits tend to be a semi-permanent feature. As for whitelisting first take a bucket of sand… Then if you have everything running in a “high assurance” way there’s always some control freak or paranoid who goes out of their way not to use an ease of use feature because they have developed a religion about “false reassurance” on top of carpet chewing NIH.
The current state of USB naming is laughably ridiculous and the same goes for cabling. Somewhere along the line, the fact that regular people who aren’t particularly tech savvy are the majority of end users got lost. You shouldn’t need a chart to decipher what hardware you have or what you need to buy. Whoever is responsible for taking something that should’ve been simple and turning it into the mess it is deserves to win the Good Job Dumbass Award.
friedchicken,
I’ve made my own bad assumptions when it comes to USB.
There was one time I wanted to hook up several USB webcams, and given that USB2 has 480 Mbps and USB3 has 5 Gbps I assumed that a USB3 hub could hook up all the webcams. Well, that’s when I learned that the USB3 bus has no backwards compatibility with USB2, which really surprised me. A “USB 3 hub” actually has two electrically separate buses: one for USB3 and another for USB1&2. This means that no matter how you try to hook up devices using USB 3 hubs and USB 3 ports, all USB 2 peripherals are physically wired to a dedicated USB 2 controller in the host. Consequently it’s impossible to get more than 480 Mbps shared for all USB 2 devices combined on a USB3 hub. I would have needed all my webcams to be USB 3, which was cost prohibitive at the time.
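To put numbers on why that bit me, a quick back-of-the-envelope sketch (assuming uncompressed YUY2, which is what a lot of cheap webcams fall back to; MJPEG cameras compress and fare a bit better):

```python
# Back-of-the-envelope: an uncompressed YUY2 frame is width * height * 2 bytes,
# so one camera's bitrate is that times the frame rate (times 8 for bits).
def camera_mbps(width, height, fps, bytes_per_pixel=2):
    return width * height * bytes_per_pixel * fps * 8 / 1e6

for width, height in [(640, 480), (1280, 720)]:
    mbps = camera_mbps(width, height, fps=30)
    print(f"{width}x{height} @ 30fps: ~{mbps:.0f} Mbit/s per camera, "
          f"so at most {int(480 // mbps)} fit under the 480 Mbit/s ceiling "
          f"(fewer in practice, protocol overhead eats a good chunk)")
```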
Aside from causing me this problem, I find the design to be wasteful. I thought the reason USB3 cables needed more wires was because USB3 actually made use of them. Instead, every USB 3 cable is forced to run USB2 wires alongside the USB3 wires. One set will go unused nearly 100% of the time.
I’m not sure if there is a standard making better use of these wires today or if it continues to be wasteful?
https://pinoutguide.com/Slots/usb_3_0_connector_pinout.shtml
https://www.allaboutcircuits.com/technical-articles/introduction-to-usb-type-c-which-pins-power-delivery-data-transfer/
Alfman,
You are in luck:
https://en.wikipedia.org/wiki/USB-C#Audio_Adapter_Accessory_Mode_2
In audio mode, D+ and D- become analog outputs.
And if I read the table correctly:
https://en.wikipedia.org/wiki/USB-C#Alternate_Mode_partner_specifications
A proper “USB 3.1 Type-C to Type-C full-featured cable” (a passive one, not an active one) will actually support everything type-C or Thunderbolt could offer. But I would go for a Thunderbolt-marked one to be extra sure.
Interesting.
On top of that, USB3 also causes radio interference on the 2.4GHz band.
https://www.intel.com/content/www/us/en/products/docs/io/universal-serial-bus/usb3-frequency-interference-paper.html
Yup, so add shielding for that to the ever growing list of things to look for when buying a USB3* device.
I have two laptop docks. One has SATA and the other has USB 3.0. Sadly my laptop is USB 2.0 and the dock lacks USB 3.0 control circuitry, so the USB 3.0 port for my laptop is USB 2.0 passthrough only. I don’t have much need for USB 3.0 but bought an ExpressCard with two USB 3.0 ports just in case. (For some reason, “by design”, the audio jack passthrough on my dock is non-functional on my laptop model, so I had to use a 90-degree adapter to stop the cable being knocked. I now route audio via USB on the dock to the desktop display and from there out to speakers. That cleans things up a bit, so it is a USB success story.)
The thing about dedicated controllers and controller bandwidth limits catches a lot of people out. For those with a desktop and spare PCI slot and a need for lots of webcams or other devices a USB card may provide more bandwidth.
HollyB,
Yes, I actually considered PCI expansion cards but the reason for using a USB 3 hub was actually dual-purpose: connect many cameras and do so in a separate room. The USB extension cables I tested did work, but I would have needed a lot more and I did not want to have to run so many of them. I also considered buying a separate Raspberry Pi for every camera as a workaround for these USB limitations, but it was just getting too clunky for my taste. Anyways I eventually ruled out USB altogether and resorted to ethernet cameras, which are more expensive but have none of these problems.
@Alfman
I have projects in mind myself so I’m kind of interested to know what the best way is to manage this. There are products which allow camera tunneling via a USB to ethernet adapter but they seem expensive and I’m not sure how well they work.
HollyB,
Do you mean the USB extension adapters that extend through RJ45 cables like this?
http://www.amazon.com/USB-Extender-Extension-Adapter-Cat5e/dp/B07C2MCJFY
I have tried these, but if you go too long they’re NOT USB compliant because there are strict timings that limit the length of the cables. To be clear, some USB devices worked perfectly fine this way, but as I recall around 20ft my USB cameras would start exhibiting corrupt frames (and this was just for a single camera). In order to reach the range I needed I had to get active extenders (there’s a lump in the cable every so often). These DID work for me, however there was considerable voltage drop, so I hooked up a powered USB hub to rectify it. This all would have worked great for me if not for the aforementioned USB limitations when hooking up multiple USB 2 devices to a USB 3 port, which was never resolved.
@alfman
After looking through the options, including extra-long cables, a Raspberry Pi feeding a server via ethernet, or similar, I began looking for USB-to-ethernet camera adapters. Something like this takes the camera’s USB output then actively tunnels the data over ethernet to a server.
https://www.gxccd.com/art?id=422&lang=409
@Alfman
I had another look. Assuming something like a Raspberry Pi takes camera input via USB and pipes it via ethernet to a server, a WebRTC server might be able to process multiple video streams over ethernet. Kurento Media Server is open source: https://doc-kurento.readthedocs.io/en/latest/user/intro.html
The other options including USB camera to wifi I looked at are also hideously expensive for what they are. Anywhere from £200 upwards which seems excessive to me.
Alfman,
A Raspberry Pi + PoE Hat + Camera module + case would cost about $100. That is comparable to a good USB camera, not including all the extension work you’d need:
Pi 4 board: $35
https://www.canakit.com/raspberry-pi-4-2gb.html
PoE Hat: $21
https://www.canakit.com/raspberry-pi-poe-hat.html
Camera module (v2): $30
https://www.canakit.com/raspberry-pi-camera-v2-8mp.html
Cases: $10 + $13
https://www.canakit.com/raspberry-pi-4-case-white.html
https://www.canakit.com/raspberry-pi-camera-case-enclosure.html
Total: $109. Lower if you use a 3D printer for the cases.
Or go all in and get the higher end camera modules:
https://www.canakit.com/raspberry-pi-hq-camera-kit.html
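Software-wise the Pi end is the easy part. Here’s a rough sketch of the “camera box” side just to show the idea (OpenCV for capture, plain MJPEG over HTTP so anything on the LAN can open it; a real deployment would want RTSP or WebRTC with proper encoding):

```python
# Minimal "camera box" sketch for the Pi: grab frames from the first camera
# with OpenCV and serve them as MJPEG over HTTP, so a browser, VLC, or OBS
# can open http://<pi-address>:8080/ and watch.
import cv2
from http.server import BaseHTTPRequestHandler, HTTPServer

cam = cv2.VideoCapture(0)

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        try:
            while True:
                ok, frame = cam.read()
                if not ok:
                    break
                _, jpeg = cv2.imencode(".jpg", frame)
                data = jpeg.tobytes()
                self.wfile.write(b"--frame\r\nContent-Type: image/jpeg\r\n"
                                 b"Content-Length: " + str(len(data)).encode() + b"\r\n\r\n")
                self.wfile.write(data + b"\r\n")
        except (BrokenPipeError, ConnectionResetError):
            pass  # viewer went away

HTTPServer(("0.0.0.0", 8080), MJPEGHandler).serve_forever()
```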
@HollyB
Some WebRTC servers which might be helpful:
* Janus WebRTC Server (https://github.com/meetecho/janus-gateway)
* Pion (https://pion.ly/)
* Project Lightspeed (https://github.com/GRVYDEV/Project-Lightspeed)
* Galene Video Conferencing Server (https://galene.org/)
I wonder if I could feed my web history into a private search engine and if that would make finding things easier. Hmmm… :\
@Flatland Spider
I’ve moved on from tech so this is getting a bit beyond me. I would only need to organise multiple cameras if I was doing something live, and OBS would probably do for what I need. For anything non-live I would find a way to sound-sync if need be or do multi-shoots/B-roll stuff.
I am surprised the solutions aren’t consumerised and are so difficult and expensive.
@HollyB
I thought we were talking about networking webcams? On that thought RTSP (https://github.com/aler9/rtsp-simple-server)
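If you want to sanity-check a stream like that from another machine, something along these lines works (the URL and path are made up; 8554 is rtsp-simple-server’s default RTSP port, and OpenCV pulls the stream in through its FFmpeg backend):

```python
import cv2

# Hypothetical stream published to rtsp-simple-server on the camera box.
cap = cv2.VideoCapture("rtsp://camerabox.local:8554/cam1")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("cam1", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```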
Anyway, OBS is really nice, provided the computer is beefy enough to handle the encoding. A hardware-based alternative that works well is the ATEM Mini. https://www.blackmagicdesign.com/products/atemmini
No one saw the need since requiring people to be in person wasn’t an obstacle. 8+ hours in airports and airplanes wasn’t a big ask.
Remote was pretty niche until this year, and video tech was really neglected. There were a few companies, like Zoom, who were pushing video tech, but it was mostly an afterthought.
@Flatland Spider
I have no idea what we are talking about anymore as I’m totally lost on this. The whole problem of capture and receive and process is a bit of a mess and more expensive than it needs to be, even if fairly limited in scope to a handful of cams in different rooms.
I mostly use OBS for realtime exposure and colour grading.
I was involved with one project some years ago now which never took off, but one of the ideas was packing a “virtual studio” in a box (including cameras and lights and background and props) so people could be professionally interviewed at a distance and the scene matted together so it looked like it was all done at a single studio location. (The other person involved with this project was a professional broadcaster who used to host a mainstream television discussion show.) I think there is, or at least was, one company which began offering this type of solution to corporations and media a couple of years ago.
It is hilariously bad, and it’s too bad Scott Adams is phoning it in these days, among the other things which make him problematic. This would be prime fodder. XD
For everyone who hasn’t had to get a secret decoder ring or tried to find a real USB3.2 Gen 2xN expansion card:
USB 3.2 Gen 1×1 (5Gbps; USB-A, USB-C, microUSB) –> USB 3.0, USB 3.1 Gen 1
USB 3.2 Gen 1×2 (10Gbps; USB-C) –> No predecessor
USB 3.2 Gen 2×1 (10Gbps; USB-A, USB-C, microUSB) –> USB 3.1 Gen 2
USB 3.2 Gen 2×2 (20Gbps; USB-C) –> No predecessor
https://www.kingston.com/unitedstates/us/usb-flash-drives/usb-30
USB4 seems to be the place where everything makes sense again, but they had a consistent naming convention and trashed it with USB3. Who knows?
The removal of RJ-45 connectors is a bit different from the other ones, and also a bit more recent.
While the other connectors were actually replaced with USB connectors, connecting to wired networks is still done using RJ-45 connectors; it’s just that some hardware doesn’t have a wired network adapter built in anymore. Wired network connections have largely been replaced with wireless connections in many cases, and that’s not because of USB.
(and yes, it’s possible to connect a wired network adapter using USB, but then RJ-45 is still needed to connect to the network)
Onno,
Yeah, the RJ-45 ethernet connections are clearly still very useful and practically everyone’s ISP broadband service will terminate to an ethernet port/switch in addition to wifi. Wifi is great for convenience and good enough for a lot of people but if you want to add more access points, NAS, security cameras, desktops, etc, such things can quickly saturate wifi and it’s usually preferable to wire them up when possible. Even with laptops, I personally can’t do without ethernet ports because plugging into LAN equipment is a regular part of my job. I absolutely hate when manufacturers think that dongles are good enough.
Token ring connectors may have been a better example, haha. Also I had computers where the serial ports actually used DB-25 connectors (inverted from the parallel port). While these got replaced with DB-9, most external modems kept the DB-25 connector and we’d use an adapter or cable with different connectors on it.
The parallel port, while not particularly efficient, was quite versatile for directly connecting a standard PC to external electronic circuits. I had a PIC programmer that functioned this way. I wouldn’t want to go back to such bulky connectors, but these days you can’t hook up electronic circuits directly anymore. The easiest/cheapest way I’ve found is to get an Arduino which includes a USB-serial chip. Which is pretty nice, but the drivers aren’t always plug and play.
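Once the serial driver does bind, talking to the board from the PC side is at least painless. A tiny sketch with pyserial (the port name, baud rate, and the “read” command are placeholders for whatever your board and firmware actually use):

```python
import serial  # pyserial; the Arduino's USB-serial chip usually shows up as /dev/ttyUSB0 or /dev/ttyACM0 on Linux

# Port name, baud rate, and the command below are assumptions; match your setup.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
    port.write(b"read\n")      # whatever command the firmware expects
    reply = port.readline()    # read one line back from the microcontroller
    print(reply.decode(errors="replace").strip())
```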
This is an old topic but it’s interesting to have a review of where we are at. So after all this progress we end up with a mess of a standard and Intel using its monopoly to sell a proprietary standard which enables Intel to flog more of its own chipsets? Well, that went well.
Here’s a question. Would people feel better about Thunderbolt if it had been developed by Intel and Dell instead of Intel and Apple or if it had been an AMD technology?
Personally, I’ll just stick with saying decent open standards matter. Giving monopolists with shady business practices a kick is a bonus.
If something is a good idea there’s no point being partisan as long as the ethical situation is sound. It doesn’t mean you have to like them or agree with them or make their life easy in other ways. If people cannot cooperate and the market fails this is where government and regulators step in. Ditto the UN and human rights courts etcetera with global politics.
Thunderbolt4 is supposed to be the basis for USB4. Is that better or worse?
Flatland_Spider,
My concerns aren’t about who made it so much as what I consider to be design flaws. This is nothing new and has been going on for a very long time, but here’s an example.
https://www.macobserver.com/news/usb-c-thunderbolt-vulnerability-revealed/
Unfortunately this compromise between performance and security is baked into thunderbolt at this point. Having the peripheral perform DMA access is fundamentally broken. Bear in mind this design was chosen before VT-d was common in many desktops. Even if it’s not “malicious”, a bug in a thunderbolt peripheral could spew data all over system RAM. What were they thinking? VT-d can only tame it by segregating the thunderbolt devices from the rest of the system. If this is necessary to keep it secure, then so be it, but this is inherently less optimal than if thunderbolt used a trustworthy DMA controller in the host that did not need VT-d in order to protect the host.
I think thunderbolt’s engineers could have done a better job had they answered the question of whether they wanted to be a PCI bus extender or a peripheral protocol. By conflating these two they’ve made a mess of peripheral security. And it’s not even ideal for an external PCI bus standard. If that’s what they were going for, they should have owned that and developed it into a full-speed external PCI bus. I know this is against the “one cable to rule them all” philosophy of USB-C, but IMHO it kind of sucks that things like eGPUs end up compromising on performance just to share the same cable as any other USB peripheral.
Anyways I’m repeating myself now, sorry
The big problem is that ExpressCard was eventually deprecated just when it began to get good, and superseded by Thunderbolt.
One thing people miss is that some laptops like mine which can be docked have a big fat socket on the base of the case which exposes the bus. This is very useful if you want to mod something. For an earlier model of laptop Lenovo actually sold a dock which contained a PCI-E x16 riser socket for a full-length graphics card. In theory there is nothing stopping anyone modifying a spare dock to do this. It just crossed my mind I have never read an academic paper or monograph on exploits via docking ports. It’s the one big fat and completely unguarded slot on a laptop while everyone is busy fussing with USB and Thunderbolt (and to a lesser extent ExpressCard) specifications and switching them off in BIOS or filling the holes with glue. Imagine the fun you could have building God knows what into a plug ripped from a dock or into the case of a laptop dock. I’m guessing it would be a fairly easy social engineering task to deploy an exploited dock too, especially if with a nod and a wink you gave them a Kensington lock to secure it to the desk. Ooh, you have to watch out for those thieves and hackers you know. Can’t be too careful, wot wot.
http://www.thinkwiki.org/wiki/ThinkPad_Advanced_Dock
HollyB,
I’m not really familiar with it. To be honest though I don’t find proprietary solutions interesting. I don’t find a dock compelling unless it’s standard and I can actually swap laptops and still use the dock. I don’t expect all laptops to have a dock, but ideally those that do ought to be compatible.
I don’t know if there’s a way to do it, but it might make sense to disable the expansion port until the user logs in to enable it. It would compromise a bit of convenience for more security, but it should be an option for security-conscious users, especially away from home.
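Something close to this does exist on Linux, as far as I can tell: with the Thunderbolt security level set to “user”, devices stay unauthorized until userspace approves them, which is what boltctl does after login. The raw mechanism is just a sysfs write (needs root; the device name here is only an example):

```python
from pathlib import Path

TB = Path("/sys/bus/thunderbolt/devices")

def authorize(device: str) -> None:
    """Attach a specific, already-connected Thunderbolt device to the PCIe
    fabric by flipping its sysfs 'authorized' attribute (0 = blocked)."""
    (TB / device / "authorized").write_text("1")

# Example only: device names look like "0-1"; list TB to see what's connected.
authorize("0-1")
```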
The problem with thunderbolt as opposed to a dedicated PCI expansion port has to do with typical use cases. In normal use, you would not connect peripherals like thumb drives and cameras directly to a PCI bus extender interface. It would raise suspicion and standard peripherals wouldn’t even have a PCI connection interface in the first place. But thunderbolt changes these norms, and now ordinary peripherals like cameras and thumb drives with standard USB-C ports do in fact have a path to attack the PCI bus via thunderbolt. This puts users at risk of connecting peripherals they don’t necessarily own to their PCI bus in the course of their daily work.
Under an “evil maid attack” adversaries have physical access without you seeing, and this means they can open screws, get to the mini-PCI slot, install bugs, etc. In short, if a skilled adversary gets possession of your hardware, you can be compromised. But if you can pull this off by plugging in innocent-looking peripherals, your system could be compromised even while under your direct supervision, without you noticing anything amiss. I’m going to call it the “evil magician attack”, haha.
@alfman
I’m all for standard docks too as well as docks which exploit the bus to add useful features. I found Lenovo quite good for supporting a range of models with their docks but like you say there isn’t a general standard. I picked my docks up used and in as new condition for about £15-20 off Ebay so not complaining. Others would have more need to be cautious but that’s another ballgame.
Whitelisting things which interface with an exposed bus solves a lot of problems although there is then the issue of cloning. Apple have been OTT about components but there may be a workable protocol in this area which covers everything from components to devices which interface with the bus to more benign stuff. Varying degrees of assurance exist depending on how tight you are with your supply chain but it should stop opportunistic attacks.
I remember that dock. I think a friend of mine bought one. I’m not sure he really used it, but he might have had one.
One could argue the proprietary nature of the dock sockets increased the complexity of the attack and limited its re-usability, but ultimately they had the same security problems as Thunderbolt.
I don’t miss those docks. Thunderbolt docks are more reusable.
Mod the docks and put them on Ebay.
@Flatland Spider
I’d like the option of a dock with a built in PCI-E slot just to keep things tidy. According to Thinkwiki the dock expansion port was deprecated since Thunderbolt was introduced.
I may already have bought one. lol
@HollyB
USB3 docks which required drivers replaced the dock port. There are/were DisplayLink-based ones and they didn’t work that well. (https://www.displaylink.com/) I had a very expensive Dell version which I didn’t particularly like.
Some Thunderbolt docks have PCIe slots on them, like the Sonnet 750ex.
https://www.sonnettech.com/product/egpu-breakaway-box/overview.html
The ecosystem is finally starting to catch up.
I’ve got some questionable gear floating around too. XD
@Flatland Spider
I was joking about iffy supply chains. My docks aren’t modded. Bog standard from a reseller of corporate cast offs from what I can tell.
The first eGPU boxes with PCI-E slots for graphics cards I looked at were Thunderbolt docks because that was what the search engines threw up first. They seem to be adding more ports and beefing up the specifications now. They are all overkill for my needs.
MiniPCI Express and the successor M.2 are all exposing the bus, and then Thunderbolt came along. Mixing the PCI bus and DisplayPort isn’t the first thing which springs to mind. I can kind of rationalise it, but the whole thing of cables for buses, displays, and networking is a bit of a nightmare. I guess I’m ticked off nobody did an Expresscard to Thunderbolt port, but then the Thunderbolt eGPU boxes are overkill for my needs. If I did build my own box I have the Expresscard to PCI-E adapter and can always cobble together a replacement shielded cable if I really want to out of a Thunderbolt cable, as it’s purpose-built for carrying PCI-E signals?
That’s kind of the thing. Security takes cycles and reduces performance because performance shortcuts aren’t necessarily the most secure way to do things. OpenBSD is security focused, but it’s not as fast, or as feature rich, when compared to other OSes with looser security requirements.
Certain classes of devices don’t work that well without being on the system bus, and there was a need to be able to extend laptops the way desktops are extended?
There were predecessors to Thunderbolt with the same problems, but they didn’t get nearly the same amount of hate. It just happens that Thunderbolt is closely associated with Apple while PCMCIA and ExpressCard are more associated with the x86 clones (Dell, HP, etc.).
Flatland_Spider,
Well, I’m not against bus extensions, but it really should be a dedicated port that’s not confused with standard peripherals. Obviously peripherals should never have access to system resources. With each new thunderbolt standard I really do hope they’ll make it secure, but they never do, instead opting for architecture specific mitigations. So at this point I have pretty much zero faith that they will, and I expect they’ll just keep passing the buck to others for their own insecure design. Oh well, I didn’t build it and there’s nothing I can do about it. On the plus side though, thunderbolt could one day be used by owners to jailbreak vendor restrictions. Jailbreaking is one of the few areas where weak security is a bonus :-/
Indeed. Any PCI-like bus is going to have these security risks, but you generally don’t find yourself swapping PCMCIA/ExpressCard/etc devices with other users as you do with media.
When you plug in a PCMCIA card, the expectation is that it can (and does) connect directly to your computer bus. But when you plug in a thunderbolt camera/printer/storage media/projector/etc the expectation is that the computer accesses the peripheral and not the other way around. Thunderbolt ports completely subvert the user’s expectations of what peripherals can do. I would argue this was always a bad design for something that was going to be used to connect external devices to a host computer and that this could have been better engineered up front. If it had been I wouldn’t feel the need to object to it now, haha. I wish they had addressed this during the migration from firewire to thunderbolt, but they didn’t and now that so much money has been sunk into this thunderbolt standard and hardware designed around peripherals writing into host memory, it’s unlikely to be fixed at a hardware level.
I can’t find it anymore because all of the posts about the naming are now “Why is it so terrible, what’s wrong with them…”. But the reason behind the crazy versioning has to do with the standard and how hardware companies that licensed it were behaving. Basically, under the sensible naming scheme they would buy the USB 2.0 standard and never buy the USB 2.1 standard, so their devices would always be 2.0 and never benefit from the changes made in the newer revision. So the idea with 3.1 is that a licensee just buys 3.1 and they can see the old 3.0 as well as the new, better version and decide which one to use, because they are in the same standard document. The logic is that manufacturers would then use the better version and be compelled to market it in such a way that the benefits were clear. You know, market it with terms other than the standard names that the USB-IF had advised.
This is to say, very dumb and hurts consumers badly.
From that perspective, it makes total sense. I would think there would be some sort of deprecation process in the event of future revisions.
For example, an integrator buys the USB2 standard, and they get access to 2.N as part of the deal. Once 2.N+1 is released, 2.N is only certified for 6 months after 2.N+1 is released, or something. Kind of like how Linux distros only support releases for certain time periods.
From a consumer perspective, integrators are cheap and lazy. They will do the bare minimum, or some random combination without strict rules.
I noticed from reviewing stuff on ExpressCard that it had been rolled up by the USB Implementers Forum (USB-IF) and deprecated, then superseded by Thunderbolt. I’m not actually sure what PCI mode my laptop’s ExpressCard slot supports and am too lazy to download and run a utility to discover this. It’s either 1.6 or 3.2 gigabits. Most likely 1.6 Gbit/s. It’s really irritating that my laptop was at the tail end of one standard going out with new standards like Thunderbolt 2 coming in.
If I have my eGPU plugged in I suppose an exploit could dodge the OS and meddle with the now quite old graphics card driver and have a field day with the PCI bus, as it has zero protection. That said, I don’t normally have it plugged in, and if I did, my software and internet habits are quite conservative. Other than this it’s generally in a controlled environment and not left unattended in public-facing environments, so it’s not a huge problem in the real world. Hands up, which opportunistic hacker carries an ExpressCard exploit on their person? Pretty much none.
I’ve also got one of those data filters (with the data lines visibly missing through a window) if I ever need to USB charge anything off a public socket.
If it makes you feel better, the PC manufacturers, minus Apple, banked on USB to be the peripheral connection of the future. Even today, most laptops only have USB available, and it’s the high-end workstation laptops which, selectively, get Thunderbolt.
ExpressCard was pretty niche, and it disappeared pretty quickly. I have never seen an ExpressCard in real life. I did almost buy an ExpressCard serial adapter for my Haswell-era T420, but that’s the closest I’ve gotten.
I was lucky buying my Thinkpad T520 when it was in excellent condition and under £200 used from a corporate cast-off reseller, so I’m not complaining too much. The chipset supports 16GB, and I have a 1080p 15″ display, which cost me about £55 new, in the cupboard waiting to be fitted, and I bought a spare keyboard just in case for £35. I’m really pleased I got it. I doubt I’ll be able to get a Thinkpad P series for this amount as and when they become available at the end of the leasing periods. By then who knows what standard we’ll be on?