There’s a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.
With that solved, let’s get to the real root cause of the problems here:
The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware – we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in “RAID” mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let’s put pressure on Intel to provide support for their hardware.
As someone who tried to move his retina MacBook Pro to Linux only a few weeks ago – I can attest to Intel’s absolutely terrible Linux drivers and power management. My retina MacBook Pro has an Intel Iris 6100 graphics chip, and the driver for it is so incredibly bad that even playing a simple video makes the laptop so hot I was too scared to leave it running. Playing that same video in OS X or Windows doesn’t even spin up the fans, and the laptop stays entirely cool. Battery life in Linux measured 2-3 hours, whereas on OS X or Windows I easily get 8-10 hours.
Except for this: as is usually the case, I’d bet engineering is not the stumbling block.
Let’s just say I have never seen, and never thought I would see, a fake RAID driver be required by default to access a single disk. How on Earth this helps anything like power management is utterly beyond me.
I would also imagine if you took the disk out of the laptop you wouldn’t be able to read it unless you were using the same interface and drivers.
“…I would also imagine if you took the disk out of the laptop you wouldn’t be able to read it unless you were using the same interface and drivers…”
Agreed, if the file system is virtualized or scrambled.
Retired now, but friends and family are still disturbing these aged bones with the recent, cozy China ‘itch’ for dominion over the always-fucked cyberspace.
If only governments had not adopted this military toy. Too late, anyway. Just commit and start cleaning up the $h!t.
The whole ‘stack’ concept is wrong: making one thing vitally depend on a previous one, and that one on another, and so on, and so on.
I don’t know how interwoven the ‘stack’ is with the Von Neumann concept. [Though I suspect not at all.]
At this point this environment is thoroughly fucked. And yes, I also had an account at Yahoo.
I was thinking of trying a Linux distro on my 2013 MBP, but maybe I won’t even bother now.
Thom do you have any drivers issues with Windows? The moment I install Realtek’s audio drivers I hit BSODs, and Apple don’t appear to have bothered updating their bootcamp driver suite in a while now.
“…Apple don’t appear to have bothered updating their bootcamp driver suite in a while now…”
This is the sign of inevitable switch!
You won’t lose anything from trying. 2013 is old enough that power management should be slightly better. My 2013 HP ultrabook runs fairly cool and quiet.
I did spend some time tweaking it a couple of years ago, though.
It seems Lenovo’s signatures hand 100% control and ‘trust’ of the whole boot stack to Microsoft. Which I would see as good, if it were sold as such.
Why? Those units will be stronger at remote diagnostics.
Judging by how difficult Windows 10 makes remote admin tasks, somehow I doubt it.
But not for Microsoft themselves, at least on 8 and 10.
Low-end Intel graphics drivers on Linux are quite decent on energy, compared to their Windows cousins.
Intel allocates its resources to developing a custom proprietary interface that it doesn’t really bother to support in any reasonable manner: no Linux drivers, and the default Windows driver does things wrong. Sounds like a bad idea that was not dismissed until it was too late. Fine.
But in what kind of universe is the proper workaround to disable the standard interface and force that custom, proprietary, vendor-specific interface onto innocent users? And more to the point, how did someone get the idea that removing the firmware setting needed to unbreak hard disk access is a reasonable thing to do?
Sure, this story says a lot about Intel’s decision making, but Lenovo’s response is far more idiotic. And on top of it there were reports of Lenovo staff bullying people out of reporting the issue on the Lenovo forums…
Sometimes it’s better to assume that PR is, by design, a race to the bottom.
Could it just be a slight hint of anti-competitive behavior? Of course, ‘resource allocation’ is more than enough of an excuse to dismiss any regulatory attempt.
The general rule is that if something takes effort, a manufacturer won’t do it. This took effort, so there is certainly a reason behind it.
This is a perfectly valid reason for defaulting to Intel’s proprietary implementation. It is not a valid reason for removing a knob from the firmware, though. It almost looks like Lenovo did something stupid to AHCI mode on those laptops and is trying to hide the problem instead of working around it in software or recalling the hardware. I won’t be amazed if people flashing custom firmware with the knob re-enabled find some interesting error patterns in disk access, or maybe even fried disks.
If you make it possible to use something, you need to test it, create fixes when people notice problems, etc. Those take resources. It’s easier to disable/not build/whatever they do with that function.
Well, considering that the number of people that will be running Linux on this laptop is probably significantly less than the number of users that will accidentally break their systems or unknowingly kill their battery life by disabling the RAID setup, disabling the standard AHCI mode in favor of RAID is perfectly valid.
Drumhellar,
Most users don’t even know how to get into the bios, especially these days with fast boot. Even if AHCI performs worse, they could simply add a warning in the description of the field to make it clear.
The rest of the industry doesn’t have an AHCI problem, and even Lenovo’s firmware itself doesn’t have an AHCI problem when it is flashed with a hacked version, so the need for proprietary RAID seems implausible to me. If blocking Linux was unintentional as they claim, then the right thing to do is to remove the restriction. If they dig in their heels and continue to block alternatives even after they know about the problem, then it’s no longer plausible for them to say it’s unintentional.
https://forums.lenovo.com/t5/Linux-Discussion/Installing-Ubuntu-16-0…
Lenovo can do what it wants, but if I were a customer and discovered this restriction, I’d be demanding my money back.
Power Management:
http://www.webupd8.org/2013/04/improve-power-usage-battery-life-in….
Fan Speed Control:
https://github.com/MikaelStrom/macfanctld
…and if you have a discrete Nvidia GPU as well:
http://bumblebee-project.org/
Note: These may or may not work depending on your MacBook Pro model, but running Linux on a MacBook without at least TLP is guaranteed not to work well.
You probably won’t get 8-10 hours, but 6-8 is doable if you configure everything right.
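If you want to sanity-check whether those tweaks actually help, here is a minimal sketch, assuming a typical laptop that exposes energy_now/power_now under /sys/class/power_supply/BAT0 (some battery drivers use charge_now/current_now instead, so treat the paths as assumptions), that estimates remaining runtime from sysfs:
```python
#!/usr/bin/env python3
# Rough battery-runtime estimate from sysfs; file names vary by driver,
# so this is a sketch, not a portable tool.
from pathlib import Path

BAT = Path("/sys/class/power_supply/BAT0")  # adjust if your battery is BAT1

def read_int(name):
    return int((BAT / name).read_text().strip())

energy_now = read_int("energy_now")   # microwatt-hours remaining
power_now = read_int("power_now")     # microwatts currently being drawn

if power_now > 0:
    hours = energy_now / power_now
    print(f"Drawing {power_now / 1_000_000:.1f} W, "
          f"roughly {hours:.1f} h of battery left")
else:
    print("Not discharging (or the driver reports 0 W)")
```
Run it before and after enabling TLP and the like, under the same workload, and you get a crude but honest comparison.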
I haven’t tried a recent Lenovo, but I can say that some Asus motherboards do not support anything but Windows and Linux for GPT boot, even with secure boot disabled. I am unable to boot two different BSDs off of GPT with a recent AMD AM3+ Asus motherboard, but MBR works.
Every computer and motherboard sold should be able to boot an OS off GPT with an unsigned boot loader as long as secure boot is off.
As for power management, Macs are weird sometimes about fan control. Apple has their own fan management stuff and they don’t use stock UEFI found in PCs, but EFI based off an older standard. It’s not an apples to apples comparison if you forgive the bad pun.
Haven’t tried anything else [adding FreeDOS] in recent years. Has the x86 lock-down already reached the ‘Great UEFI Black-Out’?
It’s not really good enough that “several free operating systems” are signed by Microsoft’s key. IMHO an owner should be entitled to install ANY free operating system, even if it is not signed by MS/UEFI keys, and even one they’ve compiled themselves!
Anyways, it seems that neither secure boot nor microsoft are responsible for this particular problem. Apparently Lenovo added a single jump to the firmware to force RAID to remain on. The comments mention that a user was able to reverse engineer the firmware, remove the jump, and boot just fine without raid.
Garrett, the author, wants to shame Intel into providing a RAID driver, and I agree entirely with that. However, I’m at a loss to understand Lenovo’s motivation for going to the trouble of making the tiny change that caused these problems. This is entering pure conspiracy-theory territory, but was this breakage an executive decision rather than an accident? We’ll have to see if Lenovo is forthcoming with a firmware fix. If not, then it seems likely this is the outcome they want.
No, and that will gradually be coming as critical mass is reached. Until then drivers are a good way to lock out other operating systems, or make sure people can’t install older Windows versions or upgrade to new ones.
MBPs are special snowflakes, and the best thing to do is to run MacOS on them. I’ve been thinking about getting a MacBook just for the battery life and a hobbled Unix environment that isn’t the really nerfed crap on Windows.
MBPs are known for having terrible Linux support, and they use EFI rather than UEFI (https://support.apple.com/en-us/HT201518). They aren’t 100% a standard x86 laptop, so there are lots of things that don’t work as well on an MBP versus a Dell Precision.
On the Linux side, the power management is behind MacOS and Windows. It’s gotten better over the years, as my T420 will attest to, and people are paying attention to it more.
Also, setting tuned to desktop mode might help.
I’m just a newbie in this regard but my personal experiences seem to support these findings.
I’ve put Linux on several various brand “Windows Laptops” and I find a modern Distro gives me great performance and battery life is good. I make most of my PCs dual-boot these days, and I extend the life of old hardware with Linux.
By comparison I’ve had nothing but trouble trying to do the same on Apple hardware, even when using distros that are supposed to be “Mac centric”. Nearly always there is some third-party driver issue or a specific hardware combo required to fix the situation.
I’d be inclined to put the hard questions to Apple.
Hi,
From Apple’s point of view, they’re selling “hardware + software” as one product (and are not selling “generic hardware” as one product and an “OS” as a separate product). For this reason they have no reason to care about any other OS whatsoever (in the same way that Toyota has no reason to care whether you can put a Volvo engine in their cars).
If an alternative OS want to support Apple’s systems, then hardware support issues (drivers, etc) are purely the alternative OS developer’s problem; not Apple’s.
– Brendan
Hi,
The problems here are (in order of importance):
* There is no standard “RAID controller” hardware interface specification that OSs can support. This means that (unlike AHCI where there is a standard/specification) every different RAID controller needs yet another pointlessly different driver that’s been specifically written for it.
* Apparently, AHCI has power management limitations that should be fixed (?). There should be no reason to require RAID just to get power management to work, and power management should work for all AHCI controllers without special drivers.
* All OSs struggle to provide drivers (especially when there’s no standardised hardware interface specification and you need a different driver for every device, or when something is “very new” and there hasn’t been time for people to write drivers yet). This doesn’t just affect Linux, and will never change.
– Brendan
The RAID thing isn’t really an issue on Linux though. You have tools like LVM and MD, and these days BTRFS and ZFS, which provide the same functionality with much greater flexibility than a hardware implementation, and quite often more efficiently and reliably than a hardware implementation could. With limited exceptions for really big storage arrays, I know nobody who uses Linux who doesn’t run whatever RAID controllers they may have in pass-through mode these days, and even when they don’t, they’re usually running those smaller RAID arrays as components in a LVM or MD based RAID set.
As far as the power management, the issue is just as much the drives as the controllers, and even more, is an issue with ATA in general.
ahferroin7,
Can you explain what the problem is?
I won’t dispute that MD and LVM based arrays often use more processor time, but I will comment that:
1. In quite a few systems I’ve seen, this extra processor time actually results in less power consumption than using the hardware RAID controller (this includes a number of server systems with ‘good’ RAID HBA’s we have where I work).
2. I quite often these days see LVM based RAID arrays (which uses MD code internally for RAID these days too) outperform a majority of low end and quite a few mid-range hardware RAID controllers.
The other thing to consider though (and this is part of the reason we almost exclusively use our HBA’s in pass through mode at work) is that even on Windows, it’s a whole lot easier to get SMART data and other status info out of a disk that you can talk to directly.
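As an illustration of that point, here is a small sketch, assuming smartmontools is installed and the disk is directly visible as /dev/sda (a hypothetical device node; behind a hardware RAID volume this typically fails or needs vendor-specific options), that pulls the overall SMART health verdict:
```python
#!/usr/bin/env python3
# Query overall SMART health for a directly attached disk.
# Requires smartmontools; run as root.
import subprocess
import sys

DISK = "/dev/sda"  # hypothetical device node; adjust for your system

result = subprocess.run(
    ["smartctl", "-H", DISK],
    capture_output=True, text=True
)
print(result.stdout)
# smartctl encodes problems in its exit-status bits; non-zero means
# either a command/communication issue or SMART trouble.
sys.exit(result.returncode)
```
The same call against a drive hidden behind a proprietary RAID volume is exactly where things get painful, which is the point being made above.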
For the power management stuff:
With SCSI drives the issue is that not all of them even support power management. Most of the FC based ones do, but you can still find SAS drives without too much difficulty that have poor power management support.
With SATA drives, things get more complicated. There are roughly three relevant standards, and most drives only support a subset of them. Many good drives these days support APM-based power management, but those that do don’t always implement it correctly. Most desktop drives don’t support Link State Power Management (the standard that lets the controller and drive power down the link when idle), but most laptop ones do, and it’s hit or miss with enterprise drives. Some support AAM, which isn’t actually power management but can be used for it in a hackish way, though those that do usually don’t support using it at the same time as APM.
In a more abstract sense, part of the issue is with Linux, but it’s also an issue with other OS’es, just to a lesser degree. Linux doesn’t handle nested power management very well on x86. If any of your drives can’t enter a low power state, it blocks the controller from doing so on Linux. In some special cases, it’s possible to have the drive remain active but idle while the controller or even just the PCIe link enters a low power state, but Linux doesn’t do this.
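For the link power management piece specifically, Linux exposes a per-host SATA policy knob in sysfs. Here is a small sketch, assuming an AHCI controller whose ports show up under /sys/class/scsi_host/host*, that reports and optionally lowers that policy:
```python
#!/usr/bin/env python3
# Show (and optionally set) the SATA link power management policy.
# Typical values: max_performance, medium_power, min_power
# (newer kernels also accept med_power_with_dipm). Run as root to write.
import glob
import sys

set_to = sys.argv[1] if len(sys.argv) > 1 else None  # e.g. "min_power"

pattern = "/sys/class/scsi_host/host*/link_power_management_policy"
for path in sorted(glob.glob(pattern)):
    with open(path) as f:
        print(path, "=", f.read().strip())
    if set_to:
        with open(path, "w") as f:
            f.write(set_to)
        print("  ->", set_to)
```
Whether the drive and controller actually honour the lower-power states is exactly the compatibility lottery described above.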
There’s another advantage too: data recovery. Most hardware RAID controllers I’ve seen use a proprietary RAID scheme. To recover data from a downed machine by moving the drives to another machine is not possible unless the RAID schemes match, which usually means the controller has to be from the same manufacturer at the least.
With software RAID be it md, MacOS, raidctl (*BSD) or whatever, you can easily put the array back together regardless of the underlying hardware. All that’s necessary is to use the same software system on the destination machine and the array can be put back together right away. To me, this safety is worth the slight CPU cost. It guarantees that you’ll be free from vendor lock.
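As a concrete illustration of that portability (a sketch, assuming mdadm is installed and the member disks are attached to the replacement machine), reassembly usually amounts to scanning for superblocks and assembling whatever is found:
```python
#!/usr/bin/env python3
# Re-assemble md arrays on a replacement machine by scanning member
# superblocks. Run as root.
import subprocess

# Show which arrays the attached disks claim to belong to.
subprocess.run(["mdadm", "--examine", "--scan"], check=False)

# Try to assemble everything that was found. If the metadata ties the
# array to another homehost (see the follow-up comments below),
# '--run' or '--update=homehost' may be needed; that part is host-specific.
subprocess.run(["mdadm", "--assemble", "--scan"], check=False)
```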
darknexus,
Yes, in theory. But I was caught off guard when I tried to move an MD array on an ESATA enclosure from one computer to another. It turns out that version 2 of the MD disk format ties the array to the host, which is incompatible with my use case.
There’s a second reason I think this MD “feature” is very shortsighted: on my system, the hostname is stored on the array itself, but mdadm won’t load the arrays correctly prior to the hostname getting set, a catch-22. I ended up having to patch mdadm source code to get the expected behavior. I haven’t checked if they’ve fixed the design.
I’m not actually referring to backups. I’m referring to getting that system back up and running, as is. Sometimes that’s what needs to be done, asap.
Yes, you should have backups. Full backups, in multiple locations. Sometimes though, the focus is “GET THIS THING UP AND RUNNING NOW!!!!!”, which is what I was referring to.
darknexus,
I know what you were saying, but that scenario implies a hardware failure of some kind, which is going to take time to fix. If you planned for it, your backup system could be up and running even before you manage to fix the primary system.
In my case I can start using the backup file server in place of the normal file server just by turning it on and changing the IP because it’s already set up. My file server actually did die once (after an SSD upgrade of all things), and that’s exactly what I did until I was able to fix the primary server. You dislike HW-raid, ok…fine. But for some people raid controller failure is not as big a problem as you make it out to be. It’s never even happened to me, and even if it did I would handle it the exact same way.
Conceivably the backup server could fail at the same time; then it would take a long while to get all that offsite data back quickly. But keep in mind that my backup server doesn’t use HW RAID, so in this scenario HW RAID wouldn’t be at fault anyway.
Please understand I’m not trying to take an elitist attitude here. I know it’s not for everyone; I’m just sharing a different point of view.
Unless the software you have to deal with is, shall we say, full of licensing restrictions. Then it’s not a matter of simple backup servers unless you want to pay for another license. This way, a backup server can be standing by and all I have to do is move the drives in, fire up live media to double-check the md array, and boot up. Five minutes, at significantly reduced cost without violating any licensing. I should add that I’m not the one who necessarily made these decisions in the past regarding software and licensing. Sometimes one just has to make the best of it.
Fortunately I don’t have to deal with that situation anymore (that startup company is gone now) but I remember it all too well. Now I deal with Windows Servers… hmm, I’m not sure I got the better of that particular trade.
darknexus,
Haha, this is quite tangential to the technical merits I was thinking about, but sure I guess if costs are an issue then you may have to forgo some redundancy both with hardware as well as software.
If you aren’t allowed to back up your software (commercial or otherwise), then that’s a far bigger problem than the issues we’re talking about. Granted, it’s been a very long time since I’ve read any software licenses, but between the licenses and copyright law’s provision for backups, it would be alarming if good backup practices were actually prohibited. If true, that’s a very compelling reason for businesses not to use commercial software.
Well it was ten years ago. In any case, sometimes there just isn’t a “free” alternative. Ease off on the agenda a bit. In any case all of that isn’t my problem now, and we’ve got proper redundancy. It’s just too bad it’s Windows Server. That os always gives me the impression of a stack of blocks waiting to topple at the slightest disturbance, coupled with obscure error codes that even Microsoft forgets to document half the time. That’s another topic though.
darknexus,
Back when I was a windows administrator, I didn’t have too much trouble with windows itself, and to this day I still like windows domain management better than having many heterogeneous unix boxes on a network where UID/GID management is a headache especially in conjunction with windows shares. I’d ditch posix conventions in a heartbeat but so much of the unix ecosystem depends on it.
Oh god did I hate MS Exchange, though. I definitely made good use of those backups because it became corrupted several times. Then you needed to roll the database forward using transaction logs or something like that. It seems like a reasonable approach for an actual disaster, but needing it to recover from its own instability was pathetic.
ahferroin7,
I’ve definitely encountered PM issues with APM/ACPI (sometimes even with windows), but these are incompatibilities in the host, not the devices. I’d be very surprised to see a compliant/non-faulty SATA device be incompatible with linux. If you have any evidence to the contrary, of course I’ll look at it but far more likely is that the incompatibility lies between linux and the host controller.
In any case, even if this is all true, I still don’t see an argument for Lenovo disabling standard AHCI – a compliant AHCI controller should work equally well regardless of the vendor, which seems to be confirmed by the user who hacked it back in.
I don’t understand why they might completely disable it, but I can understand why they may not want to use it. AHCI is not particularly efficient, and it’s usually not MP safe (which hurts efficiency even more on most modern systems). If you take the same set of SATA disks and test performance on an AHCI controller and a good SAS HBA, they will almost always get better performance on the SAS HBA. In the same manner, Intel’s ‘RAID’ mode for their SATA controllers is usually more efficient if you can use it.
ahferroin,
I had trouble finding benchmarks for this even though there seem to be people posing the question ( http://serverfault.com/questions/297759/anyone-seen-a-meaningful-sa… ). Maybe I’ll try to benchmark it myself. However this still does not sound like a good reason to disable AHCI.
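For what it’s worth, a crude throughput check needs nothing but Python. This is a sketch, not a real benchmark: it measures sequential buffered reads from whatever device or large file you name (run it against a raw device as root, and drop the page cache first or the numbers are meaningless):
```python
#!/usr/bin/env python3
# Crude sequential-read throughput test. No O_DIRECT, no queue-depth
# control; just enough for a rough comparison of the same disk behind
# two different controllers.
import sys
import time

path = sys.argv[1]            # e.g. /dev/sdb or a multi-GB file
chunk = 4 * 1024 * 1024       # 4 MiB reads
total_limit = 2 * 1024**3     # stop after 2 GiB

read = 0
start = time.monotonic()
with open(path, "rb", buffering=0) as f:
    while read < total_limit:
        data = f.read(chunk)
        if not data:
            break
        read += len(data)
elapsed = time.monotonic() - start
print(f"{read / 1024**2:.0f} MiB in {elapsed:.2f} s "
      f"= {read / 1024**2 / elapsed:.0f} MiB/s")
```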
I don’t have any solid numbers myself since I don’t really have the means to properly benchmark this, but we see measurably improved throughput on the file servers we run this way at work compared to just using a regular SATA controller.
As far as performance being a reason to disable AHCI mode, I agree, it’s not an excuse for completely disabling access to it, but it is a valid reason to not use it by default. Dell, for example, has been shipping systems set with RAID mode as the default for years now, including laptops where there’s only one drive bay, for exactly this reason.
ahferroin7,
I’m failing to understand what you are referring to. I looked in the SATA spec, but the only firmware listed relates to the host’s controller. Can you identify the byte you are referring to in the SATA spec? It would help to get us on the same page.
http://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-ahc…
Are you referring to one of the fields in the SATA command structure?
“4.2.1 Received FIS Structure”
Regardless of this, SATA incompatibility is extremely unlikely in practice. I’ve never experienced it, and I can’t find a product review stating that others have experienced it either. A consumer can buy any AHCI mainboard and any SATA disk drive, and as long as the capacity/sector configuration is supported, the consumer can be very confident that it will work just fine.
If Lenovo has a problem with AHCI (it’s not clear there is a problem beyond them disabling it), then I think it’s safe to say the problem is on their side. I hope you can agree with me here, but if not then why isn’t Lenovo able to get the same level of compatibility that the rest of the industry has managed to achieve?
I don’t even know if it’s in the spec or not. I just know that the drive itself has a setting (implemented in the drive firmware) that controls power management, is labelled in almost all tools as APM, and happens to be one byte in size. You could probably find what specifically it is by looking at the smartctl or hdparm sources (both of them can read it; hdparm can modify it using the -B option, and smartctl can modify it using its -g option).
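A small sketch of what that looks like in practice, assuming hdparm is installed and using the -B option mentioned above (my understanding, worth double-checking against the hdparm man page, is that 1-254 set the level, values below 128 permit spin-down, and 255 disables APM on drives that allow it):
```python
#!/usr/bin/env python3
# Read, and optionally set, a drive's APM level via hdparm. Run as root.
import subprocess
import sys

disk = sys.argv[1]                       # e.g. /dev/sda
level = sys.argv[2] if len(sys.argv) > 2 else None

# Current setting (hdparm prints "APM_level = ..." or "not supported").
subprocess.run(["hdparm", "-B", disk], check=False)

if level:
    # Lower values save more power; 254 is maximum performance.
    subprocess.run(["hdparm", "-B", level, disk], check=False)
```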
ahferroin7,
Aha! This “APM” has no relation to the BIOS “APM” I was thinking of when you mentioned it. Wow, how confusing that they would reuse the same acronym.
http://crystalmark.info/software/CrystalDiskInfo/manual-en/AamApm.h…
http://superuser.com/questions/555400/what-do-different-values-of-h…
I’ve always left them at their defaults without a problem. I guess it doesn’t really matter at this point, because the lesson doesn’t change: Linux users need to avoid these Lenovo computers until AHCI gets fixed on them.
“(almost nothing that costs less than about 15USD is actually USB compliant for example, except for cables)”.
Not to forget ’embedded’ DRM issues. To play it safe I have to scale back to USB 2. I have a better success rate with eSATA, but that’s a bit messier in terms of cables and power.
Brendan,
My SATA and even IDE drives can enter power-saving modes just fine, so I’d like to understand what problem was big enough to justify AHCI being forcefully disabled. If there was a bug in Lenovo’s implementation, then it should have been fixed.
And I thought Intel was Linux-friendly!
The only idea I can come up with then: would they prefer us to use their (cough) commercial (cough) Linux distribution exclusively, on their hardware?
What do you all think about this?
Intel is extremely Linux friendly compared to most hardware vendors. The issues with running Linux on an MBP are Apple issues, not Intel issues. Newer Apple hardware (since the switch to EFI and x86) is notoriously hard to get Linux working correctly on.
As for the Lenovo stuff, I’m dubious of that too. I’ve got a Thinkpad L560 I bought less than a year ago that I run Linux on daily, and I have had absolutely zero issues with it (at least, zero Linux-related ones; I’ve had a couple of problems with Windows, and with the BIOS, but that’s a separate matter). I actually get better performance and battery life on it using Linux than running the same workloads under the Windows 10 Pro that came on it, and the system usually runs cooler too.
Thanks for laying it out, ahferroin7. I have not used the new graphics engines from Intel, and the low-end ones don’t give an uncompetitive energy saving over Linux.
I bought a Lenovo Ideapad 100s netbook a few months ago. The bootloader was locked. I’m 99.99% sure that MS is responsible for the chicanery because they know many people would simply install Linux.
Intel’s hardware is usually known for exceptional compatibility with Linux (particularly laptops). Hardware RAID is a non-issue for 99.9% of home users.
Lenovo makes single-HD consumer hardware which can’t use RAID. So the entire premise of the original article is complete nonsense.
unclefester,
From what I understand, even systems that don’t use multi-disk “RAID” are still being forced to use the RAID controller, and Lenovo has modified the firmware so it can’t be disabled.
I’ve got a late-2015 MacBook Pro Retina 13″ here. I don’t have power management issues. However, there is a nasty bug in the Intel WiFi chipset where the 2.4 GHz frequencies interact with the video chipset, making the screen flash. The only fix is to switch to the 5 GHz band. OS X and Windows have the same bug. Intel needs to step up their driver game on the storage-controller and wireless side. I think Intel should provide more documentation; their programmers are great, but they seem to have an Intel way instead of a Linux way.
A word of warning about running linux on macs.
MAKE SURE that you have mbpfanctl (or macfanctl etc) running.
The SMC does NOT take care of the system on its own in the way that it should.
I’ve run Linux on my 2010 Mac Pro since pretty much the day I got it, and it’s very easy to run it up to a Tcase of 80+ degrees without the fans doing much at all. I don’t dare take it any further than that.
Ironically enough, I often recommend buying Lenovo laptops from their B series because they’re the only models I can find in Europe that are available w/o Windows pre-installed. Without the Windows license you can have a decent entry-level laptop for less than €300.
So, Intel supposedly has horrible support for Linux…
I find this claim, combined with your complaints, rather interesting.
Regarding energy efficiency, I’ve actually never seen any issues with it on Linux. Every x86 system I’ve ever had (both Intel and AMD) has gotten better energy efficiency under Linux than under Windows running the same workload. On the Thinkpad L560 I use daily, I average 8 hours of battery life when using Linux (the best I’ve ever gotten on this system was almost 15, but it was mostly idle), compared to about 5 on Windows 10 doing essentially the exact same thing (the best on Windows was about 7, but it was also mostly idle). This of course requires a small amount of effort, but at least Linux lets you put in the effort, and even without doing so, I get equivalent battery times on Linux and Windows on this system.
As for the ‘RAID’ operating mode for their SATA controllers, it’s crap to begin with. You can do the same things from Linux (or Windows if you use Storage Spaces), and they will almost always run more efficiently. They have near-zero support for it in Linux not because they don’t care about Linux, but because the functionality they’re trying to provide with it is already provided in a more configurable, more reliable, and often more efficient manner by existing tools available on Linux. All the fake-RAID stuff originated because Windows provided no sane volume management, and it is still around because Windows still doesn’t really have good volume management (it’s possible with Storage Spaces, but it’s not easy).
As for the GPU drivers, that’s easy: Intel’s GPU support in Linux is a bit crappy, but it’s nowhere near as bad as AMD’s or NVIDIA’s. I can’t comment on the Iris GPUs, but for the traditional HD Graphics branded ones, I actually get better performance on Linux on both my i5-6200 based Thinkpad and my Xeon E3-12xx v4 based workstation which I use as a desktop, especially for 3D stuff (on both systems, I get 10-20 fps better frame rates in OpenGL performance tests under Linux than I do on Windows).
On top of all of that, did you know that Intel actually develops most of their Linux drivers upstream in the mainline Linux kernel? They actually have pretty amazing levels of support compared to many ARM SoC’s and a lot of other platforms, although a lot of the default PM options are somewhat poor choices (they’re targeted towards servers, which makes some sense, but is still annoying). If you make sure the correct options are set (and in most distros they are), you should have near zero issues on most Windows OEM systems getting decent performance and energy efficiency out of Linux.
Now, as to your specific case: as other posters have mentioned, MBPs have crap Linux support, but it’s more of an issue with Apple than Intel. Their EFI implementation is broken, and they have an insane number of odd ACPI and SMBIOS bits that only work with OS X. If you boot Linux on one of them and then on an equivalent Windows system and look at the hardware on both, you’ll see that about the only similarities are the CPU, the PCH, the RAM, and some of the very minimal bits of firmware that Apple can’t make proprietary. They also design the OS and hardware in concert with each other, so they can do pretty much whatever they want and it will work fine for their software. When you’re buying a Mac, most of what you’re paying for (other than the brand and the customer support) is that integration between the hardware and software, which is something no PC manufacturer can do, but it also means that other software doesn’t run as well on that system. Poor performance of Linux on an MBP isn’t an indication that Intel has poor support for it; it’s an indication that Apple has no support for it, and the only reason it runs at all is that Intel has above-average support for Linux.
Linux doesn’t work right on most hardware because it is a piece of SH*T; I wish the Linux nightmare would end someday.
Yet another Linuxero blaming GPU vendors for crap desktop Linux graphics. No sir, it couldn’t possibly be the fact that X.org is a horrible piece of software which has stayed mostly the same at its core since the mid-90s, while Microsoft and Apple have gone through multiple graphics and windowing subsystems since then.
No sir, those GPU vendors should devote time and resources to hack around X.org just to get that lucrative 1-2% of the market. Oh wait, there is such a vendor: Nvidia.
zOMG their drivers aren’t FOSS!!!111 You see, those bastards at Nvidia had the nerve to keep secret the drivers they paid dearly for, employing full-time developers to hack around X.org and produce the only working desktop Linux GPU drivers in existence. They must devote full-time developers to support our little 1-2%er operating system and its broken X.org AND release the code for everyone to see.
PS: Of course, Nvidia wrote those GPU drivers because they have lucrative contracts with rendering houses using Linux (the price of paying Nvidia to write drivers for it was less than the cost of buying Windows), not for regular users. But you see, Nvidia made the mistake of releasing the software to regular users, instead of giving them the usual broken open-source driver that Intel and AMD give to regular users. Now they are the most hated GPU vendor by the Desktop Linux communitah…
tl;dr Desktop Linux does not deserve good GPU drivers, and neither does its community.
Really? NVIDIA drivers get good performance? That’s odd, because I get better performance from the Quadro K620 in my desktop just using it as a framebuffer (no NVIDIA drivers, no nouveau, nothing but the regular framebuffer driver for NVIDIA cards) and doing all the work in software on the CPU than I do running the official NVIDIA drivers. I get even better performance just ditching the Quadro and using the cheap integrated GPU on the Xeon E3-1765 v3 CPU I have, and that’s with FOSS drivers officially supported by the upstream vendor which only provide an ancient version of OpenGL and happen to work perfectly fine on the most recent mainline kernel the moment it gets a new release.
I will not dispute that X is a stinking pile of excrement, but that’s not by any means the only issue. The biggest problem has nothing to do with X, and is the fact that none of the hardware vendors are willing to pull their heads out of their arses and realize that people do use Linux outside of pre-built embedded systems and servers.
Yeah, those vendors have their heads in their asses for not spending truckloads of money to support OSes with horribly broken, “stinking pile of excrement” graphics subsystems like X.org, which are also 1-2% of the market.
Seriously dude, get real. A vendor, any vendor, will support an OS if it’s popular or if the OS makes it easy for the vendor to support it. A vendor will never properly support an OS that is neither popular nor easy to support. It’s like someone asking you to code in a very difficult programming language you’d have to learn, for a small contract job that won’t pay well. You are not gonna do it if you are a professional software developer. Right?
I am aware that Linuxeros have wet dreams of Intel, AMD and Nvidia spending lots of man-hours to make good Desktop Linux drivers, working around X.org’s flaws and such, just to offer it to that 1-2% of Desktop Linux users, but it ain’t gonna happen when they can devote those man-hours to Windows (and maybe OS X) which bring 98% of the income to the company.
Intel, AMD and Nvidia are not charities. I repeat, not charities. Not supporting desktop Linux and its X.org is not a case of “having their heads in their asses”; it is a case of making decisions that are sound from a business perspective. Just like not taking a programming job that is both hard and low-paid makes sense for a professional developer.
Don’t like this? Get an OGD-1 board or code your own drivers.
PS: I love it when Linuxeros ask “what keeps you on Windows?” and some time afterwards say “I wish desktop Linux had graphics drivers as good as Windows”. The fact that Microsoft paid full-time developers very handsomely to develop WDDM, which is what makes those GPU drivers possible (and is one of the reasons Windows costs money), never crosses their minds. (Other reasons Windows costs money: not having a bad audio subsystem like PulseAudio or ALSA, and not being infested with something like systemd.)
kurkosdr,
Actually in all seriousness linux/other-os devs are perfectly willing to do the work themselves, the main issue is the lack of h/w specs. FOSS devs are forced to resort to reverse engineering and trial/error.
I don’t know if you remember this, but a long time ago it was quite normal for hardware to come with full schematics, bus layouts, instruction cycles, etc. Programmers needed to access the hardware directly and manufacturers wanted to encourage them to support their hardware. Oh how times have changed.
Methinks there’s something else behind all this linux rage you have. Oh well, if you don’t like it, don’t use it. Problem solved!
Yes, but I am sick and tired of being asked “what keeps you on Windows” for the billionth time, literally minutes after (or before) the same person lamented the state of desktop Linux graphics and the suckiness of X.org. And I am sick and tired of the endless flaming of GPU vendors in an attempt to shift the blame. STOP IT!
kurkosdr,
And are you referring to me? This seems completely disproportionate in response to anything anyone has said.
That is simply not true, at least for server motherboards, kurkosdr. They do care, and for reasons not related to ‘charity’.
AMD has been opening up their docs as fast as the lawyers can sign off on it, they have paid the salaries of several devs to help speed up the advancement of their OSS drivers (and have stated their goal is to get rid of the proprietary driver in favor of the open one), and as you can see from the link below they are spending a ton of money making open tools to help the community.
http://developer.amd.com/tools-and-sdks/open-source/