Microsoft is shifting to a new engineering schedule for Windows which will see the company return to a more traditional three-year release cycle for major versions of the Windows client, while simultaneously increasing the output of new features shipping to the current version of Windows on the market.
The news comes just a year after the company announced it was moving to a yearly release cadence for new versions of Windows. According to my sources, Microsoft now intends to ship “major” versions of the Windows client every three years, with the next release currently scheduled for 2024, three years after Windows 11 shipped in 2021.
Windows’ release schedule and versioning system have become so incredibly obtuse that I long ago lost track of what, exactly, has been released, which features are widely available, which exist only in one or more of the testing releases, and so on. Microsoft’s continuously shifting plans do nothing but muddy the waters.
As it turns out, Microsoft wasn’t lying about needing Windows 11 and the recent-CPU requirements: attacks directly on the hardware really are increasing.
https://arstechnica.com/information-technology/2022/07/vulnerabilities-allowing-permanent-infections-affect-70-lenovo-laptop-models/
dark2,
Your link points out that a lot of computers aren’t running up-to-date firmware; there’s nothing there about Microsoft’s artificial CPU requirements, though.
Then we get into the problem of the TPM not being physically part of older CPUs, which has been proven to be a viable attack route for stealing the keys and decrypting the system.
https://arstechnica.com/gadgets/2021/08/how-to-go-from-stolen-pc-to-network-intrusion-in-30-minutes/
Also, this from security researchers who are paid to stay up to date on this stuff: “Firmware-based rootkits, once a rarity in the threat landscape, are fast becoming lucrative tools among sophisticated actors to help achieve long standing foothold in a manner that’s not only hard to detect, but also difficult to remove.” Sorry, attacks on the hardware are simply more common now, and trying to write them off as “artificial requirements” is just denial.
https://thehackernews.com/2022/01/chinese-hackers-spotted-using-new-uefi.html
dark2,
Who’s in denial about vulnerabilities? It looks like you just started this topic out of the blue. Even though firmware exploits aren’t good, actually exploiting those types of vulnerabilities often means the OS is already compromised, which is pretty much game over for user security.
TPM creates a barrier against evil-maid attacks, and this may be more important for some people than for others. But I would say that for an average person’s risk profile, having an up-to-date OS to protect you from network intrusions is so much more important than TPM! You could even run hardware from 25 years ago and still be relatively safe today, as long as you have an up-to-date OS.
Shocker: newer computers are overly complex, with what amounts to an entire operating system living in chips on the motherboard, and manufacturers historically haven’t had to care about security. Generally speaking, they only support firmware updates for a specific model for a year if you are lucky; some better ones do it for several. They would rather have a forced upgrade path…
Funny thing is, RHEL’s installer already warns you that SMT is turned on and recommends turning it off for security. Hardware features that you paid for are no longer viable to use if you care about being secure…
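For reference, on modern Linux kernels (4.19+) you don’t even need the installer for this; SMT can be toggled at runtime through sysfs. A minimal sketch in C (needs root):

#include <stdio.h>

/* Disable SMT at runtime via the kernel's sysfs knob (Linux 4.19+).
 * Writing "off" takes the sibling hyperthreads offline; writing "on"
 * brings them back. Requires root. */
int main(void)
{
    FILE *f = fopen("/sys/devices/system/cpu/smt/control", "w");
    if (!f) {
        perror("open smt/control");
        return 1;
    }
    fputs("off", f);
    fclose(f);
    return 0;
}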
Lenovo and Asus, as far as I have seen, have the best firmware support; I am still getting updates for the P52 I bought ages ago.
leech,
Yeah, there is merit to the keep-it-simple approach.
Indeed. And SMT is just one of many architectural optimizations that leak statistical information. We have many features designed to improve performance, like caches, TLBs, branch predictors, multithreading, etc., but the fact that these resources are shared throughout the system increases the risk of successful timing attacks across a security boundary. This is how we ended up with Meltdown/Spectre. In principle we can neuter optimizations and/or introduce delays to render these attacks infeasible, but that defeats the purpose of the optimizations in the first place.
The existence of these leaks is certainly troubling for targeted attacks, but IMHO it doesn’t automatically mean the optimizations always need to be turned off. For example, two SMT threads sharing the same core isn’t a security problem when those threads share the same security context anyway. So rather than disabling SMT entirely, Red Hat could conceivably adjust the scheduler to ensure that a core is never shared across security boundaries. I imagine they’ve already considered this, but maybe it became too complex or there wasn’t enough of a performance benefit to justify it.
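As it happens, Linux eventually grew exactly this kind of mechanism: “core scheduling” (merged in kernel 5.14) tags tasks with a cookie and guarantees that SMT siblings only ever run tasks with matching cookies. A hedged sketch of a process opting itself in, with the prctl constants defined as fallbacks in case older headers lack them:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

/* Fallbacks for older headers; values match include/uapi/linux/prctl.h. */
#ifndef PR_SCHED_CORE
#define PR_SCHED_CORE        62
#define PR_SCHED_CORE_CREATE 1
#endif

int main(void)
{
    /* Give this task its own core-scheduling cookie. The kernel will then
     * never co-schedule it on an SMT sibling with a differently-tagged
     * task; the sibling idles instead. */
    if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, getpid(),
              0 /* PIDTYPE_PID */, 0) != 0) {
        perror("PR_SCHED_CORE_CREATE");
        return 1;
    }
    /* ... run the sensitive workload here ... */
    return 0;
}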
Another solution could be to pin processes onto different cores based on their security clearances. This would be a hassle but would likely mitigate many known and unknown vulnerabilities.
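A crude sketch of the pinning idea in C, assuming a made-up split where physical cores 0-3 are reserved for trusted processes and the rest for untrusted ones:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Restrict the calling process to a fixed set of cores. Trusted and
 * untrusted processes would get disjoint sets of physical cores, so
 * they never share SMT siblings or per-core caches. */
static int pin_to_cores(int first, int last)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = first; cpu <= last; cpu++)
        CPU_SET(cpu, &set);
    return sched_setaffinity(0 /* self */, sizeof(set), &set);
}

int main(void)
{
    /* Hypothetical policy: this is a trusted process, keep it on cores 0-3. */
    if (pin_to_cores(0, 3) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    /* ... trusted work here ... */
    return 0;
}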
Not only did they lie about that, but they also lied that Windows 10 would be the last Windows version. I used the word “lied” because you used the word “lied”. To me personally, neither is a lie or a surprise; it’s just how these companies do business. There needs to be planned obsolescence involved, and there needs to be a new version every now and then, to sell more expensive licenses and to sell more hardware. It’s as simple as that. Like with, let’s say, Android/Fuchsia phones: you will never get more than, let’s say, 5 years of support. The companies involved would never allow that.
@dark2
MS also wants to move most of their OS codebase (mainly Windows/Xbox/Azure stuff) onto newer x86 microarchitectures so as not to pay the mitigation price on the older revisions.
They also probably want to lock Linux out of their OEMs’ PCs, which kind of sucks IMO.
I called it back in 2015: the whole “last Windows version” was always a lie. Were we really supposed to believe that a person who bought a Windows license in 2015 acquired an entitlement to infinite upgrades, updates, and support? Especially in an era when old computers stay usable for half a decade at minimum?
Microsoft was trying to copy the Android business model of giving the OS for free and making money on online services, but everyone who was remotely aware of Microsoft’s position in online services knew the plan wouldn’t work.
Now they are trying to copy the Apple model of making most of the money by selling new devices, hence the whole planned obsolescence deal with CPUs. And anyone who knows how well Microsoft’s device department is doing relative to Apple, or how Windows users have no problem sticking to older versions of Windows, knows it’s also bollocks.
Meanwhile Microsoft could just charge for upgrades like they did with Windows 7 and leave us alone, but nope.
kurkosdr,
I agree. Microsoft wants to be more like Apple in terms of planned obsolescence, and also in having much more control over 3rd-party software. Alas, I think it’s harder for Microsoft to do this given its users’ expectations. While they could add more Apple-esque restrictions, they’d risk alienating a large portion of their developers and users who expect backwards compatibility with existing unrestricted apps. Complaints over existing apps breaking would be widespread, and demand for new versions of Windows would crater.
I agree with this too. Most consumers are already happy with what they’ve got and the market isn’t growing like it used to. Alas tech companies still want more money and that’s what’s driving changes like ads and planned obsolescence.
Microsoft isn’t all that important in the hardware market either. I don’t think their own device sales are the reason for “obsoleting” older hardware. It could just be that customers don’t care to upgrade if new computers at the shop still bear the same OS as their five-year-old laptop. Upgrading seems less important when the old laptop works “just fine” and even the newest models still have the same software.
And Microsoft never seemed to go for the Android software model. Android gets a new major release once per year, whereas Microsoft was going to stop major upgrades altogether. They are quite radically different models.
sj87,
Their own hardware sales aren’t stellar. However, Microsoft still has a large interest in obsoleting hardware, because its OEM license sales track 3rd-party hardware sales. OEM sales make up a huge share of Microsoft’s income (please share more recent numbers if you find some).
https://www.computerworld.com/article/2915314/microsoft-posts-second-straight-double-digit-downturn-in-windows-oem-revenue.html
Microsoft has been trying to shift Windows to services, but many customers have been frowning on those efforts. Frankly, I think most consumers just want a traditional, non-invasive desktop OS. Coincidentally, the link also talks about Microsoft making the move to services…
Alfman,
Microsoft’s greatest asset is also becoming their greatest obstacle.
I am talking about legacy software. They are very good at supporting old programs, like no other major operating system out there. (There are videos online where people start from Windows 1.0, upgrade through each version released in between, and still run programs from 35 years ago.) Even Linux cannot do that, and macOS is not even on the same architecture anymore. I think we could all agree on this one.
But… They have a vision for a new kind of OS, which can no longer “hide the clutter”. This is akin to having a tidy house vs hoarding decades of stuff.
sukru,
Exactly! Legacy software provides a great deal of fuel for their monopoly.
I do agree. It’s not perfect, because there are exceptions, but consumers generally expect to be able to run their old, pre-existing Windows software as-is.
I think metro highlighted what Microsoft wanted this new OS to be: fence legacy applications away into an isolated legacy desktop where they would run as 2nd-class citizens. Legacy applications were meant to be replaced with metro applications as first-class citizens. The kicker is these would only be available through the Microsoft Store. Had it succeeded, it would have eventually turned Windows into a walled garden, and legacy applications could eventually be phased out or relegated to Pro/Enterprise editions.
Luckily the market reacted very strongly against the metro changes. I think we dodged a bullet that could have resulted in far more closed computing for the masses.
On the one hand I’m not against replacing the win32 APIs with something new, but on the other hand I’m very glad that win32 apps have been grandfathered in as a requirement for consumers, because as long as we have them it guarantees a degree of openness. IMHO openness needs to be protected at all costs on all platforms. Losing control over our hardware/software would be the dark age of our industry.
Trying to be smart only confused the customer base. They’re simply going back to where they were.
Back in 2015, Microsoft called Windows 10 the “last Windows”… Its lifecycle was barely longer than that of Windows XP. Basically Microsoft only skipped one release, since Windows 11 came about six years later. I guess, for Microsoft, six years is an eternity.
I think metro highlighted what Microsoft wanted this new OS to be: fence legacy applications away into an isolated legacy desktop where they would run as 2nd-class citizens. Legacy applications were meant to be replaced with metro applications as first-class citizens. The kicker is these would only be available through the Microsoft Store. Had it succeeded, it would have eventually turned Windows into a walled garden, and legacy applications could eventually be phased out or relegated to Pro/Enterprise editions.
Luckily the market reacted very strongly against the metro changes. I think we dodged a bullet that could have resulted in far more closed computing for the masses.
Fencing legacy applications wasn’t a bad idea. The bad ideas were relegating the desktop–and mouse/keyboard interaction by extension–to 2nd-class-citizen status and handcuffing metro apps to a touch-first UI. The store also wasn’t/isn’t a bad idea, especially for consumers. It doesn’t have to be a walled garden; it can be just a curated and vetted source of software.
9five4,
Well, it depends. I’m a big fan of sandboxing. I cannot remember the name of it, but there was an awesome tool that did this for Windows browsers and other applications, to isolate them from the rest of your data. But there’s a significant difference between fencing/sandboxing as a tool to empower owners and give us control over applications & data, versus fencing/sandboxing used to grant vendors control over us (like iOS). In the case of metro, it was the latter, being used to drag us closer towards a Microsoft-controlled walled garden.
I agree it was awkward, but that was kind of the goal. They were trying to design the OS to make “legacy” applications subpar and not fit in. Ultimately that plan didn’t work out because legacy applications were simply too important for users and the “Dr Jekyll and Mr Hyde” experience was too jarring.
I also agree that stores don’t have to be walled gardens in principle. But at the same time, we can’t let our guard down, because unfortunately monopolistic companies do have walled gardens on their corporate agenda.
That’s a good point about sandboxing to grant vendors more control. I was thinking more of the user (and developer) benefits, but I agree with the concerns around vendors using it to give their own apps more OS privileges. There were a few, but I think the tool you’re thinking of was App-V (formerly SoftGrid), which was an app isolation and streaming solution that Microsoft gained via acquisition.
Arbitrarily making something a worse or lesser experience just to drive users to your preferred experience is risky. Sometimes Apple can get away with stuff like that, but Microsoft? Slim chance when Office had not yet been metro-fied and the company’s own metro apps were half-baked.
Agree with the risk of stores serving as walled gardens!
Microsoft missed a very good opportunity by going too far with “greed” (for lack of a better term).
I agree, modern applications can live in sandboxes.
Browser-based ones already do this out of necessity, and systems like Qubes OS provide workable platforms for isolating applications, or even the network stack (“Chrome will run at the lowest tier and can only access Tor”, etc.).
In hindsight, they made many mistakes when pushing the new API:
1) The underlying system behind “Metro” was .NET (the metadata, at least) and WPF. They could already build desktop apps with that, but the new API forced them to be full-screen, tablet-style apps.
2) They wanted to force a single store, even though many platforms can work with user-loaded applications in addition to a main store (Android, macOS, and of course Windows), or with alternate stores (Steam).
3) They pushed all of this before the ecosystem was ready. Strangely, they are supposed to know these lessons by heart, but they decided to put the cart before the horse that time.
Anyway, this is all history, and we have the advantage of hindsight to criticize the past.
Yet a modern Windows API, where old programs are isolated in a container (or in separate containers, as a “pro” feature), with filtered access to hardware, would solve a lot of issues. Worst case, for dedicated or unsupported applications, they could take a cue from Windows 95 and run them in a special boot session.
Not sure there is sufficient project leadership that will steer Windows in this direction, though.
Spot on. Microsoft and Sinofsky were stubborn: they were going to push the Windows ecosystem in this new direction whether users and developers liked it or not. And part of that push was, as Alfman put it above, relegating the desktop to 2nd-class-citizen status.
Companies with “Apple envy”–and Microsoft certainly had this in the Sinofsky era–can be very myopic or selective with the aspects they want to emulate. Did Apple relegate Mac OS to a lesser status? Did Apple go to lengths to “simplify” Mac OS by removing functionality? Did Apple shoehorn a touch-first UI and app development framework into its desktop OS?
No, they didn’t. So, if you’re going to try and emulate Apple, at least look at their model holistically first.