For the current Ubuntu 21.04 cycle, Ubuntu is going to try switching to Wayland by default, to allow sufficient time for widespread testing and evaluation ahead of next year’s Ubuntu 22.04 LTS release.
Canonical engineer Sebastien Bacher announced today that they will try again to enable Wayland by default for Ubuntu 21.04, four years after they originally tried it but reverted to GNOME on X.Org for Ubuntu 18.04 LTS, where it has remained since. Ubuntu with GNOME Shell on Wayland has been available as a non-default choice, but the hope is that now, in 2021, they are ready to comfortably switch to Wayland.
I try to use Wayland wherever possible, since the performance gains and battery life improvements are just too good to ignore. There are still two major blockers, though – first, NVIDIA support is problematic at best, so my main computer will remain on X until NVIDIA gets its act together.
Second, my desktop environment of choice, Cinnamon, does not support Wayland and has no support in the pipeline, which is really disappointing. GNOME can be made usable with extensive use of extensions, and I’m seriously considering switching to it once the NVIDIA situation is sorted. My laptop already runs GNOME for this very reason.
Wayland can’t come fast enough. I can’t believe it’s 2021 and Linux still has problems like the inability to V-sync video playback (leading to tearing artifacts). I mean, Windows 98 didn’t have such issues – I can remember DVD playback working on it without tearing.
As an aside, the binary Nvidia driver replaces some X.org code with its own, so it doesn’t have these problems. As a result, it can stay on X.org for a while without the user experience being greatly impacted. It’s the open-source drivers from Intel and AMD, which play by the book and use X.org as-is, that have issues with V-Sync and increased CPU usage. Wayland should have been a thing in 2001, but better 20 years late than never.
Ha, I started reading your comment and was thinking ‘weird, I haven’t had screen tearing issues for a long time…’ then you said nvidia and then it made sense. There are many reasons I have stuck with nvidia cards for so long. This is one of them.
For me, Wayland holds no benefits and just brings annoyances.
Both Intel and AMD have the TearFree option in their xorg drivers; it’s even enabled by default on my Radeon HD 7700 (AMDGPU).
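For anyone who wants to set it explicitly rather than rely on the default, the amdgpu(4) and intel(4) man pages document it as a device option. A minimal snippet (the identifier and the file name are arbitrary) dropped into /etc/X11/xorg.conf.d/ looks like:

    # /etc/X11/xorg.conf.d/20-tearfree.conf
    Section "Device"
        Identifier "GPU0"
        Driver "amdgpu"          # or "intel" on Intel graphics
        Option "TearFree" "true"
    EndSection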
I’d be happy to use it once they’ve fixed wayland’s limitations. IMHO a big problem with the wayland project is that they haven’t been pragmatic with regard to use cases. There are tools and capabilities I need to do my work that, so far, don’t work with wayland. I am on board as soon as I see them address these bugs & limitations, but if their attitude is “no you can’t do that, we don’t allow it” because their own justifications are more important than our needs, sorry, but that’s lazy and the community deserves better. Progress has been very slow for things like VNC because unfortunately wayland refuses to add generic support for screen grabbing, even gated behind security privileges. Consequently every single compositor needs to implement the missing functionality, which doesn’t offer any greater security and creates a ton of new portability issues…
https://github.com/TurboVNC/turbovnc/issues/18
There are proposals to hard-code support for RDP & VNC into the compositors themselves:
https://www.hardening-consulting.com/en/posts/20131006an-overview-of-the-rdp-backend-in-weston.html
https://lists.freedesktop.org/archives/wayland-devel/2013-October/thread.html#11536
IMHO we’re looking at poor designs stemming from wayland’s limitations. The refusal to support a standard capture API for purely dogmatic reasons leads to more bloat and cruft downstream. I don’t want VNC/RDP to be hard-coded into the compositor. What if I want to use VLC to stream my desktop? Now I have to petition to add a VLC capture protocol to the compositor? Hogwash! Obviously it makes more sense to have one generic API for all applications to use. If we need a security mechanism to lock down the privileges, that’s fine, but at least then we’d get a standard solution that works everywhere. I absolutely hate that these basic features are going to be broken/unimplemented/incompatible depending on which compositor one is using for their desktop. This is a poor design IMHO.
Gnome, one of the first to get a lot of the wayland support, still can’t get the reload function to work under Wayland (I am referring to being able to hit Alt+F2 and r).
Actually, I found out that Debian Bullseye seems to default to Wayland if it detects an AMD GPU. Broke Tomb Raider for me too. I seriously dislike having less capable, unfinished versions of software set as the default.
I also hate that the Xorg devs are doing this – most have switched to developing Wayland before it is anywhere near as capable. Leaving all the functionality up to the compositors is pretty stupid; a generic API for such things would be rather useful…
leech,
Yeah, it gets frustrating when devs don’t have respect for their users’ needs. Honestly they’re “close”, but they need to do the rest…
https://www.swalladge.net/archives/2019/10/14/are-we-wayland-yet/
https://superuser.com/questions/1221333/screensharing-under-wayland
Shifting functionality into compositors will probably result in a patchwork of support depending on what desktop software you are running, which is lousy. But most users don’t care how it works, just that it works. So fine then. While I find it unfortunate, I can live with an inferior design as long as it does in fact work. But the thing is, it’s still not working today. I just tested my screen sharing applications and they all produce black screens under wayland.
These days this is NOT a niche use case. My wife and kids who are not in tech rely on screen sharing teleconferencing. This is not optional, and it isn’t acceptable to suggest these features may be available in the future as a justification for an incomplete deployment today. They are mission critical must-haves for day one, otherwise people are going to fall back to windows or X11 desktops that didn’t have these obnoxious limitations.
Really, this is everyone failing to ask the right question: does wayland need screen capture at all? When you ask this question, the answer is surprising.
With libdrm, which everyone bar Nvidia supports, it is possible to implement screen capture directly. This is different from X11 screen capture in that it works with your Linux and FreeBSD text-based terminal mode as well. There is a generic way to screen capture on Linux and FreeBSD that does not care whether you are running X11, a Wayland compositor, or a text terminal – it just does not work with the Nvidia binary driver, because Nvidia does not support it.
The reality here is that all those screensharing programs you tried would have been useless in a case where the Linux text mode has started but the graphical session has failed, and you need to fix the problem remotely.
Yes, it’s also possible to feed input generically under FreeBSD and Linux, by having userspace act as an input device.
The hard reality here is that screensharing should not depend on X11 or Wayland compositors at all. Yes, X11 screenshare has the obnoxious limitation of only covering X11, not the complete platform.
Please note that DWM.exe, the Microsoft compositor under Windows, has nothing to do with screensharing; instead, direct-to-GPU driver libraries that are the Windows equivalent of libdrm do the heavy lifting.
Now, if you were talking about targeted sharing of individual application windows, that is a feature the compositor could be providing.
Think about this: purely independent screen capture will keep working past a compositor crash.
oiaohm,
I knew you’d be here. And, yes, it is something that many, many users would benefit from.
Yes, we all know it doesn’t work on wayland (with or without nvidia) because wayland devs failed to place this functionality on the roadmap. Many of us feel this was a big mistake.
As far as blaming Nvidia goes, that’s not a good excuse for why it doesn’t work with other drivers. Nvidia deserves plenty of blame, but when it comes to the shortcomings of wayland, the buck stops with wayland devs. Trying to pass responsibility for an incomplete API to someone who isn’t involved with wayland development is a cop-out.
Yes that’s what we’re saying. In the worst case, even if it has to go through compositors for whatever reason(s), this functionality is important enough that it should have been included in wayland’s base spec rather than something everyone’s going to implement independently in incompatible ways.
I’m not sure what you meant here, but this statement has me laughing. Of course X11 is limited to X11. How else would it be? We can blame the devs at mid-1980s X11 HQ for the limitations of X11. But blaming X11 for the limitations of wayland is silly. It’s project wayland’s responsibility to make wayland a good standard, no one else’s. Honestly, though, I see them making these preventable mistakes that are already causing problems.
Regardless of how you’d like to justify wayland’s systemic inaction on these items, their refusal to get their act together may leave us with a patchwork of incompatible APIs plaguing users & developers for a long time. Features won’t work everywhere. Wayland is very close to being functionally complete and a good replacement for X11; it’s just very unfortunate that wayland’s devs insist on practicing dogmatism over pragmatism – their actions are making wayland worse.
>>Yes that’s what we’re saying. In the worst case, even if it has to go through compositors for whatever reason(s), this functionality is important enough that it should have been included in wayland’s base spec rather than something everyone’s going to implement independently in incompatible ways.
No, it should not have been, for one reason: DMABUF. An Nvidia GPU in a laptop whose output goes through Intel graphics uses DMABUF to hook that up. Yet the Nvidia binary driver does not support DMABUF in the userspace use case.
The idea that the wayland base specification has to do everything is wrong.
https://flatpak.github.io/xdg-desktop-portal/portal-docs.html#gdbus-org.freedesktop.portal.ScreenCast
The hard reality here, Alfman, is that you need to drop this line that the Wayland specification has to do this stuff. The freedesktop portal ScreenCast interface does not care whether you are running X11/Wayland/TTY; the backend can use the libdrm path if it is there.
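For the curious, the portal handshake oiaohm is pointing at is just three D-Bus calls. Here’s a minimal sketch in Python with PyGObject – the Response-signal plumbing is simplified, error handling is omitted, and the token names (“demo_session” and friends) are made up for illustration:

    import gi
    gi.require_version("Gio", "2.0")
    from gi.repository import Gio, GLib

    bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
    loop = GLib.MainLoop()
    responses = []

    def call(method, params):
        # Invoke a ScreenCast portal method; it returns a Request object path,
        # and the actual result arrives later via the Response signal.
        return bus.call_sync(
            "org.freedesktop.portal.Desktop",
            "/org/freedesktop/portal/desktop",
            "org.freedesktop.portal.ScreenCast",
            method, params, GLib.VariantType.new("(o)"),
            Gio.DBusCallFlags.NONE, -1, None).unpack()[0]

    def on_response(conn, sender, path, iface, signal, params):
        responses.append(params.unpack())  # (response_code, results dict)
        loop.quit()

    bus.signal_subscribe(None, "org.freedesktop.portal.Request", "Response",
                         None, None, Gio.DBusSignalFlags.NONE, on_response)

    # 1. CreateSession – the Response carries the session handle.
    call("CreateSession", GLib.Variant("(a{sv})", ({
        "session_handle_token": GLib.Variant("s", "demo_session"),
        "handle_token": GLib.Variant("s", "demo_req1")},)))
    loop.run()
    session = responses[-1][1]["session_handle"]

    # 2. SelectSources – ask for monitors (type 1); the desktop shows its picker.
    call("SelectSources", GLib.Variant("(oa{sv})", (session, {
        "types": GLib.Variant("u", 1),
        "handle_token": GLib.Variant("s", "demo_req2")})))
    loop.run()

    # 3. Start – the results carry PipeWire stream node ids any client can consume.
    call("Start", GLib.Variant("(osa{sv})", (session, "", {
        "handle_token": GLib.Variant("s", "demo_req3")})))
    loop.run()
    print("streams:", responses[-1][1].get("streams"))

The point being: VNC, VLC, OBS or anything else makes those same three calls whether the session is X11 or any Wayland compositor; the permission prompt and the pixel path are the backend’s problem, not the application’s.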
Alfman, there are a few very important things to remember here.
1) The Wayland protocol is designed to be lightweight – as in, it does not process permissions.
2) You are asking for features that need to be protected by permissions.
3) You are asking for features that should be generic interfaces for X11, Wayland and TTY applications.
D-Bus has a permission system.
The wayland developers, one year into development, told Nvidia that wayland compositors need all the features of DMABUF and GBM – so libdrm, by extension. Remember, Nvidia said that eglstreams would provide all the functionality Wayland compositors need, and they have failed to deliver. This failure has stalled a lot of development.
Alfman, the reality here is that you can screen capture on X11 and Wayland desktops using the portal system. Stuff using the older X11 interface ends up with the black screen of death.
By the way, Nvidia did promise libdrm support:
https://docs.nvidia.com/drive/nvvib_docs/NVIDIA%20DRIVE%20Linux%20SDK%20Development%20Guide/baggage/group__direct__rendering__manager.html
Then they only delivered it for their embedded platforms – a big middle finger to their desktop users.
The reality here is that Nvidia promised to deliver something that would have made screen capture in the wayland protocol pointless. Then they failed to deliver it.
oiaohm,
Like I said, you can justify it however you like, but the hard reality is that if wayland refuses to create the interfaces users need and expect, linux will suffer. Users will be forced to revert to X11 or even boot up windows. Way to go.
@oiaohm
There’s no point trying to argue and defend Wayland with technobabble and blame games. At the end of the day, 99.999% of end users aren’t interested and will tune this out in less than 5 seconds, let alone spend hours on timesinks like Wayland youtubes or Linux journals or anything like this. They just won’t. Any developer with half a clue should know this. Ultimately it’s the difference between a shippable, quality product and mere enthusiasm.
Yes, I agree. It seems that when Linux people are purists on something, it’s more often than not something that’s really stupid like this (other examples would be not implementing a stable driver API or kernel-level audio mixing).
I consider myself a purist on some things (e.g. in the OS that I am writing there will be strict policies against adding any kind of non-file-based primitives and against adding anything to the microkernel unless there is no other way to do it), but purism should serve usability, not hinder it.
I know it will hurt, but it has to be said: You, the Desktop Linux home-server user, are not the target audience for Desktop Linux anymore.
The powers that be (that is, RedHat and Canonical) have decided that GNU/Linux will run on command-line servers (or more accurately, EC2 instances and EKS-managed containers) and also on developer laptops. Remote desktop is not on the feature list. I am doing DevOps on Linux as part of my day job, and we use centos EC2 instances for our product and centos EKS-managed containers for our internal build services, and then I personally use a corporate-issued Ubuntu laptop for developing things. I never needed a remote desktop. If you are running a GUI on your Linux server, something has gone horribly wrong: you are opening up security concerns and increasing resource usage for no reason. And it doesn’t even work on EC2 and EKS anyway. Also, RedHat doesn’t care about your home server that you manage with a Raspberry Pi using remote desktop.
So, RedHat are cutting the entire remote desktop feature out of the feature list so they can get what is an underfunded, minimal-investment product out of the door. They don’t want remote desktop to become a cross-cutting concern that complicates and delays everything, like it happened with X11. RedHat cares about remote desktop on RHEL about as much as they care about VR support. Same for Canonical and Ubuntu. If someone else wants to try it out, they are welcome to, but no.
As an aside, I predict™ that the traditional Desktop Linux crowd will grow increasingly disappointed with the direction RedHat (and Canonical) are taking Linux, while we corporate-employed users will grow increasingly happy. RedHat already killed the Taco-Bell model for distros that the traditional Linux crowd liked so much (where you can randomly pick components to build your own homegrown Linux distro) when they rolled out systemd (which is a thing we corporate-employed users actually like), and now they are killing the home-server thing with Wayland.
But, what can you do? They have commit privs and you don’t: https://lwn.net/Articles/616455/
Have you tried Windows as a home server lately?
Who runs home servers with a windowing system anyhow? These wouldn’t be servers in the traditional sense, but maybe a shared browser station or something.
Though I will say X11 forwarding over ssh has been extremely useful in the past for me, I haven’t used it in years as I simply got better at CLI out of necessity.
This does not mean that we should ditch all of the things that X11 had because someone just doesn’t feel like implementing it in the replacement.
RH at least does still care about the employee-level workstation / desktop, otherwise they wouldn’t sell it! Nor would they fund most of the Gnome development.
But for sure they do not consider any system running xorg or wayland a server. I am pretty sure you can run vncserver with just a terminal at this point, though. But why bother, when every OS has an ssh client now (finally)?
kurkosdr,
Personally I use linux both for servers and desktops.
Here’s what doesn’t add up: if I take your explanation as 100% true, then why have wayland at all? I personally think we should have wayland (ideally minus its problems), but for someone who thinks linux is only good for headless servers, neither wayland nor X11 makes sense.
Maybe. People are already disappointed in various aspects. I don’t see IBM/RH further investing in desktops – at best maintenance mode. I’d like to give Canonical credit; I think Mark S. really did have a grand vision for linux on the desktop, but profits from desktop linux have proven elusive, and much like redhat they are shifting towards business services. For the most part desktop linux hasn’t paid for itself; it is being subsidized.
Ain’t that the truth. It doesn’t matter who’s right, the people with commit access are the people who decide where we go. Although it’s kind of a dick move when they actually gloat about it, haha.
Let me put this in simple terms: there is no use case for remote desktop in the kind of corporate environment that RedHat and Canonical are going for. The use case for the modern corporate environment is GUI-less EC2 instances and EKS-managed docker containers, with the code for them being developed locally in an IDE on a laptop running Wayland, but shipped over to the EC2 instances via scp, or to EKS as a docker image. So they are not implementing remote desktop in Wayland at all. They have commit privs and you don’t.
BTW, when I say “shipped via scp”, I mean preferably packaged into a deb or rpm with a service file, but you never know…
kurkosdr,
I think this is the exact same point you already made…? Anyways, I’m not disagreeing with you that linux has a problem with devs going on power trips.
I would like to see some actual proof of this statement.
It’s fake news. Performance on Wayland is still worse than, or at best equal to, Xorg, and battery life is likewise pretty much the same or slightly worse. No benchmark has ever suggested anything to the contrary.
On my computer there seems to be mystical overhead with e.g. video playback that simply does not exist on Xorg.
This.
Modern Xorg is not really X11 – while the core server-side drawing functionality is technically still there, all modern applications either require local operation or perform far better when run locally. It is all about manipulating a local frame buffer now, either directly by a CPU or via a GPU. Performance-wise, there are no meaningful technical differences between Xorg and Wayland, except that Xorg has better drivers and Wayland has more developers, so it has more future potential.
What concerns me most about Wayland, though, is the lack of attention to compatibility between clients. Wayland has no single dominant implementation, it doesn’t have a WM separation that would enforce an API (this is supposed to be an advantage), and common user-facing interfaces are either undefined, poorly defined, or implemented N times in multiple compositors, each with different bugs.
Defining all the missing interfaces and sorting out differences in implementations may easily take >10 years (there is always a lot of friction around such issues, especially when developers take a hard-line “my way or the highway” position). As much as I wish these issues to be ultimately sorted, it is far more likely that Gnome/KDE/other Wayland compositors will remain incompatible in non-trivial ways and X11 (in the form of Xwayland) will remain a compatibility layer for running any applications that are non-native to the desktop.
The sad case is that everyone is willing to spend the next five years rewriting the same stuff on multiple fronts. After the dust settles and everyone has more or less achieved their desired result, they will start to worry about interoperability and the next cycle of rewrites will commence.
Not the first time Thom presents his opinion as a fact, sigh.
OSAlert used to be a news website but then it became a personal blog.
https://www.phoronix.com/scan.php?page=news_item&px=GNOME-Xorg-Wayland-AMD-Renoir
I think it depends on what you do, but mostly you’re right. Thom’s use case is his use case, and it’s true for his setup. Obviously he cares more about how it works for how he uses it on his hardware. Synthetic benchmarks don’t tell you the whole story either. It really depends on your usage; your mileage may vary. Not fake news, just a different benchmark.
Bad design, bad communication, and blaming someone else seem to be the problem. Most users, in all honesty, do not want to know. They just want a product which works, and it’s amazing how many developers forget this. The people causing issues with dodgy ABIs and driver models, pulling in irrelevant functionality and ignoring core functionality, blaming IHVs who ultimately have products to ship and their own standards to maintain. I do not see this ending well at all, and it will drag on and on.
I had a dual Windows/Linux setup and was acclimatising myself to Linux Mint, until the maker of one key utility got it into their head to patronise users, because they thought they knew more about security than users did and put dogma before usability. (Yes, I know usability can undermine security, but this was not the case.) This was ultimately resolved, but too late. Linux came off and I have a Windows-only installation without the headaches. Yes, Microsoft are to blame for half the usability issue, but the Linux camp is responsible for the other half, and I went with what I was most comfortable with, because it works and I don’t have to spend my time chasing down obscure forums and obscure configurations and meddling with and using things I don’t understand and shouldn’t have to.
From the meta view, one of the things Windows has had designed in from the beginning is a directory scheme which makes sense and is readable. I still have no clue what the Linux directory scheme is and keep forgetting, because it’s jargonistic. And this is just *one* thing before all the other things.
Wayland isn’t just another irritation but a whole irritation in itself and I have very little energy left for it after dealing with all the other irritations!
HollyB,
You would really like gobo linux then.
https://www.gobolinux.org/
I think they really did a great job cleaning up the messy unix directory conventions. Of course it’s not going to go anywhere, because gobo doesn’t pull any strings…but IMHO they deserve praise.
As for windows, I have to disagree. Rather than good organization, windows started out with system32 as a dumping ground for all kinds of garbage: DLLs, executables, configurations, hosts files, etc. The addition of “Program Files” was good (a bit of a shame they chose such a long path with a space, since it made using the command line a pain, but they wanted to show off LFNs, so whatever). All in all, things were reasonable until AMD64; then microsoft got stupid and decided to break the whole hierarchy in a confusing way with no pros at all. They created a new SysWOW64 directory for 32-bit resources and designated System32 for all new 64-bit code. This inevitably broke 100% of existing software, so they needed to create a new virtual FS with mappings that overlay the real System32 with SysWOW64 (along with other mappings), so that the view of the file system depends on whether your program is a 32-bit or 64-bit process. Speaking of which, you’ll find 32-bit programs in “Program Files” and 64-bit programs in “Program Files (x86)”. Splitting them up serves no purpose and just adds to the frustration of finding programs.
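You can actually watch the remapping happen. Here’s a tiny sketch (assuming Python on 64-bit Windows; “Sysnative” is the documented alias that lets a 32-bit process reach the real System32):

    import os
    import struct

    # Pointer size tells us whether this interpreter is a 32- or 64-bit process.
    bits = struct.calcsize("P") * 8
    windir = os.environ.get("WINDIR", r"C:\Windows")

    for name in ("System32", "SysWOW64", "Sysnative"):
        path = os.path.join(windir, name)
        print(f"{path}: visible={os.path.isdir(path)} (from a {bits}-bit process)")

Run it with a 64-bit Python and Sysnative doesn’t exist; run it with a 32-bit Python and all three are “visible”, except that what it sees as System32 is really SysWOW64 underneath. That’s the kind of thing that confuses people to no end.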
And we haven’t even gotten to the registry, which was always a mess but got even worse when MS repeated some of the same mistakes it made in the file system. So I expect most people would agree that windows is objectively also a mess.
Pick your poison!
BTW this has been discussed before here in the comments:
http://www.osnews.com/story/28776/why-windows-10-sucks/
Conceptually, Windows directory structures made more sense as an understandable thing, so I’m running with this. Backwards compatibility was good too, then they added shims etc., so yes, I can appreciate the design was compromised and, as you say, got worse. But Windows really just makes more sense in an ordinary way most people will parse without cognitive issues. Linux by comparison is gobbledegook. There are lots of design issues, but we don’t have all day, nor are we being paid to sort the mess out, so I get very severe brain fade on all this.
I think most OSes tend towards daftness over time, like most nation states or organisations. You do need a sound initial structure to keep the ever-expanding and byzantine mess on the road, and it’s as much by luck as design that the whole thing doesn’t crash.
HollyB,
Except it wasn’t technically necessary for backwards compatibility in the first place. SysWOW64 for 32-bit DLLs was likely the byproduct of poor planning.
I’m still sticking with my position that Windows directory structures make more sense. They’re just more understandable to humans. I have no clue what is happening with Linux directory schemes and quit caring years ago. When a scheme requires you to learn jargon and isn’t self-explanatory, there’s a problem. Brought to you by nerds who say the code is the documentation. Sigh.
HollyB,
I believe you are wrong: if you went out on the street and asked normal people what each system directory held, I suspect most would get it wrong, because it is counter-intuitive.
@Alfman
Step back and take a look at the design points I mentioned, not the re-framing you’re talking about. As for not understanding things, who was the one who goofed up, precisely?
I’m not the one who designed Windows x64 file structures or Linux, or who tripped up on or misunderstood backwards compatibility. In fact, if everyone had paid attention to the *design* (which includes logical structures and HCI elements like readability and comprehension etcetera), we wouldn’t be in a mess now with either Windows or Linux.
The whole point of mentioning design schemes wasn’t to have a nerdy drill-down discussion of filing systems, but to grasp the essential point of design schemes. A lot of people, mostly in Linux kernel and Wayland, don’t get the subject or the knock-on or ripple effects, so after reviewing their design scheme it lacks integrity. Cue NIH positions and partisanship kicking in, and layers and layers of mess and kludges, as Linux/Wayland/etc stays in permanent beta. It’s a real energy drain from getting stuff working and building in a good user experience.
HollyB,
The answer is microsoft, they goofed.
Well, I didn’t design it either, but if I had, I do think I would have planned ahead better. 64-bit resources should have used their own proper path from the beginning, rather than having two directories essentially time-sharing the same path for different processes.
I’m not making any excuses for linux; I suggested you take a look at gobo linux because I think they’ve improved the things you are criticizing. Yet I get the impression you only want to lob grenades at linux and get out, without allowing windows to be similarly scrutinized. That’s too one-sided for my liking.
Not for nothing, but windows created genuine confusion when a program would complain about some missing DLL like “C:\windows\system32\msvcrtXY.dll”. Meanwhile the user would confirm that the DLL is there and wonder what the hell is wrong with windows, because the fact that “C:\windows\system32\” is actually “C:\windows\syswow64\” for that program isn’t obvious at all.
I’m not asking you to like linux, there are things I don’t like either and I think you can see that from some of my posts, but do you concede that some of the design choices in windows are NOT clear?
You don’t understand Windows back compat. There are applications out there that hardcode “%Windows%/system32”, and you know their developers will take the src and recompile it for 64-bit. So, the native apps (64-bit) get to use the system32 directory – whose contents are sometimes a shim on top of WinSxS.
kurkosdr,
Except I do understand it (I’m pretty sure you know that). By changing the path, microsoft introduced an incompatibility, and then they introduced a hack to fix that incompatibility. It would have made far more sense for 64-bit resources to go into System64. This would have broken 0% of 32-bit software, and would have broken 0% of 64-bit software (because none existed).
I don’t think microsoft planned to make it so convoluted; instead I think it was an unintended byproduct of how management handled the port.
They likely needed to get windows working on 64-bit as part of phase 1. Once this phase was done, they moved on to adding 32-bit legacy support for preexisting software in phase 2. However, at this point they had already used System32 for their 64-bit dependencies, so they created the new file system mangling with SysWOW64. This is my theory for why it ended up this way.
But technically all of it could have been avoided if microsoft had planned for the 64-bit DLLs to go into System64 from the get-go. And not for nothing, but windows is the only platform that requires this hack. Linux and Macs can handle multi-architecture binaries without file system kludges. Windows could have as well, with better planning.
My objection with Linux is that you need a map and a dictionary to know one end of the file system from the other.
HollyB,
It’s the same with any platform. Things can feel foreign when all your experience is on one platform and then you move to another where things are completely different. It can be extremely frustrating to have to relearn how to do things. I had decades of experience with windows and linux, yet I walked into one company where all they had were macs, and boy did that make me look incompetent in front of them, haha.
Most people don’t have a reason to change and should use the platforms they are most comfortable with. I have no interest in pressuring people to change if they don’t want to.
@Alfman
Stop patronising. I can tell the difference between good design and being a noob, thank you very much, and it has *nothing* to do with comfort zones. Maybe I actually know something about the topic (like reverse engineering faded colours), because I have a lifetime of experience, multi-domain expertise, and a sense of curiosity. So when I am talking about design, I am talking about design – not having a hysterical fit or parroting what I read in last week’s newspaper.
HollyB,
Great. I’m not dismissing your experience and I respect your opinion, but at the same time it’s fair to ask you to respect mine. Given my experience, Linux is the better tool for me, and given your experience, windows is the better tool for you. Who cares? I’m not here to argue, just to have fun and talk.
No, you don’t really understand Windows back compat, and how changes that make perfect sense from a computer science perspective might in fact be major compatibility departures that break commercial applications.
When an application runs “natively”, all paths have to be exactly what the application expects them to be. This is a problem because apps might have things like “%Windows%/system32” hardcoded, but be compiled as 64-bit apps anyway. So, 64-bit apps get the original directory names and 32-bit apps (which run in a Wine-for-Windows sort of compatibility layer and don’t have access to the true filesystem) get the new directory names.
kurkosdr,
It doesn’t take a computer science degree to understand that NOT changing the path would have been 100% “backwards compatible”. Seriously, I expect everyone here to be capable of understanding this, unless they adamantly insist on being in denial. There isn’t even a technical requirement for 32-bit executables to run under “Program Files (x86)” or 64-bit executables to run under “Program Files” (many are actually in the wrong place, btw). This did not have to happen, and no other platform suffers this problem. There was no innate “backwards compatibility” problem that required microsoft to change the path for 32-bit executables and DLLs. This is completely self-inflicted.
Only 64-bit software, which obviously had to be ported and rebuilt for 64-bit windows anyway, would have needed to be tested against the new path. Most applications don’t need to hard-code paths anyway, because they rely on windows itself to load DLLs (windows executables don’t contain the path for DLLs).
Now that it’s done, it’s done, but the simple fact is it was never necessary for backwards compatibility. Today it’s necessary to keep this file system munging for “backwards compatibility”, but at the time of the 64bit transition it most certainly was not!
If anyone continues to insist that changing the path was necessary for backwards compatibility of 32bit applications, then I request proof that legacy 32bit applications would not have worked without changing their path and adding new FS kludges to map it back to the original path.
Eh. While Windows provides %SystemFolder% to correctly find the System32 directory, a ton of software locates it with %WINDIR%\System32, and moving the system folder for 64-bit software would have created an extra burden on porting to 64 bits. Since there is already an extensive directory redirection system in place to accommodate old software that wants to write to its own program directory, it makes sense to use that existing tech to accommodate software that wants to peek into System32.
And as for SysWOW64, well, it makes sense when you consider that the 32-bit subsystem is called Windows on Windows 64.
But then again I’m also apparently one of the few that isn’t confused by the X11 client/server relationships (So many people are confused by it!)
Drumhellar,
I don’t believe anybody would be complaining about porting to 64-bit on the basis of having 64-bit DLLs in System64 rather than System32. How would this go exactly? “Oh man, microsoft really screwed us developers with the 64-bit windows port. Now to build 64-bit windows software they want us to use the System64 directory? What the hell does microsoft expect us to do? Search for system32 and change two characters? Or worse yet, use an ifdef? This is total BS, I tell ya.”
When did old devs become millennials? J/K.
Seriously though, in an alternate timeline where microsoft did the logical thing, nobody would be complaining about it, because it makes sense and follows the KISS principle.
Just because you can doesn’t mean it’s a good design.
I understand where it came from, but bear in mind this came up in the context of a discussion about the layout being innately understandable without explanation. Also, from a technical standpoint, 32-bit is not a “subsystem” any more than 64-bit is. Obviously for technical purposes the names can be arbitrary (c:\windows\Khrj3 and c:\windows\Ju89jknw), but for readability it makes the most sense to label each folder for what it contains, and by this measure “SysWOW64” is objectively a bad label. Since System32 was already being used by 32-bit resources, using System64 for 64-bit would have made perfect sense.
In terms of the two “Program Files” directories, I wouldn’t have bothered. Segregating executables isn’t terribly useful, and most operating systems don’t bother. An operating system can have a mix of x86, x86_64, and even other architectures like arm32 and arm64 running side by side, even in the same folder, with no issues at all. Even on windows itself you can find some 32-bit software in “Program Files” instead of “Program Files (x86)”.
Anyways, what’s done is done. There’s no changing it now. Life goes on.