While many Linux enthusiasts like to cite Linux’s stellar support for older hardware platforms, in reality that isn’t always the case. For instance, many old X.Org user-space mode-setting drivers for powering old graphics cards, at least for display purposes, can no longer even build with modern toolchains and software components. Given the lack of bug reports around such issues, there are very likely few users trying some of these vintage hardware combinations.
Longtime X.Org developer Alan Coopersmith of Oracle recently looked at going through all of the available X.Org drivers that aren’t in an archived state and seeing how they fare — with a goal of at least setting them up for simple continuous integration (CI) builds on GitLab.
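For the drivers that still use the standard X.Org autotools layout, even a minimal GitLab CI job would catch this kind of breakage. The sketch below is an assumption about what such a job could look like, not Coopersmith’s actual configuration; the package names are Debian’s:

# .gitlab-ci.yml (hypothetical sketch)
image: debian:stable

build:
  stage: build
  before_script:
    - apt-get update
    - apt-get install -y build-essential autoconf automake libtool pkg-config xorg-dev xutils-dev
  script:
    - ./autogen.sh    # standard in xf86-video-* driver repositories
    - make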
This is the inevitable result of hardware that was often already obscure and rare when it was new – let alone now, decades later. All we can hope for is that a few people still holding onto this hardware will donate either time or hardware to help keep these drivers building and running.
This is one reason why the portability layer matters. All that “waaah waaah” in the comments when I was talking about old stuff, and how new compilers are better, and blah blah? Then there’s the abstractions… Gosh darn that portability layer covering C/C++ compiler quirks would be really handy just about now.
I read through the comments under the link. People saying old cards and old code should just be ditched (just like the lazy thinkers justifying their buying into Windows 11) really have no clue whatsoever what they are talking about. For a start, new stuff today will be old stuff tomorrow, and they will be going through all the same issues in 20 years’ time, having learned nothing and never once having sorted out their portability, abstraction, and scalability issues from day one.
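To be concrete about what a quirks layer looks like: a minimal sketch of such a header, with made-up names since every real project spells these differently:

/* compat.h – minimal sketch of a compiler-quirks layer (hypothetical names) */
#ifndef COMPAT_H
#define COMPAT_H

/* Switch-case fall-through annotations are spelled differently across
   compilers and language standards. */
#if defined(__cplusplus) && __cplusplus >= 201703L
#define COMPAT_FALLTHROUGH [[fallthrough]]
#elif defined(__GNUC__) && __GNUC__ >= 7
#define COMPAT_FALLTHROUGH __attribute__((fallthrough))
#else
#define COMPAT_FALLTHROUGH /* fall through */
#endif

/* Mark deliberately unused parameters without tripping -Wunused. */
#if defined(__GNUC__)
#define COMPAT_UNUSED __attribute__((unused))
#else
#define COMPAT_UNUSED
#endif

/* "inline" only arrived in C with C99; older MSVC in C mode spells it __inline. */
#if defined(_MSC_VER) && !defined(__cplusplus)
#define COMPAT_INLINE __inline
#else
#define COMPAT_INLINE inline
#endif

#endif /* COMPAT_H */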
HollyB,
Everyone and their dog has portable abstractions, yet you have never specifically answered why yours were better. It’s all talk.
Can you knock off the lies and office politics? Your stalky attacks are unpleasant on a personal level and unwanted.
You have no clue what I’m talking about and it’s all there in the post I wrote. If you do not know compilers or how C/C++ works, just admit it and move on. Honestly, the way you’re acting is like some office boor drifting past my desk and thinking he has a God-given right to an audience, when maybe I’m busy with my own work or perhaps you’re not even the intended recipient? You blow off about enough in here yourself and you don’t find me biting your ankles, so please do kindly show the same level of respect and manners.
HollyB,
No lies, no politics. It’s just a fact that you’re not the first or last person to write portable code to deal with quirks. So what specific reasons do you have to claim your portable framework is better than the ones we have today? You evade the question every time you repeat the claim. It’s a simple, innocent, and reasonable question.
Just scroll past and don’t bite. It’s the best way.
@Under-phil
There’s always some troll on every forum. It’s just ours is a woman. I mean, it’s good that females are breaking into the world of sexist trolling. After all, it’s an almost entirely male-dominated field. HollyB is doing a great thing for equality.
Sometimes it’s fun to poke the bear. Just remember not to take the bear too seriously, and remember that the bear is good at flinging shit about. Otherwise, sit back, grab some popcorn, and let someone else poke it.
No, it’s a question you keep asking when the answer is already known and you should know it. Someone else in this thread has heard of compiler quirks, while you seem to be putting on a dumb boyfriend act on purpose.
No. People do not abstract as they claim. There are many who do not bother, and you know it. Of those who do, many think they are abstracting when at best all they are writing is a wrapper, not an abstraction that covers drop-in replacements, cross-platform alternatives, or future modifications. It’s less of a problem now, but many (almost all) would never even think of handling 32/64-bit in a portable way. Oh, whoops.
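On the 32/64-bit point, doing it properly costs almost nothing: use the fixed-width types instead of guessing what long happens to be. A minimal sketch:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Storing a pointer in a long breaks on Win64, where long stays 32-bit;
       uintptr_t is guaranteed wide enough to hold a pointer. */
    uintptr_t addr = (uintptr_t)&addr;
    uint32_t  reg  = 0xDEADBEEFu;  /* exactly 32 bits on every platform */
    int64_t   off  = -1;           /* exactly 64 bits, signed */

    /* The PRI* macros keep printf formats correct across data models. */
    printf("addr=%" PRIxPTR " reg=%" PRIx32 " off=%" PRId64 "\n",
           addr, reg, off);
    return 0;
}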
As for yourself, Mr Open Systems, why do you code only in CUDA? Like, what??? Where’s your abstracting? Hello, OpenCL? Please do tell us what your excuse is.
And yes, I have noticed you buttering Thom up and slyly fluffing up drive-by attacks. That is pure office politics, so the Mr “We’re All Peers” mask slipped.
So do shut up. I do know what I am talking about and have been there, and worn the tee-shirt. And you can tell that from the handful of comments scattered in different topics which basically repeat exactly what I have said. Keep your projecting and wilful blindness and unprofessional behaviour to yourself.
HollyB,
From where I’m sitting, it doesn’t seem like you are able to answer. It’s a simple reasonable question regarding your claims.
This is a public forum, you don’t get to tell others to shut up here.
Pretty much the majority of comments here, in one form or another, agree with my position on the broad issues, whether technical or managerial.
The only one trolling here is you, Alfman. Your behaviour is an embarrassment to the profession, OS News, and most of all yourself. The fact you don’t have a clue about abstraction yourself, nor are able to answer questions yourself, does shed a different light on things. Please do sit down and be quiet for your own good.
HollyB,
Ad hominem attacks are irrelevant. You still haven’t given answers for very reasonable questions about your claim. What is your specific reason for claiming that your abstractions are better than our modern abstractions? You keep making this claim but you are never prepared to back it up. You can attack the skeptics for being skeptical but at the end of the day if you cannot back your claims then it seems we were right to be skeptical of them.
I’m in agreement with Alfman on this one. You make quite a few assertions, but you can never back them up with actual facts. Personally, I am beginning to wonder if you have simply read a book about abstraction and now you are an expert on the subject. If you have actual, physical proof of this magical framework you talk about, please show it so we can put this to rest.
@The123king
I have to admit I chuckled throughout your post. You make a fair point. I’ll bring the snacks.
–Gosh darn that portability layer covering C/C++ compiler quirks would be really handy just about now.–
This really has nothing to do with it, HollyB.
The majority of the build breaks with the older drivers come down to essentially one issue: features that were removed over half a decade ago because they were badly flawed, while the drivers’ code bases were never updated to match.
As for the idea that modern compilers being smarter than their historic versions, and compiler quirks, are the issue: of the over 1000 issues he found, only one was a compiler issue, and that was the compiler refusing to build because it was a 64-bit compiler and the driver contained 32-bit assembly for a 32-bit-only platform. That is not really a compiler quirk problem, just an incompatible architecture problem.
In reality this problem is what is called bitrot, where no one is maintaining or building the code base, so minor API/ABI changes that stop a driver building go unnoticed. It also turns out there are a lot of merge requests against various of these drivers that would fix the bitrotted code but were never processed, some for eight years, because those drivers had no maintainer.
Bitrot is largely a myth, or an artifact of bad abstracting. Bad abstracting is people thinking a basic function wrapper is enough, when you have to think through the architectural issues, avoid feature creep from random APIs which add no meaningful value or lead you up blind alleys, and forward plan. Also, yes, mind your 32/64/whatever bit lengths and use portable definitions. It’s less of a problem now, but a lot of people got caught by this during the 32/64-bit switchover. If you get your abstracting right you can have code last a lifetime.
API/ABI changes are a sign of bad abstracting. This has knock-on effects, with people deciding to arbitrarily delete chunks of code. Resources are not infinite, so you get more knock-on effects with code not being maintained.
I don’t know about the drivers but one of the big pluses of Solaris is every API was guaranteed going forward. They could do this because they abstracted properly from day one.
HollyB,
Except that even portable abstractions can ALSO suffer from bitrot. You need active maintenance to support new hardware, which constantly changes and improves. And if you’re building an abstraction layer on top of an operating system’s hardware abstraction layer, the main reason your abstraction will continue to work over time without interruption is that you’re already benefiting from someone else’s abstraction. Their work is keeping the APIs you use consistent. But if that OS API ever gets deprecated, then your abstraction will need active maintenance, or else it too will suffer bitrot.
You throw around the idea behind an abstraction as though you’re the only one to think of it, but practically every developer is taught to create abstractions. It is not unique to you in any way. If you disagree then please elaborate with specific details as to how the industry has got it wrong.
Have you heard of this wonderful API called Win32? It largely keeps backwards compatibility with software that’s nearly 30 years old. In fact, I heard the vendor has just released the latest version of their software, called “Windows 11”.
Sounds perfect for you.
This is about driver code, which is the opposite of portable code.
What do you suppose the use case is for this? I can understand folks dabbling in this hardware to run DOS or Windows XP-and-earlier environments, but what would draw one to use a current kernel on such old hardware?
Under-phil,
Old hardware can still be invaluable for running old software, like you say.
However, that’s not the case in this article, where they’re talking about new kernels. Well, I guess in theory there may be people who still want to run modern linux on an ancient S3 graphics card or other hardware of that epoch, but it’s hard to imagine many people who haven’t upgraded at all in all these years, haha. I wonder if it would still work under vesa even without a driver. Funny thing is I might still have some S3 cards too, haha. It could be useful if I ever needed to run dos/windows 98 again, not that I ever will.
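If I ever dig them out, my understanding is that forcing the generic vesa driver is just a device section in xorg.conf, something like this (untested, from memory):

Section "Device"
    Identifier "Card0"
    Driver     "vesa"
EndSection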
Yeah, understood. I’m certainly not commenting on whether or not it’s worth the effort, just trying to see a picture of where modern versions of X11 and aged hardware intersect.
Well, these older drivers are not in the Linux kernel.
X11 is literally a server running in the background, unlike on Windows, where most of the graphical subsystem exists in the kernel.
Before KMS, these drivers were all user space: a module loaded by X11 when called, either manually or by your OS service/init manager.
Indeed, the beauty of these older drivers is that they were fully multi-platform!!! They would run on any BSD (FreeBSD, OpenBSD…) system, for example.
It’s from a different era, before the Linux kernel began to swallow more and more of the graphical infrastructure into itself (and breaking compatibility with everyone else).
Those in the Linux kernel are unlikely to ever break. But those in X11… they are dead. Because X11 itself as a project is almost dead, with all the Linux folks on the Wayland bandwagon now. The folks at all these BSD projects may be hardworking people, but they don’t have the manpower to donate to keep X11 in shape.
Fair point, my wording was certainly off. But as someone who recently tried an ultra-light X11 GUI on a P2-300 with 128MB of RAM, I’m still unsure. It was unusable.
“Those in the Linux kernel are unlikely to ever break.”
That’s pretty much the opposite of reality… Linux drivers often break every few releases, as they aren’t “userspace” interfaces. In fact there is so much code churn that it makes it difficult to port any Linux drivers to any other system.
Also, lesser-known drivers don’t just keep working… they break as soon as someone makes a change that didn’t get tested. 90% of the people making changes don’t have the hardware or time to test.
Well, unless someone has an old system they’re still using, the likely use case is just to have fun pushing old or odd hardware to its limits just to see if they can.
There are no use cases significant enough, which is why the support gets dropped.
There’s no reason to keep parts of the codebase around that are not going to be exercised much, so pruning them out makes perfect sense from a reduction-of-complexity perspective.
This is one of the things that keeps me on Windows: thanks to Microsoft keeping ABI interfaces stable between Windows NT major versions, devices are either supported or they are not. An exception is made for GPU drivers (due to Microsoft changing WDDM substantially in Windows 8.1), but the general rule is that GPU drivers from Windows 8.1 and later will work on the latest Windows, and other drivers from Windows Vista and later will work on the latest Windows.
The whole “get your driver’s source in the tree and we will maintain it for you” promise always sounded suspicious to me, considering neither Ubuntu nor RedHat care about anything that isn’t an Intel GPU. Or that, in general, driver code for some old hardware someone contributed 10 years ago is hard to debug and most people just don’t care. I prefer ABI compatibility.
… and sound drivers. They turned my Sound Blaster, with its extensive feature set, into a simple PCIe device when they virtualized the sound system.
That being said, yes Windows has been very good on ABI and API stability. You can still run programs from 30+ years ago, and you can use most of the hardware even if very old.
That being said, there was some effort on Linux to do better. Anything in the kernel within the DRM tree should work for graphics cards. Capture cards were okay in V4L, but I think some support might have been lost in the V4L2 migration. FUSE works okay with file systems. And CUPS has made printing very stable.
Wish someone set up a lab, like Microsoft’s hardware support one, and made sure new versions did not break old hardware. But that would be a huge undertaking.
What Sound Blaster is that? What is the model name and what driver did you use?
Just curious.
Obviously, you should visit your PC manufacturer’s website or the peripheral manufacturer’s website (in that order of preference) to find the most recent driver, but I have been using Vista/7 drivers for things other than GPUs and they just work. The only exception I have come across was an ATI Mobility Radeon X1600 (latest driver was for Windows 7), which is a GPU. And even then, the only issue I had was that you had to put the laptop to sleep and wake it up to get around a black screen issue during boot.
It was something like this:
https://www.amazon.com/Creative-Sound-Blaster-Audigy2-External/dp/B000ST8TLK
The main advantage was it had 5.1 input and output. So that, I could use my PC as a cheap receiver for my gaming console. And it would play Bluray/DVDs with surround sound without an external sound system. Basically it had 3D audio before 3D audio was popular.
I was able to get it running, with some limits, using a hacked driver on Vista.
I think it completely stopped working in Windows 7, and I discarded the thing.
Until Microsoft dropped support, I think you could get some Windows 3.1 drivers working on Windows 9x and later versions. Some NT 4 drivers worked on later versions, much to everyone’s surprise.
I forget the cut-off versions for driver support, but a large part of it is arbitrary. If you have a good driver model (i.e. one designed for forwards compatibility, with good abstractions and a good DDK which does half the work for you), driver support can effectively be infinite. Dropping support, cutting code out, or changing interfaces is a political decision, or down to sloppy practice.
HollyB,
I agree Microsoft could have done better. But they made a decision to change the almost 15-year-old ABI to better support modern needs. Linux did a similar thing with ALSA.
Yes, of course they could have added a compatibility layer, and have all previous features supported. Vista was a time of bad decisions for Microsoft.
That being said, Creative also dropped the ball. They, along with Nvidia, were very bad with Vista drivers. I think their attitude was something like “it is better to have customers buy a new device than for us to support our existing stuff”.
That was the last time I purchased Creative branded hardware.
So that was an XP device with no official Vista drivers. Windows XP is NT 5.x and Windows Vista is Windows NT 6.x, so you jumped major versions.
The Vista breakage had to be done in order to fix crashes due to bad drivers. Since the industry was moving to x86-64 at the same time (Windows 64-bit won’t even load Windows 32-bit drivers), it’s good we got one breakage instead of two.
Yes, we all lost hardware during the XP -> Vista transition, but I would never go back to Windows XP. A common joke in Windows circles is that Windows 7 is the service pack and performance upgrade intended for Vista under a new name, by which time people had let go of their old devices.
Mainstream OSes with a heritage back to the 90s have at least one “bad practice”, so sometimes you have to stop supporting old habits. This is what Vista did.
Nope, the cut-off points are well-defined and were chosen for very clear reasons. There are three cut-off points:
Windows 3.x -> Windows 95 (move to the fully 32-bit Windows 9x, 16-bit relegated to compatibility only)
Windows 98SE -> Windows 2000 (move from Windows 9x to Windows NT, I think I don’t need to explain why dropping Windows 9x was necessary)
Windows XP -> Windows Vista (cease support for bad practices of driver vendors in NT 6.x)
There was also a “mini-breakage” in Windows 8.1 for GPUs only, but since post-Vista GPUs are well-supported, almost nobody noticed. The ATI Mobility Radeon X1600 is an XP-era GPU and even then the breakage was tiny.
Keep in mind, this is during the span of the past 30 years, going from Windows being a lame UI for DOS to Windows juggling multi-core CPUs, adding tons of security and handling multiple GPUs with shaders, HDR and ray-tracing.
The only arbitrary breakage was Windows 98SE -> Windows Me, with Windows Me originally intended as a feature pack for Windows 98SE but ended up breaking compatibility with it somehow. But it was a wholly unnecessary version of Windows anyway, so nobody cared.
kurkosdr,
I’ve had a lot of peripherals stop working after Windows upgrades, to be honest. But at the same time I’m not a fan of the continually breaking ABI in linux either. At least with linux, most of the in-tree breakages aren’t exposed to the user, as they get fixed behind the scenes. But for out-of-tree drivers (ie nvidia), out-of-tree file systems (ie aufs), or local kernel patches, the linux ABI instability can be a regular source of frustration, far more often than the occasional breakages on Windows.
I still think that open source is much better than closed source, but in cases like android where we lack driver source, the lack of ABI stability has been detrimental. It has stripped our ability to run and support the linux kernel independently from the drivers that we cannot control.
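To give a flavor of the maintenance burden: out-of-tree code ends up littered with version shims like the sketch below. The mydrv_ wrapper name is made up, but the underlying change is real: access_ok() lost its VERIFY_READ/VERIFY_WRITE argument in kernel 5.0, so a single driver source tree needs a shim to build on both sides of that release. Now multiply that by every kernel API that churns.

#include <linux/version.h>
#include <linux/uaccess.h>

/* Hypothetical compat shim for an out-of-tree module. */
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 0, 0)
#define mydrv_access_ok(addr, size) access_ok(addr, size)
#else
#define mydrv_access_ok(addr, size) access_ok(VERIFY_WRITE, addr, size)
#endif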
Alfman,
To be fair, Windows is also going in the wrong direction, but not too far. This happened after they discovered “crowd sourcing”.
They used to have a “lab” with all different hardware and software configurations. You’d see cross combinations like “English Windows XP SP1 with Turkish version of Office 2007 on such and such machine”.
Then they dropped that for more streamlined testing. I think most of it is in virtual machines now.
And “skip ahead” tiers on Windows Update.
They realized that any update breaking a subset of users can be tested by those users instead of in house labs. Hence, much more breakage.
(This is of course based on hearsay, and outside observation. I don’t work at Microsoft, and I would probably talk even less if I worked there)
One particular news article on this:
https://www.ghacks.net/2019/09/23/former-microsoft-employee-explains-why-bugs-in-windows-updates-increased/
(Edit: Agree on Android. It would be in a much better state if drivers could work across versions.)
@Alfman
To be fair I’ve had this as well, especially early in the release phase, but I’ve been very pleasantly surprised at how much of it gets resolved in later updates. I work with a lot of obscure custom commercial hardware, and there is hardly any that I’ve had to bin in the long term. Actually, in recent times due to improvements in VMs I’ve been able to rebirth a lot of it.
cpcf,
That can depend on whether the manufacturer considers the product in support or end of life. I usually intend to keep using hardware until it’s broken, but if a driver update is needed and isn’t forthcoming, then it may be the practical end for that device.
You know what, I haven’t tried using VMs to continue using old hardware. Like the scanner, I guess it could run in a VM. It might be a bit annoying, but as long as it shares a mapped drive it might not be so bad. A non-technical problem is I don’t have any retail licenses to use older windows operating systems in a VM and MS stopped selling them.
Yeah, well that GPU exception also exists in Linux, it’s called X11 -> Wayland.
That’s all this is.
Indeed. Torvalds is an excellent programmer, but he has a terrible track record on the more business-like decisions, like providing a stable driver ABI, or telling the community that hundreds of competing desktop environments were better than unifying on one. The meme of the “year of desktop Linux” actually makes a pretty good case study of why your managers shouldn’t always be programmers. The Linux community continues to look for magic bullets to solve their problems, but most of their problems are rooted in bad decisions that simply need to be reversed.
Torvalds’ business decisions make perfect sense once you realise that the target market for Linux is servers, not desktops. Servers need a performant kernel and also tend to be made of common hardware powered by open source drivers. So, the ability to innovate on the Linux kernel without ABI compatibility constraints is an asset in the server space, not a liability. In fact, Windows Server, whose kernel is constrained by the need to maintain ABI compatibility with printer drivers from 2008, is the OS carrying liabilities in the server space, not Linux.
It’s the same business decisions that in the past caused the Linux kernel to ship with severe laptop power management regressions.
People who use Linux on the desktop are using digital hand-me-downs: basically a repurposed server OS with a graphics and audio stack tacked on top. And even those graphics and audio stacks kind of suck.
This also showcases the folly of the concept known as “one OS to rule them all”. Most people will use whatever OS is best for the job instead of displaying loyalty to any particular OS, which basically boils down to:
Servers -> Linux
Gaming rigs -> Windows
Desktops and laptops -> Windows or Mac
Phones -> Android (or iOS, if you don’t mind being restricted to a single app store)
The whole “one OS to rule them all” concept is something that exists more in the heads of PHBs as a wild fantasy than something based on technical reality. It’s why any attempts to move OSes “up” or “down” the stack that is shown above have generally been failures. It’s also why Tim Cook won’t merge iOS and MacOS, not in technical terms at least. It’s also why Microsoft’s decision to let Windows CE rot and then push for “one core” (even on Windows Phone) was the biggest mistake the company ever made.
kurkosdr,
That’s an interesting take. Although personally I find breaking ABIs frustrating on the server too. Anyone who needs to maintain out-of-kernel code/drivers has a maintenance burden regardless of whether it’s for server or desktop. I would think that LTS releases, which target enterprise, would go hand in hand with stable ABIs, but it really depends on whether or not the kernel itself needs long-term support or just the userland. On the other hand, if someone views linux as a turnkey project that they take a snapshot of and then incorporate into a product/service until EOL, then ABI stability is hardly a concern.
I’d like to see a compromise between fully stable and fully unstable ABIs, but it obviously won’t happen without buy-in from the top.
I think it could benefit from better planning and upfront design, but this is not the way the linux community is organized. Linux subsystems are by and large a bunch of pet projects that get promoted by the mainstream distros. By then the designs are already chosen, and we lose the opportunity to make more cohesive design choices up front before code is written.
Most server hardware is surprisingly common. In fact, most servers don’t even run a graphics or audio stack. So, all that’s needed is support for ethernet adapters. RedHat and Canonical also support Intel GPUs, Intel WiFi, and Intel High Definition Audio so that developer laptops can exist. That’s all RedHat and Canonical care about from a business perspective; everything else is volunteer-maintained stuff that they just allow in.
That’s why crying for ABI compatibility is fruitless. It’s not what the Linux business is about (and yes, Linux is a business). It’s also why I use Windows, where all kinds of diverse hardware is either officially supported or not, instead of being “supported” by some volunteer-maintained driver that may or may not be maintained.
I get what you are saying, but there’s also RAID, IPMI, and other BMC stuff that is often proprietary, like the OpenManage suite from Dell. Although it’s since been fixed, I had a Dell server that needed a kernel patch to use its Broadcom NICs. Another point is that it’s relatively common for smaller businesses to use off-the-shelf computers for server use cases.
I agree, a stable ABI is unlikely to happen. Most enterprise customers have outsourced kernel support to RedHat/Canonical/Dell/etc, and as a result they don’t deal with unstable ABIs directly. Even a company of Google’s scale has struggled to push stable driver support into linux for android, and I believe it could be a motivating factor behind Fuchsia.
That’s not totally fair; you can find vendors that explicitly support linux, but you have to pay niche prices because they don’t have anything close to the economies of scale or competition of Windows vendors. You cannot walk into a Best Buy and expect random hardware there to work; it’s hit or miss unless you’ve done the research first.
It doesn’t need to be this way, you can actually get good local support for linux in the form of chromebooks for example, but that’s because google put in all the marketing work to make them popular. It’s a problem of critical mass more than anything else. Without a corporate behemoth pushing linux desktops in front of consumers, schools, businesses, etc, the linux desktop will remain the domain of more ambitious DIY users.
“This is one of the things that keeps me on Windows: thanks to Microsoft keeping ABI interfaces stable between Windows NT major versions, devices are either supported or they are not.”
I have kind of had the opposite experience. You can run “old” Windows on old hardware, but trying to keep current Windows working on old hardware can be a real pain. I have had lots of old hardware not work, and have had the drivers that were released for that hardware not work on later Windows versions. One of the problems is that getting Windows installed to begin with can be a nightmare if the hardware is not supported. In contrast, it is very seldom that I run across hardware that does not basically just work out of the box using a recent Linux distribution.
Where Windows shines, I find, is with very new hardware that does not yet have Open Source support, with specialty hardware that is too niche to have Open Source support (more of an amplified version of the first point, I suppose), or where the manufacturer seems to want to avoid Open Source support on purpose (looking at you, NVIDIA!).
One of the reasons I avoid Windows on older hardware is to skip having to track down drivers for everything. Windows tends to be too heavy for older hardware as well, although I would say that this has been getting a bit less true over time. It is certainly easier to strip Linux down to perform better on limited hardware, although people with more time than me have certainly found ways to trim down Windows as well.
The worst is OS X on old hardware. Apple is actively hostile to supporting old hardware (which makes sense, since they make money selling new hardware and mostly give the OS away). As I stated elsewhere, I am typing this on an old Mac (early 2008) that I am using while my laptop is out of service. I tried running OS X on it first. Very little recent software would run on the last version of OS X that Apple supports. There is “community” support for a newer OS X release (I went with an older release for performance reasons – Mojave) and I was able to get it running. Unfortunately, the system slowed down tremendously when running even a few apps. With Manjaro on it now, I have to watch the RAM a bit but am running totally up-to-date software and interacting with my colleagues in a rich way without them even noticing (video conferencing, sharing office docs, group messaging, email, booking meetings, and a few collaborative web applications). Oh, and wasting time on OSAlert. As an aside, I am loving the 16:10 aspect ratio (1920 x 1200). It is especially great for video conferencing as, when others share their screen, there is room above and below for controls and headshots. I can easily read their content and still have room to run applications on the side. I am probably going to continue to use this ancient machine as part of my daily workflow even after I get my laptop going.
I was just cleaning the garage yesterday and dusted off an old Sun Ultra 10. I was thinking of installing Linux on it over the weekend. It was sitting on top of an SGI Indy. I am sure that I have an S3 card kicking around somewhere too.
I am actually typing this on an early 2008 iMac that I have installed Manjaro Linux on. My laptop died and I have actually been using this machine for work over the last couple of days. It has actually surprised me at how well it has held up. I had to submit a presentation and LibreOffice 7 worked great. Outlook webmail, Zoom, and GoToMeeting all run fine in the latest version of Microsoft Edge. I have even done a bit of .NET 6 development.
Anyway, I am apparently the kind of person that runs modern kernels on old hardware.
There is an element of this that is the kind of behaviour that should be encouraged. Perfectly viable, perfectly capable hardware is sent to landfill. It’s always nice to get more life out of old systems without having to compromise yourself with out-of-support, vulnerability-ridden software.
A write-once, ship-and-patch mentality tends to encourage bad design and a constant hardware-software upgrade treadmill. I know there comes a point where things simply have to be retired, but I’d rather it was done for the right reasons.
Over 90% of security problems are the fault of software applications or user error, or connecting things to the internet which shouldn’t be.
Some people still use typewriters…
I don’t see such a big issue if somebody discovered some compilation issues Linux has regarding graphics hardware such as, let’s say, S3 Graphics. There are two reasons for that. The first being that this is now really old hardware, more or less unusable for any practical tasks. And second, if somebody doesn’t agree with that, they can invest some time in it and make it work again. Just like I recently saw somebody doing for better floppy drive support. This is still light years ahead compared to what any other platform offers.
The way I see it is: design and build things properly the first time, so stuff doesn’t fall over 20 years later and need intense maintenance.
In the software world you always need intense maintenance. That is, if you want to support something general-purpose for, let’s say, 20 years or more. Linux, being a monolithic kernel written in C, having built-in FOSS drivers, and having a good policy on things like code quality and updating all related code when introducing progress, is currently the closest thing to what you ask for. Nobody has achieved more.
Geck,
Well, arguably nobody has beaten the mainframes in terms of software longevity, haha. But even there the maintenance burden is very intense with changing times. Even if someone had created perfect software (whatever that entails) for the year 2000, it would still need to adapt to remain viable in 2020 or risk becoming obsolete. IMHO change is inevitable. Individuals can sit it out if they’re so inclined, but the world around them will keep moving.
How much work do you reckon it would take to add support for S3 Graphics to z/OS? And to do that twenty years later? Would IBM be interested in doing that? Likely better to just use Linux as a basis to achieve that? As it turns out, Linux is a rather popular choice on mainframes nowadays. If we move the debate from hardware to software support: I would say that if you wrote some software for Linux in C, Python 2 … around 20 years back, in my opinion you shouldn’t have much trouble maintaining that code still in 2022, and running it on the latest popular Linux distributions. People here usually believe this is hard to achieve on Linux, but the reality is the opposite. What people are basically asking is why they can’t create a universal package for Linux, to distribute their software, and expect it will still work 20 years from now. Now the main question I have here is: would anybody really feel all that comfortable running a 20-year-old blob on the latest hardware and latest Linux distribution?
Geck,
I don’t follow what you’re getting at. Why would IBM bring S3 to the mainframe? IBM does support linux as one of the mainframe environments, but it’s not really targeting typical linux users. It’s for those with mainframes who might need ancillary linux services. Technically you could buy a mainframe and just use it for linux, but that’s not really IBM’s selling point.
I more or less agreed with what you said earlier: “In the software world you always need intense maintenance. That is, if you want to support something general-purpose for, let’s say, 20 years or more.” Of course you can choose not to update and continue running legacy, but then you eventually lose long-term support.
My point was more about 20-year-old software becoming irrelevant if it’s not maintained and doesn’t evolve. Of course there will always be enthusiasts who run old stuff because they enjoy it and they can. I’m all for old-school enthusiasm as a hobby, but in general normal people don’t look to use legacy software and hardware as a daily driver.
@Alfman
Beats me why somebody would want to bring S3 to the mainframe. People tend to get all sorts of ideas. Like, let’s say, some would like to use Rust on a mainframe. Note that in general we agree; it’s just that people need to be pesky in an internet debate.
Have just read on Phoronix how, 21 years after the last 3DFX card was released, Mesa will likely now get a (modern) Glide API, with Rust used for the frontend. This just answers so many questions asked here, so I decided to mention it.
You guys are making a mess of this news. These drivers are NOT part of the Linux kernel project.
They are part of X.org’s X11 server reference implementation, which is used by almost every UNIX-based/influenced system out there. Some of these drivers that don’t work anymore are for hardware made in the early 90s! For example, the “newport” driver for the SGI Indy, a workstation from 1993. Or the driver for the DEC TGA from 1994. And those fine “Sun” drivers as well.
These drivers are used on Oracle Solaris, OpenIndiana, FreeBSD, OpenBSD, NetBSD, OpenVMS… and the list goes on. Even GNU Hurd has a port of most of these, as far as I know.
The issue is: the Linux kernel incorporating more and more of the graphics stack into itself instead of keeping it in user space (and part of X11), plus the decision to move to Wayland, made the entire X11 project fall into a kind of “maintenance mode”, with nothing new really going on there.
Thus all operating systems other than Linux that use X.org X11 are at a crossroads: either they don’t know what to do, risking a future without a graphical GUI, or (like FreeBSD and their LinuxKPI drm-kmod subproject) they attempt to make their own kernel-level graphics stack as compatible as they can with Linux so they can port the drivers from it.
CapEnt,
Of course you are right and I think most of us know that X11 is not technically “linux”. I don’t think most people really care about these semantics though. It’s like using “GNU/linux” to signify that linux is just the kernel. That’s technically true, but pedantically pointing it out every time becomes redundant given the surrounding context clues. In this case the article properly pointed out that these are userspace drivers.
If your point is that Linus Torvalds isn’t responsible for these drivers because they’re in a different project then that seems fair.
That’s a good point. In the past a lot of the unix projects were designed to be as portable as possible between unix flavors, but now they’re increasingly resorting to linuxisms that only work on linux. I do wonder what the future holds for alternative FOSS platforms if linux decides to go it alone.
I think those roads were crossed long ago.
Of the 3 remaining commercial unixes, two of them, HP-UX and AIX, became headless long ago, and the third, OS X, has its own display architecture.
Isn’t this the basis of open source? People focus on what interests them and contribute towards it with their time and skills, and even then they might not be able to afford the time on top of their day job. If something doesn’t galvanise interest then it gets bitrotted into archival status. If you want or need this functionality but don’t have the personal skill, you can sponsor a project to do it, though many will be surprised at the $ cost of that. In this case Oracle has funded someone to look at this, which doesn’t directly benefit their business model in any way (as far as I can see). So good on them! I would be in no way surprised if this task cost $10k+ in chargeable t+m effort.
I think too many people confuse political decisions with technical decisions, and their personal worldview and preferences with best practice. Another thing which worries me, among all the arm-waving, is not just sound principles but that the end user almost always gets trampled under the thicket of agendas. Discussion forums aren’t really the place to solve any of this.
Actually, one thing I heard someone say the other week stuck in my mind. Does someone always help themselves, or do they help others?
I also heard another comment about people who adhered to safety rather than selfish irresponsibility during the pandemic. They may have saved the life of someone they will never meet.
Something to think about.