I just want to emphasize that if you install and run Windows, your friendly provider is Microsoft. You need to contact Microsoft for support and help with Windows-related issues. The curl.exe you have in System32 is only provided indirectly by the curl project and we cannot fix this problem for you. We in fact fixed the problem in the source code already back in December 2022.
If you have removed curl.exe or otherwise tampered with your Windows installation, the curl project cannot help you.
Both Windows and macOS have a long history of shipping horribly outdated, insecure, and unsupported versions of open source software, and it seems that hasn't changed.
Fundamentally, the real problem here is that the internet has done such a good job at spreading disinformation through pseudo-complaints and techno-gibberish that millions of average and below-average users think they can "manage" their Windows machine updates better than the OS author. I wonder how many of these newfound "experts" downloaded a dodgy copy of curl, failed to check the hash after the download, and installed a bot for which they will blame Microsoft.
Even this site and the comments on this very article contribute to the pain!
Of course this is a website for “experts” and experts know better!
They're not entirely in the wrong though; IMHO it's hard to make a convincing case that Microsoft deserves none of the blame here.
As much as I don't like MS, this doesn't seem like their fault, and I can't make a case for why it would be.
There is a case to be made that the security industry is involved with scaremongering and also kind of bad at their jobs.
I’ve worked with vulnerability scanners, and they are super dumb. Point one at a RHEL/CentOS/RH clone box, and watch the problems bloom. RH backports many security updates without bumping the version, and many vulnerability scanners only check the version number. Rapid7 is notorious for this.
Flatland_Spider,
Shipping software with known memory vulnerabilities…how isn’t that their fault? Sure, the memory corruption vulnerabilities may have been marked as low severity, but this often means an exploit hasn’t been found, not that one doesn’t exist. These kinds of bugs that software companies are too lazy to fix are exactly the sort of thing covert hacking agencies might use to exploit systems in the wild.
To be clear, I'm not suggesting Microsoft are the only ones at fault. Customers who broke their own systems are at fault too, but to suggest there was no fault by MS doesn't sit right with me. It was Microsoft's responsibility to fix it, yet Microsoft allowed several Patch Tuesdays to come and go and failed to fix their copy for months even after it was already fixed upstream. Not for nothing, but if I were a Windows sysadmin I'd be disappointed in Microsoft for putting users in that position in the first place. It wouldn't have been a problem if MS had taken their job more seriously.
I’m not familiar with it, but do they really update the software without bumping the version? Honestly that sounds like a bad practice.
Yes, they do update the RPM version so the typical dnf update picks it up, but the software itself will typically report the same version that the RHEL release started with. So yes, security vulnerabilities on RHEL can't be tracked by the version the application reports. You can search Red Hat by the vulnerability and figure out what RPM version fixed it. I think there is a CLI version of that utility as well, I can't remember offhand. Is that bad? I can understand arguments for and against it.
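For what it's worth, a quick way to check from the command line (a rough sketch, assuming an RPM-based system and that the packager lists the CVE IDs in the package changelog, which Red Hat normally does):

rpm -q curl                               # the package release is what actually changes
rpm -q --changelog curl | grep -i cve-    # list CVEs the packaged build claims to fix

The upstream version string stays put, but the backported fixes show up in the changelog.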
Bill Shooter of Bul,
Thanks for the info, I’ve learned something new.
I don't know, it does seem a bit surprising that multiple builds would have the same version number and that it's deliberate rather than by accident. But then it depends what version numbers we're talking about. With Windows applications you'd get a version number AND a build number as part of the executable, which is automatically updated by build tools. But that's not the case with Linux ELF binaries.
https://unix.stackexchange.com/questions/58846/viewing-linux-library-executable-version-info
So I actually don't know the policy most repos have for updating the custom versions that are self-reported by applications. There's no standard way to update or read these. Are these synchronized with repo versions? When I look at version numbers, I also use the repo version number, and as long as that's changing I guess it might be enough, but I concede it's not something I've given much thought to before.
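To illustrate the point (just a sketch; the paths and grep pattern are examples, and none of this is a reliable way to fingerprint a build):

readelf -d /usr/lib64/libcurl.so.4 | grep SONAME    # only gives the soname, e.g. libcurl.so.4
strings /usr/bin/curl | grep -i 'curl 7\.'          # heuristic scraping of embedded strings
rpm -qf /usr/bin/curl                               # the owning package, the only authoritative answer on an RPM system

There's no standard version field in an ELF header, so anything beyond the package metadata is guesswork.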
Reported version depends on the software, really. Some (many) will report something like "version 1.2.3-44-el8-p12" or something (there's a whole RH patch versioning scheme whose details I don't know offhand). Some will report just 1.2.3. And yeah, scanners will often just look at the base version and flag stuff. I can't count the number of tickets I've put in to get a High vuln removed from my systems. It seems to be better (at least with the tools my company uses) than it used to be.
RHEL has a lifecycle describing what things *may* change between minor releases (8.3 -> 8.4) and major releases (8 -> 9). They're providing a stable distro where someone can run the same binaries across minor releases and have some assurance the libraries haven't changed. Curl falls into Cat 2: no API or ABI changes for the life of the major release (all 11 years of RHEL8 in this case). So they can't just jump to a new version of curl or libcurl willy-nilly. Instead, they look at the issue, create a patch that only fixes the problem (and doesn't add/change/remove anything else), and that's what you get. It's good if you don't want stuff changing, but not as good as the release ages and some FOSS stops working because the libraries are too old. They'll start backporting some stuff, or nowadays you build in a container.
https://access.redhat.com/articles/rhel8-abi-compatibility
FWIW, on a RHEL8 container:
[root@94ead952821a /]# curl --version
curl 7.61.1 (x86_64-redhat-linux-gnu) libcurl/7.61.1 OpenSSL/1.1.1k zlib/1.2.11 brotli/1.0.6 libidn2/2.2.0 libpsl/0.20.2 (+libidn2/2.2.0) libssh/0.9.6/openssl/zlib nghttp2/1.33.0
Release-Date: 2018-09-05
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz brotli TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL
[root@94ead952821a /]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.7 (Ootpa)
Looks like they cut out some stuff in EL9:
[root@94102a48dcb9 /]# cat /etc/redhat-release
Red Hat Enterprise Linux release 9.1 (Plow)
[root@94102a48dcb9 /]# curl --version
curl 7.76.1 (x86_64-redhat-linux-gnu) libcurl/7.76.1 OpenSSL/3.0.1 zlib/1.2.11 nghttp2/1.43.0
Release-Date: 2021-04-14
Protocols: file ftp ftps http https
Features: alt-svc AsynchDNS GSS-API HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz NTLM SPNEGO SSL UnixSockets
MattPie,
You're talking about Linux scanners? Which specifically? I just can't imagine this is how any scanner would work, given that there's no standard mechanism to encode version numbers in ELF binaries. It just seems extremely flimsy to me. You'd need some kind of custom mechanism to identify version numbers for every executable. What would a heuristic even look like in trying to decide what mechanism to use to query the version? Typically this involves running the binary with a special parameter, but surely a scanner isn't going to do that, right? Some applications and shared object files may not even have any version numbers at all.
It just seems far easier and more accurate to take a hash of the files and flag them that way.
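Something along these lines, as a minimal sketch (known_bad_hashes.txt is a hypothetical list of SHA-256 hashes of builds known to be vulnerable):

sha256sum /usr/bin/curl
grep -Ff known_bad_hashes.txt <(sha256sum /usr/bin/curl) && echo "flag: known vulnerable build"

It only flags exact binaries you've already catalogued, but it won't false-positive on a build that carries a backported fix, and it can't be fooled by a self-reported version string.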
Yes, this is my experience as well.
RHEL/Ubuntu/Debian/whatever do bump the version number, just not in the program itself but in the version number of the package. The package has a changelog (with notes of security fixes and the related patches) and a checksum for every file, and it is digitally signed, so it's easy to verify what you have on your disk and whether it is the original version or not.
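For example, on an RPM-based system (a rough sketch; the package name is just an example):

rpm -V curl                    # verify size, digest, permissions etc. of installed files against the package database
rpm -qi curl | grep Signature  # show which key the installed package was signed with
rpm -K curl-*.rpm              # verify the digest and GPG signature of a downloaded package file

Debian-based systems have rough equivalents (debsums, plus GPG verification of the repository metadata).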
RHEL used to include the individual patches in the source RPM and everything was clearly labeled, but then Oracle Linux became a thing and RHEL just started shipping everything pre-patched to make Oracle's life worse.
Let's see: it's not Microsoft's fault a 3rd party company got into contracts saying they must patch vulnerabilities in 30 days. It's not their fault 3rd party security vendors are flagging system files, and it's certainly not their fault users are ripping them out or updating them with unofficial files. Meanwhile, Microsoft has apparently correctly judged the vulnerability is very hard to exploit, and I don't see any known attacks in the wild for it. It also appears Microsoft pushed an update to fix it on April 11th. Conclusion: Microsoft is personally at fault for bothering people like you over nothing. Those few system administrators it did affect likely would have just deleted the file and run sfc.exe later to fix it and be compliant with their contract. Regardless of OS, they are very likely familiar with doing such shenanigans due to said contracts, as a "fix" doesn't always magically appear just because their contract says it has to. You are basically just taking a cheap shot at Microsoft because you don't like them, but they've done nothing wrong here other than trigger some compulsive behaviors in people that already don't like them.
dark2,
But it really is Microsoft's fault (and exclusively their fault) that they're shipping software with known memory vulnerabilities that have been published and fixed upstream.
It's good they eventually did it, but not great that they delayed a security fix for months even if it was considered low risk. Why wait it out and put users through the risk at all when a security patch is available? Why give attackers more time and opportunity to develop exploits for known memory vulnerabilities? Zero-day exploits are a real thing, and moreover many exploits are kept secret by the agencies that use them.
Not to blow things out of proportion, but objectively speaking, if this is a common and deliberate practice at Microsoft, then they're unnecessarily putting users and corporations at risk. That's not my opinion, it's just statistically true.
That's a cheap shot and, to be honest, you're being hypocritical as hell. If it were a Linux vendor you'd be taking every opportunity to give them flak over it.
If I were running a windows network, I would expect a better response from microsoft even for low level vulnerabilities like this, but maybe you can convince me that my expectations are too high.
Also the security industry.
If they do that because Microsoft sucks at managing the updates for its components, well, it's Microsoft's fault. Microsoft sucks at many, many things, requiring users to do really dangerous stuff to fix basic problems. They've taught users that over years. It's Microsoft's fault, not the end user trying to keep their system safe.
They’re point-in-time operating systems. Breaking stuff in base is a cardinal sin, and why RH spends so much time backporting security fixes.
Apple has started stripping stuff out of base because the most secure code is code that isn’t shipped, and people install MacPorts or Homebrew anyway. People are mad about this.
The same criticism has been leveled at Debian and RH, at one point or another. Typically by people running a rolling release distro, but sometimes by other people.
On the opposite end, Fedora gets dinged by commercial software because it’s semi-rolling release, and the package churn bothers people. CentOS Stream gets dinged for this too, and it’s only a few weeks ahead of RHEL.
Then there is Gentoo and Arch which are full rolling release distros.
Let’s break this down one by one…
– “horribly outdated”
If the library has all the functionality needed, it’s fine. If it doesn’t, then an appropriate version should get bundled with the application.
One of the reasons statically linked binaries made a comeback (see the quick illustration after this list).
– “insecure”
Fair enough. A binary with vulnerabilities is a binary with vulnerabilities. This doesn't mean the latest commit on main needs to be pushed out. It means the minor release without the bug needs to be pushed.
Older version doesn’t necessarily mean insecure. There have been instances where an older version wasn’t vulnerable because it didn’t have the buggy code.
– “unsupported”
Windows and macOS aren’t unsupported, and a library shipped by them is supported by them. MS or Apple sharing the same priorities as the user may or may not happen.
Unsupported is running LFS or a community Linux distro. Best effort by whoever is online isn't support. Before people get pedantic, it's literally in the FOSS licenses or terms of use for freeware.
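(Quick illustration of the static linking point above, assuming gcc and the static glibc libraries are installed; hello.c is any trivial program:)

gcc -static -o hello hello.c
ldd ./hello    # prints "not a dynamic executable"
file ./hello   # reports "statically linked"

The binary carries its libraries with it, which is exactly why the bundled-copy debate never goes away.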
Yes, I think there is barely a difference between the various OSes; it is more about the way the end users behave than the systems that manage them. Install Win10 or Win11 and critical updates get immediately installed as part of the process unless somebody breaks, blocks or chooses to ignore those updates. If those updates were forced on users the complaints would be even louder.
I can't believe people are actually calling for every successive install to be fresh and up to date, because that would be the price of not shipping obsolete code. It's a no-win scenario!
Then we have the issue of 3rd party vendors, mostly hardware vendors, who lock down versions of software and drivers to avoid revisions to their own drivers and systems. That might not be the fault of the OS vendor, but they wear the blame more often than not!
It’s too easy, install from the latest source and patch immediately, then patch every day until the day you retire the system, very rarely will you hit a snag! Easy for me to say with only a few dozens of systems to worry about, even so I still come across Anti-Update Dunning-Krugers who are their own worst enemy.
fwiw, I’ve had two major issues during updates in my entire admin lifetime, once on MS and once on Linux. Apple seems to have nicely avoided this issue by completely blocking older hardware from ever updating. Something MS seems to have started with Win11!
When Linux kernel starts dropping support earlier, we can call the three of them the evil triad!
cpcf,
I'm having a hard time with this post. Was sarcasm intended? Because Windows users are getting forced updates and some of them complain about it too. Not merely security updates, but feature updates. Feelings have admittedly been mixed about it.
My sarcasm detector is going crazy, but it's well concealed and I can't tell.
But just to make a slight correction here: users and admins generally expect the latest software to have the latest security patches, such that there are no open memory corruption vulnerabilities to potentially exploit. This might be, but is not always, the latest version of the software. As Flatland_Spider mentions, many Linux distros backport security patches to whatever versions of software they're officially supporting.
Are you talking about software bundled by system builders like Dell? Admittedly I haven’t bought a prebuilt windows system in years, but I’ve never seen them enforcing any restrictions on bundled software, is that a thing now? It’s likely I am completely misunderstanding your point, haha.
Shouldn't this apply to Microsoft themselves? Because obviously they didn't do that.
While I agree snags are rare, I’ve still hit enough of them to worry about updates on production systems. My confidence level with updates is not 100%. One such update I performed in the middle of the night as usual, but woke up to users calling about errors in their PHP applications…sure enough broken by an updated component. Alas, this is why there’s value in LTS enterprise distros that push security patches instead of software updates. There are some major pros and cons to consider though. Sometimes the versions are so far behind that they’re unusable. Unfortunately I find it happens quite often where a customer or I need a specific feature that isn’t supported by an LTS release and the burden of maintaining a version you’ve installed yourself kind of sucks. This might be a good reason to look at per-application containers that have their own update time frames.
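Something like this is what I have in mind (just a sketch; the names, paths, and image tag are placeholders):

podman pull docker.io/library/php:7.4-apache
podman run -d --name legacy-app -p 8080:80 -v /srv/legacy-app:/var/www/html:Z docker.io/library/php:7.4-apache

The host can stay on the LTS update cadence while the app's runtime gets pulled and recreated on whatever schedule the app actually needs.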
Yes, I’m walking the tightrope that exists between cynicism and scepticism, it’s very very fine!
When it all goes wrong during a patch it's always the vendor's fault; if a system is left unpatched and vulnerable to become BOTimised, it's the vendor's fault.
But I can't help it, this issue is so fraught with blame in all directions that I can't take it all that seriously. Other than some obscure OS based in kiosk land, I can't think of anybody with even close to the high ground! It's almost a debate in absurdity.
The fact this issue can become so emotive is rooted in the truth that nobody knows the correct answer; there are lots of great suggestions that close one hole and open another.
Perhaps the safest thing you can do to remain secure is to be unique and change frequently. A shark that doesn’t move is a dead shark.
This is the way!
PS: Just for fun I might have harvested some answers for this from AI, they made little sense to me as well, next step is to have them decide who I should vote for!
In defense of macOS, the reason Apple ships outdated versions of open-source software is that some neckbeards decided to change the license of that software to GPLv3, thinking they could force a corporation like Apple to make their OS a non-Apple exclusive. Yeah sure, neckbeards, thanks for making it worse for everyone…
https://robservatory.com/behind-os-xs-modern-face-lies-an-aging-collection-of-unix-tools/
(there is also the new patent language in GPLv3 that probably scares Apple too)
kurkosdr,
I’d say that’s probably most of the concern. Although I do understand the motivation for GPL3 including it. After all you wouldn’t want a corporation contributing source only to follow up with patent lawsuits when others start using it. It would be terrible and it clearly goes against the spirit of FOSS even though it’s not explicitly stated in GPL2. Theoretically GPL2 authors are at risk of this type of abuse. However that said, I don’t see most patent trolls contributing source code in the first place as that would involve them doing actual work, haha. It minimizes the use of such strategies against GPL2.
GPL 3 would have been more widely accepted if it had come sooner, namely before the software patents era. However GNU had no way of knowing this back then. And since GPL2 is more popular and incompatible with GPL3, they’re kind of stuck.
First of all, changing the license of software post-release is just plain abuse of power, hence me cussing at the neckbeards who did this. I mean, if someone doesn't agree with the new license, they can't get security fixes for the software? Even Microsoft gives 10 years of security fixes for a given major version without any license changes.
Also, the GPLv2 already carries an implicit patent license under the “freedom to distribute copies and run the software for any purpose” clauses:
https://copyleft.org/guide/comprehensive-gpl-guidech7.html
What GPLv3 adds, beyond making the patent license explicit, is language pertaining to patent deals with third parties. Basically, if you license an essential patent owned by a third party (for example Microsoft) so you can distribute your GPLv3 software in your product, you have to make sure that every downstream user gets a patent license for the software you distribute, no discriminatory terms allowed. This was meant to prevent deals such as the Microsoft-Novell patent deal, and also prevent schemes where you have a colluding third-party company patent a concept you invented and then have your company write code that implements said concept (and then you license the patents under discriminatory terms from the colluding third-party).
Which is all well and good in theory, but in practice it makes patent licensing for GPLv3 software impossible, because the non-colluding patent holders want to impose discriminatory terms and will hold your ability to distribute software hostage until you agree to their terms.
So, in combination with the “anti-tivoization” clauses in GPLv3, can you really blame Apple for not going through with this just to have some newer version of some command-line tools? It basically risks their ability to license patents from third-party companies.
Generally, I think the GPLv3 is a bridge too far for what is a copyright license, to the point of risking making GPLv3 software undistributable. What if the FCC demands some form of mandatory “tivoization” for devices having any kind of wireless capability? What if some NPE comes up with some software patent that is essential to most software out there (such attempts have been made in the past) and some judge upholds it as valid?
I am sure the GPLv3 makes sense in Stallman’s head (who compiles everything himself anyway), but not if you are a company trying to distribute products in the real world. It’s not just an Apple issue, everyone is avoiding the GPLv3 like the plague for the reasons mentioned above.
PS: Sorry for the long-form post, but it’s a nuanced subject that I think should be presented in its entirety.
so you can distribute your GPLv3 software in your product = so you can distribute GPLv3 software in your product
kurkosdr
The licensees are prohibited from changing the terms of the GPL2 license. However, if you are the original author then you are not subject to its terms; choosing a new license is legitimately the author's prerogative. I just think it's unfortunate that GPL2 didn't have a compatibility clause for "GPL2 or later". Alas, it is what it is.
That text doesn’t show up in GPL2, but I guess you’re generalizing. I’m not sure a court would uphold the “implied license” interpretation that you link to. Maybe or maybe not. It’s obviously weaker than GPL3.
Yes, it closes lots of these loopholes.
I understand that. GPL3 is clearly not a good license for software patent holders. Although given my opinion of software patents, I think software patents are extremely abusive in and of themselves, and the patent office should never have started granting monopolies on code and algorithms at all. Software patents shouldn't exist. I think most software engineers actually agree on this, but the legal system forces companies to participate in the sham at great cost. Like Google buying Motorola for $12.5B to take their patents for use in defending Android from Apple lawsuits, then reselling the remainder of Motorola for $2.9B. It's the trolls and lawyers who win; everyone else loses.
I am glad the anti-tivoization clause was added to GPL3; I wish it were in GPL2 because it could (potentially) solve a lot of the problems I have with today's hardware. Who said I blamed Apple though? I said the patent clause was "probably most of the concern".
I have similar feelings about patents. Most of us would rather software patents didn't exist, but they do. FOSS and software patents are mortal enemies, and reconciling them is an ugly process. I personally take the side of patents not applying to FOSS, which GPL3 makes explicit, but I understand the complexity of this for patent-holding corporations.
Things could have been different if more FOSS developers/projects had had a chance to adopt GPL3 before the rise of software patents. But arriving late really hurt uptake, and now huge projects like Linux are stuck on GPL2 forever and cannot be updated.
I know. We're probably not going to agree, but that's ok. My hope is that we can both say we understand each other's points of view without demeaning each other.
Yes, authors have the power to change the license (assuming every author is on board), but it’s still an abuse of that power. Even Microsoft gives 10 years of security fixes for a given major version of Windows without license changes. So, those people managed to be bigger twats than Microsoft. It’s an achievement, in a way.
Nope, that's where the first major fundamental misunderstanding lies: software patent holders don't care; they can hold distribution of a given piece of software hostage until you agree to their terms. They don't care about the license of your code because it's your responsibility to come to an agreement with them to license their patents. Otherwise, they can sit back and watch your software go unused while competitors that did manage to come to an agreement with them take all the market. Simply put: why should MPEG LA care about you and your chosen license's terms? They have enough licensees for their patents already. If your GPLv3 product needs H.264 support, you are done for; they can hold it hostage until the last patent expires in late 2027.
That's where the second misunderstanding lies: software patents absolutely do apply to FOSS in countries that have software patents, and if you think they don't, there are judges and gun-toting policemen to convince you they do (one way or another).
This is the problem with the GPLv3: It was designed to be a middle finger to the existing legal framework of some countries, oblivious to the fact those countries have a monopoly of force in their jurisdictions and Richard Stallman doesn’t.
In even simpler terms: if the GPLv2 didn't exist (and only GPLv3 did), Google wouldn't have been able to create devices like the Nexus Player or the Nexus and Pixel lines of phones, because they wouldn't be able to license essential technologies such as H.264 and MP3 (back when MP3 was patented) from the patent holders.
In fact, H.264 is a de jure requirement in some countries for things like broadcast TV, and there is no MPEG-2 simulcast, which means coming to an agreement with MPEG LA is a legal requirement. So if you want to make an Android TV device that can receive broadcast TV, the GPLv3 is a poison pill. But Microsoft and Apple have no problem coming to an agreement with MPEG LA. Do you see the problem now?
So, GPLv3 is an example of taking a big problem and turning it into an insurmountable problem. I cannot blame anyone who steers clear of it.
Nope, even when it comes to new software, for example from-scratch implementations of H.264 decoders and encoders, it’s highly advisable to license it as GPLv2 or as GPLv2-or-later so people can take a patent license for it from MPEG LA and ship it on actual products.
Generally, only license something as GPLv3 if you are sure nobody can come and assert a patent on your software, because if they do assert a patent on your software and some judge finds it valid, you can be sure as hell that they will not offer you terms that are GPLv3-compatible. But how can you know your software is free from such patent assertions? Even Richard Stallman says no software is safe from patent assertions. With GPLv2 you can always pay them to go away so you can ship your device.
Of course, you can always forgo shipping in countries with software patents and cede those markets to proprietary competitors. Do you see why the GPLv3 is a dangerous license? Don't use it unless you don't need your software shipped as part of a device and you live in a country without software patents (so you can distribute it yourself).
kurkosdr,
That's a really heavy-handed dose of entitlement though.
You can air all the grievances you want about anything at all, but the dramatization of "abuse of power" is so egregious that it's comical. Just because an author chose GPL2 in the past doesn't mean they're forced to continue using it indefinitely for your pleasure. It's their livelihood on the line, not yours. They have the right to use any license at all for their work: AGPL, GPL3, MIT, BSD, etc., or use their own license and make it proprietary. And while it might suck for you, it's their right to decide how their future work gets licensed. You can still use what they've already released under the old terms, but it's extremely unreasonable to suggest authors are not allowed to update their license terms for future work.
You're wrong to say I misunderstand; I fully understand that software patents cover FOSS. However, you seem to be forgetting the context of the discussion. We were talking about why companies like Apple are hesitant around GPL3, and it's in that context I intended "GPL3 is clearly not a good license for software patent holders" to be read. Patent-holding companies don't like GPL3 because of how GPL3's patent terms apply to them. Think about it: if your paragraphs above applied to a company like Apple, then they would not have a problem using GPL3.
I somewhat disagree with this interpretation. The original software companies relied only on copyrights, not patents. We never needed patents to get the software industry off the ground. Of course it's very hard to change now that norms are set and the IP lawyers are involved. But the software industry was totally viable before patents and IP lawyers. In an evolutionary scenario where GPL2 had been replaced with GPL3, companies like Google growing up in that environment wouldn't bat an eye. Google's business was never based on software patents. They would be fine if GPL3 were the norm, and the same is true of most companies.
It seems like you keep insinuating that I'm blaming companies for steering clear of it today, but I'm not. They're dealing with the software patents reality that's been created around them. Most of them disagree with this reality, but we have to live with it. Different norms could easily have been set sooner, changing the course of history, but timing is everything and GPL3 was too little, too late.
Come on man, you're not listening to what's being said, which you literally quoted. You're welcome to disagree with what I think, but please at least get the "what I think" bit right. I feel I deserve that; otherwise it's a straw man.
Future work yes, security fixes for existing work no. In fact, the EU is planning legislation to mandate security fixes for a number of years, so the idea that security fixes should be provided for free under the existing license for a reasonable number of years is not that far out there. Unfortunately open source will likely get an exemption.
Well, that's exactly the problem here: GPLv3 arrived when software patent licensing had already been established and essentially demanded a complete do-over. That's not how it works. MPEG LA (and similar entities) won't change their ways because of that new license called the GPLv3; they'll simply not license GPLv3 software (and they don't). Judges won't change their decisions about the patentability of software (in countries where software was declared patentable) because of that new license called the GPLv3. And that makes the GPLv3 a bad license, because it turns an already hard problem (patent licensing) into an impossible one. Sure, it fixes the problem of companies that are both authors and patent holders. But it creates a much bigger problem when it comes to licensing patents from entities like MPEG LA. Avoid if possible.
kurkosdr,
I get that you may like to get security fixes for existing work, but even GPL2 never gave you this right or expectation and in fact it explicitly contradicts what you are asking for.
https://choosealicense.com/licenses/gpl-2.0/
I don't have much insight into what the EU will do; I don't even live in its jurisdiction. However, just focusing on the logic here: just as you've been criticizing GPL3 for its side effect of making GPL3 code less palatable or non-viable for some corporations (for better or worse), EU legislation that mandates a warranty would also have side effects, making software development within the EU less palatable or non-viable, especially FOSS. Can you imagine if FOSS projects in the EU were forced to fix end user issues for free even though they never got a dime? It would mark the end of FOSS projects hosted in the EU (for better or worse).
This is what I've been saying. It was too little, too late.
You’re entitled to this opinion, but whether a FOSS project uses GPL3 is not your call. Can we at least agree on that?