Friday saw the largest global ransomware attack in internet history, and the world did not handle it well. We’re only beginning to calculate the damage inflicted by the WannaCry program – in both dollars and lives lost from hospital downtime – but at the same time, we’re also calculating blame.
There’s a long list of parties responsible, including the criminals, the NSA, and the victims themselves – but the most controversial has been Microsoft itself. The attack exploited a Windows networking protocol to spread within networks, and while Microsoft released a patch nearly two months ago, it’s become painfully clear that patch didn’t reach all users. Microsoft was following the best practices for security and still left hundreds of thousands of computers vulnerable, with dire consequences. Was it good enough?
If you’re still running Windows XP today and you do not pay for Microsoft’s extended support, the blame for this whole thing rests solely on your shoulders – whether that be an individual still running a Windows XP production machine at home, the IT manager of a company cutting costs, or the Conservative British government purposefully underfunding the NHS with the end goal of having it collapse in on itself because they think the American healthcare model is something to aspire to.
You can pay Microsoft for support, upgrade to a secure version of Windows, or switch to a supported Linux distribution. If any one of those means you have to fix, upgrade, or rewrite your internal software – well, deal with it; that’s an investment you have to make as part of running your business in a responsible, long-term manner. Let this attack be a lesson.
Nobody bats an eye at the idea of taking maintenance costs into account when you plan on buying a car. Tyres, oil, cleaning, scheduled check-ups, malfunctions – they’re all accepted yearly expenses we all take into consideration when we visit the car dealer for either a new or a used car.
Computers are no different – they’re not perfect magic boxes that never need any maintenance. Like cars, they must be cared for, maintained, upgraded, and fixed. Sometimes, such expenses are low – an oil change, new windscreen wiper rubbers. Sometimes, they are pretty expensive, such as a full tyre change and wheel alignment. And yes, after a number of years, it will be time to replace that car with a different one because the yearly maintenance costs are too high.
Computers are no different.
So no, Microsoft is not to blame for this attack. They patched this security issue two months ago, and had you been running Windows 7 (later versions were not affected) with automatic updates (as you damn well should), you would’ve been completely safe. Everyone else still on Windows XP without paying for extended support, or, even worse, people who turned automatic updates off and were affected by this attack?
I shed no tears for you. It’s your own fault.
I have to disagree with the car analogy. When I buy a car, sure, I consider tires and such as things I’ll have to get. But there are two key things here.
#1 Microsoft didn’t make the tires or oil.
#2 I can get those parts from others. I don’t have to go to Microsoft to get them.
Automotive manufacturers have to make sure their vehicles are safe after they make them, even years after they stop supporting them. If a mechanism fails, it is the manufacturer’s responsibility to issue a recall. And no, they don’t charge for it, either.
So, while your car analogy is close, it doesn’t fit this model. Or maybe it does, but you are addressing the wrong part of it. Microsoft should be making the patch available as a security and safety mechanism for all of its customers, just as car manufacturers do.
As an aside, I’m not a Windows user. MacOS, Fedora, and Haiku at home, thanks.
You can do all that yourself with an out-of-support Linux distro, assuming you can find someone to audit the code and backport patches. But if you’ve got the source code to your Linux applications around (and really, you should), you can just rebuild for a newer release if something stops working.
Yes, this is a hugely different model of OS and application lifecycle and deployment than the IBM and Microsoft one, but it also works. It also has the advantage of not forcing super strict binary compatibility on the OS. If the ABI changes, rebuild and redeploy.
This can work, but just as often you will be left with some applications that randomly crash because you did not realize that some dependency existed, or because some newer version of a library is API compatible, but has different behaviour than before.
Just using a rolling release system is far better.
Gentoo is also an alternative that fits what you describe. And despite what people may think, it’s easier to keep Gentoo working than to keep updating and rebuilding all your software manually. Doing it manually, you will need to know everything it takes to make Gentoo work (and more), and you will find yourself in many weird situations that absolutely no one else has ever seen.
For something like embedded devices, using Gentoo as a metadistribution is a better fit. Compile once, ship an image many times. ChromeOS does this.
When a car maker leaves security holes in its models, or uses tricks to pass pollution tests, the fact that the car isn’t produced anymore doesn’t mean the maker should be relieved of its obligations and allowed to put all responsibility on the user.
Sure, the user can be a bad driver and cause problems on their own. But if the security holes are the car maker’s fault, it is liable to provide fixes. And fixing software does not cost the same as fixing cars.
Software companies (Microsoft, Apple, Oracle) get richer than car makers (General Motors) for a reason. So claiming the users should upgrade at their own expense because the software producer decided the architecture isn’t worth anything anymore, yadda yadda – that’s a lie.
With so many coders out there, and good coding practices available for years and for free, there’s no excuse that some software is still written with the foot. Remember the Y2K problem, which cost users billions because of software producers’ inability to provide secure and well-crafted software in the first place?
I’m not going to fall into this fallacy and feel at fault. Those companies get enough money for little evolution (IE6, anyone?), so stop believing in this mythology. You think software products are top-value goods? Look how flawed they are, as if they’re released in a rush with only a little testing beforehand.
Aren’t there enough white hats out there to work with/at Microsoft to test-bench the software with complete regression test suites, carefully handcrafted over years and decades? Obviously the NSA doesn’t have a problem hiring black hats to find exploits. Amurica Freedumb!!1! So much better than the rest of the world.
Thanks for the legacy exploits, and thanks for ransoming users into upgrading their software to fix them.
There would also be the part where, after a tire change, your car suddenly starts spying on you.
I agree. I do feel sorry for all those UK citizens who may not have received the appropriate healthcare in time because their hospital messed up their IT.
Also, while this time it was an exploit for an ancient OS, it’s a good opportunity to take a step back and consider: next time it could be a 0-day. Next time it could be you. Next time your data – and your employers’ data? – could be stolen/exposed as well as encrypted.
Are you prepared?
While I personally don’t use Windows (I’m a Linux guy), can I get confirmation that only Windows 7 and older versions of Windows were affected by this vulnerability? (So Windows 8 and 10 are not.)
I’m asking because I’m the supposed “techguy” in the eyes of my family members, so they are pestering me if they are safe.
Windows 10 isn’t safe ever, thanks to Microsoft’s inane spying bullshit, but if fully patched it’s not vulnerable to WannaCrypt.
This article identifies which unsupported OS versions did not have a patch available:
https://blogs.technet.microsoft.com/msrc/2017/05/12/customer-guidanc…
Windows Vista was still receiving updates in March (when the patch was issued), but is now unsupported.
Windows 8 is unsupported, but Windows 7, Windows 8.1, and Windows 10 are still receiving updates.
I do understand that some embedded systems are basically not viable for upgrade.
The irresponsible part, to me, is putting such a system on a network, or allowing data to pass into such a system from a possibly insecure source. Better yet if data can only move out from such a system, since that eliminates the biggest attack vector.
FlyingJester,
Yes, it also strikes me as odd that the networks themselves were not better isolated. I guess some employees inadvertently installed the malware inside the network perimeter of critical systems; for that to be possible, it seems there wasn’t enough isolation. Critical systems should not have any connectivity to the internet at all, incoming or outgoing, such that internet-based malware could infest the inner network. They should also be physically secured.
Internet-facing servers should be kept outside the perimeter in a DMZ. Employee computers should probably have their own networks as well. They could install honeypots/tripwires to detect any unauthorized activity.
80% of the systems were okay. Which means 80% are probably doing it right.
Actually, Microsoft is making it harder and harder to run their operating system(s) without an Internet connection (even just Windows connecting to the Internet).
Lennie,
Yeah, they’re especially pushing it on home/pro users, and it’s probably going to get worse. But I would strongly hope that the specifications for hospital computers would ban “the cloud”, because the internet going down is a predictable failure mode. Can you imagine a disaster like 9/11, when telecoms were disrupted, and then a hospital having to deal with IT issues at the same time? That’s not really acceptable.
Cloud services are a good example.
The experience I had was with Windows servers.
Some Microsoft software is built in .NET, which uses code signing, and Windows checks the certificates. To check the certificates on that code, it needs an up-to-date certificate authority (CA) list or internet connectivity (it does automatic downloading). Sometimes CA-list updates are not actually included in the Windows updates (and without an internet connection, you need an update server as well, of course).
So what do you get? If a server reboots, for example, the server software won’t start, because the CA list is too old and it can’t automatically download an update.
In theory it should never happen; in practice, it has already happened several times.
Lennie,
That’s an interesting point, there are unexpected failure modes everywhere and it’s easy to overlook those things when everything is working.
Sometimes we sign SSL certificates with arbitrary expiration dates in the future that we’ll very likely forget about (it will probably be someone else’s problem).
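Forgetting far-future expiry dates is exactly the kind of thing a tiny monitoring script can catch before it becomes someone else’s problem. A minimal sketch in Python, using the stdlib `ssl` helper that parses the `notAfter` string format returned by `SSLSocket.getpeercert()` (the 30-day warning threshold is my own arbitrary choice, not any tool’s convention):

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days until a certificate's notAfter date.

    `not_after` uses the format found in getpeercert()['notAfter'],
    e.g. 'Jan 01 00:00:00 2038 GMT'.
    """
    expiry = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                    tz=timezone.utc)
    return (expiry - now).days

def expiring_soon(not_after: str, now: datetime, warn_days: int = 30) -> bool:
    """True if the certificate expires within the warning window."""
    return days_until_expiry(not_after, now) <= warn_days

now = datetime(2017, 5, 15, tzinfo=timezone.utc)
print(expiring_soon("Jun 01 00:00:00 2017 GMT", now))  # True: 17 days left
print(expiring_soon("Jan 01 00:00:00 2038 GMT", now))  # False: decades away
```

Run from cron against your own cert inventory, this would have flagged the forgotten certificates long before they started breaking VPNs.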
Several weeks ago an offsite computer wasn’t responding, apparently it didn’t power on automatically as it always had before. It’s a few hundred miles away and I haven’t gotten a chance to diagnose it yet but I am thinking it may be the cmos battery, which not many of us give much thought to.
Like many administrators, I rely on 3rd-party DNS blacklisting for spam classification, but those lists could fail or get compromised, causing widespread denial of service.
All these what-if’s are why certification is so important and so expensive in critical systems.
Edit:
I just remembered about OpenVPN’s use of SSL certificates…off to check whether it ignores the dates or if that’s a potential failure mode in the future!
Oh crap, it is a failure mode, and OpenVPN’s official stance is that they won’t give users an option to ignore time, even on servers where there may not be a reliable time source.
https://community.openvpn.net/openvpn/ticket/199
Time validation is correct by default, but it introduces a new failure mode in routers that don’t have a clock source… the VPN will work fine until there’s an NTP failure.
VPNs can be ‘fun’, because if you have a long-running VPN which routes more than just a few subnets over it, you might end up breaking DNS (which might be needed for reconnecting the VPN on timeout) or NTP updates, because they are also routed over the VPN.
Many embedded devices don’t have any time at all.
DNSSEC on embedded devices is a real problem, if you want to use DNSSEC you need NTP, but NTP relies on DNS… oops catch 22.
Lennie,
That’s an interesting problem.
https://www.ietf.org/mail-archive/web/dnsop/current/msg19955.html
I don’t think we can assume the accuracy of time on endpoints. This bootstrap would be solvable if the client were allowed to use a challenge/response protocol, although that would come at some expense to both scalability and robustness during bootstrapping. Obviously, proof of time is not going to be possible over an air gap.
And then you still have the issues with certificate revocation that are not really specific to DNSSEC: If you know the time, you can validate a CRL, but if you don’t know the time, you have no idea if the CRL you are given is current.
https://en.wikipedia.org/wiki/Certificate_revocation_list
That really depends on your implementation and how you’ve configured it. IPsec is one of those protocols that you can’t necessarily cover with a blanket statement. If you’ve configured it using PSKs, then it may not depend on time sync. If you’ve done it with certificates, then you’re going to have the same issues as you would using an SSL or other cert-based VPN. Even if you have used PSKs, the implementation you’re using may (or may not) reject a connection if the time is too far off. Your intrusion detection can have an impact on this also.
So, short answer: it’s complicated.
Both IPSEC and OpenVPN have a periodic re-keying of the temporary session keys. So time is clearly used (but that is relative time, not absolute time. It’s a timer).
You can do both OpenVPN and IPSEC with pre-shared keys or with certificates.
If time is a big issue, maybe a large pre-shared key can be used safely. Depending on the crypto used, pre-shared keys can be less safe than certificates, if I’m not mistaken.
I once thought really long and hard about the embedded-device problem. One thing that could help is to periodically save a timestamp, so you can’t replay very old data. And obviously, don’t use DNS for NTP.
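The “periodically save a timestamp” idea can be sketched very simply: persist the newest time you have ever trusted, and refuse any claimed time (or timestamped data) older than that floor. A hypothetical illustration in Python – the state-file location, JSON format, and function names are all made up for this sketch:

```python
import json
import os
import tempfile

# Hypothetical state-file location; a real device would use flash storage.
STATE_FILE = os.path.join(tempfile.gettempdir(), "last_known_time.json")

def save_time_floor(trusted_epoch: float) -> None:
    """Persist the newest time we have ever trusted (call periodically)."""
    with open(STATE_FILE, "w") as f:
        json.dump({"floor": trusted_epoch}, f)

def load_time_floor() -> float:
    """On boot, the clock must never be set earlier than this value."""
    try:
        with open(STATE_FILE) as f:
            return json.load(f)["floor"]
    except (OSError, ValueError, KeyError):
        return 0.0  # no saved state yet: no floor

def accept_claimed_time(claimed_epoch: float) -> bool:
    """Reject replayed data stamped before our saved floor."""
    return claimed_epoch >= load_time_floor()

save_time_floor(1_494_900_000)             # e.g. mid-May 2017
print(accept_claimed_time(1_494_900_500))  # True: newer than the floor
print(accept_claimed_time(1_200_000_000))  # False: old data, likely a replay
```

This doesn’t give the device accurate time, but it bounds how far back an attacker can roll the clock, which is exactly what replay protection needs.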
Lennie,
There is a way to avoid that: the client could use the hard-coded root certificates to securely obtain the time from the root name servers. This would work and be secure. The main problem is scalability: a root server can’t delegate the time function to other servers as with normal DNS requests, because the validity of a delegation depends on having the correct time.
Using the root name servers to bootstrap time is not ideal, but it ought to be secure and technically feasible. They might even reduce the accuracy to +/- an hour to discourage their use for anything other than bootstrapping.
Another idea would be for IANA to designate some permanent IP addresses for dozens of time servers. This sounds like a hack on the surface, but when you think about it, this is how the DNS root nameservers themselves get bootstrapped. So it could be a pragmatic solution.
Another idea would be for computers to have built-in time receivers:
http://www.cl.cam.ac.uk/~mgk25/time/lf-clocks/
We should always plan for the internet to break; for scenarios that can’t fail, get a local battery-backed time source.
When you think about it, probably more than 95% of Internet runs on 2 time sources:
– pool.ntp.org (& depends on DNS)
– Windows default time source (& depends on DNS, I forgot the hostname)
Lennie,
That’s true, but GPS is notoriously weak indoors; I don’t get a signal unless I stand at the window. The low-frequency time signals would reach indoor embedded devices better than GPS. CDMA/GSM phones have time synchronization as well; that might be an option too.
I don’t think any of these has a significant advantage over IP solutions, other than keeping time synced during internet outages/air gaps. If the internet’s out, maybe nothing else matters, but it might be useful for merchant terminals and whatnot.
Anyway, the pool.ntp.org guys are saying something along the lines of:
how would you authenticate a public NTP server pool that anyone can join on a voluntary basis?
Well, I guess you could use a CA-like setup, like certificates do. It would bloat the NTP packets a lot and take a bit more CPU time (thus making time less accurate? At least for those servers under higher load). The bloated NTP packets would also be great for DDoS reflection attacks.
It’s not that bad, take a look at the validity window of the root certificate used by the certificate authorities themselves: Amazon’s goes from 2015 to 2038. Comodo’s goes from 2010 to 2038.
So it will take a while before these hard coded certificates become invalid.
(the year 2038 isn’t arbitrary; it’s the date that signed 32-bit Unix time wraps around, a sort of Y2K problem:
https://en.wikipedia.org/wiki/Year_2038_problem )
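The 2038 limit comes from a signed 32-bit `time_t`, which tops out at 03:14:07 UTC on 19 January 2038. A quick Python illustration of the wraparound:

```python
import struct
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold.
max_time_t = 2**31 - 1
print(datetime.fromtimestamp(max_time_t, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later, a signed 32-bit counter wraps to a large negative number...
wrapped, = struct.unpack("<i", struct.pack("<I", (max_time_t + 1) & 0xFFFFFFFF))
print(wrapped)  # -2147483648

# ...which, interpreted as a Unix timestamp, lands back in December 1901.
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
```

So a hard-coded certificate with a 2038 expiry isn’t a coincidence: it’s the latest date that still fits in old 32-bit time representations.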
Anyway, for the purposes of bootstrapping DNSSEC, you could just use the same certificates to validate time, regardless of where you store them or when they’re updated.
DOS is still used in production in lots of places. The third option no one is talking about is disconnecting these machines from the internet and plugging the networking and USB ports with glue (or setting them up on their own network, where they can’t talk to the internet or the main network).
You can also mention maintenance costs as much as you want, but how many of these software companies do you think are still around? Probably zero, and that’s why they can’t get updated software for newer versions of Windows. They would need an entirely different solution, and possibly to replace entire infrastructure. I’ve certainly seen old software where no replacement exists and the company behind it is long gone.
Obsolescence also hits equipment hospitals rely on. Companies go bust because they cannot sell enough of a niche product to keep afloat. No driver updates means that updating the OS effectively trashes perfectly serviceable equipment.
Yup. Had an inventory system written in a proprietary scripting language by a company that went belly up 20 years ago. We got estimates in the low 2 million range for a replacement, which wasn’t affordable for a company losing 5 million a month. So we kept the obsolete one. Which, luckily enough, didn’t have networking as an option: air-gapped by history.
Thom Holwerda,
There’s no denying this was very bad for the hospitals and patients affected, but I don’t think we have the whole picture here. Many of them may be stuck between a bureaucratic rock and hard place. Their system administrators can’t just update systems willy-nilly like another business or home user could. These systems may require certifications and modifications would likely void those certifications.
For its part, Microsoft does not guarantee the suitability of Windows or its updates for any purpose; things can and sometimes do break. The vendors who certify machines can’t realistically certify a Windows system with Windows updates: it would be prohibitively expensive to re-certify millions of computers every Patch Tuesday. Clearly some solution is needed; I’m not sure what it would look like. I’d like to hear the perspective of someone who’s dealt with these kinds of issues.
However, none of this would likely have mattered in this particular case, because these were zero-day exploits anyway. The NSA is directly to blame for them, and software engineers are to blame for the poor quality of software in the first place. I’m surprised you aren’t blaming them (and us) more. Whoever creates these exploits, be it indie hackers or government agencies, zero-days are a widespread problem. Updates, while important, are inherently a reactive solution. The only way to fix this once and for all is to take a proactive stance and demand safer code from project managers, software engineers, and even computer languages.
There are armies of C coders who will complain that vulnerabilities are the fault of bad programmers and not computer languages, but we can’t ignore the fact that unsafe language semantics have been enabling human mistakes for 40+ years. No language can fully save us from our high-level programming mistakes, but they can protect us from many of the low-level mistakes that continue to plague us. If we don’t have a plan to replace unsafe languages, or at least limit them to areas that can be fully audited and contained, then our software will still be insecure 40+ years from now.
Thanks for pointing that out.
Hospitals are stuck between a rock and a hard place in particular.
Many diagnostic machines like X-ray and MRI scanners are quite expensive and cannot be upgraded easily. Upgrading means certifying the device from top to bottom, and no manufacturer is going to do that. To make things worse, all the push to make data readily shareable and digitally available means that all these insecure devices are now part of the network. If there is a dollar available, that money will inevitably end up on new features rather than on securing systems.
The same happens in manufacturing plants. That’s why big names like Nissan and Hitachi got hit. Many old-style PLCs and robots don’t have support for newer OSes (many are even still stuck on Win2k!). Shutting down a working factory for security upgrades is a non-starter, both in terms of cost and potential issues (it’s working fine right at this moment, but you may break it by updating). A lot of these are exposed to the network because of the need to automate monitoring and whatnot. Again, features over security.
Consumer-wise, I would say yes, they’re to blame. There are, however, many places in the world where using the latest patches is just not possible under the current schema. Hopefully there will be a push to change things for the better, but it’s not a situation that is easily fixable.
I’ve got some photos of the tremendously successful Rosetta Mission, with some instruments showing XP welcome screens. Discipline: something you can’t ask of just anyone.
System engineers should always consider that one, a rare asset.
Are You sure you can’t run Windows10 out the swamp? As far as noted, passing networked activation, up to You.
Really?
I’m surprised, Alfman – sure, if computers are being “certified” for running e.g. medical imaging equipment – with Windows Update turned off – then SURELY they should not be networked!?
Have a sandboxed secondary drive, mounted write-only, used for exporting the data from the primary drive.
Have a strict SOP where the IT guys supply the UUID of the drive (and a little utility for the untrained to enter it – one that mounts the drive write-only at a specific mount point and refuses to mount it elsewhere, or with other privileges, system-wide).
Then physically move it to a second, networked computer terminal beside it; do this once or even twice a day, with a fresh external USB drive each time. 1TB 2.5″ drives are only $50 each now, which is relatively negligible versus the cost of imaging 6–12 patients on MRI/PET scanners.
Would this not be a safe-ish workaround, if you need to keep to the certification model?
mistersoft,
There are a lot of possible solutions, but ideally it shouldn’t get in the way of real-time data. I read somewhere that eBay or Amazon (can’t remember which; I wish I could find the article again) deliberately processed credit card payments through a very basic serial protocol to mitigate the risk of network and OS attack vectors. Even if the OS had known vulnerabilities, it would be extremely difficult to exploit them through a basic serial protocol.
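The appeal of a dumb serial link is that the parseable surface is tiny: fixed-width fields, a checksum, and nothing else to exploit. A hypothetical sketch of such a frame format – the field widths, delimiters, and function names here are invented for illustration, not the protocol that article described:

```python
def make_frame(amount_cents: int, token: str) -> bytes:
    """Encode a payment request as a fixed-width ASCII frame:
    STX, 10-digit amount, 16-char token, 2-hex-digit XOR checksum, ETX."""
    body = f"{amount_cents:010d}{token:<16s}".encode("ascii")
    checksum = 0
    for b in body:
        checksum ^= b
    return b"\x02" + body + f"{checksum:02X}".encode("ascii") + b"\x03"

def parse_frame(frame: bytes) -> tuple:
    """Reject anything that isn't exactly the expected shape."""
    if len(frame) != 30 or frame[:1] != b"\x02" or frame[-1:] != b"\x03":
        raise ValueError("malformed frame")
    body, checksum_hex = frame[1:-3], frame[-3:-1]
    checksum = 0
    for b in body:
        checksum ^= b
    if f"{checksum:02X}".encode("ascii") != checksum_hex:
        raise ValueError("bad checksum")
    return int(body[:10]), body[10:].decode("ascii").rstrip()

frame = make_frame(1999, "CARDTOKEN1234")
print(parse_frame(frame))  # (1999, 'CARDTOKEN1234')
```

Because the receiver accepts exactly one rigid 30-byte shape and drops everything else, there is almost nothing for malformed input to latch onto, which is the whole point of the approach.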
Security On Legacy. Ha ha, good idea. Not Worth the trouble and expenses, to most.
Good point.
Re the serial connection
While those running these older OSes at home should have upgraded, hospitals simply can’t just upgrade.
I used to work for a software supplier to the NHS.
The NHS has no money to update these systems to newer versions of Windows. In some cases it simply can’t, for a multitude of reasons that I will discuss below.
Also, before you blame it on the current government in the UK: this problem has been over a decade in the making.
You cannot simply upgrade the OS on either workstation or server. Even intranet applications may only work correctly in IE, or in IE’s compatibility mode.
There are thousands of bespoke applications that simply either have no vendor support or cannot be upgraded easily. The businesses behind them may have closed up shop, but the software is normally tied to how the hospital works, or how it deals with referrals (if it is private) from the NHS.
Sometimes this isn’t just a matter of the OS; it’s a matter of the hardware interfaces. There is hardware that needs to work over legacy ports that don’t exist on the newer equipment needed to run Windows 7 and above. They aren’t going to throw away a piece of equipment that costs hundreds of thousands of pounds.
Re-training medical staff to use said systems is costly. Changing the OS will require retraining, and I don’t just mean retraining in how to use the newer version of Windows or an updated application. There may be new procedures put in place that are offline.
The machines shouldn’t have been exposed to the internet, true. However, in some cases they have to be, because of the access to health / NHS Direct that the former Labour government forced through without much thought.
Most of the vendors of these applications may have since ceased trading, because the investment from the previous Labour government simply doesn’t exist anymore, since the current Conservative government cut spending drastically.
But your unrealistic expectation that IT departments are just too lazy to upgrade shows how little you know about the challenges of getting even a minor update into a production environment such as a hospital.
Unfortunately it takes an event like this until management and government will invest in IT. It is rarely the fault of the IT staff on the ground.
grandmasterphp,
I know what you mean; it’s not uncommon in corporate scenarios to have to wait on all the suppliers before upgrading, and the fact of the matter is that Microsoft is just one of many suppliers (not necessarily even the most important one, at that). All these pieces have to work together. Sometimes this requires contracts, a new scope of work, training, testing, scheduled downtime, etc. It’s not always as simple as an outsider makes it out to be, like updates on their home computer.
Also, welcome to osnews!
Again – I don’t think you actually read the article, but just immediately got defensive. I did not say anyone was lazy – just that yes, if you choose not to fund your IT department adequately, then yes, YOU are to blame for an inadequately funded IT department, and the resulting consequences. In the case of companies, that’s the manager allocating funds – and in the case of the NHS, it’s the government.
It’s not a problem that can just be solved by chucking money at it.
I don’t think you really understood what I was getting at. You are massively oversimplifying the situation. The reasons why these systems aren’t updated more often are manifold; some of them I highlighted in my original post. Sometimes there is no way to update them.
Throwing money at the problem definitely would help. I’m certain there are several IT solution providers in the US that would love to work on solving the issues. Not cheaply, though.
The custom medical equipment does have a new version that is supported on current Windows. They always do. It’s just a question of whether or not the upgrade is in the budget.
I do kind of wish it had hit the US a little just so we could see which Hospitals are keeping up and which are not. In reality there should be stress tests of Hospital IT outages, aside from the ones that the Hospital IT already causes on a semi regular basis.
Not necessarily. The companies that supply custom equipment like this also have long development cycles due to certification by the relevant bodies that means they plan for, say, a ten year cycle, and the machine doesn’t change in that time. I worked for a company making such equipment, and our brand spanking new system was shipping with Vista in 2012, purely because development started in 2006 and Vista was seen as the future. Switching to Windows 7 would have delayed the product to market by a year or two – something the company simply wouldn’t accept. So even shelling out €200,000 to replace the three machines you might find in a typical hospital lab wouldn’t have gotten you an up-to-date OS.
I believe those machines have since been updated to 7 – right about the time 10 came out.
I kind of doubt you didn’t have any competitors with more up-to-date software.
Hospitals, Schools, should be built with caducity integrated, up to manpower. New ones always cheaper on maintenance.
Those wanting to extend the age of retirement – well, they’ll need to ‘update’. Maybe some will prefer a career change [recommending organic gardening]. Or go through PAID nursery school again. So easy for the true lovers of that discipline.
Just Trying to take the light side. Code wise, wasn’t so grave, if well extended.
[Rosetta Mission teams were ‘reassigned’ afterwards, just as an example].
My point here is that the ETERNALLY TRANSMUTING INSTITUTION ends being the ETERNALLY LOW PERFORMANCE ONE.
[Even Microsoft Get This -LOW PERFORMANCE- issue. On Going back to the Home Button]. On a now general policy of STABILIZING. Who could have bet on a Linux console?
Hey! Teacher’s Board: Needing a Generation XII. Still one available? Or, Are We the last?
The Eternally Transmuting is a Valid Pattern of Life, but an extremely expensive one. And That is Main Issue, right now and decades into the future.
As much as I dislike the current government, there was a deal in place for extended XP support. However, the trusts didn’t take it up: http://www.theregister.co.uk/2017/05/16/wannacrypt_microsoft_blame_…
Now that’s if you believe El Reg.
Also I agree with the analysis of our current gov’s approach to destroying the NHS.
The fact that Microsoft had a patch so quickly, even for Windows XP, just proves what I have alleged for years: that a back door exists in Windows to allow the NSA to peruse user data at will.
Glad I switched all of my systems to Linux back in 2002.
This is uninformed BS – fake news, if you will.
The patch was so readily available because customers who pay for a support contract are still getting XP patches. You just don’t get these patches for free.
Please, this isn’t rocket science.
I think it is more likely that Microsoft could patch the vulnerability on all platforms quite easily.
https://twitter.com/Partisangirl/status/863995226943246336
There’s a lot of conspiracy theories going around, but IMO they’re all BS. The reality is so much simpler.
I see this article and raise you “This is why Windows users don’t install updates”
http://goodbyemicrosoft.net/news.php?item.810.3
(Seriously, though, as the other commenters have pointed out in detail, this is a gross oversimplification.)
I would not ever boot a Windows XP system on any network-enabled machine. Just about any reasonably recent laptop (< 8 years old) can run it in a VirtualBox virtual machine with no networking adapter enabled. It doesn’t even need much RAM; XP is known to run well on 512 MB.
For non-techies, of course it could make sense, but I cannot see how any user with a dual boot would not know this.
Hey, I’m not saying I agree with that reckless behaviour… just that it’s not necessarily that simple for people who are determined to be that reckless.
My Windows 3.11/98 and XP retro-gaming machines sit alone on their own leg of my router where the only traffic allowed to cross the boundary is connections initiated by the retro PCs which are either local DNS and DHCP (to daemons running on the router itself) or NTP and SSH (to my main workstation, with the SSH being limited to a chrooted SFTP-only account which I use for quickly moving files back and forth).
I find it a nice way to balance security with the convenience of having networked file transfer, NTP time sync, and automatic network setup. (I even dug up DOS NTP and SFTP clients.)
Heck, the DNS allow rule is just a convenience that I should probably drop, since I’ve pinned the IP address of the workstation that provides the NTP and SFTP servers.
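A toy sketch of that allow-list policy may make it concrete. This is purely illustrative, not a real firewall: all addresses, subnets and the exact rule set here are invented for the example, not ssokolow’s actual configuration.

```python
# Toy model of the described policy: only connections *initiated by* the
# retro PCs are forwarded, and only to DNS/DHCP on the router or NTP/SSH
# (chrooted SFTP-only) on one workstation. All values below are made up.

ROUTER = "192.168.2.1"        # hypothetical router address on the retro leg
WORKSTATION = "192.168.1.10"  # hypothetical main workstation
RETRO_NET = "192.168.2."      # hypothetical subnet holding the retro PCs

ALLOWED = {
    (ROUTER, 53),        # DNS daemon on the router
    (ROUTER, 67),        # DHCP daemon on the router
    (WORKSTATION, 123),  # NTP time sync
    (WORKSTATION, 22),   # SSH, limited to an SFTP-only chroot
}

def permit(src: str, dst: str, dst_port: int) -> bool:
    """Forward a new connection only if it originates on the retro leg
    and targets one of the explicitly allowed (host, port) pairs."""
    return src.startswith(RETRO_NET) and (dst, dst_port) in ALLOWED

# A retro PC syncing its clock is fine; anything inbound (e.g. SMB) is not.
assert permit("192.168.2.5", WORKSTATION, 123)
assert not permit("10.0.0.99", "192.168.2.5", 445)
```

The point of the design is that nothing can reach the retro machines unsolicited, which is exactly the exposure WannaCry exploited.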
Jesus Christ! ssokolow
That’s a horrible counter argument. An old out of support version was too old to get the update because it hadn’t been updated. Great. How is that MS fault?
I think the argument there is: don’t use unsupported operating systems unless you really, really have to, and be super careful about how they are used (i.e. air-gap them, please!).
While I mostly agree with the OP, the fact that MS was able to release a patch for XP within hours makes me wonder why they stopped automatic updates for XP, if not to force people to buy a newer system. In that light, MS is considerably at fault. They made billions selling XP, and still you have to pay extra to get those patches now.
And to use your analogy: they not only stopped offering oil changes, they now say you should buy a new car instead of having an oil change.
I just thought I’d post this here, it’s dated today:
http://www.zdnet.com/article/apple-fixes-dozens-of-security-bugs-in…
A reminder that all platforms have vulnerabilities! Ironically, it’s because of these vulnerabilities that owners are “allowed” to jailbreak their own iOS devices.
Yeah, there lies the rub.
I think at this point, I’m more interested in a secure device that I don’t have full control over, than one that has vulnerabilities that can be exploited to allow me greater control over the device.
Bill Shooter of Bul,
Yea, I understand. Although personally I don’t like that manufacturers present us with such a contrived choice in the first place. Owners should never be put in a position to depend on vulnerabilities to get the most out of their devices.
Indeed. I am sick of people pushing this false dichotomy and preaching that you can be safe only if you give up your freedom. It is not like that.
Well, it’s certainly true that you don’t have to give up freedom to have a secure device. However, the options for that in a mobile phone are very limited at this point.
Heck, it’s difficult just to get a secure device without freedom. Right now the options are…
iphone
Nexus/Pixel
Maybe a top-of-the-line Samsung*?
I think Nexus/Pixel will also allow most freedoms (obviously there are some binary blobs there and closed source pieces that can’t be replaced).
*Samsung phones are getting Android security updates, but they also have Samsung written software in them.
http://www.intel.com.au/content/www/au/en/support/processors/000006…
https://www.microsoft.com/en-us/windows/windows-10-specifications
Please note the mismatch between these sites. People have installed Windows 10 on older CPUs than Intel supports and have been forced to disable updates so their systems run. Of course it would have been helpful if Microsoft’s site had reported correct information, and if Microsoft’s tools had blocked installing Windows 10 on too-old hardware in the first place. So those users not updating have been trapped by Microsoft’s incompetence, and possibly Intel’s incompetence in not sharing correct information with Microsoft in time.
https://support.microsoft.com/en-us/help/4012982/the-processor-is-no…
Here is Microsoft again, choosing not to provide updates for Windows 7 and 8.1 if a person is using newer CPUs.
There are other elephants in the room where people are failing to get updates.
So no, Microsoft is not to blame for this attack. They patched this security issue two months ago, and had you been running Windows 7 (later versions were not affected) with automatic updates (as you damn well should) you would’ve been completely safe.
This is also wrong.
http://www.pcworld.com/article/2953132/windows/set-windows-10s-wi-f…
If your internet connection is set as metered in Windows 10, then even though Windows Update is enabled, your computer might not have downloaded updates for a while, because automatic updates only kick in when you connect to a non-metered connection. Yes, if you are on a metered connection, manually performing updates is required.
After allowing for these elephants, a percentage of affected users have been misled by the misinformation that auto-updates are on and everything is handled, when on metered connections it is not. A percentage have also been affected by the Microsoft/Intel information mismatch, and a percentage by Microsoft’s refusal to allow old OSes on new hardware.
Also, there is another percentage for whom automatic updates on Windows 7 and 8 have broken vendor-provided parts.
So there are issues here. I will grant there is a percentage who are guilty of turning off automatic updates out of fear, caused by seeing people they know suffer from the above issues. So yes, a percentage of this problem lands cleanly at Microsoft’s feet.
“Here is Microsoft again choosing that with Windows 7 and 8.1 not to provide updates if person is using newer cpus.”
Remember when Microsoft charged you every X years for a new Windows? Now it’s a rolling release.
Also, remember when you had your ancient Windows and danced with it along successive generations of junk, demanding Microsoft keep the damn thing alive and well? Well, now you can’t.
[As soon as the Continuum effort started, they could not keep the old scheme of asking for more and more hardware stamina.]
This scheme achieves an ETHICAL balance by allowing old equipment to slip down the food chain, and takes care of the planet by not forcing planned trash dumping. Or worse, Linux trans-personalization.
Hi,
The list of people that should be blamed are:
a) Every software developer that assumes “Internet connected” means that they can release buggy crap followed by a never ending plague of updates and fixes (and associated unwanted end-user hassle) as they continually try to bring their buggy crap up to “release quality” (instead of realising that “Internet connected” means that it has to be extremely secure before release).
b) People like Thom that make excuses for software developers that fail to release secure products.
– Brendan
But… But they fixed it two months ago?
Hi,
There was a critical vulnerability in every version of Windows for a decade because Microsoft released insecure products that should never have needed to be updated in the first place.
An unknown number of people who should never have needed to update got affected by insecure products before the update existed without ever knowing they’ve been affected.
A huge number of people who should never have needed to update know they were affected after the update existed.
A huge number of people who should never have needed to update can’t update and are still at risk.
Microsoft will not be compensating anyone that has been affected for damages that their faulty software has allowed.
Microsoft will not be reimbursing anyone that has paid for faulty software.
Microsoft will not be apologizing to anyone that has been affected or will be affected.
Microsoft won’t be changing any of their practices (doing a full security audit, hiring a new security team, etc).
The developers that created the security vulnerabilities, and the quality assurance testers that failed to notice the vulnerabilities before each version of Windows was released, probably won’t even get a “stern warning” and will probably be allowed to continue creating more security vulnerabilities in future Microsoft products.
Nothing that actually matters will change, it’ll just be a yet another slightly different vulnerability next week, and the week after that, and …
The reason nothing that actually matters will change is that stupid people think all of this is acceptable. There’s no incentive whatsoever for Microsoft to do anything to prevent vulnerabilities.
This is not just Microsoft, it’s “most” software developers. It’s an entire industry where incompetence and negligence is standard practice.
Note that people who install updates are victims too – if 1 billion people spend an average of 6 minutes of their time each month installing updates and their time is worth an average of $10/hour, then that adds up to a total cost of $12,000,000,000 per year just to install updates for dodgy crap that should never have needed to be updated (and that’s not including the costs of anti-virus subscriptions, bandwidth consumed, etc).
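For the record, that back-of-the-envelope estimate can be checked in a few lines; with the stated inputs (1 billion people, 6 minutes a month, $10/hour), the total comes to about $12 billion per year:

```python
# Recomputing the update-time cost estimate from the stated inputs.
people = 1_000_000_000   # users installing updates
minutes_per_month = 6    # time spent per person per month
hourly_rate = 10.0       # value of user time, dollars per hour

# 6 min/month = 72 min/year = 1.2 hours/year per person
cost_per_year = people * minutes_per_month * 12 / 60 * hourly_rate
print(f"${cost_per_year:,.0f} per year")  # → $12,000,000,000 per year
```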
– Brendan
Thom Holwerda,
Sorry Thom, but this isn’t nearly as simple as you are making it out to be. I absolutely hate to make an argument from authority, but if you had more experience in IT you would see it’s not this simple. If an OS upgrade or update breaks a piece of software or equipment, then what?
This isn’t remotely hypothetical, I’ve experienced several windows incompatibilities. At one company the customer ticket management system we were using for several years broke on windows 8. And so we were stuck with using windows 7 internally until the ticket management software could be replaced. The company’s software licensing agreement actually entitled every employee to install windows 8, so it was never a matter of cost, but of feasibility and compatibility.
Ironically, even our own software we were developing was broken by an update. Granted, upgrade/update complications are usually more of an annoyance – like having to throw away a card/printer, or borked wifi/USB until the manufacturer releases a new compatible driver (all of these have happened to me and my family, btw) – but we move on. However, with specialized and certified medical equipment and software that MS doesn’t even own, allowing untested/uncertified automatic updates can have life-threatening repercussions. This is irresponsible! Certification is not something that should be rushed under time pressure, either.
And I’m not saying you don’t have valid points, but you’ve oversimplified the challenges that IT administrators are facing in order to push this narrative; you are wrong to think it’s just a matter of updating. Don’t think for a moment a lawyer wouldn’t sue a hospital for gross negligence for allowing untested/rushed software to run on its systems. The updates cut both ways.
Administrators have no authority to re-certify updated medical equipment at the hospitals; automatic updates pose too great a risk and are ineffective against zero-day exploits anyway. Arguably the best course of action is to focus instead on keeping them isolated. That these systems were compromised over the internet is totally unacceptable. These systems shouldn’t touch the internet, not even for updates.
There may be times they need to be updated, but only through certified channels and NOT automatically while they are in commission.
They fixed it via an ultimatum, though. Didn’t the group that grabbed the NSA tools give the affected companies a time frame to fix the security holes before they released the NSA tools and announced the vulnerabilities?
If there was no ultimatum, the holes would still be there with no update(s) in sight. Of course this is speculation on my part, but it’s in line with the way MS and company work. So it’s not hard at all to imagine there would be no fixes, were it not for the ultimatum.
That’s not sufficient.
Microsoft created an ecosystem in which many of their customers do not trust their patching system or updated products. Hence they own this problem.
Apple is not perfect here either, but I believe most iOS and OS X users rarely think twice about taking a patch. Okay, I usually wait 2 or 3 days after a release…
I am beginning to think that OS vendors should be required to supply free security patches for NN years, where NN is 10, 15 or 20. Customer and business behavior is never going to match good IT practice if it requires regular support and/or major upgrade costs. Even if you legislated businesses to purchase support or update regularly, it would turn into a bureaucratic mess (example: USA: Sarbanes-Oxley).
Medical, critical infrastructure or life safety equipment with embedded computing is always going to be a challenge. It’s going to take changes in the professions that use these devices to step up and demand stricter support processes from their vendors.
I would challenge you to name any health service in the world not vulnerable to the same issues. The reality is every country buys the same equipment from the same small set of suppliers. Dutch hospitals are just as full of MRI scanners running XP as British or American ones.
So Thom is blaming the victims here?
#scandalous
They aren’t the victims. The victims include the thousands of patients whose surgeries and medical appointments had to be canceled as a result of hospital computers being taken down by WannaCry. Unlike the morons that failed to secure their computers, those patients did nothing wrong.
WannaCry should be no big issue for individual home users: they are usually behind a router provided by their ISP, and not being directly exposed to the internet means no attack surface for WannaCry. Also, they are less likely to have many Windows computers at home, so they are less likely to be attacked over the LAN.
This leaves big corporate networks as the most likely victims. There may be solid reasons there are still older Windows versions on big corporate networks, but if this is the case their IT departments should have prepared accordingly.
Still, I don’t accept the blame being put solely on the victims. The NSA is to be blamed: they discovered a vulnerability and developed it into a weapon instead of pushing for a fix.
And there is blame for Microsoft: they used [the lack of] updates as a tool to force people to update to an unwanted version of Windows, thus making people distrust updates altogether.
On medical equipment this won’t change until governments step in and demand change. Notice that every government in the western hemisphere has access to the Windows source code because of “national security”? Yet for some reason there are $20 million MRI machines in hospitals with proprietary imaging software, some of which only interface with NeXT, some UNIX, or some old Apple crap? These devices never get patched, updated, or migrated to new equipment because the suppliers have the government by the balls. Until governments refuse to do business with suppliers that don’t provide source, and refuse to buy these dodgy products, it will never change. A $20 million device which never gets updated is a flawed device.
A $20 million device which receives updates will be a $30 million device purely because of the vast amounts of extra manpower required to recertify the device every time a patch is rolled out.
Precisely. In the aero industry, a change in the development machine that provides the code that flies the plane means that plane has to be recertified. Not just that plane but every plane that might potentially use the new code. If you can retain the same machine then you have the same output and the cost is reduced by millions and possibly tens of millions.
yerverluvinunclebert,
That’s a great example, the risk of botched upgrades is not acceptable for critical control systems where lots of money and lives are at stake. These systems should be hardened. Perhaps the operating systems should be on read-only media such that rebooting them brings them back into their certified state and only certified updates could be deployed with physical access.
One of the things I do is maintain essential legacy systems that provide a fundamental service to the aero, military, hospital, nuclear and oil industries. All these systems are still in place because the job they do is first class; they CANNOT be upgraded EVER (nuclear SCADA) and they will continue to operate forever. Imagine having to rebuild the software that supports the drawings for the whole of Airbus Industries. Even though those aeroplanes seem new, they were actually designed decades ago in the late 70s/early 80s, and the ’puters that they run on are still the originals. As well as needing recertification of all aeroplanes in the air, redesigning and rebuilding the software to run on new machines would cost tens of millions and give no benefit whatsoever, except to increase the uncompetitiveness of Airbus’ offerings. The nuclear industry never changes ANYTHING, as to do so could cause a big radioactive hole in Cumbria. Trackside and hospital systems running on Windows 10? Do you want your Blue Screen of Death to be your death, literally? No new systems anywhere critical, no new bugs, no new back doors please… Only systems that are tried and tested, and equally fault-tolerant, are required. Avoid new systems like the plague if you want the world to actually operate reliably and you want to live.
Obviously the NHS nightmare happened on the “office” side of IT. Extremely lousy certification [or no certification at all] happened there.
Judiciary assessment is pending on that – I would like to think – lack of professionalism.
As you said: no hardening occurred there…
There is still the non-technical mentality in many companies like the NHS, when they are buying a new machine they forget that it is no longer a one-time purchase like it used to be. The associated processing unit is considered part of that fixed cost and when time and money is hard pressed the on-going costs are simply forgotten because the device just works… Fifteen years of largely uninterrupted operation is the justification for not upgrading. The alternative might be a new MRI scanner that costs millions and tens of thousands in retraining, not to mention possible deaths if new kit is used incorrectly. The NHS is so massive and widely distributed that it is very, very hard to ensure that all vulnerable machines are not web-facing.
The whole world literally runs on these types of legacy machines – from trackside equipment to automated cranes in nuclear power storage facilities – and if the author is still unaware of this fact then frankly he should not be writing irresponsible articles like this.
The one thing this infection scenario does point out is that none of us should be using closed source operating systems from companies that regularly abandon their recent os releases just in order to bring out something new that will sell more.
Agreed on the unavoidable need for VERY long-term kernel and OS cycles for critical systems. [Those OSes tend to be real-time.]
Hardened systems link exclusively through protocols [or are not linkable at all], so the open/closed question shouldn’t be a heavy issue here, as far as the protocols are fully open and market-supported.
On support of closed – or preferentially IP-protected – code: too many medical-equipment OEM vendors confronting market realities can plausibly only buy, rather than develop in-house, the supporting IT frame.
It makes certification a lot easier too, because software houses build interacting confidence with certifying authorities. Remember QNX, just as an example. [It is stupid to abandon all that accumulated expertise just to browse a TV set.]
I prefer fully open stacks, also – as long as I don’t have to fight with certifying authorities.
I’m not exactly surprised that the NHS are running out of date software like Windows XP in 2017. When I visited an NHS hospital lab in the mid 90s I was a bit shocked at the out of date and kludged together state of equipment that could literally be a matter of life and death.
There was gear in the haematology lab that still relied on CP/M software dating back to the 70s. The original hardware had been replaced with a BBC Micro + Z80 second CPU at some point to keep it functioning – I think the lab equipment connected to the BBC’s analogue port, and of course used 5.25″ disks to store its data.
The guy who’d re-written the code (burned to an EPROM inside the BBC) and cobbled together the hardware interface for it was long gone by that point. At least they wouldn’t have to worry about malware I suppose…
As other people have pointed out, it’s not as simple as them forgetting to install updates, or even lacking the budget to upgrade. The bespoke hardware and software in use makes things very different from a typical home or office, and there’s also a definite reluctance to try fixing things before they’re (completely) broken.
Just throwing money at it wouldn’t necessarily solve all problems – under the last government the NHS blew around £12 billion failing to implement a new IT system after all.
“The number of people whose job it is to make software secure can practically fit in a large bar, and I’ve watched them drink. It’s not comforting.”
— Quinn Norton ( https://medium.com/message/everything-is-broken-81e5f33a24e1 )
There. You’ll feel better.
Will chat about firm- and hard-ware security later.
So people who can’t afford a new pc or hundreds on updated software are to blame? People who just use their computer for browsing the web and not much else are to blame? Charities who can’t afford to update computers and software are to blame?
I’m sorry but the ivory tower you are sitting in is far too high for me. Most people don’t think about their computers they just want things done. Most people don’t understand how to update or turn automatic updates on or off.
Most people just want to live a happy life without being worried about having all their precious memories encrypted and extorted for money they don’t have.
I won’t be reading your site any more. I used to think you were down to earth but you are actually just mean.
I’ll stick to getting my news from websites that don’t judge their users.
For the record I’m a Linux lover who knows a thing or two about computers. But most of my family aren’t. They are the “idiots” you have no sympathy for.
Shame on you!
” Charities who can’t afford to update computers and software are to blame? ”
From Windows 10 and S upwards, updating is free – as long as it’s a genuine, activated copy and your computer doesn’t drop dead.
Ask Microsoft for licenses. Who knows?
If dismissed, strike out on your own: go Linux. All the tools are there, except the fancy, the shiny and the commodities. It’s a spartan environment, but once you get used to it, you won’t want to do critical work outside of it.
“Most people just want to live a happy life without being worried about having all their precious memories encrypted and extorted for money they don’t have.”
Activated or not, genuine or not, you should teach your loved ones how to make optical backups. I recommend DVD-RW, disc-at-once. That goes for Linux users also.
I’ve been reading OSAlert for years and can assure you there is little evil here. Windows is the most used desktop OS in the world at large, and there is no way we could consider every situation. You’re right there. Sorry about the lexicon; I’m so easily tempted to use it, also.
The real, REAL tragedy here is that WannaCry has shown us AGAIN that sensitive data is out there, a sitting duck for financial and insurance entities, criminal organizations, and even repressive factions within states.
Writing bug-free, compatible and performant software is both expensive and slow. The consumer market certainly doesn’t appear to want software made with Ada and the most stringent engineering processes. Operating systems, libraries and services are still coded in C, so go figure.
The fact is there are millions upon millions of LOC hiding all sorts of bugs and 0-days waiting to be exploited, in all major OSes. That can’t possibly be solved anytime soon, and won’t be in the future as long as our infrastructure is developed the way it is. The only thing to be done is patch and pray. But every new LOC rushed out in C comes with the possibility of new bugs. There’s still hope Ada/Rust will catch on and newer systems will be developed with better languages, slowly replacing the rotten bits.
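To illustrate the memory-safety point the commenter is making about C versus languages like Ada or Rust (sketched here in Python purely for brevity): an out-of-bounds write in a bounds-checked language fails loudly and immediately, instead of silently corrupting adjacent memory the way an unchecked C buffer write can.

```python
# In C, writing past the end of a buffer (e.g. buf[8] on a char buf[8])
# silently clobbers whatever lives next to it -- the root of many 0-days.
# A bounds-checked language turns the same off-by-one into a caught error:

buf = [0] * 8  # stand-in for a fixed-size buffer of 8 slots

try:
    buf[8] = 0x41          # one past the end: the classic off-by-one
except IndexError as e:
    print("caught:", e)    # the bug surfaces immediately, not as corruption
```

The buffer itself is untouched after the failed write, which is exactly the property unchecked C code cannot guarantee.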
BTW, It’s been years and I keep having to double login to post a comment in OSAlert. Time to fix the bug you guys!
sbenitezb,
I reported that years ago. The login on the top right doesn’t have this bug if you use it instead.
A damn stupid way of looking at the problem; lots of people/institutions are at fault, number one being the NSA and its ilk, hoarding security vulnerabilities that eventually get out into the wild.
Of course the vulnerability should have been patched, and the NHS should have paid for its XP systems to have security updates from MS, etc. etc.
But this has impacted a lot of people (me for a start, who couldn’t get my blood results or prescriptions), for which you should have a little empathy, if not tears. It wouldn’t surprise me if people died because of this.
Lots of other ‘bad stuff’ happened… Nissan, a couple of miles away from me, stopped car production, etc.
It’s a wake-up call; it could have been a lot worse. Not moralising.
Yes and no. Yes for some small applications, but it’s not possible with large software projects; porting, maybe, given the language is portable, but good only in theory.
THOM, you do not KNOW what you are talking about. I stopped reading your post once I reached this nonsense. If you are not a software developer, stop pretending to be one.
Yes – it’s your fault in that it was entirely predictable. On the other hand the flaw was Microsoft’s.
Today, if the car I own turns out to have a manufacturing/design flaw that’s dangerous, the car company will recall and fix it – not tell me I should have bought a support contract or simply tell me to get a new model.
Bottom line: software companies have for too long been getting away with the idea that any flaws (no matter how serious) in the product sold to you are something the consumer has to simply accept with no redress. Perhaps it needs to be brought more in line with other industries.
Especially as software is becoming more critical.
Bottom line: MS had the patch for XP but didn’t release it – as a result, people may have died due to delayed treatment in affected hospitals. If a car manufacturer had done that, there would have been an outcry.
Yes, from a user perspective it was entirely predictable, but the flaw was Microsoft’s responsibility, and they had a fix and chose not to release it initially – that’s not responsible.
That’s precisely what they have been avoiding by not selling software. Instead, they sell you a license to use their software and then disclaim any responsibilities for error-free operation, security, suitability for any purpose, etc. If they sold you software, then it would be a product that was subject to all of the same FTC regulations that govern any product.
Software engineers (I used to be one) are quick to proclaim it unfair to require that they produce a reliable, secure product because ‘software is so complicated.’ I look at it the other way: Software is so complicated because they aren’t required to make it reliable and secure. Windows is a bloated, incomprehensible mess (at the source level) precisely because Microsoft is not legally liable for the chaos that results in a case like this. Instead, they reap rewards as companies scramble to update from old versions of Windows to new ones, paying Microsoft for the updates.
fmaxwell,
I’d point out that many software developers know more than anybody how broken things are. In many cases if you dig further there’s a very good chance developers did bring up the issues before the product reached market. However management creates an environment that isn’t conducive to building secure code with unrealistic timelines that omit testing and security auditing and just allocating insufficient resources. The incentives from the top of the company down the chain are to do the minimum amount of work possible.
Meanwhile the CEO is telling customers how important the company takes security, blah blah blah, but it’s rarely actually true. If consumers feel they are becoming the beta testers, it is in fact because that’s exactly what they’ve become.
Alfman,
That’s what happens when a company has no legal obligation to make their product perform as advertised.
If Microsoft faced the same repair/replace/refund model that vendors of normal products (rather than software licenses) face, there would be a lot more time and money put into simplifying the codebase, testing, and security auditing. Feature additions would be based on a risk/reward assessment: Does this proposed feature really justify the increase in code complexity, testing time, and security auditing effort?
Looking at this in a completely heartless, GOP-esque manner, why would Microsoft issue updates to Windows XP when they can just discontinue support and wait for something like WannaCry to result in a barrage of orders for Windows 10, or extended support contracts, from panicked companies, governments, and consumers? Windows is entrenched. Microsoft knows that the UK National Health Service isn’t going to convert all of their computers to OpenBSD.
If other companies operated like software companies:
Hello, Toyco Products Customer Service, Nancy speaking.
My baby is in surgery because he swallowed a plastic eye from your Huggles Bear XP stuffed toy.
I’m sorry to hear that. We became aware that the eyes were not properly attached after we had discontinued support for the Huggles Bear XP.
If you knew it was defective, why was I not notified? Why didn’t you recall it?
You bought a license to use the Huggles Bear XP. It remains our property, so we are not legally obligated to fix it or notify you of flaws unless you buy a Huggles Bear XP extended service contract. If you don’t want to do that, we could sell you a license for our current Huggles Bear 10.
I’m going to sue you!
I must refer you to paragraph 13 of the End-User License Agreement for the Huggles Bear XP, which reads as follows,
13. EXCLUSION OF INCIDENTAL, CONSEQUENTIAL AND CERTAIN OTHER DAMAGES. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL TOYCO OR ITS SUPPLIERS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, PUNITIVE, INDIRECT, OR CONSEQUENTIAL DAMAGES WHATSOEVER (INCLUDING, BUT NOT LIMITED TO, PERSONAL INJURY, FOR FAILURE TO MEET ANY DUTY INCLUDING OF GOOD FAITH OR OF REASONABLE CARE, FOR NEGLIGENCE, AND FOR ANY OTHER PECUNIARY OR OTHER LOSS WHATSOEVER) ARISING OUT OF OR IN ANY WAY RELATED TO THE USE OF OR INABILITY TO USE THE PRODUCT OR OTHERWISE ARISING OUT OF THE USE OF THE PRODUCT, OR OTHERWISE UNDER OR IN CONNECTION WITH ANY PROVISION OF THIS EULA, EVEN IN THE EVENT OF THE FAULT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY, BREACH OF CONTRACT OR BREACH OF WARRANTY OF TOYCO OR ANY SUPPLIER, AND EVEN IF TOYCO OR ANY SUPPLIER HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
_____________
Note: The above contract paragraph was based on the Windows XP Professional license and only lightly edited for fair use in this parody.
fmaxwell,
I don’t think most software developers are against holding the companies accountable, many of us have been calling for that for a long time.
I think there may have been some unintentional confusion here: when you said “software developers”, that generally means someone’s title, although your post now clarifies you meant the software-developing companies. That changes a lot, and when you go blaming “software developers” this distinction is very important. For the most part, the employees who develop the software have very little authority to invest company resources into security; more often than not, I’ve found the only time companies seriously invest in security is… you guessed it… right after a breach.
Alfman,
Based on the idiotic notion that you can add security on rather than having to design it in. At one point in my career, I headed up a team developing a secure workstation that went through a formal C2 evaluation conducted by a team from NSA (back before Common Criteria). Most software engineers are pretty clueless about security. Most software companies don’t want to invest in training or to hire enough senior software engineers with a specialty in security. They don’t want to be constrained by engineers asking “do you really need a programming language inside of a word processor that most users run with admin privileges?”
fmaxwell,
I agree, but I’d go even further and say this low investment and appreciation for security skills is quite discouraging even for those of us who have those skills.
Alfman,
You don’t have to tell me. It’s beyond a lack of appreciation; it is often outright hostility as we resist implementation of ill-considered features that put security at risk.
Unless the courts rule that software is a product, I don’t see this bleak picture changing. Software companies have no incentive to change a model that absolves them of liability and provides them an income stream from upgrades and paid support.
fmaxwell,
It got derailed because I disagreed with the view that most software engineers side with their company’s position on software support (like a warranty, or lack thereof). Apart from that I think we agree on everything else.
Thank you.
I can only report on my own experiences discussing this topic over the past 30 or so years with other software engineers, including W2 employees, contract employees (1099 wages), and those with their own consulting firms. I’ve never seen a proper survey on this topic.
The Internet is filled with websites of one-man software companies. If the notion of software-as-warranted-product were popular among software engineers, I would think that many of these companies, not constrained by management, would offer their software that way. I’ve not found that to be the case. I’ve found some offers of refunds if one is dissatisfied shortly after the purchase, but that’s about it.
—–
Most of my career was in embedded systems, which has some big advantages for people who share our views. Whether the company builds heart monitors, car stereos, or home alarms, they are selling products. The firmware is an integral part of the product, so if it doesn’t work properly and reliably, the product is defective and must be fixed. An ECU in a car that just randomly locks up, leaving the car powerless, can’t be explained away as a “known issue” and you can’t direct owners to “just turn the key to the off position, wait 30 seconds, and then restart the car.”
I found that aerospace took software development, testing, and quality assurance deadly seriously. When you’re launching a $100 million satellite, it doesn’t pay to cut a few hundred hours out of the development budget. One “anomaly” can result in man-weeks of investigation to determine the cause and remedy, because “unverified failures” are something that one never wants on a bird they are trying to launch.
Thanks for the discussion and keep fighting the good fight.
It’s not Microsoft’s fault that their system is so insecure, or that people are afraid to allow updates because updating makes the computer reboot, stops all your work, and forces you to wait for the update to finish…
Microsoft is perfect… Users are to blame for this situation…
I know it is trendy amongst beard-strokers to attack the “evil Tories”, but what the hell did Labour do between 2006 and 2010 when WinXP had been superseded by Vista and Win7? They didn’t think it was important to upgrade NHS systems.
“Nobody bats an eye at the idea of taking maintenance costs into account when you plan on buying a car. Tyres, oil, cleaning, scheduled check-ups, malfunctions”
Because those are all real, non-self-inflicted problems, as opposed to computer problems, which are mostly self-inflicted or imaginary.
Even if one has the most secure version of windows 10 there ever was or will be, all he/she still has to do (and will do) is ignore the million times they have been told: “No, you didn’t win the Nigerian lottery… DON’T OPEN EMAIL ATTACHMENTS”, or “It doesn’t matter how flashy the popup was and what kind of doctor suit the guy in the ad was wearing, no program is going to defy physics and reality by creating more physical RAM than what you already have.”
It’s amazing how totally secure Windows XP was at the time. Now everyone says you’re an idiot for using it and has amnesia that they ever thought otherwise. Just as Windows 10 is totally secure and safe and awesome now. In 10 years it will be Microsoft’s biggest, most insecure disaster that was never secure at any time.
And a final point: that old computer runs just as well today as it did 10 years ago (unless you did something stupid). It doesn’t cost more to run than it did 10 years ago. For the most part, you were careless and loaded it down with crapware by downloading anything and everything you ever encountered, and now you falsely claim that it is “broken” because – shocker – it is now slow. Combine that with the fact that you see new and more powerful (albeit more stripped of your control or anything useful, because taking away ownership of your own device is “progress”) computers and want those.
The more correct car analogy is that you are driving a 2007 car, and now you want a 2017.