When users attempt to launch a 32-bit app in 10.13.4, it will still launch, but it will do so with a warning message notifying the user that the app will eventually not be compatible with the operating system unless it is updated. This follows the same approach that Apple took with iOS, which completed its sunset of 32-bit app support with iOS 11 last fall.
This is good. I’d prefer that other companies, too, took a more aggressive approach to deprecating outdated technology in consumer products.
I’m certain this is one step toward moving away from Intel chips. A chip with no 32-bit hardware at all will require significantly fewer transistors, and therefore produce less heat and leave more room to do something else. It’s also much easier to emulate or transpile the Intel code if you’re only dealing with one (much cleaner and newer) architecture.
Well, if you delete 32 bits from 64 bits, you actually get… 32 bits. I bet these legacy 32 bits are the foundation of the current 64-bit architecture. So you’d better not ditch them just because you woke up on the wrong side of the bed.
By the way, if you’re really intent on getting rid of such a “nuisance”, you’d also better get rid of all these useless emulation layers. Out with DOSBox, out with MAME, out with 32-bit QEMU, out with old and retro tech. Welcome, all-new shiny 64-bit god!
Hi,
For 80×86, the only major difference (for user-space code) between 32-bit machine code and 64-bit machine code is that 64-bit machine code is allowed to have “REX prefixes”. In other words, deprecating 32-bit apps wouldn’t make any difference for application emulators – they’d still need to emulate all 32-bit instructions to make 64-bit code work (excluding an extremely small number of instructions that were recycled to make room for REX prefixes – mostly one “inc” and “dec” group of opcodes).
The reason Apple is deprecating 32-bit applications is much more likely to be related to the cost of maintaining a compatible kernel API and shared libraries.
– Brendan
As someone who has done both x86 and x86-64 assembly programming, I can tell you that is not the case. Even in long mode, the AMD64 additions still seem to favor 32-bit operations over 64-bit ones.
Instructions are shorter (a function of x86’s variable instruction size) when you work on 32-bit registers, and instructions referencing the extended registers (r8–r15) require an extra byte, making them awkward. As an example, in long mode adding two 32-bit registers is only a 2-byte instruction, but the same operation on 64-bit registers requires 3 bytes. That’s a 50% increase in code size for something you may or may not need.
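A quick sketch of those encodings (byte values per the standard x86 `ADD r/m, r` encoding, shown here as Python byte strings rather than assembled code):

```python
# x86-64 long mode: "add eax, ebx" (32-bit operands) encodes as
# opcode 0x01 plus ModRM 0xD8 -- two bytes, no prefix needed.
add32 = bytes([0x01, 0xD8])

# "add rax, rbx" (64-bit operands) needs a REX.W prefix (0x48)
# in front of the same opcode and ModRM byte -- three bytes.
add64 = bytes([0x48, 0x01, 0xD8])

print(len(add32), len(add64))        # 2 3
print(len(add64) / len(add32) - 1)   # 0.5 -> the 50% size increase
```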
I think the problem with supporting both 32 and 64-bit applications is in the syscall mechanism. If you take a look at Linux’s syscall numbers, you will notice they are completely different for each architecture. This is true for every kernel (as far as I know). So in order for a 64-bit kernel to run a 32-bit compiled program, it first needs to identify that the application is going to be making 32-bit syscalls, and then translate those calls to the equivalent 64-bit syscall every time.
I will also add that the syscall conventions for x86 and x86-64 are completely different. So it really is a translation for every kernel call.
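To make the difference concrete, here are a few real entries from the two Linux x86 syscall tables (numbers as published in the kernel’s i386 and x86-64 tables); a 64-bit kernel has to remap every one of these for a 32-bit process:

```python
# Linux syscall numbers for the same calls on each x86 ABI.
# i386 (32-bit, traditionally entered via "int 0x80"):
syscalls_i386 = {"read": 3, "write": 4, "open": 5}
# x86-64 (64-bit, entered via the "syscall" instruction):
syscalls_x86_64 = {"read": 0, "write": 1, "open": 2}

# Same call name, completely different number on each ABI --
# this is the per-call translation the comment describes.
for name in syscalls_i386:
    assert syscalls_i386[name] != syscalls_x86_64[name]
print("every common call needs remapping")
```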
Some of this might be about ARM, where ARM has two different ISAs for 32 and 64-bit modes?
Note that the A11’s cores don’t even have an AArch32 decoder, just AArch64.
It can simplify some things from the perspective of the programming model, and this may allow some small but advantageous hardware changes. However, as a practical matter, most architectures that support mixed 32-bit/64-bit ISAs do so by simply using the 64-bit hardware – there is no “32-bit hardware”…
In other words, there is no significant hardware cost to supporting 32-bit code using 64-bit hardware and whatnot – you just ignore half the contents of the registers. The only real cost comes in decode and some bookkeeping, and generally that cost is mostly insignificant.
Not at all. This will make it easier for apple to maintain software and have it be even better and faster.
Strictly from a computer-science point of view, this allows the default precision for mathematics and numbers – a very integral part – to be higher. Handling more numbers at once without having to process more code or loops is great for energy efficiency and code efficiency, and this is the same for all 64-bit CPUs using 64-bit instructions instead of 32-bit instructions.
Now, that being said, switching from Intel to ARM is mostly just a compiler-optimization task, and iOS and macOS are not that different at the base-system level, which means it’s technically already been done a while back.
All in all, switching to another architecture is just a matter of them optimizing their compilers.
The move to 64 bit is welcome and I love that it’s finally happening, as any developer will tell you. You can just worry about supporting the newest technologies and languages which are optimized for 64 bits, the same with the IDEs, without having to do alternative code segments for 32 bit sections.
I apologize for the long rant, but the myth of “Oh they’re switching now because of X!” has to stop at some point.
HAS NO HARDWARE LIMITATION FROM THEIR 64BIT VERSION, IT IS BASED ON IA64 AND EM64T, SO GET YOUR FACTS RIGHT TROLL… AND PEOPLE THUMB YOU… HOW ASHAMED, YOU DON’T RESEARCH FIRST BEFORE DECIDING WHEN HE IS WRONG
AND PROVED MY POINT: I AM RUNNING A PURE 64BIT OS ON APPLE LATELY ON MY OLD I7 INTEL CPU… IT SHOWS 64BIT HARDWARE SUPPORT FOR IA64 AND EM64… NO AMD64 FOR SURE… APPLE DON’T SUPPORT AMD64 CODE… OR ELSE LET HACKINTOSH TRY TO EMULATE IA64 OR EM64 IN THE KERNEL
SO ITS ONE STEP TOWARD DITCHING ROOT AND 32BIT ACCESS FROM AMD HACKINTOSH USERS… ITS ILLEGAL FOR AMD USERS
Edited 2018-02-06 17:11 UTC
This is yet another reason I’d never use a Mac.
Heck, one of many reasons I prefer Linux over Windows is that I can still run the 16-bit Windows 3.x games I paid for like Lode Runner: The Legend Returns inside 32-bit Wine on 64-bit Linux.
To legally do that on 64-bit Windows requires a valid Windows 3.1 license that can be installed inside DOSBox or some other emulator.
Good. As a Mac user, I don’t want a bunch of new users coming over to Mac and advocating that MacOS become the kind of bloated mess that Windows has become with its never-break-anything mentality and the millions of lines of code to try to make that happen.
I understand, it is good that Apple is giving you a chance to repurchase or just throw away software you’ve paid for.
Stupid Windows users, with their stupid being able to run software from 10 years ago.
Stupid Mac users, with their valuing performance, stability, security, and maintainability over the ability to run Jurassic apps.
fmaxwell,
Apple may not wish to support 32bit software, that’s their prerogative. But your analogy is a bit off. 8 track->cassette->CD is replacing one technology with a new & incompatible technology. This doesn’t match the situation for x86 hardware, since 32bit->64bit is largely the same technology with new extensions (like larger registers). Some features like segments were removed, but these weren’t generally used in 32bit code (they were used by 16bit DOS eons ago). One way to make your analogy more accurate would be for your car manufacturer to stop supporting audio CDs but to continue supporting MP3 CDs. In other words, the hardware is still physically capable of supporting the legacy format, but your manufacturer chose not to.
From an x86 hardware perspective the 32bit and 64bit components can’t be fully separated because 64bit registers and mov instructions are an extension of 32bit ones and not a replacement! So even 64bit x86 compilers can still generate 32bit instructions/addresses/registers depending on the software requirements.
Here is a very brief overview:
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/x…
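To illustrate that “extension, not replacement” point: in long mode, writing a 32-bit register zero-extends the result into the full 64-bit register, which is one reason 64-bit compilers can freely emit 32-bit operations. A small Python model of that semantic (a sketch of the rule, not real CPU code):

```python
MASK32 = 0xFFFF_FFFF

def write_reg32(value32):
    # x86-64 semantics: writing a 32-bit register (e.g. eax)
    # zero-extends into the full 64-bit register (rax); the old
    # upper 32 bits are NOT preserved.
    return value32 & MASK32

rax = 0xDEAD_BEEF_1234_5678       # stale 64-bit contents
rax = write_reg32(0x42)           # like "mov eax, 0x42" in long mode
assert rax == 0x42                # upper half cleared, not merged
print(hex(rax))                   # 0x42
```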
That’s like contending that x86 Linux binaries should run on x86 Windows computers because they use the same hardware “technology.” This is about the OS, not the CPU. It’s about Apple deciding to retire a lot of code in their OS by dumping support for 32 bit apps. You can talk in vague terms about how things are “largely the same technology,” but the 32 bit apps can’t run any more and it’s not because Apple incorporated something in their OS to block execution of 32 bit apps that would otherwise run without a problem.
P.S. Thanks for the link, but I’ve been developing in assembly since 1980.
fmaxwell,
Awesome, I’m always happy to meet other assemblers
My first personal computer (disk-based) was a CP/M-80 system. Although I had source for most of the software on it, I can’t think of any that I’ve ported to my current system or even the systems in-between. To cite one example, I don’t really need the source code to the programmer’s text editor I use — I just need a viable alternative on the platform on which I will be working.
I suspect that’s the case for many people. If the elimination of 32 bit app support on MacOS means that they can’t play the Donkey Kong knock-off that they bought from some kid in Latvia in 2007, I’m okay with that.
32-bit software is not equivalent to 8-tracks or cassettes, but that’s cool, if you want to pretend that it is.
Oh, and as someone who uses both: Macs may be slightly more stable than Windows, but I’d hazard a guess that’s more likely because of the locked-down, semi-obsolete hardware in most Macs, not the ability to run legacy software. Windows has to run on a very complex and crazy mix of hardware, and there is lots of evidence that OS X er… sorry, macOS isn’t designed to the same standard. It runs well on a very small subset of the PC ecosystem.
Performance? Games typically end up with higher system requirements under macOS. Stability-wise there’s been no difference between the OSes for a long time (and when there was a difference, MacOS was often worse than Windows). And as for security… the RDF is strong with you; have you already forgotten how, just last year, macOS displayed the full password as the password hint, or allowed admin logins without a password? Windows hasn’t been nearly as bad security-wise since the times of 9x (and MacOS Classic…)
And until a few months ago Apple exclusively offered a cruft of a filesystem, and it’s still the only option for HDDs…
Also curious how you “forgot” about iTunes bloatware.
You forget that 16-bit apps were dropped from every 64-bit version of Windows that has been released.
Yeah, maybe that really started affecting people five years ago. So 1995–2015: I think twenty years is a good warning period to give people to migrate off a technology.
But even today, you can buy a 32-bit version of Windows 10. That’ll run your crusty old DOS or 16-bit Windows app. It’ll go away eventually, but it exists right now.
I see you haven’t actually tried to run a 16-bit DOS app in Windows 10. Maybe you should try it before you say it will run any 16-bit app someone may need.
Where did I say “Any”? Either way, it’s more of an option than macOS, which won’t run any apps from that era.
‘Heck, one of many reasons I prefer Linux over Windows is that I can still run the 16-bit Windows 3.x games’
You’re certainly free to use legacy equipment for as long as you like. However, the rest of the world is under no obligation to accommodate you.
Thom, why do you think 32-bit technology is outdated?
I understand we need databases to have access to more than 4gb (2^32) of RAM, animation rendering software, graphical software, etc. that needs to have huge amounts of RAM.
But Angry Birds, a Sudoku game, our calendars, YouTube, all the social media apps, etc. DO NOT need to address 4GB of RAM, and in such cases they are fine staying 32-bit apps.
64-bit apps can access more RAM and have more CPU registers available, BUT the memory footprint of most applications is larger too, because pointers – used intensively – double in size.
So, getting rid of 32-bit support is, IMO, only a way of saving money by having fewer architectures to maintain.
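As a rough sketch of that footprint argument (using the typical ILP32 vs. LP64 pointer sizes, with alignment padding ignored for simplicity):

```python
# Pointer sizes under the common ABIs: 4 bytes on ILP32 (32-bit),
# 8 bytes on LP64 (64-bit).
PTR32, PTR64 = 4, 8

# A doubly-linked list node: prev pointer + next pointer + a
# 4-byte payload (padding ignored to keep the arithmetic simple).
node32 = 2 * PTR32 + 4   # 12 bytes on a 32-bit build
node64 = 2 * PTR64 + 4   # 20 bytes on a 64-bit build

print(node32, node64)
print(node64 / node32)   # roughly 1.67x larger, purely from pointer width
```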
Java 7+ has a nice workaround to have shorter pointers in 64-bit tech:
https://wiki.openjdk.java.net/display/HotSpot/CompressedOops
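For reference, the idea behind compressed oops is simple: since heap objects are at least 8-byte aligned, a 32-bit field can hold (address − heap_base) >> 3, which covers a 32 GiB heap. A minimal sketch of the concept (the heap base and object offset are made-up values, and this is not the actual HotSpot implementation):

```python
HEAP_BASE = 0x7F00_0000_0000   # hypothetical heap start address
SHIFT = 3                      # objects are 8-byte aligned

def compress(addr):
    """Squeeze a 64-bit object address into a 32-bit 'oop'."""
    oop = (addr - HEAP_BASE) >> SHIFT
    assert oop < 2**32, "address outside the 32 GiB window"
    return oop

def decompress(oop):
    return HEAP_BASE + (oop << SHIFT)

addr = HEAP_BASE + 8 * 1024**3          # an object 8 GiB into the heap
assert decompress(compress(addr)) == addr

# 2**32 slots * 8-byte alignment = 32 GiB addressable with 32 bits:
print((2**32 * 8) // 1024**3)           # 32
```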
Hm, and IIRC there was a Linux effort to have a system/apps which can use the additional “64-bit” CPU registers but without the memory overhead of “full” 64-bit apps…
https://en.wikipedia.org/wiki/X32_ABI
Nice!
Do you know if similar technology exists in Windows?
Nope, Linux is it, as it requires core-system support in the OS kernel.
TBH, it barely exists on Linux as it is though. It’s buggy as hell, can’t easily co-exist with a regular 32-bit userspace on the same system without some significant effort, and pretty much nobody uses it (largely because cases where you’re sufficiently memory constrained for the few hundred pointers being twice the size to matter don’t generally use 64-bit x86 CPU’s, and x86 is the only arch that was stupid enough to begin with that it needed extra registers in 64-bit mode). The only distro I know of which even remotely supports it is Gentoo, and even they don’t really support it to any significant degree.
Did it really “need” them or was that simply a nice bonus that AMD threw in?
Not really a bonus; the original x86 register set was a legacy of the 8-bit era, which had just a couple of accumulators. While the 68000 ditched its 6800/6809 past and provided coders with 8 fully orthogonal data registers, 8 more address registers, and a bunch more system registers – all of this in 1979 – the x86 continued providing only a handful of weird registers used either for math, or for memory access, or for system configuration.
AMD just tried to reach parity with “normal” CPUs.
Considering that the status-quo even for CPU architectures of roughly the same vintage was at least 8 GP registers? Yeah, I’d say it did need them, especially considering the other restrictions the 16 and 32-bit modes imposed on the usage of those 4 ‘general purpose’ registers (for example, most of the math operations only output to AX/EAX, the loop instruction only lets you use CX/ECX as the counter, etc).
Quoting myself from elsewhere in the comments:
Extending that list with further examples: HP PA-RISC has 32, DEC Alpha has 31, OpenRISC has 16 or 32, Renesas/Hitachi SuperH has at least 16, RISC-V has 15 or 31, VAX (which pre-dates the 8086) has 16, the original IBM S/360 (which also predates the 8086) has 16, Hitachi H8 has 8, TI MSP430 has 12, Atmel AVR has 32, and Microchip PIC has at least 32 as far as the ISA is concerned (specific implementations may have fewer).
In fact, the only CPU architecture of that vintage to survive to the modern day in widespread usage that I know of that has so few GP registers other than x86 is the Zilog Z80, and that’s an 8080 clone.
Yes, you’re saying that having more registers than x86 is fairly standard among CPU architectures and good to have – however, I wondered whether something in AMD64 strictly needs them to work.
PS. While searching for info on x32 I stumbled on https://wiki.debian.org/X32Port – so it seems Gentoo, which you mentioned, isn’t the only distro that toyed with it a bit…
Ah, sorry I misunderstood you.
Strictly speaking, there isn’t anything about 64-bit x86 that required them to be present (though they’re established as an architectural feature now that you can’t really get rid of without breaking most 64-bit code), though I will comment that the performance improvements they allow for have been a significant driving force for the adoption of 64-bit x86 systems.
ahferroin7,
Ironically, these x86 deficiencies probably helped drive the adoption of AMD64. For better or worse, AMD64 gave x86 new life. I know the market strongly favors Wintel compatibility, but part of me would have liked to see x86 replaced with a cleaner architecture. Clearly AMD64 is an improvement over its x86 predecessors, but I do wonder where mainstream computers would be today if it hadn’t succeeded? IA64, PPC, ARM, RISC-V, some other x86 variant?
We would likely still be using x86, only with PAE to get over the 4 GiB RAM limit.
Probably IA-64 because of the hardware level x86 emulation (which was absolute crap, but still better than nothing), though it would have taken far longer to become as ubiquitous as 64-bit x86 is, and POWER, SPARC, and MIPS would likely be more widely used than they currently are in mid-level systems (POWER and SPARC are still very widely used in top-tier systems, and MIPS is pretty ubiquitous in embedded systems (though it is being slowly supplanted by ARM in that area)). Intel made some really stupid choices in designing IA-64 that came very close to the project being dead on release, but the big ones didn’t matter for people who didn’t care about x86 compatibility to begin with.
More interesting is what would computing be like today if the Motorola 68000 had been production-ready when IBM was designing the original PC systems. Up until it became evident that Motorola couldn’t meet IBM’s timetable, the 68k was in contention as a competitor to the 8086 for the original PC, and had it been chosen instead the world would be a very different place right now (Motorola would be much bigger, Intel would be smaller, PowerPC CPU’s as we know them probably wouldn’t exist (Motorola was a significant factor in their original development, but part of that was because they didn’t have any killer product lines at the time), etc.).
ahferroin7,
I imagine you are right about this. Business partnerships made all the difference for the x86, which wouldn’t have otherwise been the best technical choice. I can’t remember where I had read an engineer’s account of the early days with x86, but he said it was more of an application specific design at the time and they never intended for it to become a defacto computer processor standard.
https://en.wikipedia.org/wiki/X32_ABI
On a compiler level and a software engineering level, it’s easier to maintain only the newer architectures (especially for security) and the advantages of 64 bit and up instructions instead of the older instructions will make software even better once it is left behind.
It’s not only about memory, but speed of processing and security.
Updates can also be released more quickly because less testing and compatibility work is required, whilst IDEs can move beyond the 1990s too…
I don’t know about you, but when you install Visual Studio (latest and greatest), you need to install a lot of legacy support software just to build UWP, so yeah.
If they’re so primitive?
https://en.wikipedia.org/wiki/Coelacanthimorpha
Patents on the open. See the pattern…
Nowadays, Satya could easily say: thanks!
SatyaN. himself ..
Steam and its games. I’m going to bet that 99% of them don’t have 64-bit binaries. Hell, even after all these years of 64-bit processors being the default, most games are still compiled as 32-bit.
Fuck games. If you wanted those, you’d use a console or windows.
Oh I agree with you. In fact I see little to no reason to use a Mac at all. I was just giving an example of why you’d want to keep 32bit around.
To be fair, I do use Linux for gaming where I can. Simply because I have the hardware, where you can’t actually get game-worthy hardware for macOS.
If you’re a gamer, then Mac is clearly not the platform of choice. But if you’re a developer, it’s a great platform. I was surprised at how many of my coworkers in aerospace had moved to Mac (not just me!).
Um, hardware isn’t an issue to the degree you would think. The only big hardware issue is that they insist on using Intel GPU’s on most of their systems except the Mac Pro’s, but even that isn’t as much of a handicap as you would think (the only recent triple-A game I’ve seen that doesn’t run well enough to play on a 6th or 7th gen Core i5’s integrated GPU is Evolve Stage 2. 2016 DOOM, Warframe, Overwatch, Borderlands (except the first which has a shitty renderer), Mass Effect (again, except for the first), Saints Row, Assassin’s Creed, and most MMO’s all get a reasonable 40-50 FPS on the low settings on a 6th-gen i5’s iGPU, which is absolutely playable despite what most hard-core gamers might say).
The issue for most people with gaming on macOS is the insane latency in the input drivers. Apple’s stock keyboards have some of the lowest hardware latency around, but their input layer is so bad that it pushes the latency to almost double what you see on Windows or Linux, and near triple what most console systems have.
For a lot of applications, moving beyond 32-bit has only small benefits.
Users have very little to gain from this. All it will do is needlessly break apps, and save Apple a bit of money. In the meantime, some programs you’ll never get an update for. Only Apple has something to gain.
If you want eternal compatibility (Well, near eternal), stick with Windows.
Based on Apple’s history, this is actually on schedule for a platform transition, even if partial.
The first mac came out in ’84. The first PowerPC Mac came out in ’95. The first Intel mac came out in ’06. Now it’s 2018 – just about time to excise the previous architecture. It’s not like the writing hasn’t been on the wall for years.
Users gain, also. Less work spent maintaining old API and kernel interfaces means more work can be spent on maintaining and improving 64-bit interfaces.
Also, several threat-mitigation techniques are improved with 64 bits, meaning better security for users.
The first x86 mac came out in 2006, but x86_64 was already available then… It was Apple who chose to go with the 32bit core duo series for their first x86 laptops and imacs… The first mac pro was 64bit right from the start, as was the second generation of macbook.
The G5 was also 64bit, the 32bit macbook was actually a step backwards…
They could quite easily have gone direct to 64bit x86, and never had to worry about 32bit compatibility at all, but it’s all because intel’s only competitive laptop chip at the time was 32bit… They would have had to go with a power hungry p4 chip, or used an AMD processor in their laptops.
The G5 was also never available in a mobile device, which (iirc) was one of the reasons Apple decided to switch to Intel at the time.
Outside of x86, yes, there’s really not much benefit unless you are handling very large amounts of data or need to deal with large numbers (though TBH, there are a lot more things that need to handle 64-bit integers than you probably realize, especially since files larger than 4GB are not all that uncommon).
On x86 though, the 8 extra general-purpose registers can actually have a pretty serious impact on performance of an application because the base register set is absolute shit (4 registers that all have odd restrictions on how they can be used as a result of the original hardware implementation).
Then tell me why 80% of x64 laptops are sold with just 4GB of RAM? What’s the point?
Because OEMs and retail stores will foist the cheapest crap they can on to unaware customers, that’s why. And then they can make a fortune on selling RAM upgrades to those same customers who don’t know any better.
Again, as I said in the second half of my comment, you get 8 more general purpose registers when running in Long mode (64-bit mode) on an x86 CPU, which can be pretty damn significant in terms of performance. For reference, 16 and 32 bit x86 have 4 GP registers (which aren’t even entirely general purpose in the original ISA, as they have odd hardware level restrictions on which instructions use them for what), while the Motorola 68000 has 8, ARM has 15 (pre ARMv8) or 31 (ARMv8 and newer), MIPS has 32, SPARC has 31, PPC has 32, and IA-64 (Intel’s now defunct Itanium ISA) has a whopping 128. More general purpose registers means you need to make fewer memory accesses when working with small amounts of data (or don’t have to regularly load and store frequently used values), which is a huge performance boost in many cases.
You also get a measurable boost in performance for math operations involving potentially large numbers (which is also pretty big, as a lot of I/O calls use 64-bit numbers so they can deal with files bigger than 4GB), and moving data to and from memory becomes a bit more efficient in some cases (this really depends more on the memory controller and how the memory modules are connected, but in general a single 64-bit load from RAM is going to be more efficient than two 32-bit loads, even if it’s just because the CPU only has to execute one instruction instead of two).
There’s also the fact that many of said 64-bit laptops support more RAM; they just don’t ship in such a configuration. And while 32-bit x86 can handle a larger hardware address space through PAE, handling that is a pain in the arse for OS developers and can actually hurt performance pretty significantly relative to not using it. More importantly, purely 32-bit x86 consumer CPUs aren’t really produced anymore, beyond some Intel options that are more solidly targeted at ultra-compact embedded designs but for some reason still get used by laptop manufacturers.
Well, the 32-bit memory barrier also depends not only on PAE but on the BIOS’s ability to offer memory remapping, which not every BIOS allows. My AMD A8 x64 gets stuck at 2.25GB of RAM under Windows XP, no matter what, because of this stupid BIOS limitation, even though the machine has 8GB of memory.
I also understand your register concern. I’m a 68k and SH-4 assembly coder, and I know only too well how much these ISAs are vastly superior to x86, which despite its flaws spread its backward architecture at a rabbit’s pace. AMD64 only closes the gap a little.
I’ve done some quite challenging graphics processing, using byte/word register swapping to avoid memory accesses as much as possible, and abused 64-bit registers to do fixed-point color-scale dithering, with the expected throughput increase, so I can confirm it works.
But the memory consumption – God, Windows x64 is just such a resource hog. Chrome or Firefox with ten tabs open will make any PC with “just” 4GB crawl like a dry snail. How can that be possible, how can that be acceptable?
Actually, that’s the web browsers, not Windows (Chrome on my Windows system sits at roughly 750MB resident with the 16 extensions I use, but is still about 720MB on my 64-bit Linux laptop with the exact same set of extensions and tabs), although Windows is pretty damn bad too (about 20–50% higher memory usage than an equivalent set of services on Linux).
The problem is that memory efficiency isn’t really a developer priority unless they’re working with particularly small systems, because it’s not something that end users really notice in most cases (that is, if there are memory efficiency issues, they show up as performance issues for most end users). For the example I gave above with Chrome, that 30MB difference is essentially nothing as far as most developers are concerned (and TBH, with 16GB of RAM, I consider it not worth worrying about either).
OS software must evolve with hardware changes. It sucks that some software won’t be updated, because the developer is out of business or loses interest in the product, but that’s life. One thing that’s always the same is change…
For those that still want to stick with old software, because they like an old game or other software, there is emulation/simulation available. It sucks, for those that need to use old software, but that’s life…
Of course, you can always continue to run old platforms on the old hardware as well, as long it keeps working…
Personally, I’m still bummed that MiniDisc is no longer a current product…:-(
BTW, I run a Virtual Windows 7 install, just so I can run the old Sony MiniDisc software with my MiniDisc devices, that support it.
Hum…..
I believe that Windows 7 reaches its end-of-support-life next year.
Would you continue to run this Virtual Windows 7 set-up to maintain your MiniDisc collection?
Wine on Linux
Actually in 2020. But close enough.
The only positive aspect of the story is that users will be warned, ahead of 32-bit support completely disappearing, that the application just launched will not run on the next macOS upgrades.
Depending on the set of applications the user has, there will be a choice between:
– freezing macOS to the current version to retain 32-bit support
or
– shelling out much money to upgrade the most valuable applications, if the vendors are still in business and continuing to maintain these applications.
Another downside – what about all those games one may have collected over the years?
Backwards compatibility has never been an Apple strength… (?) Just look at Adobe software…
Remembering that the original Mac came out in 1984, a G5 Mac shipping in 2006 could run almost all Mac software released up to that date. The Classic MacOS was a flawed product (necessarily of its time) and most of its internal technologies weren’t changed much after System 7 (with a couple of exceptions), so the Classic environment could run almost everything that shipped.
Even after the Intel transition, a Mac running Snow Leopard could still run almost every OS X title released to that point. Apple’s penchant for breaking compatibility came with the release of Tiger. Sure, Classic didn’t make it to Intel, but I suspect that was more due to technical issues.
In and of itself, I don’t mind this so long as there remains a viable way to run the older software without needing old hardware.
A modern mac purchased today can run ancient macos versions and associated software under emulation, with performance superior to the real hardware.
Apple had MacOS Classic running on top of x86 DOS in the early 1990s ( https://en.wikipedia.org/wiki/Star_Trek_project ), so it’s probably unlikely that any technical issues stopped them a decade+ later…
And an old relational DB manager from Lotus. I won’t learn another package for the now-and-then, very casual need.
Setting up a quick Win7 32-bit partition image on an SSD (won’t go virtual for DB jobs), handling indexes and tables on a RAM disk: fast.
Mind, it’s what remains of my main asset. Working hard not to repeat myself.
Suspecting lots of 16-bit blocks are impeding the carrying of this legacy over to Win10.
64 is twice 32, so it must be better. Right?
NOT!
Most of the apps that I run are written by .. me. All working fine in a 32 bit environment.
So what happens when you move to a 64-bit machine? Gah! Memory exhaustion. All those 32-bit pointers, which were largely innocuous before, now become 64-bit pointers, for which almost half of the bits are zero. So if your app uses lots of pointers, like mine do, the memory footprint has nearly doubled, all in order to store zeroes.
At least for me, going to 64 bits means that nearly half my memory is wasted. YMMV.
So you’re seriously dealing with enough pointers that it actually matters that they double in length on a platform that pretty much universally has at least 4G of RAM these days?
If that’s the case, then you probably need to be re-evaluating how your code is written, as it can almost certainly be made far more memory efficient. Even the Linux kernel doesn’t have double the memory footprint when built 64-bit that it does when built 32-bit, and it uses pointers all over the place and it has a very large number of other data structures that are larger on a 64-bit kernel.
I presume this means you’re storing a “char” (8 bit) as a 64 bit value?
If your code doesn’t distinguish between bytes, short and long ints, perhaps you should put the keyboard away before you hurt someone.
Unless your code is spewing off pointers by the bucket, the extra 4 bytes per pointer shouldn’t be costing you more than a kilobyte or two (256 pointers will cost you 1k. Are you really using thousands of variables in your code??).
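For what it’s worth, the arithmetic in that parenthetical checks out; a quick sanity check (assuming the usual 4-byte vs. 8-byte pointer sizes):

```python
# Extra bytes per pointer when moving from a 32-bit to a 64-bit build:
EXTRA = 8 - 4

# 256 pointers -> exactly 1 KiB of extra memory, as claimed above.
assert 256 * EXTRA == 1024

# Even a million live pointers is only about 3.8 MiB of overhead:
print(1_000_000 * EXTRA / 1024**2)
```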
It’s about time one of these Windows 10 big updates did a similar thing and warned of the deprecation of 32-bit support. It still annoys me that because there’s no schedule to do so, we’re *still* seeing 32-bit-only new applications being released.
I’ve been on 64-bit desktops in Linux and Windows since 2005 and Linux is so much further down the 64-bit road than pitiful Windows is.
At the very least, Microsoft should pull all 32-bit Windows 10 ISOs to stop it being installed on new kit (how many years has it been since a new desktop or laptop wasn’t capable of running a 64-bit OS? Must be well over 5 years now).
The huge mistake Microsoft made was to keep selling people on 32-bit Office until very recently.
If you called them, they would tell you to buy the 32-bit version (as of the 2013 release), and furthermore, even the latest and greatest Office 365 installs as 32-bit by default.
They sort of dug themselves a bit of a grave, although I don’t blame them; they have been keeping a gigantic Office legacy codebase compatible with modern code.
But I agree, they should get rid of the 32 bit versions of everything even if it breaks a bunch of very old software.
Why though?
The only disadvantage to supporting 32 bit software on a 64 bit OS is having to keep 32 bit userspace components around. And most of them are just 32 bit recompilations of essentially the same 64 bit code anyway, which has a side benefit of helping make sure the code is portable and relatively clean.
You have the option in server editions of windows to not install the 32 bit compatibility layer. It saves some space. That’s about it.
32 bit isn’t bad. 16 bit isn’t bad. 8 bit isn’t bad. If it does what it needs to and is relatively easy to maintain, what’s it matter?
Ironically, Office on the Mac is 64-bit. On Windows, I think it’s legacy and bad code that really holds back 64-bit Office. Take for example Microsoft’s own platforms such as GP and CRM: all recommendations are for 32-bit Office, due to compatibility problems using Microsoft Office on a Microsoft Windows client, connecting to Microsoft GP, which in turn is running on Microsoft Windows Server with a Microsoft SQL Server back end.
to all the naysayers, i’d say instead of clinging to the backwards compatibility of binaries, advocate open-source and/or sustainable release models instead.
if you’re still running on 32-bit hardware, it’s likely that you can’t even run the latest versions of your applications – which means fewer features and more security holes – or they run like total crap.
if developers release their work as open source and only charge for support, or they provide the software at a fair price with a free/affordable upgrade path (instead of the asinine IAP and ‘pay for real features and bling’ mentality), migrating to a newer architecture is totally worthwhile.
so yes, this is totally a good thing, especially in a world where binary translation and emulation can take the place of backwards compatibility in almost every scenario.
Well, I’m still running a 2007 Via C7, Windows XP based computer right now, because it has the drivers for my 2002 scanner installed.
Do you really imagine people would abandon their legacy usage just because 64 bits and blocked updates exist up there?
I’m sorry, but people shipped the bugs and security issues in the first place, and now they’re asking me to update my whole system at my full expense to cover their ass.
Old tech still works the way it was intended to; people still play on an original NES and have fun with it. I see no point in playing this planned-obsolescence game.
You have a good point here.
How many peripherals will be orphaned by their manufacturers, with no updates to the driver and its value-added software?
For some, like yourself, it makes sense to maintain an older system running to keep using key peripherals. For others, maybe a “virtual machine” might do the trick.
Another possibility would be for manufacturers to open-source the code for the “obsolete” drivers if they are not willing to keep them current with the newer requirements of latest operating systems.
Having open source drivers for these peripherals is extremely important…
I have several printers and scanners which came with official drivers for windows and macos, these drivers were 32bit and powerpc respectively, they don’t work anymore on modern versions. Modern Linux distros however support these devices out of the box, even on 64bit or ARM.
I used to use an Alpha workstation for my desktop and open source drivers for all kinds of hardware which never officially supported the alpha would run just fine.
Nowadays when I buy peripherals I check for open source drivers before I make the purchase, or I look for devices which support open standards and don’t need custom drivers (eg postscript when it comes to printers).
I have some old printers which support postscript, either via ethernet to parallel adapters or via their built in 10baseT ethernet controllers. Virtually anything will happily print to these printers despite their age. I can also print to any modern postscript printer using an ancient os if necessary.
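As a sketch of how little a raw PostScript printer actually needs, here is a minimal job sender; the host name is hypothetical, and port 9100 is the conventional raw/JetDirect port that most networked printers listen on:

```python
import socket

def make_postscript(text: str) -> bytes:
    """Wrap one line of text in a minimal one-page PostScript program.
    NOTE: real code must escape (, ) and \\ inside the text string."""
    ps = (
        "%!PS-Adobe-3.0\n"
        "/Courier findfont 12 scalefont setfont\n"
        "72 720 moveto\n"
        f"({text}) show\n"
        "showpage\n"
    )
    return ps.encode("ascii")

def print_raw(host: str, data: bytes, port: int = 9100) -> None:
    """Send a job straight to the printer's raw socket; no driver involved."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(data)

# print_raw("printer.local", make_postscript("Hello from any OS"))
```

Because the job is just text over TCP, any OS with a socket API, however ancient or exotic the architecture, can drive such a printer.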
There are lots of reasons, physical ones, supporting the idea that, resources being equal, you can build stronger security on 4 bits than on 64.
I’m just concluding that moves like this (Apple being far from the only one) are purely cattle-herding policy, built on governmental privileges nearing obsolescence.
“I would prefer other companies, too, take a more aggressive approach towards deprecating outdated technology in consumer technology.”
But Google actually does this, and Thom, you complained the other day about how much of a mess it is.
I really hope that now that we have Meltdown and Spectre, there is a chance to reinvigorate the x64 space with new, non-compatible OSes in a few years, so we can buy shiny new things instead of our i5s and i7s.
And the only non-Apple 32-bit binaries I am using are from Adobe and Drobo.
All the Drobo 3 binaries (even the latest ones) seem to be 32-bit.
These are the sort of companies these warnings are aimed at. Time to get thy fingers out, Adobe and Drobo.
Shouldn’t new hardware be the riskier choice, basically because it’s unexplored functionality? If older hardware is broken, then it was always broken. Hiding broken design by obscurity is not security.
Apple is great if you fit into their box. They tell you what you want (you don’t need a USB port, you don’t need a headphone jack, you don’t need 32-bit support…) and if you agree (i.e. you like their products and like having the latest version of stuff) then fine. If you like anything out of the ordinary… not so good. It’s good we have choice in the marketplace. Windows is more ragged than Mac but much better for long-term support and range of software. Linux is there if you don’t need to use commercial software or can virtualize it, but you probably have to be a little more savvy. Mac is great if you don’t want to think about the machine at all (and are happy to pay for the privilege) and don’t do anything bespoke.
A USB port or headphone jack is completely different from 32-bit support. What’s the point of maintaining 32-bit apps? Porting an app from 32-bit to 64-bit is simpler than interfacing a USB-C device to a USB 3.0 port. If you use some severely outdated piece-of-crap software that the developer never bothers to update, then stick with the old-ass caveman hardware too.
Linux and compatibility? It’s a complete joke. Linux apps will usually break a week after you compile them and update any library in the system – that’s why no serious developer will ever support Linux unless they fix their compatibility problems. Shipping .so files or static linking is a laughable “solution” – you miss various things. They drop support for various things in the kernel all the time, because – it’s magic – there are no infinite developer resources.
Probably because not every piece of software needs more than 4 GB of memory. Things worked pretty well until recently; 32 bits is enough for Word and the like. If such apps need to be 64-bit, there are questions pending. Not everyone runs a server farm in their garage. At least 32-bit apps are more easily sandboxed inside a 64-bit system. Better security it is.
You don’t always need 64-bit, but the frequency with which you do need it is increasing every day. Consumer laptops now frequently come with more than 4 GB of RAM, and a browser with many tabs open can easily consume more than 4 GB. And remember, the 4 GB limit is on address space, not the total RAM usage of a process.
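The 4 GB figure falls straight out of the pointer width; a quick check (purely illustrative arithmetic):

```python
# A 32-bit pointer can name 2**32 distinct byte addresses: exactly 4 GiB
# of address space per process (less in practice, since the OS typically
# reserves part of it for itself).
addressable_32 = 2 ** 32
print(addressable_32 == 4 * 1024 ** 3)  # True

# A 64-bit pointer nominally addresses 16 EiB (1 EiB = 1024**6 bytes),
# far beyond any physical RAM, which is why the limit disappears.
print(2 ** 64 // 1024 ** 6)  # 16
```
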
Having both 64-bit and 32-bit support requires support in the kernel, two sets of userland libraries, etc., and the 32-bit libraries will contain support for more legacy features (anything that was deprecated before 64-bit was introduced likely won’t have been compiled into the 64-bit builds).
So yes, individual 64-bit apps may consume more resources than 32-bit ones, but having a mix of 32/64 and all the legacy baggage associated with 32-bit libs going back 20+ years could actually result in higher resource usage than a pure 64-bit system.
Then there are the quirks of amd64, where 64-bit mode adds a lot more registers, for example… The lack of registers in 32-bit mode can be a performance bottleneck, which is eliminated by running in 64-bit mode. Many programs run faster even if they don’t take advantage of any other 64-bit features.
By only supporting 64bit you also rebase the lowest common denominator, there are more cpu features that you can take for granted and use without having to have multiple code paths to support older processors.
There are many benefits to moving towards pure 64-bit… The stupid thing for Apple is that they never should have supported 32-bit x86 at all… Microsoft has a long legacy of 32-bit x86 support, but Apple moved from PowerPC to x86 *after* amd64 was already established. They could very easily have made OSX 64-bit only right from the very first non-PowerPC version.
I understand your concerns about 64-bit performance and the bloat of 32-bit support. However, I have to wonder why browsers need so much memory nowadays; web pages don’t feature 4K pictures. Coders should be more frugal about memory consumption.
Apple chose Intel because of the deal, better overall performance and power economy in 2006 compared to AMD’s offerings, and also the integrated Wi-Fi that AMD was lacking (the whole Centrino stuff).
Apple made the transition in early 2006, when the Core 2 Duo would only become available later that year, so the first Intel Macs shipped with 32-bit CPUs.
Was there really something to downvote there? God, tech people’s beliefs are fragile these days.
High memory consumption is due to aggressive caching strategies that get you faster reload times. Many people visit the same sites throughout the day, and to improve load times it is reasonable to cache whatever is possible and keep it in memory. If you load a web page that is 40 MB, the next time you visit it can load as little as 1 MB, and no, it won’t use the disk cache when you have enough RAM available.
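A toy sketch of that trade-off (class name, URLs, and the 40 MB figure are all made up for illustration): keeping fetched resources in RAM makes a repeat visit cost nothing on the network.

```python
# Toy in-memory resource cache, illustrating why browsers trade RAM
# for reload speed. Real browsers also cache decoded images, compiled
# scripts, etc., which is where much of the footprint comes from.
class ResourceCache:
    def __init__(self):
        self._store = {}        # url -> bytes kept resident in RAM
        self.fetched_bytes = 0  # bytes that had to come off the network

    def get(self, url, fetch):
        if url not in self._store:
            data = fetch(url)
            self.fetched_bytes += len(data)
            self._store[url] = data
        return self._store[url]

def fake_fetch(url):
    return b"x" * 40_000_000  # pretend every page weighs 40 MB

cache = ResourceCache()
cache.get("https://example.com", fake_fetch)  # first visit: 40 MB fetched
cache.get("https://example.com", fake_fetch)  # revisit: served from RAM
print(cache.fetched_bytes)  # 40000000 – the second visit fetched nothing
```

The RAM cost is exactly the cached bytes staying resident, which is the memory consumption the parent comment is asking about.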
That’s the key thing, though; it’s why a lot of people (not all) purchase Apple equipment.
You purchase it knowing it has a set working life, but during that life it’s going to be fully supported in both software and hardware. Like you say, you purchase apps from the vendors and buy into their support cycle as well; if you use the Mac for work, that’s pretty much how you will see that device and others, much like a company van on lease.
As others have mentioned, Windows is the other route: you get one year of support from the company supplying the hardware, and you can then pay for support for Windows, but people can run software and hardware for as long as they wish. If something breaks, they either pay Microsoft for help or replace the item (“sorry, we don’t sell that scanner anymore, purchase our new one”), but you have more control.
Personally I use both, and I see the benefits in both; with Apple I know I’m on a treadmill, and I accept it because it works well for me. However, I do use my Windows PC to play older games and run older software.
Checking my Mac, I see that all of the software I use is 64-bit already, so I won’t be affected by the change; one of the last holdouts for me was Dropbox.
On one hand I completely understand the rationale behind this, and I can’t even disagree from a technological standpoint. However, this is going to break a lot of stuff, particularly driver and associated applications that go with those drivers. A lot of perfectly working hardware will end up useless because of this change, since manufacturers don’t bother to update their drivers most of the time.
An unfortunate side effect of this is that it will kill off the last version of the Adobe Creative Suite that was available as a one-off purchase, rather than on Adobe’s horrid rent-to-use subscription basis. Lots of people are still using CS6 since it works fine – it’ll be sad for MacOS to kill this off.