Two browsers for old Mac OS X and classic Mac OS releases, developed by the same developer, are shutting down. TenFourFox, the browser developed specifically to give PowerPC Mac users a modern browser, is the first.
I’ve been mulling TenFourFox’s future for a while now in light of certain feature needs that are far bigger than a single primary developer can reasonably embark upon, and recent unexpected changes to my employment, plus other demands on my time, have unfortunately accelerated this decision.
TenFourFox FPR32 will be the last official feature parity release of TenFourFox.
Today is a one-two punch, because Classilla, too, is calling it quits. Classilla is a modern-ish browser for Mac OS 9 and 8.6.
An apology is owed to the classic Mac users who depend on Classilla as the only vaguely recent browser on Mac OS 9 (and 8.6). I’ve lately regretted how neglected Classilla has been, largely because of TenFourFox, and (similar to TenFourFox in kind if not degree) the sheer enormity of the work necessary to bring it up to modern standards. I did a lot of work on this in the early days and I think I can say unequivocally it is now far more compatible than its predecessor WaMCoM was, but the Web moves faster than a solo developer and the TLS apocalypse has rendered all old browsers equal by simply chopping everyone’s legs off at once. There is also the matter of several major security issues with it that I have been unable to resolve without seriously gutting the browser, and as a result of all of those factors I haven’t done an official release of Classilla since 9.3.3 in 2014.
It’s an inevitable consequence of just how complex the web and web browsers have become. Single individuals – or even a small group of people – simply cannot maintain a modern web browser, let alone two, let alone on two outdated platforms. A big hit for PowerPC Mac and Mac OS 9 users, for sure.
Modern HTML (along with CSS, JS, and WebGL) is not a document standard anymore but an API, which means it has the breadth and complexity befitting an API.
Modern webpages have long abandoned any concept of “document-ness”; they are apps. I wish we could get plain HTML pages, without even CSS (that stuff should be compiled server-side, IMO).
BTW, the Mac community should start worrying about the retirement of Intel Macs. My guess is that Intel Macs are going to receive only one more major macOS version, just like PowerPC Macs did.
have a look at Gemini:
https://en.wikipedia.org/wiki/Gemini_space
Citation needed. Apple are still producing Intel Macs at this point in time, so this speculation doesn’t quite make sense. When Apple stops producing Intel Macs, as I’ve no doubt they will, then the idea that there will only be one more version would certainly fit with Apple’s past track record.
darknexus,
My prediction was that apple would continue x86 in the form of the mac pro, but would keep it priced out of the market for normal consumers. The mac pro fits the bill here: an entry-level system starts at $6k, and those who want good specs need to shell out $10k-15k. Assuming apple discontinues all x86 laptops, the only remaining x86 systems will be too unaffordable for everyday consumers, making M1 the only practical choice for them.
I’m not privy to why apple does anything, but I think apple would have pleased a lot of users by offering AMD systems. It’s kind of surprising that apple doesn’t, given that AMD has the most powerful CPUs on the market today. I wouldn’t be surprised to learn that apple’s x86 contracts give intel exclusivity, though.
Apple never went with AMD because AMD never had the capacity Apple demanded to be considered as a source. Furthermore, at the time Apple made the switch to x86, AMD did not have a clear low-power strategy, and they only got their act together again on the CPU front very recently with Ryzen. But by that time Apple already had their own cores in an advanced design stage.
Apple made the right choice by not going with AMD, especially right now given how most of AMD’s CPU capacity is taken up by their console contracts. At this point Apple has a core design which is competitive with Intel’s or AMD’s best, so they don’t need to depend on a 3rd party.
AMD don’t need apple either, especially nowadays. AMD have done very well out of supplying console chips for the last 2 generations (8 years), providing them with a good source of guaranteed income. At this point in time, consumer hardware is almost a sideline for AMD, with much of their profits coming from consoles, where margins are low but volume is high, and high-end servers/supercomputers, where margins are high but volume is low.
javiercero1,
Except that apple outsources to the exact same fabs that AMD uses. Clearly apple wants to bypass AMD and keep more profits for itself, but practically speaking AMD can scale up as easily as apple can. And besides, for things like the mac pro, I don’t believe apple has much volume anyway, even for AMD. Nowadays the whole industry is in a mess, since components have to be back-ordered by possibly as much as a year. But as far as demand goes, I am positive that even to this day many apple customers would love to have an AMD option, especially in the mac pro segment, where AMD has the fastest CPUs available.
@The123king
AMD most definitely needs all the customers they can get. Had it not been for Apple’s GPU contracts, RTG would have been in even worse shape.
AMD’s main profit line being their semi-custom console contracts is a problem for them. They desperately want to break into the server space, which is where the real profits come from. Unfortunately, as good as Epyc is, it is still having a hard time competing with Intel, as AMD really needs an actual software/services strategy.
@Alfman
Using the same fab is only part of the equation. Given the nature of their contracts, Apple has a higher priority within TSMC (i.e. they have been the launch/risk partner on the past few nodes; by the time AMD gets to 5nm, Apple will have been on it for a year). Also, AMD has to contract most of their fab capacity for the console SoCs, and given how most of the high-end Ryzen parts are MIA, I don’t know if AMD can scale production as easily as you assume. But there’s also the factor of the pandemic.
I don’t think Apple is bypassing AMD, since they have never used their CPUs.
And yeah, it would have been nice to have a Threadripper Mac Pro. But let’s not forget that the Mac Pro uses AMD GPUs, with Apple being a lifeline for the RTG.
javiercero1,
Sure the pandemic is screwing over everybody, which would clearly impact decisions made today. But apple’s decision to avoid AMD and go it alone with their M1 predates the pandemic, so obviously the pandemic was not a causal factor in these decisions. There’s no reason AMD couldn’t have reserved as much capacity as needed to fulfill an apple contract if apple wanted this.
It’s also worth considering that AMD’s personal computer market share is greater than apple’s, meaning they’re already shipping more computer CPUs than apple does, so I think it would be realistic for them to be a viable apple supplier if apple had wanted to provide an AMD option for their customers.
https://www.tomshardware.com/news/amd-vs-intel-q3-2020-cpu-market-share-report
Yes, but ironically AMD isn’t the forerunner when it comes to GPUs. A lot of mac users wish apple would allow them to use nvidia cards. A lot of professionals doing work with cuda are left in the cold. There’s even a petition.
https://www.provideocoalition.com/officially-official-nvidia-drops-cuda-support-for-macos/
https://www.change.org/p/tim-cook-apple-publicly-commit-to-work-with-nvidia-on-drivers-for-mac-os-10-14
javiercero1,
Hypothetically it could be a problem to be overly dependent on console contracts, but that doesn’t seem to be the case here. The market is gobbling up AMD CPUs as quickly as they’re being stocked. Looking at enterprise markets specifically, AMD is growing its enterprise chip business faster than its semicustom business (the console contracts you referred to).
https://marketrealist.com/2019/11/amd-gains-data-center-cpu-market-share-intel/
And from this year…
“Nvidia, AMD see rising sales from server sector”
https://www.digitimes.com/news/a20200630PD213.html
To look at this a different way: AMD’s stock went up 2111% over the past 5 years while intel’s went up 11%. And for comparison, apple is up 421% over the same time. Who knows, AMD’s joyride might not last, but most metrics point to AMD doing exceptionally well!
@Alfman
Apple has a 14% share of the PC market, to AMD’s 20%. Apple puts a lot of pressure on its supply chains, and it is doubtful AMD had the capacity to be considered by Apple as a CPU provider.
In any case, the M1 was already in an advanced design stage by the time Ryzen was out.
And yes, AMD’s stock rise has been great. Lisa Su is a fantastic CEO and she’s been able to execute big time. But let’s also remember that the stock was almost at penny level at the beginning of the surge. It’s a remarkable turnaround, because AMD was basically on life support before her tenure.
I think the fact that AMD can’t keep up with the current demand points at the limits of their capacity. They really need more wins in the datacenter, but they really, really need to get a software and services strategy implemented ASAP (which right now they lack). Otherwise it doesn’t matter how much better EPYC is; intel is still going to get most of the large datacenter contracts with their roadmaps.
javiercero1,
I’m just sayin’ there would have been healthy demand for AMD, and for its part AMD contracts with the same fab that apple uses, so it would have been a completely realistic option. Obviously apple had different ideas, oh well. The consolidation around x86 kind of sucks, so I’m for M1 chips in the interest of competition. But my concern is that it may end up putting more roadblocks in front of FOSS and repairability versus x86 computers.
To be fair, AMD isn’t the bottleneck, it’s the fab. Fab capacity is increasing, and orders that were placed ahead of the surge never stopped being fulfilled; it’s only the new growth/demand that fabs can’t keep up with. A vendor’s shortages are going to be highly correlated with its rate of growth, and in AMD’s case that’s been very high.
I’m not really sure what you mean. Most enterprise data centers run their own software stacks. Off the top of my head I couldn’t name any software that I need intel to supply. As long as both intel and AMD work with linux, that’s honestly the extent of what I expect from either company.
@Alfman
It’s not just about using the same fab, you need to take into account how much of the fab you can afford to contract out and also there is a whole lot of stuff in terms of validation, testing and packaging. The end product is delivered by AMD not TSMC, so I don’t think Apple was ever confident in AMD as a source for their CPUs.
Apple got burned with supply capacity issues from IBM and Motorola when they were on PPC, which is why they are paranoid about that. One of the main reasons they went Intel was that intel was the only CPU vendor that could contractually guarantee specific supply volumes.
Regarding the data center. The procurement processes in large data center orders are a bit more complex than “it just runs linux.” There’s a whole lot of software validation and guaranteed roadmap deliverable/contracts that usually happen 2 to 3 years before deployment.
So even though AMD may have an edge in HW with EPYC, they are not even considered for some large contracts because they lack the software/services organization that Intel does.
javiercero1,
Other than your opinion, I’ve seen no evidence to suggest AMD wasn’t up to the task.
Obviously you didn’t read the links I sent earlier that contradict your assertion. In 2019 there was a 26.6% drop in console sales. Not only would it have been completely realistic for AMD to become an Apple supplier; if Apple had split its Mac Pro SKUs between AMD and intel, it might not even have been enough to make up for AMD’s lull in console sales.
That’s the thing: between all the consoles, enterprise systems, and consumer desktops, AMD’s numbers are much greater than Apple’s computer sales, so it’s no stretch whatsoever to think that AMD was a viable Apple supplier. And not for nothing, but as you stated, AMD is already the primary supplier of GPUs to Apple, and I see no evidence that AMD couldn’t deliver.
IMHO Apple’s decision to not use AMD CPUs was merely a strategic one and possibly even a contractual one with intel.
If you still want to disagree, fine, but there’s not that much evidence to support your opinion.
This is too vague; to understand your point I need specifics. An IT director can go to their favorite enterprise vendors, such as Dell, CDW, HP, etc., and choose AMD servers that are supported for enterprise use. It’s conceivably possible that some companies are dependent on intel software, but in my career on enterprise systems over the years (both windows and linux) I can’t recall a single instance of a company being dependent on intel software. Usually it’s microsoft, EMC, or other random enterprise vendors, which are typically compatible with AMD systems anyway. So while I concede that AMD might be able to expand its enterprise software offerings, I am not following your point about AMD CPUs not being sufficient for enterprise applications. Regardless, the data shows that AMD’s enterprise segment is growing very quickly anyway.
@Alfman
Holy shit dude, here we go again.
The fact that Apple never went or considered AMD as a CPU supplier is evidence enough that they never had either the products or the capacity to interest Apple.
When Apple decided to switch to x86 they went with Intel for 2 reasons: Intel had a clear low-power roadmap and could meet volume demands. AMD had neither of those things.
AMD was having tremendous issues with their fabs, to the point that they had to spin them off as GlobalFoundries in 2008. AMD was also teetering on bankruptcy and had cash issues for a long, long time, so they made even less sense for Apple to consider as a CPU source.
AMD didn’t get their act together until a couple of years ago, which is too late, since Apple had already started the shift towards their own cores as part of their roadmap.
Re: enterprise/data center. Vendors have to validate the software stacks that the customers are going to deploy, especially with things like virtualization, management infrastructure, and migrations. And Intel has an edge in the services infrastructure that is required to win those contracts. Intel provides a lot of support for their partners to do a full validation of the SW/HW platform, so they have a large services and software side (from compilers, drivers, support, consulting, etc.), whereas AMD has much less to offer in that regard.
Intel and AMD don’t just sell CPUs; they sell platforms and services for their integrators. Which is why, even though current Xeons are being beaten by EPYC in terms of HW, they’re still edging them out in the market. I assume AMD is working on getting that side of their enterprise division sorted out.
javiercero1,
That’s pure hand waving, not evidence.
That was a decade and a half ago, things change.
That’s what I’m saying: apple had different plans for themselves, and it had little to do with AMD’s ability to deliver. Apple’s strategy didn’t include AMD, that is all. Anyway, since there’s no clear evidence that AMD couldn’t have supplied apple chips via TSMC, I suggest we just agree to disagree. Fair enough?
From the sounds of it, you are not talking about AMD’s own software and services strategy; you are talking about 3rd-party vendors supporting AMD. But that’s just the thing: many of the mainstream enterprise vendors that are selling intel systems ARE supporting AMD systems too. So IMHO enterprise consumers can definitely find AMD vendors who will support them.
@ Alfman
Hand-wavy? AMD’s fab issues are a historical fact. During the 2nd half of the 2000s they had to spin off their fabs due to funding/capacity issues. They were still fabbing their CPUs via GlobalFoundries, which had lots of process/volume issues, until 2017. Currently, most of the TSMC capacity that AMD was able to afford to contract has gone to the console SoCs, and chiplets for EPYC are their second capacity priority.
So it’s not much of a stretch to see why from 2005 till 2017 AMD was not really a consideration as a CPU source for Apple, given their capacity issues (plus AMD did not have competitive CPUs until 2017, which, again, is too late for Apple to consider AMD as a source).
As for the software narrative: I am afraid you’re still not getting the point. It’s about validation and support. Intel has the infrastructure to guarantee that their large customers’ infrastructure will run on their platforms. Just because something may run on x86 doesn’t mean it is validated for AMD, which is something needed for a bunch of contractual stuff. It does not apply to small operations, which is probably what you’re thinking of. That is something that AMD has not done a good job at, which is why they have traditionally had such a small fraction of the data center space (where the big bucks are), even though their HW has been superior in some cases.
Most datacenter purchases are made years before the chip is even out, based on roadmap guarantees. That requires a lot of consulting infrastructure to engage with the customers at that level. AMD has a tiny group for that compared to Intel’s.
javiercero1,
You are digging up ancient criticisms that have no bearing on the AMD of today.
I appreciate that you really want to disagree with me, but given the fact that AMD and apple share the same fab, it seems very realistic to me that AMD would have been able to supply apple with CPUs (just like GPUs). Given that there’s only hand-waving and no hard evidence to back your opinion, I still think that we have to respectfully agree to disagree.
Except that many enterprise solution providers do in fact support AMD as they would intel. AMD is doing extremely well in the enterprise segment. I’ve already pointed out some enterprise vendors embracing AMD, including Dell, HP, etc., but we can also look at the big guns, including microsoft, IBM, Oracle, and Amazon. Whatever your criticism may be, much of the industry is already on board supporting AMD.
https://azure.microsoft.com/en-us/blog/announcing-new-amd-epyc-based-azure-virtual-machines/
https://www.oracle.com/cloud/compute/
https://www.ibm.com/cloud/blog/announcements/amd-7f72-bare-metal-workloads?mhsrc=ibmsearch_a&mhq=amd%20epyc
@ Alfman
2018 is not “ancient history.” AMD getting their act together and being able to execute is a very recent thing, and it’s only recently (2 months ago) that AMD has been able to afford larger capacity from TSMC. AMD is smaller than you think, which is why they never fit the profile that Apple needed in a CPU supplier. The fact that no major vendor, as of today, depends on AMD’s CPUs for the majority of their systems should give you a hint that AMD’s capacity is not there, even when they have the performance edge.
They are going to grow, no doubt about it if they keep the momentum with the Zen architecture. But it’s going to take at least another 2 years (the usual roadmap cycle) before any vendor thinks about depending on AMD as main supplier of CPUs.
You’re also not understanding what I’m trying to explain to you regarding the large enterprise/data center procurement process, like, at all. There’s a hell of a lot more involved than just a couple of vendor press releases offering/supporting AMD. I don’t have the time or the energy to give you a primer.
In any case, we’ve gone down this road before, and it’s abundantly clear to me that no amount of education and 1st-hand experience is going to get through that Dunning-Kruger wall. So I have learned my lesson, and I’ll invest my time better than engaging in any further discussion with you.
Cheers.
javiercero1,
No, but the period you referenced for AMD’s fab problems obviously was.
I don’t need a primer. You declined to elaborate on specifically what intel offers that AMD does not. You made an assertion about AMD being unsupported, but I supplied numerous examples where they are supported, and there are many more. I honestly don’t understand how you are coming to your conclusion, or even why you are this opposed to the notion that anybody could find AMD ready for the enterprise. Of course I have no idea who you are IRL, but it almost feels like you’re working for a competitor.
Cool. At the end of the day these are just opinions. If it’s any consolation, I don’t think your opinion is bad; I just didn’t find your evidence sufficient to change mine.
Historical precedent makes it a distinct possibility.
Intel macs came with OS X 10.4, and started with the low-end models before moving up to eventually replace the powermacs with the mac pro (the first-gen mac pro came with 10.4.7). After that, 10.5 was available for both platforms before 10.6 went intel-only (although 10.6 still had some vestiges of PPC code in it, it wouldn’t boot on PPC).
Support for the first models of 32-bit intel macs was also dropped with 10.7.
Not to worry about Intel Macs — there are plenty of free OSes for Intel platforms which can easily fill any holes left by Apple. PowerPC Macs did not have that luxury.
Linux also supports PowerPC. But anyone who has bought a Mac probably bought it to use it as a Mac.
There’s actually more OSes for PPC Macs than you think. Not only is there a plethora of Linux distros, but BSD has always had a decent following on PPC Macs. Add to that MorphOS (more in later years, mind you), and the options for 3rd-party PPC Mac OSes are actually really good.
“Not to worry about Intel Macs — there are plenty of free OSes for Intel platforms which can easily fill any holes left by Apple. PowerPC Macs did not have that luxury.”
Unless they have that nasty T2 chip, which renders them useless for anything other than macOS.
I bought an expensive Mac desktop last year, before the “Apple Silicon” announcement. A little bit of extra money right as the lockdowns began. Shame on me. Now I won’t have an upgrade path once Apple decides it doesn’t want to build amd64 macOS anymore. The latest Ubuntu can see the drive but doesn’t install correctly. I can probably use an external TB3 device, but then there’s a wasted 2TB internal drive not being used.
Peak Apple was 2017 for desktops, 2015 for laptops. It’s likely that this older 2015 MacBook Pro (OpenBSD) and my 2017 iMac (Debian) are still running long after the newer T2 devices have been forgotten.
That is a personal decision on the part of the web designer. There is a swing toward less complicated websites: websites that are HTML-first with some JS as needed.
An SPA or PWA is good when it fits the problem, but most sites don’t need to be an SPA or PWA. Thus SSR and static HTML are coming back into vogue.
Hosting on Netlify or a CDN is the hot way to host, and load times are the hot topic currently.
Webpages are still documents, sort of, which is the problem. They don’t have typesetting features, like LaTeX, which makes them less portable than they could be, and they don’t have pixel precision, like a regular UI toolkit does.
People aren’t just breaking the original document concept, especially since it was such a loose concept anyway; they are creating Rube Goldberg machines which run in the background to maintain the illusion. If we would accept that HTML is a lightweight markup language for user-generated content which should be embedded into a real UI framework that is nothing more than APIs, like Qt or GTK, it would be much better. HTML should be an input and not an output.
I work on the backend and sometimes have to build frontends. I opt out of all the JS-heavy stuff because I have better things to do.
Flatland Spider,
I don’t like the browser postback model myself. Vanilla HTML pages can leave a lot to be desired in terms of interactivity, with UI state constantly having to move between client and server. With .net, microsoft tried to alleviate the problem with the viewstate mechanism, which actually seems nice at first, especially with statically loaded controls on a single page, but in more complex applications with highly dynamic interaction, synchronizing state between client and server is still cumbersome and slow. I find that plain old HTML pages cannot compete with highly interactive client-side applications, and on top of this, eliminating all the postbacks results in significantly less server load. In some cases I’ve been able to consolidate dozens of postbacks for server UI generation into a single one by shifting the UI to the client.
I admit that creating interfaces in javascript isn’t my favorite thing to do; the Document Object Model is kind of old-fashioned and tedious. Regardless, IMHO client-side UI can produce a much better end result when done well! (It’s not always done well though, haha.)
Just as a random example, updating records with phpMyAdmin became so much friendlier once it got the ability to update values in place without having to do a postback and regenerate the whole UI. It doesn’t yet do client-side inserts, but that’s on my wish list, because it would be so much friendlier to add records without having to open multiple windows to see the output and input at the same time.
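The in-place update idea can be sketched framework-free: compute just the changed fields of a record and send them in one small request, instead of posting back and regenerating the whole page. This is only an illustrative sketch, not phpMyAdmin’s actual code; the `/api/records/:id` endpoint is hypothetical.

```javascript
// Compute the minimal set of changed fields, so an in-place edit can be
// saved with one small request instead of a full-page postback.
function diffRecord(original, edited) {
  const changes = {};
  for (const key of Object.keys(edited)) {
    if (original[key] !== edited[key]) {
      changes[key] = edited[key];
    }
  }
  return changes;
}

// In a browser you might wire this to a table cell's blur event and
// PATCH only the delta to a (hypothetical) endpoint:
async function saveCellEdit(id, original, edited) {
  const changes = diffRecord(original, edited);
  if (Object.keys(changes).length === 0) return null; // nothing to save
  return fetch(`/api/records/${id}`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(changes),
  });
}
```

The server only has to validate and apply the delta, which is where the reduction in server load comes from: no viewstate round-trip and no regenerating the whole page per edit.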
Oh yeah no, I don’t do that. That’s just as awful.
I don’t reach for react, angular, or vue.js to build a web page. I use JS as needed on the client side where it makes sense, but the web page is mostly HTML and CSS with some JS, rather than a giant JS blob.
That’s where they went wrong with HTML: give ’em more power, and they will misuse it without responsibility. HTML’s extensible nature was a major mistake, because unlike data-exchange formats, an openly-defined markup standard means that once an extension is added, everyone has to implement it, otherwise things won’t render correctly on the browsers that don’t implement it, and those browsers will be perceived as “inferior”. This has led to a race to add moar and moar things to browsers, often without much care and foresight, because browser vendors rush to claim dibs on a certain feature (and the way it’s implemented as a tag) before anyone else, which causes what Thom observed in his article. It’s why the wretched “script” tag became an industry standard despite breaking HTML’s beautiful DOM graph.
Where? Have some examples of mass-market websites? All I see is moar and moar JS everywhere.
Apple has sold more intel macs than PPC. And PPC was supported for more than 4 years after the switch to intel. So x86 mac users are going to be just fine.
Furthermore, the transition from document-based web pages to graphical distributed apps is similar to the transition from CLI to GUI back in the day.
Apple doesn’t care, they will discontinue support anyway.
For anyone using retro computing platforms, it’s far easier and more compatible nowadays to use Web Rendering Proxy (https://github.com/tenox7/wrp). Put it on a modern desktop in your house, or on a Raspberry Pi, or even on a VM in the cloud. It connects to modern websites with modern crypto for you and renders the pages to PNG or GIF to send to your vintage browser.
Cameron Kaiser has done significant work on both projects, and HyperLink for the Commodores.
If it wasn’t for him we’d be worse off. He’s one of the nicest people, too.
I want to see more of the modernization work Jake Hamby’s been posting about on Twitter: getting newer Firefox versions running on G4 and G5 chips under Linux. It may help keep some of these older machines online.