Building on its industry-leading A-series chips for iPhones and iPads, Apple wants Macs with its custom silicon to have the highest performance with lower power usage. Apple says the vast majority of Mac apps can be quickly updated to be “universal” with support for both Intel-based Macs and those with Apple’s custom silicon.
Starting today, developers will be able to apply for a Mac mini with an A12Z chip inside to help prepare their apps for Apple’s custom silicon. The special Mac mini will be running the macOS Big Sur beta and the latest version of Xcode.
The news everyone knew was coming. The transition will take roughly two years, and the first consumer device will become available later this year.
There are a whole lot of mentions of “performance per watt.”
That doesn’t bode well for high-end users, at least for a while. Still, I’m excited for the transition. It should keep things interesting.
Maybe AMD’s mobile Zen 2 or even 3 for their high performance mobile segment? Those chips feel like a breath of fresh air after the years of near stagnation by Intel.
Actually, it’s the opposite. Given that Apple’s ARM architecture has already matched Intel’s on a cycle-per-cycle basis, having better performance per watt means that Apple’s CPUs have the potential to outperform Intel’s cores, even at the high end.
Honestly, this is a no-brainer for Apple at this point. They can target the whole spectrum of applications with the same architecture, from the iPhone all the way to the Mac Pro, as they can support all the power/thermal levels of each segment.
I was thinking the same thing. If you want basic office work, web browsing, and some multimedia editing, ARM covers those fine. That probably describes 90% of users.
That leaves out programmers. I have never seen ARM as developer machines, except for specialized setups to match target architecture. They just are not fast enough for large projects.
And the CPU architecture is not the first change in that direction. Apple has previously made changes to the keyboard, making the keys worse for long-term typing, and actually *reduced* battery size (citing improvements to Safari power usage).
I have switched away from the MacBook Pro and was hoping to come back one day (they kinda fixed the keyboard). But it looks like it will continue to be Linux + Windows for me.
I have never seen ARM as developer machines
They just are not fast enough for large projects.
How do you know they are slow if you haven’t seen one?
Here, for example, is a “high end” ARM workstation:
https://www.anandtech.com/show/15733/ampere-emag-system-a-32core-arm64-workstation/6
It seems to be faster to cross compile on AMD than using a 32-core ARM machine.
To be fair, that is not a particularly high end ARM part.
Amazon’s Graviton is fairly impressive, and can keep up with high-end EPYC parts…
https://www.anandtech.com/show/15578/cloud-clash-amazon-graviton2-arm-against-intel-and-amd/7
Per the article: that’s using a “legacy” CPU that was released in 2017.
javiercero1,
You omitted much-needed context for those benchmarks. They are not fair to AMD or Intel, which was explicitly acknowledged by AnandTech:
https://www.anandtech.com/show/15578/cloud-clash-amazon-graviton2-arm-against-intel-and-amd
The second problem is that the benchmarks run on Amazon are not using the best x86 components. Note that the EPYC 7571 AMD CPU used by Amazon scores very low compared to newer CPUs from both Intel and AMD.
The third problem is that the CPUs are not publicly available. In Intel’s case there are no published specs or benchmarks, and AMD’s CPU has only a single PassMark submission. Obviously it would take more data to do a better job of gauging Graviton2’s performance more directly.
https://www.intel.com/content/www/us/en/products/processors/xeon/scalable/platinum-processors.html
Anandtech does the right thing by calling out these issues…
The benchmarks should be read with all of this in mind. The problem is when the benchmarks are posted out of context. Without context, the scores you link to give a very misleading impression!
For example, the SPECint2017 Rate-64 Estimated Total (16xlarge) gives the impression that the Graviton2 ARM processor killed the competition.
But then recall that both AMD and Intel have processors that blow away the AMD EPYC 7571 used by Amazon…
(Note this is a subset of a much larger list…)
http://www.cpubenchmark.net/high_end_cpus.html
So yeah, while it’s technically true that Amazon’s Graviton2 instances blow away Amazon’s AMD and Intel instances, that’s really because Amazon’s AMD and Intel instances are so far from cutting edge in the first place.
Note that I had to estimate Graviton2’s PassMark score based on relative performance, which is an indirect measurement at best. Still, even if you want to give Graviton2 the benefit of the doubt, it should be evident that it wasn’t racing against the best that Intel or AMD have to offer.
These posts make me sound like an intel fanboy even though I’m really not, haha. I just feel it’s important to stick to the data rather than letting hype get the better of us. One day, the data may well put ARM performance ahead, but that day is not today.
> They just are not fast enough for large projects
I’m a developer – at the moment involved in 2 really large enterprise projects targeting the JVM – and still using an old laptop that nowadays should qualify as pretty low end (i7-6500U). It’s just fine for what I’m doing, I have no plans to replace it in the near future, and I’ve never heard colleagues say they need a high-end machine.
>> It seems to be faster to cross compile on AMD than using a 32-core ARM machine.
The eMAG 8180 used 2016-era X-Gene 3 cores. They were slow even at that time.
Now Ampere has the Altra, with 80 Neoverse N1 cores. That one is more interesting.
Apple smartphone cores can outperform i9-9900k at compilation. Check clang benchmark:
https://browser.geekbench.com/v5/cpu/compare/2653391?baseline=2653580
viton,
It does exceptionally well on single-threaded workloads, but is awfully weak on multithreaded ones. Not for nothing, but I feel you are cherry-picking the data to validate your opinion rather than using it in a meaningful way. There are scenarios where single-threaded performance is more important, especially with desktop software and the many games that simply don’t take advantage of high-core-count SMP.
However, let’s be honest here: compiling is a task where having more fast cores to do the work matters much more than having faster individual cores. If we are being objective and not playing favorites, the iPhone’s CPU is woefully lacking for compiling large projects.
Seriously, when you said “Apple smartphone cores can outperform i9-9900k at compilation. Check clang benchmark:”, did you honestly not look at the MT scores? If you did look at the MT scores and decided to ignore them, that’s deceptive. There are a lot of good things to say about Apple’s ARM CPUs. I welcome ARM competition; I would even speak up for it if I were given the chance. I hate to make a big deal about this, but I’m growing tired of people’s lack of objectivity and candor. I get that people have favorite companies, but sometimes it becomes more religious than objective. I’ll be damned before I allow myself to buy into some reality-distortion-field version of the truth.
One thing that is going to help Apple/ARM is optimization. Intel has a lot of baggage, and a clean implementation is of course going to have advantages. (I don’t want to disparage Apple’s accomplishments; their ARM CPUs are really fast at single-core.)
Basically, you can only put so many transistors in a CPU. There are real physical limitations here, including the speed of light (you cannot move a signal more than about 7.5 cm in one clock cycle at 4 GHz, for example). So anything on the die takes up valuable space.
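The 7.5 cm figure is simple back-of-the-envelope arithmetic: it is the distance light travels in one 4 GHz clock period. A quick sketch (real on-die signals propagate well below c, so the practical limit is even tighter):

```python
# Distance a signal could travel in one clock period, assuming it
# moved at the speed of light in vacuum (an upper bound; real on-die
# wires are much slower).
C = 299_792_458  # speed of light, m/s

def max_signal_distance_cm(clock_hz: float) -> float:
    """Distance light covers during one clock period, in centimeters."""
    return C / clock_hz * 100

print(f"{max_signal_distance_cm(4e9):.2f} cm")  # one cycle at 4 GHz
```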
For Intel, they have to support code all the way back to the 8086 days. I think you can still boot DOS on a modern i7 (though I have not checked that in the last few years). A new design will not have this limitation.
Also, for a single CPU target you can use all its features during compilation. I recall that switching to -mcpu=686 was a big move on Linux. Basically this means Apple’s software can be compiled with the equivalent of -mcpu=native -march=native, while alternatives have to target the lowest common denominator.
And even with all of this (no legacy code support in the CPU, no legacy CPU support in the compiler), Apple only has an advantage on single-threaded workloads. Someone in the thread posted a benchmark (which was really useful).
The instruction decoder for x86 is a single-digit percentage of the total die area of an Intel core. This hasn’t been an issue since ISAs and microarchitectures were decoupled decades ago.
A modern x86 design and a high-performance out-of-order ARM design are not that different, design-wise.
The main difference is that ARM implementors have much more experience in low-power design (and have the libraries for it) than Intel does.
Also, ARM is not a new architecture; it also has decades of cruft in it. The “modern” era of x86 started with the 386, which is around when ARM started.
javiercero1, Thanks,
You might be right. Even though ARM is an older architecture, not supporting older binaries (and 16/32-bit modes) might not be a very big factor.
On the other hand, the compiler not having to support older CPUs would still be a large factor. Some programs probe for extensions (AES, vector/AVX, multimedia, etc.), but many do not even know how many registers or how much cache is available.
I have seen a good optimization speed up existing code by a factor of 10, but that is of course rare. Nevertheless, if you have a better match (or avoid a mismatch), the gains are visible.
(Look at gaming consoles, for example. The exact same components in a PC would not give the same performance, since console games are tightly optimized for two or three fixed hardware designs.)
If the performance-per-watt claim is true, then all they need to do is crank up the wattage.
No mention of the Mac Pro and how (or if) it will transition to ARM in the next few years, which is what everyone expected to hear but didn’t.
When it comes to laptops, we have reached a point where you can use an SoC from an iPad and get very good performance, and the transition to ARM for Apple laptops had been leaked by SemiAccurate long ago, so no surprise there. But the kind of CPU chips powering the Mac Pro are a whole other story. Chips of that kind have been an Intel and AMD exclusive for a long time (ever since Apple dropped the PowerPC architecture and Sun and SGI made it clear that the UltraSPARC IIIi and R16000A would be their last desktop chips, respectively, circa 2005 or so), so it’s interesting to see whether the Mac Pro will receive some kind of super-powerful ARM chip or just keep using Intel.
Most definitely, Apple will eventually want to move its entire lineup to its own CPUs. The Mac Pro is such a low-volume business that I assume it will be at the back of the queue in the transition. But I can’t see why Apple would keep any significant x86 presence once enough of their software catalog has been ported over to ARM.
javiercero1,
You keep saying that, but we need to wait until the products come out and are independently benchmarked before making any assumptions. People keep citing the iPhone 11 Pro’s performance (like here: http://www.osnews.com/story/131888/apple-will-announce-move-to-arm-based-macs-later-this-month-says-report/ ). It has really good single-core performance, but as it stands it is nowhere close to being competitive on multithreaded workloads, which kind of makes it a no-go as a Mac Pro replacement, at least for the time being.
If you have any data that shows otherwise, please link it.
Well, yeah, a SoC with 2 large cores is not going to be as fast as an Intel desktop CPU with 8+ cores.
The point is that the SoC they will use will be a scaled-up version of the iPad/iPhone one, with more cores. An Apple A14-based SoC with the same number of cores as an Intel CPU, running at the same speed, now achieves similar SPECrate results to Intel’s 10th-gen cores. And since they have the process/power edge, Apple could edge out Intel by pushing their chips a bit faster, or even add more cores, since they have a bit of an edge in area as well.
Note: I work for a competitor of both Intel and Apple, and I can’t share some of our internal competitive analysis. But Apple’s microarchitecture team really is that good.
javiercero1,
I asked areilly to provide figures for his claim that “Apple’s ARM chips already in production are more powerful than the laptop processors in the latest generation of laptops,” and that’s the data he cited. If you have any data whatsoever showing better results, that’s great, but I must insist that you cite it; otherwise I have to take it as opinion.
It’s fair to say ARM chips will improve over time, but then so will x86 chips from both intel and AMD so there’s no way to conclusively say ARM chips will be better based on data we have in the present, which is my point. Until it actually happens, it’s just speculation.
As of Whiskey Lake, Apple had already matched Intel in Geekbench:
Single-core: A12X 5000 vs. i5-8400 4800
Multi-core: A12X 18000 vs. i5-8400 18400
Basically it’s a déjà vu of the PPC->x86 transition; it’s just that now Intel is in IBM’s place, having a hard time providing Apple with a competitive roadmap.
I can see why it is a no-brainer for Apple to dump Intel. Their microarchitecture now matches Intel’s, they have access to better fab tech, and on top of that it gives them higher margins.
javiercero1,
Can you provide a link so that we can talk apples to apples? Also, is there a specific reason you’re comparing against an i5-8400 rather than something faster?
I can see why Apple wants to dump Intel too. Replacing Intel means they can take Intel’s share of the profits. It puts a lot of control in Apple’s hands, which could lead to potential abuse, but at the same time I’m not going to pretend that Intel wasn’t abusive with its power. In general, I feel the best outcome will probably be the one with the most competition. ARM (and AMD) are providing much-needed competition; however, as you can tell from my posts, I’m quite concerned about commodity platforms becoming increasingly vendor-locked, which is a strong possibility with Apple and why I’m so vocal about it.
Those were the benchmarks I could find publicly. The i5 makes sense because it’s the comparable part.
javiercero1,
You do realize you would deserve a failing grade if this were how you cited your work academically or professionally? Anyway, I’ll have to substitute my own source, and if you don’t like it, well, that’s kind of your own fault.
browser.geekbench.com/v5/cpu/2649196
browser.geekbench.com/v5/cpu/2653122
Are these numbers representative? Maybe or maybe not, but that’s the point! You always need to cite the data so that everyone has the opportunity to analyze it properly!
Comparable how/why? You say that as though it’s self evident, but whatever you are thinking isn’t obvious to me.
The i5-8400 was a 6 core part launched Q4 2017.
The A12X bionic was an 8 core part launched Q4 2018.
Why not the i7-8700K, launched in Q4 2017?
Or the i9-8950HK, launched in Q2 2018?
Both score better than the i5-8400 and the A12X Bionic.
Clearly you can cherry-pick slower/older Intel CPUs, but what is your justification for doing that?
Okay. This is the comment section of a fairly low-volume website, not an architecture journal.
It’s a simple example of a trend: a chip in a thermally constrained form factor like an iPad can essentially match a 6-core i5 with better cooling and larger power consumption.
A13 continues that trend:
https://www.anandtech.com/show/14892/the-apple-iphone-11-pro-and-max-review/4
x86 chips still have an edge in FP. But for all intents and purposes, Apple now has microarchitectures that can match Intel’s and AMD’s, which explains why they’re moving to their own CPUs for their entire lineup.
javiercero1,
OK, but to be fair, nobody is arguing that the ARM processors aren’t more efficient than Intel’s. That’s moving the goalposts from your original claim about performance.
I still don’t know why you chose to compare an 8-core processor from Apple with a 6-core processor from Intel from a year earlier. In any case, let’s rewrite your original statement to match the facts: “In Q4 2018, Apple’s A12X Bionic had already matched the single-threaded scores of Intel’s i5-8400 processor from Q4 2017.” Great, now we can agree this statement is true and, most importantly, not misleading by omission or false inference.
Sorry, I know you’re probably annoyed that I keep beating this dead horse. For me this has nothing to do with Apple CPUs versus Intel CPUs, and I’m really not here to endorse one CPU over another. It just bugs me when people make misleading assertions first and try to come up with the facts later, rather than starting with the facts (and sourcing them clearly) to make assertions in the first place! I’ll stop dwelling on this, but I just hope you’ll take my feedback into consideration next time.
For what it’s worth, I think you should emphasize power and process size advantages (ie performance per watt) at least until ARM cpus get a significant demonstrable boost in MT performance.
Anyways, I’ll raise a glass to more competitive CPUs!
I meant what I said. If you need to twist other people’s points, perhaps the problem is with yours.
Again, it’s very easy to see, from the numbers I provided and the link I posted last, that from a microarchitectural standpoint the trend is that of Apple matching Intel’s performance.
Furthermore, if that weren’t the case, why on earth would Apple bother to move parts of their Mac lineup to their own CPUs?
Cheers.
javiercero1
I asked you to clarify your point several times; you refused to answer questions, so I was forced to take your claim at face value. And because your claim at face value was misleading, I took the liberty of correcting it. I told you you wouldn’t like it.
Come on now, don’t pretend you weren’t dragging your feet. You provided numbers that you refused to cite. Not for nothing, but my message has been transparent and consistent all along: Apple’s ARM ST performance is extremely good, but so far benchmarks show lousy MT performance. You keep wanting to argue with me, but so far you haven’t provided any data that changes the facts.
Look at your latest argument: you cherry-picked an older and slower CPU, for reasons you still haven’t explained, in order to make a dubious claim. Sorry, but that’s just not a very strong argument.
The pecking order for MT performance today is AMD, Intel, and then Apple! Could Apple’s ARM CPUs move up in the future? That’s absolutely possible! I’ve already said as much several times. Intel has a huge disadvantage with respect to die process and power consumption! We would probably be in agreement if only you’d admit that they are behind in MT so far. Oh well.
Why not? More control, more profits, etc.
Indeed, we’ll see what the benchmarking shows when the new macs are out, until then cheers!
I think we keep going in circles because we keep talking about two different things. I am not refuting the gap in multithreading performance. I mean, when cores are comparable in performance, it’s really not a surprise that a 6-core part is faster than a 4-core one in multithreaded workloads.
The point I keep trying to reiterate is about “microarchitecture.” The numbers and the link I gave you clearly show that, microarchitecturally, Apple is matching Intel from a cycle-per-cycle performance perspective (to be fair, they’re still lagging a little bit in FP).
But if you need to wait for the benchmarks once they release their laptop/desktop parts, so be it.
javiercero1,
It’s not just me; you also need to wait for the benchmarks before you can make factual claims about performance. If you had prefaced your assertions with “this is my opinion,” “this is just speculation,” or “this is my prediction,” then I wouldn’t be here challenging you on it. Unless you work at the Ministry of Truth, facts need data!
Once again, I meant what I wrote, so please skip what I should or shouldn’t have said and concentrate on your own argument.
You seem to be talking more about yourself at this point, providing your personal qualitative opinion in response to quantitative data that does not meet your expectations. BTW, I have noticed you have totally ignored the AnandTech article I linked, which gave you plenty more data.
In any case, I have a PhD in the field and work for a direct competitor of Apple in this area, so at least I don’t need to wait to understand where the trend of Apple’s uArch is going. If you need to wait to confirm that water is wet, so be it.
EOL
javiercero1,
My argument is that you need to base your argument on data, and you need to cite that data. I don’t really see what the problem is. Note that even if Apple CPUs eventually win in a fair comparison over x86 CPUs, I still won’t be wrong. They’ll have won because the data shows it, not because you proclaim it!
We both agree that ARM processors are better than Intel’s on power. I literally said, “For what it’s worth, I think you should emphasize power and process size advantages (ie performance per watt) at least until ARM cpus get a significant demonstrable boost in MT performance.” That’s what the data shows, so I’m happy that we agree on power consumption.
Yes, even you need to wait for real-world benchmarks before claiming Apple won. I find it funny that this notion even upsets you. Oh, how dare I challenge the great almighty javiercero1, whose word is above the need for data. My apologies, sir. /s
[There is no Reply button on your answer addressed to me]
However let’s be honest here, compiling is a task where having more fast cores to do the work matters much more than having faster single threaded cores.
All these years, ARM processors were called a “flock of chickens” and their ST performance was low.
But now, when the ST performance of Apple’s cores is competitive, you’re prohibiting me from mentioning it?
If we are being objective and not playing favoritism, the iphone’s CPU is woefully lacking for compiling large projects.
Congratulations, you totally missed the point.
Who said current iPhone CPU should do that? Laptop chips will have more cores with better uarch than this.
did you honestly not look at the MT scores? If you did look at the MT scores and decided to ignore them, that’s deceptive.
Yeah, what a surprise! 2 big (+4 small) cores can’t outperform an 8-core (16-thread) CPU.
This is irrelevant, because nobody will compile with an A13.
I hate to make a big deal about this but I’m growing tired of people’s lack of objectivity and candor.
I’m comparing apples to apples and oranges to oranges. You’re not.
It was claimed in a post above that Ampere system is high-end, while it has pathetic performance in reality.
viton,
You aren’t prohibited from mentioning it; if anything, my objection is that you did not mention it when you said “Apple smartphone cores can outperform i9-9900k at compilation. Check clang benchmark:” It was true of the ST score, but you neglected to say it was false of the MT score, which is usually the one that matters when we talk about intensive compilation jobs.
I’ll grant you maybe I am missing your point, but then why did you specifically choose compilation to highlight the iPhone outperforming Intel? Surely you can understand my point of view here.
@Alfman, have you seen the TOP500 lately? I think we can say ARM can, in theory, deliver.
Lennie,
I see you just posted the list.
https://www.top500.org/lists/top500/list/2020/06/
That’s pretty cool. I can’t even fathom 7+ million cores.
I was curious about the normalized figures per core and per watt, but the site doesn’t present them that way. I imported the top 50 into a spreadsheet, and the system with the highest gigaflops per core and gigaflops per watt is this one:
#7 Selene- DGX A100 SuperPOD, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100, Mellanox HDR Infiniband, Nvidia
NVIDIA Corporation
United States
The list shows it had only 272,800 cores versus Fugaku’s 7,299,072, but something’s not adding up: this link shows a different number of cores…?
https://www.top500.org/system/179842/
I’m not sure how they’re counting, maybe one means cpu cores and the other is gpu cores? I’d like to find the single core performance for Fugaku but couldn’t find any data. Obviously the sheer number of cores and efficient interconnects matters more to total gigaflops than individual core speed, but I’m still curious about core speeds at consumer quantities. I’d like to get a real ARM workhorse some day.
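For anyone who wants to redo the normalization, the arithmetic is just Rmax divided by core count and by power draw. A sketch using approximate figures from the June 2020 list (Rmax in GFlop/s, power in watts; treat the exact numbers as illustrative):

```python
# Normalize TOP500 entries to per-core and per-watt figures.
# Approximate June 2020 values; Rmax in GFlop/s, power in watts.
systems = {
    # name: (rmax_gflops, cores, power_watts)
    "Fugaku": (415_530_000, 7_299_072, 28_335_000),
    "Selene": (27_580_000, 272_800, 1_344_000),
}

for name, (rmax, cores, power) in systems.items():
    print(f"{name}: {rmax / cores:.1f} GFlops/core, "
          f"{rmax / power:.2f} GFlops/W")
```

With these figures, Selene indeed comes out ahead of Fugaku on both per-core and per-watt performance, even though Fugaku dominates on total Rmax.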
Because it is totally irrelevant. OBVIOUSLY a 2-core CPU (of the same perf tier) will be slower than an 8-core one, but the 8 (big) cores in a future ARM Mac will be comparable. This is such a trivial logical conclusion.
Compilation is a task that scales easily to a large number of cores.
Because sukru claimed that compilation on “high-end” ARM is much slower than on AMD (x86).
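To illustrate the scaling point above: with make, independent targets build in parallel, one job per core (a generic toy sketch, not tied to any project in the thread):

```shell
# Minimal parallel build: two independent targets, one make job
# per available CPU core. Independent compilation units run in
# parallel; only serial steps (e.g. the final link in a real
# project) limit the speedup.
mkdir -p /tmp/parbuild && cd /tmp/parbuild
printf 'all: a b\na:\n\ttouch a\nb:\n\ttouch b\n' > Makefile
make -j"$(nproc)"
ls a b
```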
viton,
No, it isn’t irrelevant. We can’t just say Apple wins a race that it lost because it coulda shoulda woulda done something different to win. Sure, Apple coulda shoulda woulda built an 8-core CPU to compete, and who knows, maybe it would have won at MT, but the fact remains it lost MT by a long shot! There’s a lot of coulda shoulda woulda in this world, but you don’t break records until you have an actual, verifiable implementation. Ergo, the implementations in hand are very relevant!
We’ll see where they stand with these new ARM Macs. Intel is supposedly also coming out with new Tiger Lake CPUs this summer/fall, so it should be interesting.
I agreed with your point with respect to single-threaded performance, but I do think you should agree with my points as well.
It says “entire lineup,” so they’ll either update the Mac Pro with ARM or axe it. The problem is that their “our chips are better than Intel chips” comparison only counts U-series processors built for ultra-low power consumption. I think they’ll probably just axe it, as pitting ARM against desktop and workstation processors, or even H-series laptop processors, would be a bloodbath in benchmarking and in the performance of graphics programs for Apple.
Microarchitecturally, Apple has matched Intel cycle per cycle.
Scaling the latest A-series core up to the same frequency and power envelope would yield similar, or perhaps even better, performance. Plus, in some areas Apple has already surpassed Intel: they have a much better GPU architecture and memory controller, and they already have much more mature IP on their SoCs (mainly their NPU) than Intel.
Furthermore, Apple now has the edge over Intel in fab process. So not only can Apple’s SoC match Intel’s, but it does so with a much better die size (Intel needs two dies, Core + Hub, to match Apple’s single-die solution).
I think this is uncharted territory for Intel. They are now the “mainframe” guys facing the “micros” invasion.
While they can theoretically up the power and frequency, it’s still doubtful any ARM processor will be able to compete. Considering Apple’s behavior of running OpenCL code on the CPU instead of just flat-out disabling it in Catalina, it appears Apple will simply hide behind rigged benchmarks to make their product appear better than it actually is; but to get the same results you’ll have to use only special software written for the Mac and their new APIs instead of industry-standard CUDA/OpenCL, etc. Plus, a lot of these changes make it seem like Apple just doesn’t want to support a desktop OS to begin with. Dropping OpenGL soon and not supporting Vulkan is more of a throwing in of the towel on desktops than a competitive effort.
The Mac Pro will certainly be the last to switch and it’s mostly dedicated to the pro video crowd. It’s more about the 2×2 Radeon Vega cards and the custom (ARM ?) Afterburner card with hardware encoding/decoding.
The CPU is not what matters the most.
Furthermore, Apple went 68k->ppc->intel before so I think it’s safe to assume they do have a plan…
Discrete Vega GPUs have separate memory, no NPU, and no signal-processing engines. Apple’s SoC will be radically more efficient because of unified, cache-coherent memory that lets the CPU, GPU, signal-processing, and neural-network units communicate through a high-bandwidth SoC system cache. This will blow away any current solution.
Right now they can play three 4K streams with realtime FX on a two-year-old tablet SoC.
https://www.theverge.com/2020/6/23/21300097/fugaku-supercomputer-worlds-fastest-top500-riken-fujitsu-arm
So, the sky didn’t fall. The transition to ARM didn’t mean Apple suddenly locks down the entire Mac app ecosystem. Finally the FUD campaign can stop.
Moochman,
Where did you read that they weren’t locking down new ARM applications? I’m not saying you’re wrong (because I don’t know one way or the other), but as far as I can tell they haven’t made any indication that ARM applications would not be locked down.
As for the FUD, it’s rational speculation IMHO, and Apple could dispel it by putting out a statement one way or the other.
Even third-party kernel extensions work and bootloader protections can be turned off, don’t worry.
https://developer.apple.com/documentation/apple_silicon/installing_a_custom_kernel_extension
never_released,
“Don’t worry” sounds like famous last words
Thanks for the link! I’ll concede my own ignorance here, but aren’t third-party macOS kernel extensions being deprecated?
https://www.zdnet.com/article/apple-deprecating-macos-kernel-extensions-kexts-is-a-great-win-for-security/
Regardless, I don’t feel this is sufficient to draw conclusions about whether ARM applications will or won’t be restricted. Assuming Apple’s intention is to lock down ARM applications (and nobody seems to know if that’s the case), I wouldn’t be at all confident that Apple would look the other way if third-party developers built a driver that unlocks said restrictions.
What sort of restrictions are you worried about?
The presenter in the WWDC platform keynote clearly said that they consider the Unix workstation market and the open source software ecosystem to be critical to their product position, and that all such code will continue to be both available and installable.
I switched my main workstation from FreeBSD to MacOS during the PowerPC era, when it was clear that FreeBSD would have trouble keeping up with GPU driver compatibility. I use macOS as a Unix workstation, and it’s always been the best such thing, for my needs. (Well, I still use FreeBSD for network storage, but that’s a different game.) I’m still happy with it from that point of view, and I don’t expect that to change if the processor switches to Arm somewhere down the track: it’s quite similar to PowerPC in many respects, and better where it differs. Better than MIPS or SPARC, too (not least because it’s not dead).
areilly,
IOS-like restrictions.
I watched as much as I could in the abbreviated version here. I’m not inclined to watch the whole keynote though.
http://www.youtube.com/watch?v=_Q8AKghK44M
If you think there’s something specific you want me to watch, then please link to it and provide the exact time, I promise I’ll watch that and maybe that will change my mind…
No, KEXTs are not being deprecated. The link claiming otherwise is an example of a self-proclaimed “security expert” talking nonsense. I doubt this “expert” has real experience with the macOS kernel, as he was unable to distinguish the deprecation of some kernel-mode APIs from the deprecation of KEXTs.
Also, another point I want to make, to pre-empt anyone saying “just jailbreak it”: even if a jailbreak becomes technically possible, the reality is that the software industry cannot rely on jailbreaking. It should not be positioned as a serious option.
Jailbreaking is a double-edged sword. On the one hand, it’s the owner breaking into their own device (which they should be given the keys to in the first place, damn it). On the other hand, jailbreaking implicitly relies on platform vulnerabilities that should not be there and should be patched. I would not recommend that software developers and/or users rely on an unpatched operating system to be able to run the software of their choice, yet this very conundrum exists when vendor interests override user interests.
From https://www.theverge.com/2020/6/22/21295475/apple-mac-processors-arm-silicon-chips-wwdc-2020
Moochman,
Hmm, well thanks for responding. However, it doesn’t really answer the question I asked specifically, which was whether new ARM applications would be locked down. The quote from apple’s engineer merely says “the vast majority of developers can get their apps up and running in a matter of days”, but it doesn’t say whether they’ll be forced to buy a developer subscription and distribute software to customers via the apple store. I don’t want to assume that’s the case, but I don’t want to assume that’s NOT the case either if they haven’t promised that it wouldn’t be. It doesn’t seem like anyone knows the answer to this right now. Given how many devs are nervous/concerned about iOS restrictions, it almost seems like apple is going out of its way not to directly answer such a basic question.
Regarding rosetta 2, I mentioned my thoughts about it already in another thread, but in short it is likely going to be a temporary transitional measure that will be deprecated like rosetta 1 was.
Really, there is zero reason to believe ARM versions will be any different than x86 versions in terms of app support and installation options. Mac developers already need to pay Apple even for non-store apps to get them signed, there’s no change there. But limiting apps to store-only would not only require significant developer effort, which MS, Adobe et al are sure to push back against, it would also severely limit the types of applications that can be installed, which developers simply won’t tolerate. App store apps need to be sandboxed, and a lot of developer tooling just doesn’t fit the model. Homebrew is another example of alternative software distribution that Apple knows they need to support if they want to hold onto the developers that make up a good portion of their pro user base.
On top of all that, Apple reiterated multiple times that consumers won’t be able to tell the difference whether they are using x86 or ARM apps. That implies that there will also be no difference in the way the apps can be downloaded.
And finally: if this were true, you’d think they’d have previewed some improvements to their Mac App Store, which is really a rather chaotic and seldom-updated piece of work. The fact that they haven’t suggests to me they aren’t planning on investing more into it any time soon.
There is absolutely zero evidence, and plenty of counter-evidence, that Apple will lock macOS to its app store in the near future.
Forgot one more argument: there would be no need to support universal binaries if ARM apps were locked to the app store. That fact alone, that they support universal binaries, should really be enough evidence to convince you that they are not locking ARM apps to the App Store.
Moochman,
Not locking apps to the app store YET. Just because they don’t initially is no reason to think it won’t change later. Apple has been very quiet about this. If they don’t come out with a WRITTEN promise, I’d bet that it’s on the road-map for a year or two down the road.
JLF65,
I need a new laptop next year. As long as a Mac still meets my needs, there’s a good chance I will still buy one. If and when they ever change their policy, which is still pure speculation, I can and will change platforms. But I’m not going to base my decision now on some kind of fear mongering, just as I didn’t let the existence of the iPad and its locked down app ecosystem influence my decision to buy a Mac the last time I decided to do so. Nothing has changed, aside from some random people coming up with conspiracy theories that equate ARM chips with turning Macs into glorified iDevices.
Moochman,
Again, I have no idea what apple’s going to do, but apple has a large incentive to do it and every one of your arguments is rather weak and unconvincing. Seriously, I was kind of hoping for an “a-ha” moment, where someone would be able to specifically provide a statement from apple saying that there are no restrictions. However so far every single response has been more of an open interpretation with no proof and plenty of assumptions.
Distributing through a mac store would not necessarily require much “effort”. Apple claims it will take a few days to port, that time could well be for repackaging the software for the apple store. Heck there may be very few changes other than recompiling it, signing it, and submitting it to apple in which case everything apple said can be 100% true, but so far no one has been able to cite an official statement that would clearly and unambiguously contradict the possibility of a walled garden for macos ARM software distribution.
And sure, there would be plenty of resistance from devs like adobe, but apple knows this, and the way to play it smart to increase its odds of success is to go to key players individually with more favorable confidential agreements that we are not privy to. Apple may have learned from microsoft’s experience that they’ll need to get key players on board up front. Apple knows if it can create critical mass, the rest will follow.
You didn’t quote a specific statement here, but it sounds like you’re referring to rosetta, which is most likely just a temporary, transitional compatibility layer (like it was in the past). There’s no implication whatsoever that native ARM software won’t be restricted – not even a hint.
I don’t know what apple’s going to do, but none of the news cited thus far is even close to conclusive that macos ARM software won’t be restricted. In short, I want to know the facts, instead of the assumptions about the facts.
Why though? Tim Cook explicitly said they would continue to support x86 macs for now and they already have a mac app store for x86 macs. It makes a lot of sense for the app store to be cross platform and support both x86 and ARM macs.
Clearly you believe that apple won’t restrict ARM software, regardless of whether that belief is well founded or not. Again, I don’t know, because the facts at hand don’t clearly reveal apple’s intentions. But I’d like to ask an honest question: if it turns out that you are wrong and apple does restrict ARM macs, will you be disappointed in apple then, or are you ok with walled garden restrictions for macs?
Moochman
I agree, it IS speculation on both our parts, because apple hasn’t made clear what it plans to do. Keep in mind that personally I haven’t asserted that apple restrictions were a done deal, only that 1) they have an incentive to do it, and 2) they haven’t publicly committed to not doing it.
I understand your opinion, and I share your frustration over the uncertainty and doubt. The truth is that I don’t like the uncertainty; I’m keen on finding the facts to a high degree of certainty, but for better or worse apple isn’t saying anything one way or the other. The main difference between you and me is that you’re willing to give apple the benefit of the doubt before the facts come out, whereas I am not. There’s nothing wrong with that, I just have less faith in corporations.
Worked at an Apple reseller during the Intel transition. Firstly, this is the perfect opportunity for Apple to optimise their CPUs for specific benchmarks, so they can do the same ridiculous advertising they pulled with the G5s (lol “Vector Computing”). Practical apps ran like crap on the G5s (because they were ported to PowerPC as an afterthought), but their benchmarks were great.
It also works well because it means consumers can’t directly compare specs when Apple once again falls behind, and after a while their GPU support will probably degrade to generations behind competitors (like the last G5 Power Macs, which were running GPUs generations behind PCs that were stream processing). Unless they plan to get Nvidia or AMD to write the drivers for them?
Can hardly wait to see Apple’s “benchmarks”, and I look forward to Apple Palladium (which, hilariously, we were worried Microsoft would enforce decades ago, but now Apple has re-marketed it, and users are actually buying their crap).
Developers need to stop supporting Apple, until they stop demanding royalties for absolutely everything. You can’t even develop for them without buying a Mac Computer.
Auzy,
First, I’m not an Apple fanboy at all.
Second, the iPhone’s success gave Apple a huge opportunity to build up expertise in many fields, and to attract and retain talent.
There is no basis for comparing what Apple was then with what it is currently.
Rigged PowerPC benchmarks as well as all the talk about “PowerPC is RISC while x86 is CISC diddly doodly doo” was a desperation play. Everyone but Apple had abandoned PowerPC (not POWER but PowerPC) which meant PowerPC laptop chips were horrible but Apple had to keep trying to push them before they finally threw in the towel and went to Intel. And yes, Apple’s desire to control their silicon (as they started doing with the iPhone SoC) probably started right there.
And when it came to Intel machines being left behind, they were planning the switch to happen soon anyway, so why bother? Let’s not forget that any third-party chips that went in would need new drivers for the iGPU.
I don’t think Apple will let Mac performance languish now that they have control of the hardware, and they can always leverage technology from the hyper-competitive phone SoC segment like they do with the iPad.
Of course the big question is the Mac Pro, where it isn’t apparent whether it can leverage technology from the phone SoC business that easily.
Why? A small team at Amazon could make a nice server CPU from stock parts, but Apple can’t?
Be realistic. The Mac Pro needs a dozen moderately clocked, power-efficient cores. That is exactly what they have in the iPhone. And the core is scalable from phone to laptop to server.
The only part they don’t have is a high-clocked core for gaming desktops, but Apple does not make gaming desktops.
I don’t see why they wouldn’t partner with AMD to supply GPUs and drivers.
I’m curious to see if Apple will leverage the even tighter integration of hardware and software to implement all kinds of specialty hardware on the CPUs, being able to make use of it very quickly. How cool would something like an integrated FPGA be, for example?
Mark0,
I’ve been pushing for that for a long time. Putting an FPGA on the die could raise the price by a substantial margin. High-performance FPGAs today cost from a couple hundred to several thousand dollars. I think the price could come down with economies of scale, but in general there’s a “chicken and egg” problem. If we could somehow get past that, I think it could be worthwhile and add far more innovative potential. IMHO we’re reaching the end of the road with sequential CPUs. Many of the mechanisms we use to accelerate sequential code, such as speculative execution, are at the core of the security vulnerabilities that keep coming out. In the long run, transitioning the software industry to explicitly parallel computation is difficult, but I think it is nevertheless imperative for progress.
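To make “explicitly parallel computation” concrete, here’s a toy sketch (hypothetical, not tied to any Apple hardware or API): instead of relying on the CPU to speculatively extract parallelism from one sequential loop, the decomposition into independent chunks is written explicitly in the code.

```python
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    # each worker sums an independent slice of the data
    return sum(chunk)

def parallel_sum(data, workers=4):
    # split the input into `workers` roughly equal, independent chunks;
    # the parallelism is explicit in the program structure, not
    # discovered at runtime by speculative execution
    size = max(1, (len(data) + workers - 1) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    data = list(range(100_000))
    print(parallel_sum(data) == sum(data))  # True
```

The actual speedup depends on the interpreter and core count; the point is only that the decomposition is visible in the code rather than inferred by the hardware.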
As Jean-Louis Gassée notes, desktop/laptop CPUs are not a big part of the profit for Intel https://mondaynote.com/arm-mac-impact-on-intel-9641a8e73dca nor is the Mac a big profit segment for Apple (relative to the iPhone/iPad). So one option is that Apple is just trying to consolidate its investment into a more unified set of platforms. A more radical idea is that Apple is interested in growing its market into the profitable cloud and industrial markets where Nvidia and Intel make their profits. While everyone seems focused on the ARM side of the equation, Apple’s Metal framework and machine-learning hardware could have a big impact here. CPUs are like airplanes, moving small loads quickly, while the GPU is like an oil tanker, moving a huge amount of data more slowly. For many applications, the GPU can offer huge benefits. Apple has deprecated OpenCL and stopped using Nvidia (with their competing CUDA). While most people think of Metal as graphics, it has been designed for compute as well. Tech companies like Apple have a desire to grow profits, and the low-margin laptop/desktop segments may not be their prime interest. The AWS Graviton2 shows how ARM can impact the cloud.
Alfman,
This is for me the silver bullet, though I expect in the coming days, if you dive into the dev sessions beyond the keynote, there will be more explicit explanations/definitive statements that answer your question. Admittedly it’s still based on a lot of assumptions, but they are pretty damn logical IMHO. The point is: the App Store takes care of distributing the correct version of an app based on the architecture. As far as I know this is a common pattern among all app stores that support multiple architectures – Windows and Android for sure. If you’re on an x86 platform, you get the x86 build (in the case of Android this is only for apps that are C/C++ based. If there is no x86-compiled version you get a message about incompatibility with your device – which is why games support on x86 Android devices was an issue). If you’re on an ARM platform you get the ARM build (again assuming there is one). This is standard industry practice, which is enabled by the fact that the app store is mediating between the user and the binary, unlike when a user downloads a random file from a website. I fully expect that the ARM versions of Apple’s own apps like Logic and Final Cut will only install either ARM or x86, but not both simultaneously, since that would require up to twice the disk space and be slower to start as well.
To sum up, it seems to me that there is no reason Apple would need or desire to support Universal binaries if they were planning for everything to be distributed by the app store. This technology really only exists to cover alternative software distribution methods.
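For what it’s worth, the universal binary mechanism itself is simple: the file begins with a small “fat” header listing the architecture slices it contains, which is exactly what would let a loader (or a store server doing app thinning) pick the right slice. Here’s a rough Python sketch that parses a hand-built header; the magic number and field layout follow the public Mach-O `fat_header`/`fat_arch` definitions, but the sample bytes below are synthetic, for illustration only.

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic at the start of a universal binary

# a couple of CPU type constants from <mach/machine.h>
CPU_TYPES = {0x01000007: "x86_64", 0x0100000C: "arm64"}

def parse_fat_header(data: bytes):
    """Return the architecture names listed in a fat (universal) header."""
    magic, nfat_arch = struct.unpack(">II", data[:8])
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat_arch):
        # each fat_arch entry: cputype, cpusubtype, offset, size, align
        # (five big-endian 32-bit fields)
        off = 8 + i * 20
        cputype, _sub, _offset, _size, _align = struct.unpack(
            ">IIIII", data[off:off + 20])
        archs.append(CPU_TYPES.get(cputype, hex(cputype)))
    return archs

# synthetic two-slice header (x86_64 + arm64), not a real binary
header = struct.pack(">II", FAT_MAGIC, 2)
header += struct.pack(">IIIII", 0x01000007, 3, 0x1000, 0x4000, 12)
header += struct.pack(">IIIII", 0x0100000C, 0, 0x8000, 0x4000, 14)

print(parse_fat_header(header))  # ['x86_64', 'arm64']
```

On a real Mac the same information is what `lipo -archs` reports, and a server could in principle use it to strip out the slice a given client doesn’t need.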
Moochman,
Ok, when you come across any statements that definitively answer the question without making assumptions, I’d ask that you keep track of the link and timestamp so that we can all see it for ourselves. Fair enough? I’m sure I’m not the only one who wants clarity from apple.
A universal binary is one way to tackle the problem, but I personally don’t know if apple is requiring them. For all I know they may also support architecture specific binaries, I just don’t know, maybe you can find some documentation that shows these requirements. Regardless, this couldn’t be further from what I’d accept as definitive proof of apple’s intention not to lock down ARM software. This is exactly the kind of speculation based evidence I am wary of.
Alfman,
I think I finally found the clear language you’ve been looking for. It’s at minute 35 of the WWDC 2020 developer talk “Port your Mac app to Apple Silicon” – to save you the need to log in with an Apple developer account, within the Apple Developer app, on an Apple device, and find the video :D, here’s the screenshot:
https://ibb.co/r56rkW8
Accompanying narration: “Once your app is building, running, and is tested and verified to work on each architecture, you’re going to start distributing your app, either on the app store or perhaps by a download link on your website, in which case you will need your entire software package to be notarized.”
As I said in my response to JLF65, no, I would not be OK with the restrictions and would probably jump ship at that point. Luckily it seems I don’t have to worry about that at this point.
Incidentally, it seems like I may have been wrong about universal binaries being submitted to the app store – based on that video it looks like the default build settings are to build a universal binary regardless. Although it could still be the case that Apple’s servers separate out the bits and only serve the architecture that’s necessary.
Moochman,
I appreciate it, I wouldn’t have been able to get to it otherwise!
(my emphasis)
I’m glad you posted that, but it opens up more questions for me, haha.
Just a nitpick, but “download link on your website” is a bit ambiguous. For example, http://www.pokemongo.com clearly has a “download link” to the pokemon go app on IOS even though obviously that doesn’t prove anything about the openness of IOS. But I digress, I think any reasonable person would interpret that as meaning self-distribution is permitted!
Do they provide any more detail or context for what they said with “notarization”? What happens if users want to run ARM software that hasn’t been notarized by apple? Hopefully it’s not mandatory, as that would mean owners would lose the freedom to choose what they can run without apple’s say-so.
I still really want to know more about whether the OS will allow the execution of ARM software that’s not been officially sanctioned and signed by apple. The quote seems to suggest that the software has to be signed by apple one way or the other even if you’re going to distribute it yourself. I’d view that as a serious regression and I really hope that’s not the case.
Anyways, props to you for finding that; it’s the most direct evidence yet.
The part about notarization is nothing new, in fact there was an OSAlert-linked article relating to it recently:
https://www.osnews.com/story/131830/macos-10-15-alow-by-design/
If you know what you’re doing it’s easy to bypass and run unsigned apps, you just need to know where to look in System Preferences after attempting to run the app for the first time:
https://support.apple.com/guide/mac-help/open-a-mac-app-from-an-unidentified-developer-mh40616/mac
For non-techie users, of course, this isn’t much of an option – which is why, unfortunately, even open-source projects end up needing to pay some kind of Apple tax ($99 per year) in order to get their binaries notarized, if they intend for the app to be downloaded and used by the masses.
But anyway, it’s not a regression for macOS 11, but rather since the current version of macOS, Catalina:
https://appleinsider.com/articles/19/12/23/apple-will-enforce-app-notarization-for-macos-catalina-in-february
Also to answer this question:
There was no additional information regarding notarization other than to refer to the WWDC talk from 2019 on that topic.
One thing I took away from the talk is that x86 and ARM apps are handled identically. It just doesn’t seem to be worth it to Apple (not to mention it would make everything more confusing and thus be very un-Apple-y) to treat ARM code any differently than x86 as far as distribution options and limitations go. The binaries are universal, and regardless of platform they should work the same – that is the message.
They didn’t explicitly say that they are going to continue to provide a workaround to run unsigned code, but if they were to change their policy, I would expect them to do so for x86 and ARM code equally. However, I don’t expect them to do so, since the feature is already so hard-to-find that even some developers I know have a hard time finding it – and it’s a much-needed workaround for developers who want to test their code on different machines before going through the notarization process.
Moochman,
It is a relatively new addition in the past year and a half, with some of the enforcement requirements postponed to this year. I haven’t used any macs that recently.
https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution
https://developer.apple.com/news/?id=09032019a
A few years ago I had trouble installing some open source software on a mac and I had to ask someone else to help because I couldn’t figure it out…I knew there was a switch, but I couldn’t find it. I’m sure I could have found it with more time, but given that I’m fairly tech savvy and knew what I wanted to do, sheesh.
I do criticize the regression on macos catalina. Apple and microsoft are both complicit in incrementally chopping away computer freedoms. A lot of little incremental changes over time can add up to draconian measures against our digital freedoms as owners.
I’m not clear on something… are you asserting it will be possible to run unsigned code on macos11? Or are you asserting that it’s no longer possible to run unsigned code on macos catalina?
Alfman,
It is (still) possible in Catalina, via the workaround I linked to above.
I have truly mixed feelings about the notarization. In a lot of ways it’s “for the user’s own good”, and compared to the alternative of requiring users to run a virus scanner I find it the more user-friendly option. But it is at the cost of some developer effort and expense, which is of course not so great.
Of course, if you as a developer or power user want to continue living in the “wild west”, there are still a few ways to disable the checks entirely (involving setting some hidden preferences over the command line, disabling System Integrity Protection, etc)… It’s nothing the average developer or Linux/Unix user should have any trouble with. And I don’t see any reason for Apple to make things any harder than they currently are, since their goal is already met – the vast majority of developers have signed up to get their apps notarized, while a negligible minority of open-source die-hards haven’t.
It’s not a perfect situation, but I personally don’t mind it that much and I think for the vast majority of users it provides for more peace of mind when installing new software. I would still like to see discounts or free options for nonprofit open-source projects, however.
Moochman,
I think apple could get away with removing this override on the new ARM macs if they wanted to, but we’ll wait and see. Again, I’m not trusting, as you can tell, haha.
That’s always the excuse to take away freedoms though. “We’re doing it for your own good!” When the government steps in, it’s always “for your own good” too. I don’t care if it’s apple, microsoft, the US government, china, or anyone else, the “doing it for your own good” mantra is always a power play. I am an avid defender of owner rights (and personal liberties in general), but looking at the long view, manufacturers have been gradually taking them away, because they can. A lot of people say “well at least apple’s not as bad as microsoft (or google or whatever)”, but meanwhile they miss that the entire industry is collectively taking away our ability to be self-dependent.