AnandTech, after benchmarking the M1 in the new Mac Mini:
The M1 undisputedly outperforms the core performance of everything Intel has to offer, and battles it with AMD’s new Zen3, winning some, losing some. And in the mobile space in particular, there doesn’t seem to be an equivalent in either ST or MT performance – at least within the same power budgets.
Ars Technica on the M1 in the new Mac Mini:
Despite the inherent limitations of trying to benchmark a brand-new architecture on a minority-share platform, it’s obvious that the M1 SoC is exactly what Apple told us it would be—a world-leading design that marries high performance to high efficiency. When its power consumption and thermal profiles are effectively unlimited as in the Mac mini tested here—or, presumably, the actively cooled 13-inch MacBook Pro—the M1 puts the smack down on very high-performance mobile CPUs, and in many workloads, even very high-performance desktop CPUs.
Apple wasn’t lying. Every review and benchmark is clear: this is insanely good hardware. The M1 is bonkers.
And obviously, I was so wrong I don’t even know where to start.
Impressive, but are we back to the olden PowerPC Mac days? I remember a time when the latest Power Macs were undeniably faster than Intel’s offerings every time they debuted. However, IBM and Motorola were increasingly slow in releasing newer versions, so the Macs were the fastest for half a year, then Intel would trounce them for a year and a half or two years until the next PowerPC update. Apple has shown an ability to keep releasing ever-faster ARM cores, so I don’t think there is a lack of talent there, but how much is Apple going to invest in desktops and laptops going forward? It seems that if they spun out their chip division, they’d have a shot at taking volume away from Intel & AMD, if Windows could get its ARM act together. But short of that, or of offering these to those same companies as server chips, I’m not sure Apple will stay interested long enough to keep making the fastest chips.
The difference between this new era and the PowerPC era is that IBM and Motorola could never achieve Intel-like economies of scale with late-90s/early-00s Apple (aka the Mac company) as their main customer, and without the same level of investment the chips increasingly lagged behind in efficiency and price until things came to a head with the G5 fiasco. Today’s Apple sells a combined number of devices (running on Apple chips) that dwarfs the entire PC industry, so that won’t be an issue this time around.
Anyway, this isn’t about ARM vs. x86 so much as it is Apple vs. everyone else. Apple isn’t interested in becoming a chip company. Having a world-class chip design team all to itself, working in tandem with Apple’s software engineering, is what will give the Mac a huge competitive edge over other platforms running on general-purpose x86 and ARM processors. You can already see it in the M1 with its 50/50 split of high-performance and high-efficiency CPU cores, the unified on-package RAM, and the number of specialized cores for ML acceleration, image signal processing, video encoding/decoding, cryptography, etc. Just as they’ve done with the A-series, these chips will be relentlessly iterated year after year, and the high-end, high-margin Macs will get their own high-performance chips. This is the shot in the arm that the Mac has badly needed for several years, and hopefully it’s also a wake-up call for the Windows PC market, which is oftentimes all too content to rest on its laurels.
Agreed, except I’m still not sure the economies of scale work for the desktop-focused chips. I guess that depends on how expensive they are to engineer on top of the mobile chips. Apple as a company should go where the easy money is. What I’d love to see is benchmarks between these and Amazon’s Graviton chips. If Apple kills Amazon, then I think there would be a market. If it’s close, I wouldn’t bother.
The economies of scale will actually work in Apple’s favour. Since the M1 (and future Mac CPUs) are ARM based, much like the iPhone and iPad CPU, a lot of work can be shared between the iOS and MacOS CPU families. Instead of having to spread the design costs of a CPU between the iPad and iPhone lines, now it can be split between the iPad, iPhone and Mac lines, making the design cost per-device much cheaper.
Also, prior to the ARM transition, Apple would have to pay Intel for CPUs. This would incur costs split, for example, as: development (50%), manufacturing (10%), markup (40%). So, for a CPU that cost, say, $100, you’re paying $50 for development costs, $10 for manufacturing/logistics costs, and $40 goes into Intel’s coffers. By bringing chip design in house (and, for argument’s sake, assuming all costs stay the same), Apple no longer has to pay the 40% markup/profit to Intel, and can keep it for themselves. This could, with economies of scale, save a great deal of money for Apple.
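As a toy illustration of that arithmetic (all figures are the made-up percentages above, not real numbers):

    #include <stdio.h>

    int main(void) {
        double price = 100.0;                 /* hypothetical per-CPU price */
        double development   = 0.50 * price;  /* design costs */
        double manufacturing = 0.10 * price;  /* fab/logistics */
        double markup        = 0.40 * price;  /* the share Intel kept; now Apple's to keep */
        printf("dev: $%.0f  mfg: $%.0f  reclaimed markup: $%.0f per chip\n",
               development, manufacturing, markup);
        return 0;
    }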
Lololol, how the tables have turned when it comes to “fabless” chip designers… Nowadays, Intel is the one that cannot achieve TSMC’s or Samsung’s economies of scale, which is why Intel is currently saddled with the increasingly outdated process (and outdated manufacturing technology) their private fabs offer.
And yes, I know PowerPC (AIM) wasn’t truly fabless, but still, back then nobody could match the process Intel had access to, courtesy of their private fabs.
Yeah, that’s something else I was wondering: how much of Intel’s struggles are due to the issues it’s had matching TSMC or Samsung? If you made Intel’s latest on Samsung’s process, how much more performant would it be?
Bill Shooter of Bul,
The processes are not 100% identical (I’ve read that TSMC uses a different physical transistor layout), but I’d guesstimate the difference is in the 10-20% range.
Also, it’s easy to forget, but Intel took a huge hit with Meltdown and Spectre, and those mitigations cost them 15-30% performance. Obviously that was eventually absorbed by faster hardware, but newer hardware probably would have been faster still if not for these mitigations. Given the way these kinds of vulnerabilities are tied to the stateful superscalar pipelines that modern processors use to achieve high IPC, I imagine it is quite an expensive undertaking to study the architecture in order to find them all. It took two decades for them to be discovered on Intel CPUs. Hypothetically, engineers focusing primarily on performance (like Intel prior to Spectre) could lean toward designs that perform better but pose additional risk. Unfortunately there are tradeoffs between security and performance, and as an outsider it’s not always clear which one the engineers have opted for.
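For reference, the heart of the Spectre v1 (bounds check bypass) issue fits in a few lines of C. This is just my sketch of the pattern from the published Kocher et al. paper, not any actual Intel code:

    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];

    /* The branch can be trained to predict "taken"; the CPU then
     * speculatively loads array1[x] for an attacker-chosen out-of-bounds
     * x and uses it to index array2. The speculation is rolled back, but
     * the array2 cache line it touched stays warm and can be timed later
     * to recover the secret byte. */
    uint8_t victim(size_t x, size_t array1_size) {
        if (x < array1_size)
            return array2[array1[x] * 4096];
        return 0;
    }

    int main(void) { return victim(0, sizeof array1); }

The mitigations (fences, masking, retpolines for the related v2 variant) all work by cutting off that speculation, which is exactly where the lost performance goes.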
They have a tremendous advantage thanks to the vertical integration.
Apple chips do not need to be the fastest all around. But with the tightly controlled software ecosystem Apple can provide very good performance on iOS and MacOS.
Windows, on the other hand, has lots of legacy baggage. The same can be said for Intel CPUs (the modern ones can still run very old DOS versions; that is just wasted silicon).
A similar thing happens with gaming consoles. They usually do not have the best chips, but generally offer a better gaming experience.
The wasted silicon to support the legacy x86 modes is probably very, very small. Just compare the transistor count of a modern Intel CPU to that of an 8086 (roughly 29,000 transistors versus billions) or even an 80386, and you’ll end up with a tiny speck of die for that support.
The idea that any modes the x86 supports are “legacy” is a farcical proposition anyway. Every Intel x86 chip starts up as an 8086 to begin with, and then throughout boot-up it enables and transitions from one feature set to the next.
The older, “legacy” x86 modes are actually so hard-baked into the x86 instruction set that you couldn’t remove them without essentially creating a completely new ISA. The “legacy” modes are as important today as they were 25 years ago, and most modern PCs would be completely crippled without them.
Is it a good thing that x86 has been continually evolved and rebuilt around this need for backwards compatibility? You can argue it both ways. It’s fantastic for software devs, who don’t have to rewrite their applications every year, but it does mean that a lot of stuff that frankly should have been shelved many years ago still needs to be supported.
Intel did try to create a brand new ISA for the modern world, with no backwards compatible baggage. It was called Itanium, and it was a total flop.
It’s not about supporting some legacy modes; it’s about variable instruction length and format.
The AnandTech article reverse-engineering the M1 clearly explained how this creates a CPU width ceiling that ARM (or any RISC, for that matter) doesn’t have: with fixed 4-byte instructions you can decode eight in parallel relatively cheaply, as the M1 does, whereas finding instruction boundaries in x86’s variable 1-15 byte encoding gets very expensive beyond four or five decoders.
It took two decades for Moore’s law to max out the potential of x86’s architectural complexity, but we’re probably witnessing it now. Intel managed to kill all the RISC competition by that time; they just overlooked a tiny “refrigerator” CPU.
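A toy sketch of the decode problem (my own illustration; insn_len is entirely made up and stands in for a real x86 length decoder):

    #include <stdio.h>
    #include <stddef.h>

    /* Fixed-length ISA: the start of instruction i is just i * 4, so all
     * decode slots can compute their starting offsets independently and
     * work in parallel. */
    static size_t fixed_start(size_t i) { return i * 4; }

    /* Toy length function standing in for real x86 decoding (1-3 bytes). */
    static size_t insn_len(const unsigned char *p) { return (*p % 3) + 1; }

    /* Variable-length ISA: the start of instruction i depends on the
     * length of every instruction before it, a serial dependency chain
     * the decoder has to break with expensive speculation. */
    static size_t variable_start(const unsigned char *code, size_t i) {
        size_t off = 0;
        while (i--)
            off += insn_len(code + off); /* must decode one to find the next */
        return off;
    }

    int main(void) {
        unsigned char code[64] = {0};
        printf("insn 7: fixed ISA at byte %zu, variable ISA at byte %zu\n",
               fixed_start(7), variable_start(code, 7));
        return 0;
    }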
The portion of silicon dedicated to the instruction decoder in a modern x86 CPU is tiny.
There’s nothing intrinsic about x86 that makes it more “sensitive” to Moore’s law than ARM, for example.
This shows that the fabless business model is superior to Intel’s, as AMD, for example, is now extracting more performance out of x86 than Intel.
You are right, 8086 would take up a small amount of space.
But then we also have:
The FPU, SSE, AVX and their variants, plus the associated registers
Other legacy extensions which may or may not make sense for modern applications
Strictly identical cores, whereas the M1 has 4 high-performance + 4 low-power cores in the same package
and so on.
Given all the legacy overhead *since* 8086 is still there, a fresh start with a still known (ARM) instruction set is a great advantage. Especially when you also control the OS and the compilers.
The main reason Apple went with ARM over x86 comes down to licensing concerns.
I think a lot of people do not realize the tremendous level of decoupling between ISA and microarchitecture that has happened ever since out-of-order execution was introduced… and that was well over 2.5 decades ago!
There is little “cruft” in most processors these days. Just about every ounce of the microarchitecture has a very good reason to exist for the common case. Most of the micro-ops within the functional units are far more generic than the complex x86 instructions they implement via composition.
Had they been able to obtain a license from Intel, Apple could have gone x86 for all they cared. FWIW, Apple’s current out-of-order ARM cores started as PPC ones…
Very impressive for an ARM chip. Looks like Apple executed their R&D extremely well. I probably won’t ever buy anything Apple, but kudos. It may well kick other ARM vendors into gear, and we’ll finally see some innovation again in the non-Apple camp. Looks like the x86 camp also needs to push the power envelope down even more.
Amazon is the other ARM vendor with x86-competitive chips right now. The Graviton2 SoC powering the AWS EC2 6g instance families already straight-up beats the 5 families, which are based on Xeons. I think the real inflection point will be Broadcom getting their hands on one of these crazy powerful core IPs, since then it could percolate down to the Raspberry Pi.
If we exclude mobile, I remember companies like Canonical and Microsoft trying to bring the ARM architecture to desktop consumers. That more or less didn’t pan out. The only thing that had some success is the Raspberry Pi. The main problem was likely always the lack of desktop applications for ARM. Apple now has fast ARM hardware and a lot of experience in building and managing (mobile) software, and it’s profiting from it too. In addition, having a lot of money certainly helps. The biggest question for me, then, is whether Apple will be able to scale its successful mobile software ecosystem to the desktop or not. Attempts like Rosetta, in my opinion, won’t cut it. Regardless of the position Apple holds, I feel that success will still be hard to achieve in the foreseeable future, because x86 is still considered more open compared to the way Apple does business.
Geck,
Software-wise it’s a bigger problem for Windows. But if you’re a Linux user, then the software is already there. I’ve used ARM SBCs, and if it weren’t for the notably bad performance, I probably wouldn’t even notice my desktop was running on ARM. It’s the same software and experience. The holdup is with hardware. While SBCs are fun to tinker with, I really want a more refined product in desktop and laptop form factors: disks, expandable RAM, a tolerable boot environment, etc. Also, historically ARM performance has been bad. I’m optimistic these things will improve, it just takes time.
My main worry is that ARM computers could evolve to be far more boot-locked and restrictive than their x86 counterparts. If we end up in a situation where most ARM computers are locked, this drastically sets back the world of computing.
ARM is already vendor locked, and that’s part of its appeal for OEMs, from what I understand. Sure Linux runs on everything, but until it boots and has drivers, the hardware is useless, which the vendors know.
Flatland_Spider,
We need to distinguish between vendor locking (as in actively blocking alternatives) and simply being incompatible. If these M1 PCs from Apple sell well and we start seeing millions of them on the market, it is quite likely the Linux community will put in the effort to support them.
Historically, Macs were not vendor locked. An open question right now is whether Apple will follow the precedent set by previous Macs in allowing owners to install alternatives, or the precedent set by iOS in denying owners the right to install alternatives.
Indeed, you need to distinguish between Apple’s vendor lock-in and simply “being unsupported”. The bootloader on Apple hardware will always be locked and will only support Apple-sanctioned operating systems.
This has already been a problem for Linux on PCs, where obtaining Microsoft’s Secure Boot signatures has been a pain; for regular laptops and desktops it is at least possible to turn off Secure Boot altogether, but this will not be the case for Apple computers.
Reverse engineering SoC support is severely limited by the fact that you are not even able to boot the machine if there is a bug in the drivers, or if they are not fully implemented. So testing the drivers will be a major issue. The iGPU drivers alone will of course be a major hurdle, as GPUs are very complex hardware, let alone something called a “neural engine”.
On Linux the lack of software for ARM is still a rather big issue too, compared to x86 hardware. On x86, whenever some software claims that it supports Linux, it usually just works and you can find a package easily. For ARM there are a lot fewer FOSS packages available, and the situation gets much worse when it comes to proprietary software, such as drivers and games… Hence, even if the hardware is equally fast, like the M1 compared to some AMD and Intel offerings, you still get much more software on x86.
Geck,
My point was that the vast majority of software designed for Linux is open source, and the majority of it is likely to run on ARM without issue. I’ve run the same Linux software on x86, PPC, and ARM without hitting architecture-dependent roadblocks. Granted, if you’re reliant on 3rd-party proprietary software, then Linux will be much less appealing (this is true regardless of x86 or ARM).
Some Linux users may be running proprietary x86 programs & games via wine, and if you want to include these, then I guess the ARM architecture would be worse for you, although I wouldn’t have included them because they’re really Windows applications. Overall the FOSS nature of the Linux software ecosystem makes it easier to shift architectures than the Windows or Mac ecosystems.
Anyways, there was a recent OSNews article about Hangover, a project that enables x86 Windows programs to be emulated on ARM:
http://www.osnews.com/story/132547/hangover-alpha-2-lets-windows-x86-x64-programs-run-on-arm64-power-64-bit/
I agree. The viability of macOS and Windows on ARM will ultimately depend on software. Microsoft never strongly committed to ARM, and that didn’t send a very compelling message to developers or users. Apple is taking its ARM crossover much more seriously, and more Mac developers are bound to take notice. I expect Apple will want to kill off Rosetta at some future date to force stragglers to natively support ARM, but obviously Apple can’t do this until enough Mac developers have already crossed the bridge.
I create some of the “apps” that I rely on heavily, so all I need is access to a server and a compiler. 20 years ago I could compile and run my “apps” on Solaris/SPARC, IRIX/MIPS, AIX/POWER, Tru64/AXP, and Linux/Intel. Big endian, little endian, 32 bits, 64 bits, it didn’t matter. It was a great way to make your code squeaky clean. All of that fell away as Intel took over and the UNIX vendors lost their way. Maybe Mac OS XI/ARM can reinvigorate the landscape. Or not. No server.
What does “squeaky clean” even mean?
javiercero1,
Some developers find that compiling & testing code on many architectures helps to find latent bugs that wouldn’t otherwise occur or be detected on a single architecture, particularly with unmanaged languages. Think byte alignment, byte endianness, race conditions, compilers, etc.
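A trivial C illustration of the endianness case (my own toy example): this compiles everywhere, but the casted load gives different answers on big- and little-endian machines, and the potentially misaligned access can even trap on stricter architectures.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        unsigned char buf[] = {0x01, 0x02, 0x03, 0x04};

        /* Non-portable: the value depends on host byte order, and buf may
         * not be suitably aligned for a 4-byte load on some CPUs. */
        uint32_t careless = *(uint32_t *)buf;

        /* Portable: assemble the value explicitly (big-endian here). */
        uint32_t portable = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
                          | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];

        printf("cast: 0x%08x  explicit: 0x%08x\n", careless, portable);
        return 0;
    }

On x86 the two prints differ; on a big-endian machine they’d match, which is exactly the kind of discrepancy multi-architecture testing shakes out.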
Theo de Raadt of openbsd is a big proponent of testing on different architectures:
https://marc.info/?l=openbsd-tech&m=138973312304511&w=2
I don’t agree with his position of running so many machines while the electric bills put the project in serious financial jeopardy, but that’s a different topic, haha.
I’m pretty excited about this. I have iOS apps I’d like to have on the desktop. Some apps can benefit from the larger screen real estate of a laptop or desktop, and some don’t.
I guess this is the final test for the whole convergence idea. Determining if it has a real chance, or realizing mobile phones and desktop computers are just too different for it to make sense.
Have you used an iPad recently? Some apps make great use of the additional space vs. an iPhone. Others are just a giant version of the phone app. I imagine it would be even worse than that for Macs.
Everyone should pay close attention to the fact that AnandTech dropped the i9-10900K from the multithreaded benchmarks and only listed the i7-1185G7, which is a low-power CPU and not a fast performer in Intel’s lineup. If you’re just glancing over the charts you may not notice this. Maybe AnandTech did this to compare CPUs of similar energy budgets, which would have been fair enough. It’s no secret that Intel has done poorly on power even before the M1. However, I feel it’s misleading to show the i9-10900K only on the benchmarks where it performs worse and omit it from those where it performs better.
Anyways, Intel’s failure to evolve its process during the past several years is killing it. AMD overtook them, and Apple has a chance to as well; however, so far both Geekbench and SPEC2017 show that Intel (and AMD) still have better processors for multithreaded workloads.
SPEC2017 scores…
Ryzen 7 5800x INT = 47.89
i9-10900k INT = 47.35
M1 INT = 28.85
Ryzen 7 5800x FP = 52.10
i9-10900k FP = 48.59
M1 FP = 38.71
Don’t get me wrong, in all other areas the M1 seems to be awesome. Maybe the next generation will offer better multithreaded performance. I’m thrilled to see ARM processors finally becoming competitive on desktop performance! Unfortunately, I get the feeling that Apple will lock these down. If anyone knows for sure, please let me know. The only operating systems I’d consider using are Linux and maybe BSD. I never want to be vendor locked again if I can help it.
The M1 is a quad-core mobile part, and you’re comparing it against 10- and 8-core high-end parts. It should be no surprise its multithreaded performance is lower.
javiercero1,
No, I actually quoted the M1 8 core values. But here I’ll include the 4 core values:
And while the position that we’re comparing a mobile part to a desktop part used to be more tenable when we were talking about the iPhone, now that Apple is putting its CPUs into PCs like the MacBook Pro, it’s much less of a stretch to legitimately compare the M1 to the x86 counterparts it will be competing with in the professional space. Obviously the M1 has made a great showing for its introduction, and its single-core performance is fantastic. But the fact remains it still has some catching up to do to match high-end x86 performance for multithreaded work. Obviously more high-speed cores & hyperthreading would help Apple out a lot. They may get there, but as of today these new products aren’t there yet. There’s no shame in that, I just think the expectations were a tad oversold. This is why I think it’s always wise to wait for benchmarks before believing the hype.
Stick a 10900K or 5800X inside a fanless ultrabook and see how long it survives in there. There’s a reason Apple is only using the M1 for its lower-end portables and the Mac mini, and not for higher-end desktops with greater thermal headroom and always-on power. Those will have much beefier chips, presumably without half of their cores being deliberately underpowered to save battery life. When those come it’ll be a much fairer comparison to make with Intel and AMD’s top desktop offerings.
that guy,
The MacBook Pros are not fanless, only the MacBook Air is. Anyways, I am aware Intel chips are bad at energy efficiency; this has been the case for a long time. Apple’s A13, A14, and now M1 are more efficient, Amazon’s Graviton is more efficient, etc. Within x86, AMD is more efficient. Intel’s larger process size is a significant liability for its chips. Apple beating Intel on energy comes as no surprise to anyone. However, it’s Apple’s performance claims that raise eyebrows.
And now that the benchmarks are being published, it really does suggest that Apple was trying to be misleading:
https://www.apple.com/newsroom/2020/11/introducing-the-next-generation-of-mac/
And if you look at the fine print, Apple completely weaseled out of either providing the benchmarks or even identifying the systems it compared against. Obviously every single person reading these claims is going to be interested in what systems Apple was comparing to. Yet Apple deliberately made sure their claims were unverifiable, using meaningless gibberish rather than specific model information. We don’t know the CPUs, we don’t know the manufacturers, we know nothing. This is a prime case of meaningless stats.
Apple’s 3X or 5X claims were given without proof and probably cherry-picked. They lacked the integrity to provide details, and for that reason I’m calling Apple’s claims dishonest. Yet the media gobbled it up, and it garnered great press for Apple with headlines like:
“Apple claims new laptops with Apple Silicon inside outperform PC laptops.”
https://arstechnica.com/gadgets/2020/11/the-first-arm-based-mac-with-apple-silicon-is-tk-name/
So it’s in this respect that I think everyone needs to step back from the hype. Now that the benchmarks are out, we can do that. But obviously the hype already worked in Apple’s favor regardless of whether their claims had merit.
https://www.extremetech.com/computing/317228-apples-new-m1-soc-looks-great-is-not-faster-than-98-percent-of-pc-laptops
What I really want to know from the tech media (assuming there is a way to find out) is how much of this performance/watt advantage is coming from the on die memory…
I have not seen any latency measurements, probably because no one has figured out how to get them. I have suspicions that a lot of Apple’s performance and power advantage is coming from their rather unique on-die main memory configuration that is probably extremely low latency AND low power compared to anything in x86 land…
Not saying their CPU isn’t good in and of itself, it obviously is, but that whole on-die non-upgradable main memory thing is basically a complete non-starter everywhere beyond desktop/workstation computing. What happens when you need 128GB of memory?
I’m just saying it is a nice optimization where you can get away with it, but it will never work for anyone needing large memory configurations. I can’t see a reasonable way you can do what they are doing with say 128GB of memory…
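For what it’s worth, the standard way to get such numbers is a pointer-chasing microbenchmark. Here’s a minimal sketch (my own, not anything from the reviews): every load depends on the previous one, so the time per step approximates the round-trip latency of whatever level of the memory hierarchy the working set lands in.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64UL * 1024 * 1024 / sizeof(void *)) /* ~64 MB working set */
    #define STEPS 10000000L

    int main(void) {
        void **chain = malloc(N * sizeof(void *));
        size_t *idx = malloc(N * sizeof(size_t));
        if (!chain || !idx) return 1;

        /* Build a randomly permuted cycle so the prefetcher can't follow it. */
        for (size_t i = 0; i < N; i++) idx[i] = i;
        for (size_t i = N - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
            size_t j = (size_t)rand() % (i + 1);
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }
        for (size_t i = 0; i < N; i++)
            chain[idx[i]] = &chain[idx[(i + 1) % N]];

        void **p = &chain[idx[0]];
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long s = 0; s < STEPS; s++)
            p = (void **)*p;                         /* serialized dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per load (witness %p)\n", ns / STEPS, (void *)p);
        free(chain);
        free(idx);
        return 0;
    }

Shrink N and you measure the caches instead; grow it and you measure DRAM, which is where an on-package advantage would show up.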
There’s no “on-die” memory; there are large caches, that’s for sure. But the memory is on-package, not on-die.
correction: as javiercero1 points out, it is on package not on die… Either way though, my point still stands, there are certainly still latency and power advantages from having everything in one package…
I would expect latency to increase if you increase the number of physical modules that have to be placed on the package. Also, is anyone even capable of fabbing a chip package containing 128GB of memory? That is kind of what I was trying to get at: this approach doesn’t scale.
The M1 is a 4-core part (the other 4 cores are for low-power, low-compute scenarios), whereas the Intel and AMD parts use homogeneous 8- and 10-core architectures.
Also, the M1 is a mobile/low power part.
The closest comparison comes from single-thread performance.
The benchmarks are here, and I told you a while back. Perhaps you simply refuse to believe Apple could have designed and implemented a core that is that good.
javiercero1,
The fact is it’s an 8-core part, you need to be honest about that. Yes, half the cores are slow cores, as I’ve stated all along. Apple may be able to produce a better chip in the future, but until they do, this M1 is currently Apple’s best ARM CPU. My point is that Apple’s best ARM CPU still has not caught up to the best x86 CPUs (and Intel doesn’t even hold that title anymore). Apple may create a better CPU with faster cores and maybe even hyperthreading in the future, which is great, but until that happens the data is unambiguous: the best CPU Apple has to offer today still has catching up to do on multithreaded workloads.
If you want to classify it as a midrange part, that’s fine. I have no problem with that. However, then you should agree with me that Apple has been exaggerating the performance aspects.
Perhaps the one with issues is you.
At no point has Apple claimed their 4+4C low-end ARM Mac chip would outperform a top-of-the-line 10C x86.
What is significant is that Apple now has the fastest single-thread performance at a better power/thermal envelope. The benchmarks are there.
I already told you: within the architecture community there’s no question regarding what Apple has been able to execute CPU-wise in the past 5 years. Apple now has its own competitive core.
You’re perhaps old, and stuck to an old paradigm. So you’re missing what’s happening here, because it’s not part of your computing “universe.” But Apple has now managed to put, in a fanless ultraportable form factor, the same compute performance as their most recent “large” laptops.
That’s where the market is going. Developers, and us old folk still using desktops, are the vast minority.
Even if Apple does not match the raw CPU performance of their current high-end x86 Mac Pros, they’ll still have their own IP which accelerates the workloads most of those machines are used for anyway.
javiercero1,
Ugh, an ad hominem, seriously?
Go ahead and call it a low-end ARM Mac chip, see if I care.
You’re rebutting a straw man. My complaint is over what Apple actually did say; in particular, Apple’s claims that their new computers are 3X and 5X faster lack specificity, and these unproven claims were intentionally released without data to mislead the media and consumers. Don’t tell me that if Intel did the same thing you wouldn’t be calling out their BS claims right now.
More ad hominem…
Ah, good, this question of where the industry is going is more interesting than the debate at hand! I’ll discuss it if you want to start a new thread.
I don’t think there is any point if the argument is about weird claims Apple never officially made. It creates this weird red herring where you’re comparing a mobile/low-power part vs. high-end x86 desktop parts. So yeah, in all-out compute, the M1 is not beating a desktop 10C part.
But you’re missing the point with the weird drama/moralistic qualitative stuff, and completely missing what was actually delivered. That one of the world’s fastest cores is on a 10W part is pretty disruptive.
We’re witnessing the next level up: the minis were overtaken by the micros, and now the “nanos” are taking over the micros.
javiercero1,
Ok, but for the record, you are refuting a claim that nobody made. This is exactly what a strawman argument is.
That’s disingenuous. I’ve acknowledged Apple’s power efficiency advantages many times by now.
It’s not clear what you mean by “nanos”. If you mean ARM, that’s certainly a possibility, time will tell. But as usual the proof will be in the pudding, and not proclamations.
I am sorry, but the one with the strawman is you.
You’re the one accusing Apple of claims they never made and making comparisons which make no sense (e.g. 4C vs 10C).
The HW is already out, and so are the reviews/benchmarks. Yes, the Apple ARM core is that good. The level of performance Apple is achieving out of a <15W envelope is very disruptive, and you’re completely missing the point.
javiercero1,
I call out your strawman arguments for what they are and you retort with an “I know you are, what am I” response, that’s just childish. Once again you are inventing these false claims and then falsely attributing them to me. I’d like to hold the discussions here on osnews to a higher standard than that. Please always quote me directly in the future when claiming I said something.
Anyways, you don’t want us to compare high-end ARM versus high-end x86, but the truth is that we all want to know how well Apple Silicon stacks up against the x86 world today. It’s not unreasonable to compare the benchmarks, and evidently both AnandTech and Ars Technica agree. It’s clear as day that I struck a nerve with you, but I don’t actually think you’re annoyed with anything I did wrong. It seems the reason you’re annoyed at comparing these CPUs is that the comparison wasn’t favorable to you. If the benchmark scores had been more favorable, you would be changing your tune right now.
And not for nothing, but your argument that we cannot compare CPUs with different cores is hypocritical. You have no problems comparing CPUs of unequal resources when it is in your favor…
http://www.osnews.com/story/131941/apple-transitions-the-mac-to-its-own-arm-processors/#comment-10408544
Then you were willing to compare a slower 32-core x86 AMD EPYC 7571 with hyperthreading to a 64-core ARM Amazon Graviton. And you know what, comparing those very different CPUs is fine as long as you acknowledge the contextual differences. So please quit the hypocrisy. Everyone is watching the battle between ARM and x86 with great interest. I and others will continue to compare the best ARM versus the best x86 CPUs, and if you don’t like it, well, tough, because I’m not apologizing for it.
One more thing… if you are going to adamantly insist on classifying the M1 as a 4-core part, despite the fact that it’s an 8-core part and despite the fact that Apple disagrees with you, then your “4-core CPU” is really getting an unfair boost from the additional 4 slower cores that you shouldn’t be counting. Therefore you ought to penalize the M1’s multithreaded benchmarks in proportion to the aid it’s receiving from the 4 slower cores. Thanks to the published data we can do that for some of these benchmarks.
Just to be clear, I personally would treat the M1 as the 8-core part that it is, but to indulge your opinion that only the M1’s fast cores count, you should be applying a 17-28% penalty on top of any M1 benchmark that measured the performance of all 8 cores.
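To put rough numbers on that (the 17-28% is my own estimate of the efficiency cluster’s contribution, and the score is just the Geekbench 5 MT figure floating around this thread):

    #include <stdio.h>

    int main(void) {
        double mt_score = 7461.0;          /* a reported M1 Geekbench 5 MT score */
        double eff_share[] = {0.17, 0.28}; /* assumed efficiency-core contribution */
        for (int i = 0; i < 2; i++)
            printf("if the small cores add %.0f%%, the \"4 fast cores\" score is ~%.0f\n",
                   eff_share[i] * 100.0, mt_score * (1.0 - eff_share[i]));
        return 0;
    }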
Alfman,
M1 is not high-end ARM though. It’s a chip for low-end ultracompact portables and SFF desktops. In the broader spectrum ranging from smartphones to workstations, M1 is somewhere in the middle. Apple’s high-end ARM chip, the one that presumably will power future iMacs and / or Mac Pros, has yet to make its appearance. So for now any comparison with high-end x86 desktop chips really is apples to oranges.
that guy,
I know that, but it doesn’t change the fact that these benchmarks are useful in determining where a CPU’s performance ranks.
It’s always going to be an apples-and-oranges comparison, and you’re always going to have people nitpicking the differences. The truth is we’ve got CPUs with vastly different feature sets: cache, hyperthreading, architecture, microarchitecture, execution pipelines, register counts, register renaming, TLBs, asymmetric cores, core speeds, process size, etc. All these things can make a difference, but at the end of the day there’s so much handwaving that standardized benchmarks have become the best tools at our disposal to make sense of how these CPUs rank on performance. Without benchmarks, nobody would know. How well does an Itanium perform? You can look at the specs as closely as you want, but until you run it through a benchmark gamut, nobody has any idea whether it performs competitively. Same for the M1.
I’m not saying any of this to knock the M1, but a typical end user doesn’t give a crap about the underlying reasons for a CPU’s performance; they just want to know how well it’s going to handle their workload. The M1 looks like a fantastic CPU, especially when it comes to power efficiency, and its single-threaded performance is awesome too, which should translate to great performance in most desktop applications. For many users this may surpass their expectations, and that’s great. These are fantastic inroads for ARM. But it’s important to look at both strengths and weaknesses. We agree that the next generation should bring improvements.
Alfman
Your understanding of what the term “strawman” means seems to correlate with your understanding of computer architecture and benchmarks.
So here we are: you, comparing high-end desktop CPU parts against a mobile/low-end SoC with half the high-performance cores, in order to refute a claim Apple did not make.
So what is the comparison most buyers of this product would care about: that the M1 MacBook Air is not as fast as a top-of-the-line Core i9 desktop, or that it is faster than the current highest-end MacBook Pro?
So spare me. I already told you a while ago that this was going to be a very good core, since I work in this industry. You said “wait for the benchmarks,” and the benchmarks are here, and indeed they show it’s a fantastic core. And now here you are, moving the goalposts…
javiercero1,
Another ad hominem attack.
As for strawman, here is an impartial dictionary definition:
https://www.merriam-webster.com/dictionary/straw%20man
When you ignore what’s said and choose instead to counter claims that you make up, that’s a strawman.
Well, you have my full post history… either put up the quotes, or stop it with the false accusations of me making up Apple claims.
As for comparing the CPUs, well, of course I do, this is how we rank CPUs. It feels like you are the Wizard of Oz telling me what data I’m allowed to use: “no, you can’t look behind those curtains”, haha. Apple doesn’t get to choose what it’s allowed to be compared to. Everyone is going to do these comparisons: AnandTech, Ars Technica, and soon many more. You’re way too sensitive about it.
For you to suggest that I’m moving the goalposts is laughable.
In this thread you compared a 64-core ARM to a 32-core x86 with hyperthreading.
http://www.osnews.com/story/131941/apple-transitions-the-mac-to-its-own-arm-processors/#comment-10408544
In this thread you compared the 8-core ARM A12X to the 6-core x86 i5-8400.
http://www.osnews.com/story/131941/apple-transitions-the-mac-to-its-own-arm-processors/#comment-10408590
Now that the shoe is on the other foot, suddenly it’s oh-so-important to match core counts. I get it, you want to have your cake and eat it too. But it’s hypocritical as hell.
I do understand your points, but the problem is that you keep wanting to define the boxes in which people are allowed to compare products and prohibit other comparisons. The thing with that is that you can always just move the box to contain the CPU you want to win while excluding CPUs that perform better. Don’t you see the problem with that? Had the M1 performed worse, you’d just move the goalposts to declare it the winner of its class anyway. What’s the point in that? I say let all products in the ring and let the performance benchmarks land wherever they may.
You have it in your head that you have to disagree with me and use verbal insults, but why? We’re in agreement on the data: we both agree the M1 did well on single-threaded scores and energy efficiency. We both agree that Apple should be able to improve its multithreaded performance in the future. So what’s the problem? It seems like we’re in agreement on everything, so why do you feel the need to address me so aggressively? I don’t get it.
Alfman
I am honored that you went through my post history though…
You don’t seem to understand how debate works: you’re free to write all the bad comparisons and express your lack of grasp of computer architectures all you want. And I am free to point that out.
The M1 is not 8 homogeneous cores. It’s 4C+4C, and these two clusters are not intended for concurrent use. The low-power cores have almost 1/10th the performance of the high-end ones, so at best it’s a low-end/mobile 4.5C part in the rare cases where you force both clusters to be active (almost never). Thus comparing it against high-end desktop 6C, 8C, or even 10C parts (which is what you did) is missing the point, tremendously. We could make the same misrepresentation, refer to each SMT thread as a proper core, and voila, those become 12-, 16-, and 20-“core” parts.
So what exactly is your groundbreaking “insight”? That a 10W mixed 4C+4C mobile SoC does not perform as fast as a 95W homogeneous 10C hyperthreaded behemoth? Well, no shit, Sherlock.
How does that disprove Apple’s supposed claims? We have no idea; those claims exist only in your head.
Now, if you want to talk about performance in the power envelope of the M1, which is what Apple has been harping about, then you should have brought in the low-end 15W parts which compete with it in that space.
You may see that as me “trying to make it fit into a box”; I see it as me helping you not miss the point entirely by comparing apples to oranges…
javiercero1,
This is the Nth time in a row you’ve expressed a preference for resorting to personal insults rather than discussing things rationally. That’s not good debate, that’s what Trump would do, and I think we should aspire to do better than him.
I’ve already responded on this, you simply ignored it. If you want to claim those extra cores don’t count for concurrent use then you should be applying a 17-28% penalty on top of the M1’s 8 core benchmarks.
I don’t pretend to have any ground breaking insight. When it comes to CPU benchmarks, it doesn’t matter why a cpu is slower or faster, just that it is. The M1 today is held back by slower cores, intel chips today are held back by legacy fab technology, and AMD chips today are leading the bunch. But for all of their differences, it’s the net performance alone that determines whether the CPU ranks high or low on the benchmarks. You can go to passmark.com and you see thousands of chips ranked by performance regardless of how the score was accomplished, moreover this provides useful information to consumers.
Clearly the innards matter to you, but to someone who just wants to get work done and not care about how, then it’s not that important that a CPU has 1, 4, 6, 8, 10, etc cores so much as how much performance you actually get out of the CPU in aggregate. Say for example I had a task that I want to speed up and I benchmark two new CPUs, one with X cores and the other with Y cores such that X != Y. In terms of improving my performance, is it more important to look at the specific values for X and Y or ranking the CPUs by the total measured performance? That’s what I mean: why a CPU is slow or fast isn’t nearly as important as the fact that it is slow or fast. Sure we can make technical excuses for a CPU’s performance, but none of these excuses fundamentally changes its ranking on a benchmark.
Once again, put your hands together for the straw man.
There’s nothing shameful about midrange MT performance, for many people that’s all they need. These mid-range users are the bread and butter for a lot of manufacturers. Additionally, MT performance isn’t the only useful metric, and indeed for some users it’s not even the most important one. Those who do need more computational power than the M1 offers today still have something to look forward to next time. Honestly I really do think you agree with me on just about everything, you just can’t come to terms with admitting it. So with that in mind, let’s wrap this up by agreeing to disagree *wink*
Alfman
…ironic, you bringing up Trump, given your inability to concede long after your point was lost, and your continual appeal to victimhood.
“I’ve already responded on this, you simply ignored it. If you want to claim those extra cores don’t count for concurrent use then you should be applying a 17-28% penalty on top of the M1’s 8 core benchmarks.”
Then apply that penalty. At the power envelopes the M1 is targeted at, it gets better single- and multithreaded performance than most of its competitors in that space.
“I don’t pretend to have any ground breaking insight. When it comes to CPU benchmarks, it doesn’t matter why a cpu is slower or faster, just that it is.”
This is idiotic; without context you can’t extract any meaningful extrapolations or learnings from the results.
” The M1 today is held back by slower cores,”
See, there’s the problem, you don’t realize that with a statement like that all you’re doing is letting me know you’re not understanding the architecture at all.
” intel chips today are held back by legacy fab technology,”
Again, held back in what regard? Intel’s current process is actually the fastest. Their process pays a penalty in size/power, but Intel has no trouble selling 5GHz parts.
” and AMD chips today are leading the bunch.”
Again, leading in what, and what bunch? Desktop, sure. High-end laptops, sure again. In the space where the M1 exists? It’s a mixed bag.
” But for all of their differences, it’s the net performance alone that determines whether the CPU ranks high or low on the benchmarks.”
Your definition of “performance” is ancient. Use-case intrinsics, thermals, and power have been first-order performance limiters for well over two decades. The 80s/90s were over long ago; time to move on.
Which incidentally explains why my points keep going over your head: the metrics that make the most sense in the M1’s case are in terms of power/performance, since it’s geared for extremely thermally-constrained applications.
” You can go to passmark.com and you see thousands of chips ranked by performance regardless of how the score was accomplished, moreover this provides useful information to consumers.”
So what? Garbage in/garbage out is not a new thing.
“Clearly the innards matter to you, but to someone who just wants to get work done and not care about how, then it’s not that important that a CPU has 1, 4, 6, 8, 10, etc cores so much as how much performance you actually get out of the CPU in aggregate. ”
I care about it because I work in the field. I have no idea what your comment about your mythical customer making a qualitative call about how to approach buying a discrete CPU off the shelf has to do with a mobile SoC like the M1. But whatever…
“Say for example I had a task that I want to speed up and I benchmark two new CPUs, one with X cores and the other with Y cores such that X != Y. In terms of improving my performance, is it more important to look at the specific values for X and Y or ranking the CPUs by the total measured performance? That’s what I mean: why a CPU is slow or fast isn’t nearly as important as the fact that it is slow or fast.”
Yes, if you don’t have to worry about thermal and power issues, or have an unlimited budget. I still don’t know what buying a high-end server chip has to do with a mobile low-power SoC like the M1.
” Sure we can make technical excuses for a CPU’s performance, but none of these excuses fundamentally changes its ranking on a benchmark.”
It’s not an “excuse.” It’s a proper understanding of what the numbers mean. This is a place to discuss the mostly technical articles being posted, not the comments section of newegg.com.
“There’s nothing shameful about midrange MT performance, for many people that’s all they need.”
Shame? What are you even talking about? This is a tech discussion about freaking low-end SoCs.
” Honestly I really do think you agree with me on just about everything, you just can’t come to terms with admitting it. So with that in mind, let’s wrap this up by agreeing to disagree *wink*
Nope. I am sorry but I don’t think you even understand what’s going on here.
I am simply pointing out the hypocrisy of you using a very disingenuous comparison because you claimed Apple made some disingenuous claims.
Apple claimed they were going to deliver a few times faster performance for the products they replace, which they did: the M1 MacBook Air is clearly a few times faster than the x86 MacBook Air it replaces.
This apparently threw you for a loop. Maybe it’s the reality of a tech company actually delivering on what they claimed that you’re not used to?
*wink*
future readers: this entire thread is a masterpiece of cringe
MamiyaOtaru,
+1 agree. I want to have friendly discussions.
With all else in Apple’s failings, this doesn’t justify me purchasing the new product. I’m not being a hater, but I’m anti-proprietary and pro-privacy. My train of thought is: “This is hardware. It does NOT equate to my doing what I want with it once I’ve purchased it.” Along those lines, unless an alternate OS can be installed, the unit somehow doesn’t “phone home” to be disabled by the mothership because I’ve “violated the EULA”, and I’m not tracked, M1 Macs are not on my radar. I do see these as plausible for animation houses, like Sun and SGI before it. I say give it time, and these will go the way of IBM and the original PC eventually.
Agreed. The tech is exciting, but the OS and everything else that comes with Apple is a bit of a headache. Simply being faster can’t justify the other things. My current PC is fast enough. If I need additional horsepower, there are clouds for that.
I think this is where people start to mash the Intel vs. AMD vs. ARM tactics, because the perspective taken in commentary is mostly desktop. It’s clear to me that Intel is turning industrial/commercial/server, back to its roots; its investment strategy makes that clear. One side of this debate is backing cloud as the ultimate future, the other is backing desktop, so they are not really comparable. What would be interesting is if Apple made a server version of the M1 available as a blade for server use, with Linux compatibility of course. I wonder how some of the M1-specific features would compare to dedicated server hardware!
Apple really doesn’t care to compete in the server market; the closest you’ll get is the rackmountable Mac Pro, and macOS Server has been dying a slow death for years. Xserve was more of a vanity project than a viable long-term product.
Also there are other ARM licensees (Ampere and Marvell) already competing with Intel in that space. I haven’t been following them closely, but if they can achieve similar or better performance at just a fraction of the power usage, then that will be a very compelling product indeed.
I was discussing different strategies and the effect that has on products; what one vendor or the other chooses to focus on is irrelevant.
Reading your comment that Apple doesn’t care about the server market sort of confirms the irrelevance of claiming its new M1 products strike at the heart of companies focused on the cloud/server market sectors.
Is Apple going to make its new die available to third parties? I thought it was claiming that it didn’t want to be a silicon vendor?
Agreed,
If Apple were to enter the server market, it would be with their own OS, services, and SW stacks, not Linux/Solaris, etc. I guess it’s the Sun/Oracle/Cisco approach. I see little coming from that, as I don’t see it as a cash cow for Apple. Not that I agree with the verdict, just using a bit of common business sense.
Don’t forget nVidia
I’ll hold off on making commentary; there are so many issues here: fanless sustained performance, upgrade/repair, software compatibility, etc., etc.
I was a massive fan of PowerPC. We moved probably 80% of our hardware to PowerPC when it was available, and now, years later, we are majority Win 10 on Intel. It was always the issue of software and/or driver catch-up that ultimately cooked our PowerPC goose. As professional users with a grid of systems, there were bugs that weren’t just slow to be fixed but were never fixed, and over time we had to implement more and more workarounds until they became so onerous we gave up and switched.
So forgive me, but I wonder just how long it will take for the major software vendors to catch up and deliver workable solutions for the M1, how many bugs will appear as the library of software grows and grows, and how long it will take to fix them, if ever. Will they get Apple’s support to do so? No matter what we think, for some very serious vendors the M1 will remain a very small market segment for a long, long time, which means supporting it to a high level is an overhead, not a profit center.
I’d like to know what happens to Rosetta performance with respect to battery life on intensive legacy software. Whether it’s a fair test or not is irrelevant, as is the promise of native solutions; they may never arrive!
I’m unclear as to whether you’re placing blame on the PowerPC platform (possibly PReP/PAPR?), on OS X itself, or on third-party devs for the problems you experienced. Or maybe you’re upset that Wintel, being the ~75% share monopoly platform, invariably receives an overwhelming amount of third-party support compared to everyone else, with only a handful of bigger companies like MS and Adobe delivering decent Mac support. Even then there is a small but thriving community of boutique Mac developers like Omni Group and Panic.
that guy,
Relax, it’s not about Apple. I was commenting about the reasons why we had to get out of Apple hardware previously, it is a historical commentary that those of us who have been around long enough already know about.
All that is old is new again: if the new hardware gets stuck on old application versions, if they lag behind the majority, or worse, if patches never arrive, the same eventual outcome will apply. I notice several of the reviewers have already stated this.
So don’t get defensive; you can still buy the new Air and email all you like!
cpcf,
Following your post, I looked up what Louis Rossmann’s take would be on this.
http://www.youtube.com/watch?v=IWJE7eqVGQE
He says it will be a couple of years before he gets any new devices in for repair, and a lot of the new devices he can’t repair anyway because he can’t find the parts without donor boards. So he doesn’t know how repairable these will be either.
“It’s a reminder that I’m in a business that goes out of its way to ensure I can’t do my job.”
We’d need a battery benchmark to be sure, but going by “Rosetta2: x86-64 Translation Performance” in the article, my guess would be that x86 applications will last 77% as long, with 23% of battery life going to translation overhead. If you dig down to individual tests, the percentages ranged from ~50% to 95%, which is a huge range, so obviously some code is more costly to translate. The GCC test specifically was the worst; I wonder why.
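Back-of-envelope, this just mechanizes my guess that runtime on a charge scales with the translation ratio (the 20-hour rating is made up; the ratios are the worst/average/best cases above):

    #include <stdio.h>

    int main(void) {
        double native_hours = 20.0;           /* hypothetical rated battery life */
        double ratios[] = {0.50, 0.77, 0.95}; /* translated-vs-native performance */
        for (int i = 0; i < 3; i++)
            printf("at %.0f%% of native speed: ~%.1f h (%.1f h lost)\n",
                   ratios[i] * 100.0,
                   native_hours * ratios[i],
                   native_hours * (1.0 - ratios[i]));
        return 0;
    }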
Alfman,
Yep, it is interesting. Cynically, I’m reluctant to take too much notice of benchmarks at face value, because as they become dominant, the hardware vendors start designing for benchmarks and not for real-world applications.
That 23% loss, or similar losses: the bigger the battery life you claim, the bigger the absolute loss becomes! 4 or 5 hours!
Too much standardisation on benchmarks as a measure is bad, because it just constrains innovation to a new level of standard mediocrity.
On face value the M1 seems good because it breaks the mold, but hardware is a very small part of the user experience. I’ll be shitty if I buy an M1 and spend the next 3 years burning through battery while waiting for my apps to be ported!
However, purely as a consumption device, I can’t see why I wouldn’t buy an Air or Mini almost immediately. Probably the Mini ahead of the others; I always keep one leg on land while dipping the other in the water!
One thing we have to consider: OK, you are a software developer, and Apple has introduced new fast hardware based on the ARM architecture. You think to yourself, OK, let’s port the software and find some instructions, tools… to help us get started. You quickly land on this Apple web page, and it says you need to live in a select set of countries and pay 500 USD to get access:
https://developer.apple.com/programs/universal/
In addition you need to enroll in the Apple Developer Program, which costs 99 USD a year. As for selling the software, you obviously need to give a cut to Apple, and when developing software you need to follow a rather rigorous set of rules set by Apple, rules that change. You basically need to pay 599 USD to get started. Things like this for sure make Apple a rich company, and you can’t blame them for that, but on the other hand one can easily understand why Intel and AMD based hardware, and the software written for it, isn’t at risk.
That $500 program you linked to is a special program that gives access to pre-release hardware and one-on-one developer support.
Otherwise you could start here: https://developer.apple.com/documentation/xcode/porting_your_macos_apps_to_apple_silicon
There are links to step-by-step guides and further documentation.
Thanks for the additional explanation. Yesterday I was investigating some popular FOSS in regard to the M1 announcement and saw LibreOffice mentioning it:
https://www.collaboraoffice.com/desktop/update-on-libreoffice-support-for-arm-based-macs/
It just didn’t feel right. That is, the need to do the work and, in addition, to pay 500 USD to be able to do the work well.
LOL, that’s how open source works. You’re buying the hardware and doing the work for free.
Don’t be ignorant.
“And obviously, I was so wrong I don’t even know where to start.”
I wouldn’t be so hard on myself, if I were you. The M1 is a great achievement, and I’m glad there is now serious competition for Intel and AMD, but as I see it, those reviews are written in a way in which the M1 shines more than it should. For example, in the Ars Technica R23 benchmark they chose the AMD Ryzen 7 4700U, which is slightly slower in MT than the M1, instead of the 4800U, which is faster than the M1 with the same TDP as the 4700U. Why? Because people want the M1 to shine, so let’s make it shine.
With the new generation of Zen coming soon, I think the M1 will be outperformed per watt in a few months. Which is great, because it will push Apple to improve, and then everyone else as well, and while I myself am off this train (POWER9 here!), competition is always good for progress…
Not even close. It is barely faster than the previous generation.
https://browser.geekbench.com/v5/cpu/compare/4829288?baseline=4658263
The M1 gets 7461 in MT, whereas the 4700U gets 4921; I don’t think I’d refer to a 50% difference as “slightly slower.”