For the M1 Pro, Apple promises 70 percent better CPU performance and twice the graphics performance compared to the M1. While the basic architecture is still the same on the M1 Pro, Apple is upping the hardware here in a big way, with a 10-core CPU that offers eight performance cores and two efficiency cores, along with a 16-core GPU with 2,048 execution units.
The new chip also supports more RAM, with configuration options up to 32GB (although, like the M1, memory is still integrated directly into the chip itself, instead of user-upgradable) with 200GB/s memory bandwidth. In total, the M1 Pro has 33.7 billion transistors, roughly twice the number of transistors that the M1 has.
But Apple isn’t stopping there: it also announced the even more powerful M1 Max, which has the same 10-core CPU configuration, with eight performance cores and two efficiency cores. But the M1 Max doubles the memory bandwidth (to 400GB/s), RAM (up to 64GB of memory) and GPU (with 32 cores, 4,096 execution units and four times the GPU performance of the original M1). The M1 Max features 57 billion transistors, making it the largest chip that Apple has made yet. The new chip also means that you can connect up to four external displays to a single device.
These are absolutely bonkers chips to put in a laptop, and Apple once again puts the entire industry on notice. There’s nothing Intel, AMD, or Qualcomm can offer that comes even close to these new M1 Pro and Max chips, and Apple even seems to have managed to get within spitting distance of a laptop RTX 3080. It’s hard to fathom just how impressive these are.
They come in the new 14″ and 16″ MacBook Pro, which sport a new design that, for some reason, includes a notch even though there’s no Face ID. Apple is easily the best choice for most people when it comes to laptops now, since anything else will be just as expensive, but far, far less performant with far worse energy use.
But what do people actually do with it? I mean, yes, it’s a nice pretty tech jewellery box and everything, and gives people with otherwise dull and colourless lives something to talk about, but what is changing here? Can I type faster? Write smarter? Watch videos better?
Better energy use and therefore better battery life is nice, but I use my laptops as portable desktops and they are usually docked.
So macOS Monterey only supports hardware from, on average, around 2015? Why? Not so environmentally friendly after all.
When people have been nudged into habits by advertising for long enough I suspect their brains switch off.
There’s plenty of people who do “real” work on Macs. Video editing, rendering, graphics design etc.
Not everyone is you. Great performance in a mobile package is a great selling point for people who like to work whilst they commute (on public transport, obviously), and there can be many times when you need a laptop but don’t always have power. Also, less power consumption is just generally better. The devices get less hot, batteries last longer, and there are fewer thermal cycles to damage and warp boards.
You say that like it’s news in the Apple-sphere. They’re always obsoleting hardware for rather arbitrary reasons, and have been for nearly 40 years.
There is definitely a “ooh, new and shiny!” effect with Apple hardware, but these new ARM based processors are seriously good performers. It’s been a long time since there’s been any company out there challenging the Intel and x86 dominance, with Apple having led the last charge with PPC processors in the early 00’s. Competition sparks innovation; I’ll be very interested to see what Intel and AMD release in the mobile space in the next 5 years.
“There’s plenty of people who do “real” work on Macs. Video editing, rendering, graphics design etc.”
There are, but is that ideal? I suspect, based on the reviews of the original M1, many of those use cases are already working great with the MacBook Air M1. With the advent of very high-speed bandwidth, I think less and less will become cost effective to do locally on higher-specced devices. If all you need is grunt CPU and/or GPU, the cloud is great! If you also need massive data that is being worked on and updated in real time with low latency, it’s less great for now.
Bill Shooter of Bul,
Yeah, it’s a very interesting divide. I still think local resources are advantageous to “the cloud”, but it isn’t optimal to do the heavy lifting in a laptop/tablet/phone. Instead it’s probably better to have a heavy duty computer sitting in a closet or garage while the portable device is mostly just a thin client. This could be similar to what steam remote play can do by running games on a computer other than the one you’re playing on.
Under this model, you could even have a low power chromebook since on device computational power just doesn’t matter much for thin clients. Longer battery life is easier to achieve this way too. I could see the merit in this approach for video professionals. More storage/horsepower/etc than would be available out of a portable device, longer battery life, etc. No need to plug into eGPUs for more horsepower.
It’s a model I could see myself getting behind, but I’m not aware of tooling that consistently works well for this. The solutions that have been optimized for gaming such as stadia are too restricted to use generically (as far as I’m aware). The general purpose solutions I’ve seen like X forwarding, RDP, spice can work but tend to be sub-optimal for intensive real time applications. Anyone know of other solutions I should be looking at?
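As a rough illustration of the “thin client plus a box in the closet” idea above: link latency matters more than anything else before committing to it. Here is a minimal Python sketch (the hostname and port are placeholders, not anything from this thread) that just times a TCP handshake to the remote box a handful of times; anything much above a few tens of milliseconds round trip tends to feel laggy for interactive remote desktop or remote play.

```python
# Minimal latency probe for a prospective thin-client link.
# HOST/PORT are placeholders; point them at any reachable service (e.g. sshd).
import socket
import statistics
import time

HOST, PORT = "home-server.example", 22

samples = []
for _ in range(10):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass                              # TCP handshake only, then close
    samples.append((time.perf_counter() - start) * 1000)

print(f"median round trip: {statistics.median(samples):.1f} ms")
```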
Any distributed system could have a billing module so business and consumer level owners of capable hardware could rent them out during downtime. No need to be dependent on a random megacorp with an off switch. There’s obviously an open standard lurking in there, as well as an environmental calculation, but I’ll leave it to someone else to work out.
I am starting to suspect the principal talent of a lot of people on this site is to find something to bitch and neg about the item of technology at hand:
When it’s a technology that depends on the cloud. The cloud is obviously the spawn of Satan.
When it’s a technology that increases, significantly, the processing power in the local machine. Then the cloud should be the obvious answer.
For programmers, these CPUs are extremely useful. Many programmers do use Macs, certainly in some areas of the industry at a far higher rate than among general users, and the M1 is extremely powerful for this.
Yeah, simply compiling Linux, Firefox, Chrome, or pretty much any project with more than a million lines.
Wow. Wowwowow. It’s like they just went through my checklist of things I wanted and checked them all off.
And they actually went back on the Touch Bar to physical keys. Love that. I hated the Touch Bar. It added so much expense for features I didn’t crave! An HDMI port! On a Mac laptop! Never thought I’d see the day. Also… MagSafe returned!
“Both laptops are also thicker and heavier than the models they replace”
Apple: I don’t care. Big is beautiful. You’ve made the right choice.
Oh, today is a happy day. Pro laptops are actually fitting Pro needs. Can’t wait to see the benchmarks on these things.
Regarding Apple’s competition… I think that the primary problem of Intel and AMD is that they essentially produce more and more expensive space heaters.
In a laptop form factor anyways. You get a gaming machine, and you end up with a huge chonky laptop that has to be plugged in to achieve its maximum power, because they have not focused on performance per watt. They are behind the game.
If Apple’s numbers are to be believed, and the benchmarkers will come along and tell us, I don’t know how Intel and AMD can respond in the mobile space. Microsoft may be their last hope.
Not necessarily:
(The Best Gaming Laptop. Period. – Asus Zephyrus G15 Review)
https://www.youtube.com/watch?v=mrXuvoVMtSE
(Dang it Dell, you did it AGAIN!!… Alienware X17 Review)
https://www.youtube.com/watch?v=ruhZrv9ppJo
There are now only 2 reasons I would not buy one of these new macbooks….
1) the fugly notch. Just no… bezels are better than notches…
2) soldered in parts. They should at least allow upgradable storage…
leech,
I would add undocumented programming specs to the list, meaning alt-os devs for linux etc. have to reverse engineer apple’s hardware and hope apple doesn’t eventually lock 3rd parties out. Unfortunately when it comes to ARM this is par for the course. I’m stuck between wanting to embrace the CPU efficiency benefits of ARM over x86, yet the reality of having more standard and open x86 hardware.
@Alfman
That’s a good point. Given Apple obscuring its hardware and earlier dropping Boot Camp support, Apple doesn’t provide a reliable platform for running independent OSes. No matter how nice the specification is on paper, looking at things this way makes Apple hardware look like a potentially expensive paperweight.
Considering the artificially shortened buying cycles of forced obsolescence, there’s a disguised higher actual price than the headline price. It’s a neat trick and perhaps once more shows the power of marketing to subvert critical thinking.
You do not need Bootcamp to run Windows on a MacBook unless it falls into a specific list, and then you can absolutely run Bootcamp on it, but you don’t have audio due to the way the driver support is delivered in Windows. But, yeah, I installed Windows 10 on my 2012 non-retina MacBook Pro last week. The only reason I decided to reinstall with Bootcamp was because I couldn’t be bothered to hack my audio driver to work. But yeah, all Bootcamp does is emulate the BIOS, and you really do not need that on any EFI bootable OS anymore.
Oh and regarding “lifecycles”, my 2012 MacBook Pro is officially supported up to Catalina and it is trivial to install Big Sur on it. But it is mainly drivers that break stuff. Bear in mind, this MacBook is almost 10 years old and got over 8 years of official support. And it still runs Catalina, and Windows 10, with no issues…. I mean, a lot of 10 year old computers are utterly obsolete now, so I don’t know why this is an issue for you?!
The thing is, they don’t have an obligation to support alternative OSes on their own hardware, and the fact that they don’t actively block it is a good thing. Asahi Linux is making huge leaps towards a working Linux environment on M1 hardware, and at least one of the devs is already running it as a daily workstation. I don’t like the idea that they can close the loophole any time they want, but it’s been a year since the M1 mini and MBP were released and they haven’t shown even the slightest inclination towards locking us out. It’s in Apple’s best interest to continue to ignore the Linux on M1 community efforts.
Speaking of forced obsolescence, the fact that Microsoft is setting such a nonsensically high bar for its latest OS version, combined with their continued efforts to lock down UEFI on commodity hardware, means that there may come a time when it’s easier to install and run Linux on a Mac than on a PC. Strange times are a-coming.
Funny how people were screaming off the top of their lungs that x86 is evil and proprietary while ARM is good and “open”, while in reality ARM isn’t open (as in royalty-free) and most ARM systems need all kinds of binary blobs to boot and make use of the GPU and other peripherals. Meanwhile, on x86 we have come to the point of being able to boot a fully open-source OS (with only open-source drivers) on a “generic” Intel laptop and have most things work.
kurkosdr,
Indeed, I am disappointed. I think there are many of us who had higher hopes for open ARM computers for the masses. RISC-V may be the next candidate to be the future for open computing, but I have my doubts because while RISC-V is an open architecture, I don’t have confidence in the industry keeping it that way to benefit consumers with high level interoperability and alt-os support.
RISC-V has a chicken-and-egg problem at the moment. Whilst it’s great in theory, there’s very little hardware out there to build a system out of, and as such, there’s also very little software out there to run on it. Because there’s very little software that supports it, hardware manufacturers see no point developing RISC-V hardware.
The123king,
I agree with what you are saying. However I do want to point out that nvidia is using RISC-V cores in its GPUs.
https://riscv.org/wp-content/uploads/2017/05/Tue1345pm-NVIDIA-Sijstermans.pdf
So it’s there, but not in an accessible form where it can benefit users.
Now that I actually looked at their pictures, I don’t even understand why they need the notch to be there in the first place; there is still a black border around the image at a size where other manufacturers manage to squeeze in the camera stuff.
It is also funny how most of their pictures simply seem to have the top area as an extra black border, so presumably the notch area is only used when apps specifically support it.
What I thought. I’d almost prefer a webcam hidden behind the screen à-la Samsung Fold 3 for the few times I use it. It would be good enough.
1) I understand that Intel Macs provided a nice way to run Linux with all hardware supported (at least the first Intel Macs did). But it was a small niche. If you are buying a Mac to run Linux, you are not the target audience. Apple Macs are a vertically integrated product.
2) When buying a mobile system, portability always overpowers upgradeability. And quite honestly, I don’t see anything bad with it. I always hated those “mainstream” laptop-in-name-only computers which aren’t as upgradeable as a gaming desktop replacement but aren’t true laptops either. IMO laptops lost their way during the 2000s and the early 2010s. I once saw a friend’s Toshiba laptop and was amazed by how bulky the damn thing was. And mind you, this was not a gaming desktop replacement but a (supposedly) mainstream laptop. My nx9420 looked featherweight next to it. Glad to see that we are getting back to the idea that laptops should be, you know, laptops, and not things carried from desk to desk. This means internal upgrade slots have to go away. Again, it’s not a desktop replacement.
Thom Holwerda,
It sounds good, but I think the media is too easily manipulated into a feeding frenzy before any verifiable 3rd party benchmarks are published. Apple’s marketing teams have been guilty of being vague and not revealing the exact chips they are comparing against, which should always be a red flag, and this article specifically acknowledged that Apple did that again here. Just as the M1’s performance was somewhat exaggerated (its parallel performance was disappointing), the M1 Pro’s could potentially be too. I’m not privy to any benchmarking data yet, and I suspect that neither is anyone else, so performance claims are premature at this time. But still people want to ride the hype train, haha.
If the M1 Pro’s new performance cores can offer sustained loads, that could really be useful. However, given the throttling bottlenecks associated with the M1, which easily reaches 98C in stock configurations, I’m really curious to see if they’ll pull it off or if the M1 Pro will be equally bottlenecked.
http://www.youtube.com/watch?v=c1KZuVqodak
Applying passive thermal heat pads increased M1 performance up to 30%, which is huge, though it still did get hot.
http://www.youtube.com/watch?v=ObeDKc4DqNE
So presumably Apple has some headroom just by improving laptop thermals, as those guys demonstrated. But I’m still unsure that the M1 unified architecture can scale as well as discrete alternatives under heavy load due to heat. 32GB RAM will be immensely beneficial for jobs that would have overflowed into swap before under the 16GB max, although it’s still less than the RAM I have in some of my workstations.
The memory bandwidth of 200GB/s is great compared to typical DDR memory for CPUs. But at the same time it’s much less than the bandwidth available in modern GPUs. It’d be easy to brush this off except that the M1 architectures use an integrated GPU with shared ram, so both the CPU and GPU are using the same shared bandwidth, which is already slower than the memory used in discrete mobile GPUs…
https://www.techpowerup.com/gpu-specs/
I think shared memory will always be a disadvantage, but I need to be fair and see what the benchmarks say when they come out. For things like neural nets that are very memory intensive, I don’t expect these macs will be the best in class, even for laptops. But then again, the argument will be made that these aren’t targeting heavy duty GPU (and GPGPU) use cases.
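To put figures like 200GB/s in some context, here is a crude, STREAM-style copy test (numpy assumed) that can be run on whatever machine is in front of you. It gives a loose lower bound on sustainable memory bandwidth, not a rigorous measurement, and it says nothing about how CPU and GPU contend for that bandwidth on a unified design; treat it purely as a sketch.

```python
# Rough memory-bandwidth probe: time a large array copy and convert to GB/s.
import time
import numpy as np

N = 256 * 1024 * 1024 // 8           # 256 MiB of float64, far larger than any cache
a = np.random.rand(N)
b = np.empty_like(a)

best = float("inf")
for _ in range(5):
    start = time.perf_counter()
    np.copyto(b, a)                  # reads a, writes b
    best = min(best, time.perf_counter() - start)

gb_moved = 2 * a.nbytes / 1e9        # one read stream + one write stream
print(f"effective copy bandwidth: {gb_moved / best:.1f} GB/s")
```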
As always, we need to wait until we have data in hand before making conclusions.
It’s 200GB/sec in the lower end model, and 400GB/sec in the higher end model.
Shared memory can be beneficial or detrimental depending on the use case. If you’re doing GPU processing or mixed CPU/GPU then it’s usually going to be worse, but for use cases where you’re using CPU with little/no GPU work it’s going to be faster.
There are also cases where the decreased CPU-GPU latency can provide benefits.
bert64,
Oh, yes I see that now. Thanks for mentioning that. I used the 200GB/sec quoted by Thom. That is closer to other mobile GPUs, but still only shared memory.
If you don’t use the GPU then sure shared memory won’t cause a bottleneck and you don’t need a discrete GPU either, but this is the opposite of the way things are going with more GPGPU enabled applications.
I’d say that’s a minority of cases though, since most applications benefit more from offloading tasks almost entirely to the GPU rather than tight coupling. Obviously there’s some coupling between video frames, but the existing latency is OK for existing PCI based discrete GPUs and it’s still getting better all the time.
The high end dedicated GPU is always going to have the edge once the data sets/textures are loaded into its memory.
However, the M1 has a non blocking crossbar memory controller, so they can share memory BW concurrently between the IPs in the SOC (CPU, GPU, NPU, VPU, etc).
What is impressive is getting the performance they’re getting in the power envelope of the M1 Max. Getting close to discrete RTX performance on a single package means that Apple is going to have insanely better margins than competing x86 vendors.
To be fair, gaming on Mac is basically non-existent, so gamers will still go the discrete x86 gaming laptop route. But for professionals doing 3D, video, and ML compute, these machines are really attractive. Getting 400GB/s and 11 Tflops for tensor apps in under 65 watts is rather impressive.
You are simply lying and aware of this yourself too. The MacBook Air is throttling because it was basically designed to do that. It’s a demonstration that Apple chips can perform even with passive cooling.
It is lying with intent to highlight the cooling “issues” on the Air when Apple simultaneously released another laptop with the exact same chip but combined with active cooling.
sj87,
I was using the videos to show just how big a difference temperature makes for M1 performance. The M1 in the MacBook Air lacks fans and thermally throttles, which is no surprise; however, even actively cooled M1 computers, including iMacs, can reach and exceed 90C. As a PC builder, that’s uncomfortably high. But even assuming Apple has a reason to say such high temps are OK for M1 CPUs, it’s still a perfectly valid question to ask how well they’ll be able to cope with more memory/GPU units/cores in the short term. We’ll see soon enough once their new products are out this year.
Then there’s also the question of how much this limits long term scalability & lifespan. Aside: I wish all manufacturers were required by law to publish data on warranty claims & product lifespan. Most of the time consumers don’t have the benefit of this knowledge even though it would be very useful information to have.
Modern CPUs seem to be able to handle around 90 Celsius, so it isn’t news that Apple chips can go that high as well. Laptops especially will always be constrained on cooling capacity even if they have fans and heat pipes built in. The remaining fact is still that Apple chips run cooler under “casual” loads than corresponding Intel chips will.
sj87,
Yeah, but the lifespan for electronics tends to be inversely proportional to the operating temperatures so there are good reasons to keep temps below max, and a lot of systems do. Then again I haven’t seen comprehensive research on modern CPUs. That would be informative if you know of any.
Also we must keep in mind that the amount of power a CPU can use while staying under a max temperature will naturally vary with ambient cooling temps. So if we don’t give CPUs enough headroom then performance may vary at different times of day, days of the year, etc.
Agreed.
This has been true for a long time and is why I think ARM has been better for mobile in general. It’s not just mobile though, even data center operators have shown interest in ARM clusters because their efficiency helps with operational costs including electricity and cooling.
@sj87
Most desktop parts are certified to handle a Tj of 95C no problem. Server parts tend to be certified for Tj=105C, and automotive for Tj=125C.
On laptops the skin temperature is also used for mitigation; they usually average around Tskin=45C.
javiercero1,
Do you have any sources?
All servers I’ve seen take cooling to the extreme. Mine don’t even have an option to run CPUs at these high temperatures. I’m looking at Xeons right now from the same processor family, and while I don’t know whether this is generally true, it seems that beyond 8 cores Intel starts capping max temperatures per core. Take a look at these samples…
https://www.intel.com/content/www/us/en/products/details/processors/xeon/w.html
Model            Cores/Threads  Clock    TDP   Max temp
Xeon W-11155MRE  4C/8T          4.40GHz  45W   100C
Xeon W-1390T     8C/16T         4.90GHz  35W   100C
Xeon W-11955M    8C/16T         5.00GHz  45W   100C
Xeon W-1370      8C/16T         5.10GHz  80W   100C
Xeon W-3323      12C/24T        3.90GHz  220W  68C
Xeon W-3345      24C/48T        4.00GHz  250W  73C
Xeon W-3365      32C/64T        4.00GHz  270W  77C
Xeon W-3375      38C/76T        4.00GHz  270W  80C
For normal consumer CPUs, 80C is usually considered the healthy upper bound for normal operating temps, even though Intel puts the max at around 100C. I’ve seen it personally where a dead fan would cause CPUs to climb into the upper 90s and experience both throttling and crashing, but it was quickly resolved just by replacing the fan. High temperatures do seem to have an effect on reliability.
https://www.pcgamer.com/cpu-temperature-overheat/
So while I can believe that some CPUs may handle such high temps without error, I do think it’s pushing their bounds and it’s better to run at lower temps whenever possible. It would be informative to read modern research on this topic.
You are basing this solely on the laptops, but in my experience with my M1 mini, there is no throttling going on there and it just keeps performing at top tier without a hitch. Obviously the larger, thicker metal case of the mini makes for an excellent heat sink and it has active cooling, but even under full bore 100% use on all cores including GPU, the fan on my mini is all but inaudible and the case is only slightly warm. I get that Apple may be overestimating how well the new laptops will perform, but when the M1 Max makes it into an actual desktop machine with no need for thermal throttling it’s going to be truly bonkers, as Thom put it.
Morgan,
Yeah, we need to wait for more data to come in about these CPUs. Obviously larger form factors should have better cooling, but it remains unclear by how much. It is tempting to assume desktop cooling isn’t a problem, but in fact there is always a temperature gradient, so ambient temperatures can still be a bottleneck. To see what I mean, look at the extreme overclockers who resort to delidding to cool down CPUs further. They’ve already reached bottlenecks with watercooling exposing the CPU case to ambient temperatures.
https://www.tomshardware.com/reviews/core-i9-7900x-overclock-ln2,5618-4.html
Just a tiny metal case and thermal material can contribute significantly to heat buildup when the core is putting out that much heat. Of course this was done for Intel processors, and I’m curious about the real limits Apple has with its processors, but the fact that they’re running CPU cores + GPU cores + memory in the CPU package means that heat could genuinely pose greater scalability challenges for Apple compared to a design that has separate components. Separate components implies they can all be cooled better. Consider that both Intel and AMD have offered way more cores for years than Apple’s upcoming CPU variants will. This could be because Apple isn’t interested in targeting HPC use cases, I’ve heard this argument before, but it could also be because Apple hasn’t been able to overcome the thermal constraints of the M1’s design. To me the M1’s future scalability remains an open question.
It will boost up to an RTX 3080 for 5 seconds and then perform like an AMD APU. You can’t escape physics.
kurkosdr,
I agree, although I think you said “RTX 3080” when you probably meant to say “RTX 3080 mobile” or “laptop RTX 3080” like Thom said. The non-mobile cards have significant memory and performance leads over the mobile versions. The M1 Pro will not be able to approach the RTX 3080’s 912.4 GB/s bandwidth according to apple’s own specs.
It’s in a <70 watt envelope, which is within the capabilities of a semi-decent thermal solution for that form factor.
I read another article which fleshed out more marketing claims including ProRes and 4K video. The vast majority of people won’t need this. Many will but it’s niche.
The other bullet point is a proper keyboard. My laptop has a proper keyboard and as it spends most of its time in a dock I use a full sized keyboard and full sized display.
Personally I’m more interested in whether a modular industry laptop standard can be developed. Samsung Dex was a good idea I wish had continued. Bearing in mind how performant older OS and hardware was I’m wondering how all the power of modern machines has been soaked up. Like, we used to get work done on 286’s and DX2/66’s that if running a modern OS and applications would be a slideshow. Yes I accept some applications do need a lot more power behind them but running something which can process 4K video in realtime and a modern FPS game when your needs are business applications whose power needs peaked a decade ago seems a bit daft. It’s like going shopping in an F1 car.
Yes; but it’s probably worse than that. I believe that one of the most often quoted benchmarks (Cinebench R23) is an outright scam.
What I know:
a) Most benchmarks show that Intel beats M1 in most (single-core and multi-core) benchmarks, including Cinebench R20, by a significant margin; which is exactly what you’d expect for “15 W vs. ~95W” for thermally constrained modern chips.
b) Cinebench R23 was modified to “support Apple’s M1-powered computing systems”, but it’s proprietary/closed source and the publisher is very vague about what that actually means.
c) For Cinebench R23, M1 suddenly started beating Intel by a significant margin.
d) Apple’s M1 SoC includes CPU cores, GPU cores, and a bunch of accelerators. Most specifically, it includes a matrix coprocessor (see https://medium.com/swlh/apples-m1-secret-coprocessor-6599492fc1e1 ) that could dramatically improve the performance of things like software rendering.
My hypothesis is that Cinebench R23 uses M1’s accelerators – essentially, it’s comparing “Intel CPU vs. Apple CPU + Apple accelerators” and then trying to deceive people into thinking it’s a fair “CPU vs. CPU” benchmark; and then a whole industry full of suckers are left thinking that M1 CPU’s are “as fast” as Intel CPUs when they’re not even close.
Brendan,
I’m not aware of how cinebench operates, but if that’s the case you’re right it would be misleading to compare completely different code paths while not indicating that’s the case. I wouldn’t accuse apple or cinebench of this without more facts, but it is conceivable. This is why there’s value in looking at lots of benchmarks to look for trends. It doesn’t always have to be synthetic benchmarks either.
I do think the M1 does very well in single threading, but it’s done quite poorly in multithreading. Part of this is down to the M1’s lack of high performance cores. But there are also more nuanced points such as those discussed in this article about SMT:
https://www.extremetech.com/computing/318020-flaw-current-measurements-x86-versus-apple-m1-performance
Because AMD and Intel optimized each core to run two threads instead of one, a single threaded workload or benchmark only loads a single core by about 80% on average. The remaining 20% is there but unused because the CPU is waiting on the x86 prefetcher to fetch & decode new instructions. SMT is a clever way to mitigate prefetch latency and keep execution units busy (GPUs use a similar trick too, with even more threads, to overcome memory latency). So while AMD and Intel have widely adopted SMT to increase core utilization and it works, it opens the question of whether we should be benchmarking core for core, or thread for thread. There are arguments both ways, but ultimately if we don’t allow an x86 core to run two threads, its execution units are less likely to run at 100% capacity.
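One way to see the core-versus-thread question concretely is to measure how much extra throughput the SMT sibling threads actually buy. Below is a minimal sketch under stated assumptions: the physical/logical core counts are hard-coded examples (not detected), and a CPython integer loop is a poor stand-in for a real renderer, so treat the resulting percentage as illustrative only. It runs one CPU-bound worker process per physical core, then one per hardware thread, and reports the uplift.

```python
# Sketch: compare throughput with one worker per physical core vs per hardware thread.
import time
from multiprocessing import Pool

def spin(n: int) -> int:
    # Simple integer-heavy loop to keep an execution unit busy.
    total = 0
    for i in range(n):
        total += (i * i) % 7
    return total

def throughput(workers: int, work_items: int = 32, n: int = 2_000_000) -> float:
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(spin, [n] * work_items)
    return work_items / (time.perf_counter() - start)

if __name__ == "__main__":
    PHYSICAL_CORES = 8    # assumption: substitute your own machine's counts
    LOGICAL_CPUS = 16     # assumption: same chip with SMT enabled
    per_core = throughput(PHYSICAL_CORES)
    per_thread = throughput(LOGICAL_CPUS)
    print(f"1 worker per core:   {per_core:.1f} items/s")
    print(f"1 worker per thread: {per_thread:.1f} items/s")
    print(f"SMT uplift: {per_thread / per_core - 1:+.0%}")
```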
Cinebench R20 is only compiled for x86; its performance on M1 is going to be severely hampered by having to run through Rosetta.
R23 has been compiled as native code for M1. It’s bound to have a significant performance improvement over any non-native version. You can see the same effect yourself if you take any open source benchmark code and compile it yourself – native code runs massively faster than code through rosetta, even if no specific optimizations have been applied.
In terms of using accelerators built in to the CPU – what’s wrong with that? You are comparing two CPUs; if one has features which improve performance for specific use cases then those features are fair game, and that’s why benchmarks test a variety of different things. Intel is no different: their CPUs include features like SSE, AVX, AES-NI etc which are basically accelerators for specific classes of work.
Many programs already have multiple code paths to take advantage of newer CPU features like those above. A lot of those only have optimizations for x86, and when running on M1 fall back to a generic implementation in C.
In my own tests using an M1 air and a 2018 6-core macbook pro with i7, the air is considerably faster given like for like (ie code compiled natively) but can fall behind with non native (rosetta) code, or programs which include optimized asm for x86 but generic compiled versions for ARM etc (john the ripper for instance).
bert64,
Nothing is wrong with that assuming it’s clearly indicated that it’s running a different code path. Otherwise changing multiple variables at the same time (changing both code and CPUs) would be completely misleading if users are left under the impression that the scores reflect the CPUs changing alone.
Again, that’s fine as long as it’s made clear you are testing completely different algorithms on different CPUs and not the same algorithms on different CPUs.
For example it could be useful to benchmark x86’s SHA instructions against a generic C algorithm and see how much speedup there is. But to take the accelerated x86 score and compare it to a non-optimized M1 score could be very misleading if it isn’t clearly communicated that the two aren’t running the same code. The same is true in either direction.
I don’t know if what Brendan is asserting is true, but if so then he’s got a valid point.
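To make the disclosure point concrete, here is a small sketch (nothing to do with Cinebench’s actual code) that times the same mathematical operation, modular exponentiation, through CPython’s C-accelerated built-in and through a plain Python square-and-multiply. A benchmark that silently used the fast path on one machine and the generic path on another would be exactly the kind of hidden variable being discussed.

```python
# Same operation, two code paths: accelerated built-in vs generic implementation.
import timeit

def modexp_generic(base: int, exp: int, mod: int) -> int:
    # Straightforward square-and-multiply, no acceleration.
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result

base, exp, mod = 0xDEADBEEF, 2**2048 + 1, 2**2048 - 159

assert pow(base, exp, mod) == modexp_generic(base, exp, mod)
fast = timeit.timeit(lambda: pow(base, exp, mod), number=50)
slow = timeit.timeit(lambda: modexp_generic(base, exp, mod), number=50)
print(f"accelerated built-in: {fast:.3f}s, generic Python: {slow:.3f}s")
```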
I don’t know, it should be obvious to technical users that different CPUs run different code that was optimized for how each CPU works. That’s the job of the developers of the application and the compiler. As a benchmark developer too, I think it’s totally valid to say “we want to test how fast these systems will render this project” and use whatever CPU instructions to achieve that. That’s the goal: to compare how well the systems accomplish a real-ish world task, not to see who can do a floating point multiply faster or whatever.
Bill Shooter of Bul,
It’s not obvious to me. Benchmarks usually only change one variable between tests to see the effect that variable has. If you start changing multiple variables, you lose the ability to correlate scores to that change, potentially making it less useful.
Sure, there’s merit in comparing completely different implementations. Like asking how fast can avid versus premiere versus final cut pro render a project. They may use different encoders, and maybe even different platforms. This is fine as long as these differences are properly disclosed. But if a benchmark switches algorithms used between platforms without full disclosure of the fact, I’d call that misleading.
Let me leave you with this thought: If a vendor tested their own system with DDR4-4000 and their competitors system with DDR4-3000 and published the scores in their promotional material without disclosing the difference and giving themselves a secret advantage, isn’t that very misleading? Yes, it objectively is since hidden variables were changed without disclosure.
Alfman
I guess you’re looking for something odd I would never really consider important. It doesn’t matter to me at all how fast any part of a CPU is, or various APIs or techniques that can be used to accomplish a task. And that’s because I don’t code that low level on a regular basis. It’s way too much work for too little benefit that might get blown away with the next hardware refresh. But I, as a consumer of devices, want to know how well it will work with software package A. The relative fairness of the software or how well it’s been optimized for a specific CPU doesn’t really matter. I’m choosing between CPU 1 or CPU 2: which works best with this software for these tasks? That’s what a lot of benchmarks, including this one, try to show.
Bill Shooter of Bul,
Why would it be odd to benchmark a CPU’s ability to run standard portable code? I think many if not most people will assume the benchmarks are testing the exact same algorithms. Otherwise changing hidden variables opens the door to abuse.
Remember, my point isn’t that you can’t benchmark target-optimized code and acceleration. There is merit in testing CPU acceleration too, but the fact that you’re doing so needs to be disclosed, otherwise the benchmarks WILL mislead people who aren’t aware that some of the scores are boosted through optimization. Similar optimization effort may or may not have been done for the competition.
I don’t know if you remember, but this is exactly what happened many years ago when Intel was caught optimizing Intel code paths while depriving AMD of the same benefit. It misled people into believing Intel CPUs performed better because Intel CPUs were better, rather than because Intel was running a more optimal algorithm. We should not dismiss this! This was done to intentionally mislead, and it accomplished that, at least until they were caught. These kinds of omissions in benchmarks are NOT OK. I see no reason we shouldn’t all be in agreement here: objectively, these optimizations should be disclosed, and not doing so is inherently misleading.
It’s obvious that it’s running a different code path since it’s a different processor.
If you want to run the same code path, then M1 will perform slower when running emulated x86 code, and x86 will not perform at all running ARM code because it doesn’t have a way to emulate ARM by default.
While it’s useful to know the level of performance under emulation, running emulated code is only temporary – the emulator will be dropped at some point like the PPC one was, and actively supported software will be ported if it hasn’t already been.
Insisting on running exactly the same code is stupid, and basically impossible for processors which are not directly compatible as even with emulation, the emulation code itself is going to be different. It’s likely that the high level (ie source code) is exactly the same between both x86 and ARM, and the same C code has been recompiled. It’s not clear if cinebench includes optimized ASM for any specific functions, but if it does it’s more likely to include them for x86 than ARM as the platform has been around longer. I severely doubt there is any part of cinebench (or any other benchmark tool) which has been hand optimized for ARM but hasn’t been similarly optimized for x86.
In my own tests, taking the same open source C code and compiling it natively for either ARM or x86 on the same version of macos shows extremely good performance on my macbook air, which runs significantly faster than my slightly older i7 based macbook pro.
Upgrading from one cpu revision to another will also run a different code path in many cases – a modern x86 system will not be running generic i386 code, it will be using code that makes use of SSE3 at a minimum (since no x86 mac ever shipped with a pre-SSE3 cpu) and is also likely to have multiple codepaths to provide improved performance on newer machines with AVX, SSE4 etc support.
Also keep in mind that the M1 is the lowest common denominator ARM Mac (the dev machines dont count as they were never commercially available). Any features it has can be considered standard parts of the platform since every ARM Mac has them and will continue to have them going forwards. In contrast to x86, where older macs lack AVX or SSE4 etc.
bert64,
No we’re not talking about emulation, but rather native code compilation. Meaning compiling the same program and having it use the same algorithm on each processor, ideally using the same compiler.
While there may be some optimization differences (like compiler’s auto vectorization), it still gives a decent idea for the relative performance differences to expect on average for the same portable code running on different processors.
We don’t necessarily need to use native code to make this point either. We could just as well be benchmarking python code too. The fact that they use different native instructions isn’t really what we’re looking at if we just want to estimate the average performance for python code running on different CPUs.
The point being, if you change the program being benchmarked on one CPU but use the old program on the other CPU, it’d be misleading to people who expect benchmarks to run the same Python code by default on both systems. This is why it’s important to disclose any such changes for the benchmark.
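For instance, here is a minimal sketch of that “change one variable” setup: the same pure-Python workload, timed the same way, on each machine, so the only thing that differs between runs is the CPU (plus whatever platform details you record alongside the result). The workload and iteration counts are arbitrary choices for illustration, not a real benchmark suite.

```python
# Portable, interpreter-bound workload: no SIMD, no per-architecture code paths.
import json
import platform
import timeit

def workload() -> int:
    return sum(i * i % 97 for i in range(200_000))

best = min(timeit.repeat(workload, number=20, repeat=5))
print(json.dumps({
    "machine": platform.machine(),        # e.g. "arm64" or "x86_64"
    "python": platform.python_version(),
    "best_seconds": round(best, 4),
}))
```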
That would make some difference; but well designed JIT typically only makes things 10% to 20% slower (especially when things like cache coherency is compatible) and not the “50 times slower” that would be implied by the differences between Cinebench R20 and Cinebench R23 results.
There’s 2 different cases.
If the accelerator is not built into the CPU (and isn’t just more instructions supported by the CPU), and is more like (e.g.) a GPU that is separate to the CPU (but in the same package); then it’s extremely misleading (potentially as bad as comparing “CPU vs. CPU+GPU”).
If the accelerator is not an accelerator and actually is an instruction set extension (using all the CPU’s “fetch, decode, execute, retirement” pipeline); then it creates concerns about how well optimized the benchmark is for each specific CPU (is it an “optimized Apple code vs. unoptimized 80×86 code” benchmark?).
Note: I’d expect it’s actually both – an instruction set extension/new instructions to communicate with the “not part of the CPU core” accelerator (mostly because that’s how Intel’s Advanced Matrix Extension will be implemented).
For either case (accelerator or instruction set extension) there’s a third concern – how well does an “easily accelerated” benchmark match the performance you get for normal (“not easily accelerated”) software? In other words, can we say that M1 is slow (except for rare niches), and that Cinebench R23 is that rare niche?
Note that Intel are planning to release their “Advanced Matrix Extension” next year. It will be interesting to see if a new version of Cinebench is released to support Intel’s AMX (or not); so we can determine if Cinebench is dodgy (“optimized Apple code vs. unoptimized 80×86 code”) or if Cinebench is dodgy (benchmarking a niche that isn’t indicative of the performance of most software).
I don’t believe “considerable faster”, especially not for multi-threaded workloads. The M1 Pro has the potential to change that (due to significantly higher memory bandwidth and the increase in the number of high performance cores).
I do believe “considerable better perf/watt” though.
Well don’t take my word for it, try it for yourself if you don’t believe me.
I have an M1 air with 16GB, a 2018 pro with 6 core i7 2.2ghz/32gb, and a 2017 pro with 4 core i7 2.9ghz/16gb all in front of me right now.
The Air is considerably quicker at most things I’ve done – compiling, synthetic benchmarks etc. Ignoring the obviously biased benchmarks published by Apple and Intel, all of the independent benchmarks seem to show the original M1 outperforming even the latest Intel models in many benchmarks, and it’s highly likely the new M1 Pro/Max models will extend that further.
Emulation of x86 code varies depending on what the original code was. Some applications include multiple code paths to take advantage of newer CPUs (AVX, SSE4 etc), but I don’t believe Rosetta supports these instructions, so these applications tend to use slower fallback code intended for older x86 Macs. Some applications perform well under emulation, some have a significant performance hit or don’t run at all.
People who are discussing the notch still are missing the forest for the trees- leaving out the notch/menu pixels at the top, the screens are 16:10 ratios. They added the notch both because people don’t like bezels and because it doesn’t cost them anything in terms of screen real estate- it’s all area that they would’ve used for a camera anyways, and this way they give the bonus of being able to let the app take up the area the menu bar would’ve used previously.
haus,
Never seen this on a laptop, but my wife’s phone has a notch. She didn’t make much of it when she bought it but after a few weeks she remarked that it’s kind of ugly and she doesn’t want a notch in the future. Some people may feel differently, but I agree with her, I don’t like it and it’s not for me if I have the choice.
Who are these people who hate bezels on laptops? Or screens in general? I understand on phones, makes sense to try and get as much screen as possible (though curved displays suck for many reasons, they should have remained flat). But on a laptop especially, I feel the bezel gives greater protection to the screen over something that has tiny bezels. A larger screen is nice, but not at the point where I’d want a crappy notch!
Everything in the specs (minus the soldered storage) is actually quite nice outside of that damned notch. 16:10 should actually be a thing. Maybe if they release a notchless/webcamless version, I’d buy one (seriously do not need/want a built in webcam.)
@leech
I wonder if the same people who hate bezels may be the same people who curse us with lots of unnecessary white space.
Urgh, whitespace and flat design. The two most pointless, useless “design paradigms” of the last 10 years. The quickest way to make an app unusable is to make it so you can’t see where one window ends and another begins (looking at you, Windows 10), and to lose context markers for the sake of simplicity (looking at you, iOS).
The amount of customers who don’t want a webcam on a laptop, especially on the current work from home environment, is too small to warrant a special SKU.
I am one of those people who abhors bezels on a laptop. Having the most screen real estate possible is a better value proposition, to me at least.
“within spitting distance of a laptop RTX 3080”
If you only pay attention to benchmark results here you’d fail to notice the current M1 GPU is abysmal at pushing polygons. It’s the same for how iGPUs in Intel/AMD chips give great benchmarking results, yet an MX450 easily has double the frame rates with much higher detail settings when actually running games.
The funniest part is everyone saying “they are fudging the benchmarks, Intel and AMD crush the M1, blah, blah, blah”.
The key here is Apple has made the whole industry follow them, as always. Intel talks about how they’re gonna try to get Apple’s business back by making better chips, and then tries to roast Apple with bad marketing. If Apple’s chips suck, then talk about that, Intel. No, they’re making ads about Apple saying that there are no touch screen Macs???? (No duh)
While in the meantime everyone else is now running out to make their own chips similar to Apple’s to try to follow Apple’s lead. (Yes, I know other companies make chips, but Apple now has them shook.)
I bet everyone’s gonna be trying to copy these pro chips in the near future. Watch.
Windows Sucks,
Apple doesn’t really make any chips though; they are fabless like AMD and even use the same fabs. So in that sense, Apple’s really been “copying” AMD. “Copying” is a loaded word though, and it can wrongly imply that Apple didn’t put any work in, when of course they did. Nevertheless it’s clear TSMC deserves most of the credit for Apple’s fab advantages.
Fabs aside, we can look at other factors. So far Apple has a lead on single-threaded performance, but still has to catch up on multithreaded. I posted a link earlier about how/why x86 is optimized for SMT. It is conceivable for Apple to copy SMT from Intel & AMD rather than the other way around, because there could be more parallelism to gain by having it. Who knows though. Then there’s ARM versus x86; my own opinion is that ARM has some slight advantages over x86, with less complexity, less instruction latency, and improved efficiency. Yet it would be existentially hard for x86 vendors to copy that because of how important x86 is to the PC industry. Then there’s the difference of discrete versus unified CPU designs. There are many tradeoffs between these two philosophies. The M1 has much shorter buses, helping latency, however you have limited memory and GPU options, and the lack of scalability and customization is an objective downside. Thermals may be problematic too. I predict the M1 will continue to lag the performance of a discrete GPU system.
IMHO we’re likely to see more hybrid approaches in the future. We’ll see chips with a lot more on-chip cache, but also have off-chip expandable DRAM. PCI lanes will continue to improve, reducing the benefit of on-chip processors. I think it’s good for the industry that we have different companies trying different approaches rather than all doing the same thing. IMHO we ought to steer clear of monoculture.
Having someone else manufacture something another company has designed isn’t exactly new. Thousands of M1 Garands were manufactured during WWII by GM, but we don’t call them the M1 GM.
The123king,
Yes, it’s not new at all, OEM stuff is done all the time.
FWIW Apple has been a fabless vendor for way longer than AMD
I mean we can all say what we want but Intel isn’t going at Apple about their chips being better.
According to Intel, they need to make better chips because they want Apple’s business back.
https://www.techspot.com/news/91802-gelsinger-intel-wants-get-apple-business-back-outcompeting.html
Intel itself doesn’t think it is making better chips currently.
Also, Intel and AMD have been making chips for decades, so for Apple to pop up and shake up the industry is amazing.
Windows Sucks,
Again though, Apple doesn’t make chips. So when we’re talking about physical advantages it becomes more about Intel versus TSMC, and TSMC is winning.
This is not new, nor is it a secret: both AMD and Apple got a big boost from Intel’s fabrication failures over the past few years.
TSMC is not some secret sauce that only Apple has access to.
AMD processors are manufactured by TSMC, as are many other ARM designs. Intel chose not to have TSMC manufacture processors for them, but that’s a choice they made; TSMC would happily produce processors for Intel.
Apple’s ARM CPUs are performing considerably better than other ARM designs also being manufactured by TSMC.
@ bert64
Intel has made a choice to manufacture some of their processors using TSMC.
A big chunk of Atoms have been done on TSMC for years. The new i3s are done at TSMC.
BTW.
bert64,
I was just saying TSMC deserves more credit than it’s getting. Most of the public are oblivious to TSMC’s role in beating intel fabs.
I would point out that demand has exceeded supply for over a year and it’s very probable that the corporations with deeper pockets are chosen first.
It would be interesting if you have data supporting this. I’m not sure all of TSMC’s customers have access to their top fabs; conceivably Apple could be negotiating for exclusive rights to the latest fabs at TSMC for some period of time. Of course I’m not privy to the contracts, but it would give Apple an advantage over competitors by pushing them to older fab technology. These kinds of exclusivity arrangements are incompatible with idealized meritocracy, but sometimes this is how things work.
Intel is not getting Apple business back anytime soon.
They no longer have any value proposition for Apple. Apple is basically at least 1 year ahead in both architecture and fab process, which is an insane leapfrogging considering that Intel had the leadership on both fronts for decades.
To be fair, it is not that Apple came out of nowhere. They’ve been doing their own SoCs for almost 15 years, and PA-Semi was made up of lots of veterans in the field.
Yeah. People love to trash apple, but usually they don’t comprehend the level of competence their engineering teams bring to the table in certain areas.
Right now Apple has perhaps one of the leading architecture teams, and they have basically made it clear they don’t need a 3rd party CPU vendor to satisfy their needs.
Neither Intel nor AMD have anything that can go toe to toe with the M1 Max for at least another year.
There’s a tectonic shift happening, it’s the usual higher level of integration leveraging larger market wins the race. Apple is leveraging the larger mobile (phone/tablet) market to make their SoCs as performant as entire “traditional” PC.
Just as minis took over when they were able to put multiple transistors on one single package. And then the PCs/workstations were able to put a whole discrete CPU on a single package. And now the whole system is on the package. The previous iteration is left in the dust.
The people who were vested in the technology that is on its way out also display a similar grief process: denial, bargaining, anger, depression, acceptance.
Intel and AMD will still have a nice business going for a while as the desktop and server is not going anywhere soon. But the SoC vendors: Apple, Qualcomm, and NVIDIA (plus whatever is coming from the East) may become significantly larger than the x86 vendors by the end of the decade.
There’s an entire generation of users already whose main computing device is a phone or a tablet.
I don’t think a lot of people grasp how disruptive the M1 really was: getting desktop-level single-thread performance in a sub-20W power envelope.
javiercero1,
It’s not so much about trashing apple as it is critiquing the lack of objectivity often associated in covering them. There’s a lot of bias and I’ve found the only way to be fair is to insist claims be backed by solid data.
(My emphasis)
The problem with your assertion is that it’s just too absolute, and as such it stands a high chance of being rebutted later on when the benchmarks are out. I think you should qualify your statements up front rather than making them absolutes, something like “nobody has anything that can go toe to toe with the M1 Max in the same power footprint for at least another year”. See, this is probably what you meant anyway, and it’s likely to hold up far better than your original assertion in absolute terms. As is, your assertion is open to being contradicted by *any* hardware running Intel/AMD.
As it stands right now, not a single SoC from Intel or AMD can match the combined CPU-GPU-NPU-VPU and Bandwidth performance of the M1 Max, much less so at its current power envelope.
The AMD/Intel parts that have better CPU performance have lower performant GPUs/NPUs/VPUs. Or in the case of AMD no NPU at all and their VPU sucks. Both AMD/Intel end up at higher power envelope, and with less integration in their SoC than Apple offers.
The M1 is that disruptive.
javiercero1,
Everyone with serious GPU needs goes for a dedicated GPU, even some M1 users wish that dedicated eGPUs were available to them since SoC & shared memory were a downgrade.
https://discussions.apple.com/thread/253074386
Seriously if you feel it’s that important to lower the bar and only compare the M1 to Intel/AMD SoC solutions with iGPUs, then go right ahead…be my guest. But make no mistake these are low end PCs you want to compare the M1 to, not high end workstations used by professionals and even mid range markets.
It’s disruptive enough to beat low end PCs with iGPUs…yeah ok I’ll give you that one :-). But believe it or not I think you’d be underselling the M1 by only comparing it to low performing SoCs.
I think the M1 deserves to be compared against higher end workstations even though it can’t beat all of them on all specs. I cannot comprehend why you get so defensive about this all the time. The M1 can still be a great CPU for millions of users even if it doesn’t win every category.
I’m comparing Apple’s highest performance SoC with Intel and AMD’s highest performance SoCs. I.e. “toe to toe.” Really not a hard concept to grasp, alas.
FWIW M1 Max offers Ryzen 5800x CPU performance with RTX 2060/2070 GPU performance. That’s a pretty beefy PC. And to do so on a single package, and running off battery. Yeah, that’s pretty damn disruptive.
javiercero1,
So you insist on explicitly defining a market that rules out comparisons to mid & high range workstations. Well in that case we both can agree M1 max is very competitive with low end PCs. Congrats.
Cite a benchmark. Failing that this is implied to be speculative on your part.
Every time I make the mistake of interacting with you, it only reinforces the notion of you being on the spectrum or suffering from a Cluster B personality disorder.
Geekbench:
ST: M1 Max = 1749, 5800X = 1673 (M1 is 5% faster for ST)
MT: M1 Max = 11542, 5800X = 10374 (M1 is 11% faster for MT)
M1 max/32 core GPU = 10.4 Tflops
RTX 2080 = 10.07 Tflops
A Ryzen 5800x/RTX 2080 is a pretty beefy PC configuration. Getting that level of performance from a single package on a < 60Watt power envelope is pretty disruptive.
javiercero1,
BTW I took the average of the last 6 entries and got 1.5% faster for ST and 11.3% faster for MT.
This compares the M1’s 10 cores to the Ryzen 7’s 8 cores. I realize 2 of the M1 cores are low power cores, but they still contributed to the 11% gain. In any case we can agree they have similar performance.
I don’t know where 10.4 Tflops comes from. The original M1 was rated at 2.6 Tflops, and Apple marketing says it’s 4X faster; I’ve seen some news outlets multiply these to get 10.4, but I cannot tell if this is actually a real spec or just hand waving from the marketing department. Nevertheless, based on theoretical figures, the M1 Max seems to be on par with the RTX 2080. The 2080 has a theoretical 12% bandwidth advantage, but all in all they seem pretty close. The benchmarks will prove interesting because the designs are still quite different from each other.
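For what it’s worth, the 10.4 figure is at least consistent with simple peak-FP32 arithmetic (ALUs x 2 FLOPs per fused multiply-add x clock), if you assume a GPU clock of around 1.27 GHz. That clock value is an assumption chosen to reproduce the quoted number, not a confirmed spec, so treat this as a back-of-the-envelope sketch.

```python
# Back-of-the-envelope peak FP32 throughput: ALUs x 2 FLOPs (one FMA) x clock.
alus = 4096            # execution units in the 32-core M1 Max GPU (from the specs above)
flops_per_alu = 2      # a fused multiply-add counts as two FLOPs
clock_ghz = 1.27       # assumed clock, picked to reproduce the quoted 10.4 figure

peak_tflops = alus * flops_per_alu * clock_ghz * 1e9 / 1e12
print(f"theoretical peak: {peak_tflops:.1f} TFLOPS")   # ~10.4
```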
I want to see independent reviewers confirm the details, but I don’t disagree with you that this is pretty good considering the all-in-one CPU form factor.
We are comparing it against an 8 core CPU released one year ago and a 2080 GPU released 3 years ago, and for these reasons I don’t know that the M1 Max is going to appeal as much to the high performance crowds, but it’s good to see Apple tackling performance per watt, which x86 computers haven’t done as well with.
See, we can agree on things
@ Alfman
We are not agreeing boomer. You’re the one driving in the slow lane having trouble catching up once again.
I mean what I said: neither intel nor AMD have a comparable SoC to Apple’s M1 max. You asked for benchmarks, I provided them.
javiercero1,
These ageist and ad hominem attacks are lame, javiercero1.
Anyways, you are creating an arbitrary box around the M1 that excludes 100% of high performance PCs. Obviously I understand your motive for doing so, but it’s just not that convincing to be honest, and it comes across as an extremely biased salesman pitch.
@ Alfman
OK boomer. LOL
javiercero1,
That's as lame as ever. Even if I were a boomer, it's pathetic of you to use age as an insult. The mere fact that you feel the need to insult people who disagree with you here on OSNews is childish and degrades the comments. Grow up.
@ Alfman
And yet after your idiotic excursion to nowhere my original point still stands.
Neither Intel nor AMD will have a comparable SoC for another year, at the very least.
Now kindly go waste somebody else’s time.
javiercero1,
You have no problem comparing Apple CPUs to older CPUs and discrete GPUs when the M1 can match or beat them, but whenever it comes to technology that beats the M1 Max's performance, it's suddenly so important to protect the M1's reputation with a mental barrier designed to isolate it from comparisons to high-end competitors. Your hypocrisy stinks. Moreover, users don't care whether a product uses one chip or more as long as it delivers the goods. In other words, your reasoning is 100% contrived.
Whether you like it or not M1 max benchmarks will be and should be compared to higher performing computers because at the end of the day the internal workings of a system are a means to an end and it’s the end results that matter.
IMHO the M1's strong points today are efficiency and battery life; focus on those things instead of coming up with BS excuses for why the M1's performance can't be compared to more powerful workstations.
To be fair though, I nailed it at the top. You made an exaggerated assertion and the rest of this thread is a byproduct of that.
@ Alfman
Your narcissism always leads you astray along the same route: lost in an argument that only exists in your head.
The whole point is about power/performance efficiency, since I was referring to SoCs.
Again, right now neither Intel nor AMD has an SoC that matches Apple's M1 Max performance in that power envelope.
I suspect one of your main talents is the obfuscation of the obvious.
FWIW I actually work for a competitor of Apple, so I have no incentive to put their products on an exaggerated pedestal. But credit where it is due.
Take a look in the mirror.
I don’t contest that there are efficiency benefits of ARM CPUs and M1 in particular, I’ve said that from the very beginning, so we are in agreement.
Again though, your distinction is both meaningless and hypocritical. It's like selling a lower-spec product and calling it the highest-spec product because it has one doohickey instead of two under the cover, or some nonsense like that. That salesman crud might work with uninformed consumers, but it certainly doesn't work on me or anyone who knows that a product's merit doesn't stem from obsessive talking points but from actual data points and benchmarks. Speaking of data, you did provide your own benchmarks for the M1 Max, and I thank you for that. However, your data makes my point: the M1 Max may not be that competitive against high-end workstations. Can we at least agree on this? If we're being objective, your data gives my position a lot of credibility.
Meh, I had you pegged from the get go. At this point I’m just amused by the pathology on display.
So once again here we are. This time you had to take exception to the point that neither AMD nor Intel currently has a comparable SoC in the same performance/power envelope as Apple's M1 Max.
I already showed you the benchmark results. There's nothing more to discuss.
But you can’t help yourself, and end up engaging in these bizarre mental gymnastics. You went as far as literally having a conversation with yourself going off about professional users, discrete GPUS, and whatever nonsense.
It's clear that when we interact, a lot of what I talk about goes way over your head. I would treat you with respect if you engaged with any shred of intention to learn. Instead, you simply assume that if you don't know about something, it must be wrong.
I mean, the interactions we have had are comical at this point. You thinking you get to arbitrate stuff about microarchitecture, semiconductor manufacturing, what an SoC actually is, etc. Stuff you don't have much of a clue about.
Oh, well.
You’re pointing to things that we already agree on…moving on…
Great, I’m pleased that you’ve acknowledged the existence of data that proves my point. Since we agree about the data, you should objectively agree with my point. The only reason you are upset at me is because of what it means for me to be right: apple still has some catching up to do at the high end. You’d rather be in denial than face truth that is self-evident from the very data you’ve cited.
Apple is doing very well and making progress every generation. But you want to pretend that they've won every battle when they haven't, at least not yet. This is the truth, and everyone who looks at benchmarks will know it. We both know I'm right, but you're having trouble admitting it. Oh well, it is what it is.
Yawn.
I was talking about SoCs. Neither AMD nor Intel has an SoC that matches Apple's premium-tier SoC right now at that power envelope. End of story.
The only reason you think this was open for discussion is your usual mental gymnastics and the tangential crap you come up with to win a debate that is only going on in your head, mate.
javiercero1,
You keep repeating this, but no it’s not the end of story at all, not unless you choose to be in denial.
If you're a professional in need of a workstation to handle massive compile jobs or massive vector processing, do you opt for an M1 Pro or Max with lower CPU and/or GPU performance simply because it's an SoC? No, virtually no consumers will be thinking about that. In actuality they may look at things like performance, battery life, build quality, platform, cost, maybe even prestige, etc.
They may end up choosing an Apple computer. But let's get real: Apple users never gave a crap about sticking with SoC CPUs before now. The only reason you're obsessing over it is that you're looking for any arbitrary reason to reject comparisons to more powerful workstations, not because it's logical for consumers to do so.
Boomer,
What are you even going on about? I keep repeating the point because you're having one of your senior moments and you keep getting lost in an argument that only exists in your head.
This article is about Apple releasing new laptops and the SoCs powering them. I simply said there's no x86 SoC equivalent in performance to the M1 Max right now.
Apparently you lack basic comprehension skills like context and scope. JFC
javiercero1
…and another ad hominem attack.
Not at all. When new CPUs come out, it's completely normal and standard practice to see where they sit in relation to the rest of the industry; to suggest otherwise is silly. You just want an exemption from critical thinking in order to justify statements like "Neither Intel nor AMD have anything that can go toe to toe with the M1 Max for at least another year" without having to concede that there are already Intel and AMD machines that can go toe to toe with the M1 Max based on benchmarks, and your own data proves it.
The M1 Max is very impressive without salesman gimmicks and exaggerations, so why are you so adamant about using them, as though the M1 needs a security blanket every time someone points out that it is not the best at everything? Let me ask you straight up: is there ever a scenario in which a consumer is better off going to Apple's competitors for higher specs? If you answer "no", that speaks to your bias. If you answer "yes", even in the slightest, then that concession is good enough for me.
Calling you a boomer is not an ad hominem, but a statement of fact.
You’re from that generation where at some point you were told that the customer is always right, and you took it to heart to mean you’re omniscient.
There’s no x86 SoC that has higher CPU/GPU performance than Apple’s M1 Max right now. Period.
That there are higher-performing discrete desktop CPUs or GPUs is irrelevant to the context of SoCs. Just as it would be irrelevant to talk about a 10-ton megawatt supercomputer when discussing a laptop's SoC.
You seem to be projecting some insecurity that’s just bizarre.
javiercero1,
Well, your facts are wrong, and the way you're trying to use them as an insult is an ad hominem.
We've already established that everyone's got doohickeys in different places. Great, but now it's time to move ahead and ask how well they perform. It's obvious the reason you're so apprehensive about moving forward is that you know the benchmarks don't favor SoC iGPUs, including the M1's. You had no problem comparing the M1 Max to discrete cards when the benchmarks are favorable, but when they aren't, suddenly we can't…well, that's hypocritical BS.
What happens when apple themselves decide their SoC is too thermally constrained to achieve better performance and they need discrete chips as well? I’m sure you’ll be flip flopping everywhere in order to say that SoC doesn’t matter after all.
The problem is that you have a preconceived opinion about who the winner is and the facts don’t matter because you selectively cherry pick criteria and data suitable for you while dismissing everything else. Your thought process follows that of a flat earther; I am right to call out your lack of objectivity.
Great, that concession is all I’m looking for. Things might change in the future, but for now we agree that some users will get better performance elsewhere, which is what I’ve been saying.
And I keep telling you that what you have been saying is irrelevant to the topic I was talking about: neither AMD nor Intel currently has an SoC that matches the CPU/GPU performance of Apple's M1 Max.
I keep repeating it because it's just bizarre how you always create these strawmen in your head so you can be right about something that wasn't the matter at hand.
javiercero1 ,
Great, as long as you concede that discrete solutions can offer better performance, which you already have, that’s good enough for me.
Now, if you want to argue that better performing solutions are irrelevant, well I think that’s silly and highlights your own bias, but hey you do you.
I keep reminding you about the topic at hand, and you keep interjecting that bizarre debate that only exists in your head.
This is a post about an Apple laptop and the SoCs powering it. Instead you might as well be talking about the pricing of frozen orange juice futures.
It’s always the same negging tangential nonsense with some of you old farts.
If it's an article about something that uses the cloud, y'all have to talk about the desktop.
If it’s something about Windows, y’all babble on about Linux.
If it’s about Android, y’all go off on Windows Phone or Palm.
If it's about Linux, then it's a good time to talk about BeOS.
If it’s about powerful local compute, then everything now must be done on the cloud.
If it’s about a laptop and a mobile SoC, let’s talk about high end desktops and discrete components.
Some of you are the geek equivalent of an energy vampire. LOL.
javiercero1,
No, you only say that because my point is an inconvenient truth. It's not just in my head, either; obviously some people really will want more performance than Apple's SoC is able to deliver. Heck, I've already posted a link about an M1 user who wished he could use an eGPU to regain lost performance. It's likely going to be the same with the M1 Max, given what the specs appear to be. Since Apple doesn't support eGPUs with the M1, it may not be the best option for pro users who want to be able to upgrade GPU performance.
Anyways if you want to argue that people don’t want more power, I don’t really care about that. People will go with whatever they want and whatever they choose is not my concern. My point was only that there are workstations with more power particularly with discrete GPUs for those who want/need it, and we’ve already settled that as true. So do you have anything else to talk about other than throwing regurgitated insults my way?
Boomer, I didn’t say anything about people not wanting more power. You keep going on tangential discussions that only exist in your head so you can be correct about stuff nobody is talking about except you.
javiercero1,
Every one of your ad hominem attacks degrades osnews, just stop it man.
So that’s settled, you don’t disagree with me that some professionals will want more than an M1 max like I’ve been saying. So it seems to me that you’ve got nothing more to disagree with me about.
Your continuous need to introduce inane, unrelated arguments to cater to debates that only exist in your head degrades the quality of this site infinitely more than me referring to your generation.
This news item is about laptops using a specific family of SoCs. The fact is that neither AMD nor Intel has a mobile SoC that matches the CPU/GPU performance of Apple's premium-tier mobile SoC. Furthermore, that SoC also manages to match the performance of other mobile solutions that use discrete components, at a lower power envelope. All of that is factual information, which helps establish the context of the product: where it sits on the performance scale of competing products, why it is disruptive, etc.
That there are some users out there who require more power than this SoC can offer is just a subjective, qualitative argument that adds nothing to the debate, because it applies to all mobile SoCs regardless of vendor. In fact, it literally applies to any product.
javiercero1,
Nope, sorry, that's not how it works. You can cry about the comparisons being unrelated all day long, but that does not make them unrelated, and I can assure you many people for whom performance matters will be making those comparisons as well. Not only is it valid to do, it's actually necessary in order to be informed.
In what world will anything else be just as expensive!? The cheapest 14-inch MacBook Pro is 33% (EUR 550) more expensive than a 16 GB Dell XPS 15. Sure, this would change if I had to try to match the specific specs, but I am comparing two laptops where design and build quality can at least be compared, and I think those base specs are just fine for most people.
The notch covers exactly the screen area used by date/clock and notification area in GNOME. Not very usable :/
Not like you’ll be using GNOME on one any time soon.
It may be sooner than you think for bare metal Linux, and right now you can run virtualized Linux full screen on any M1 Mac via Parallels, and when in full screen mode it will definitely be an issue. There are GNOME extensions that can move the clock and notification area of course.
Or you could use something sane, like XFCE; no plugin needed to move clocks, notification areas and so on.
Agreed, I’ve never been a fan of GNOME3.
Or you configure parallels to not expose the top part of the screen to the virtual machine, and instead have the screen area either side of the camera black.
That way you get exactly what you had before – a rectangular screen at a high resolution, and a black bezel at the top of the screen with a camera in the middle of it. Your GNOME clock will sit under the camera.
Don’t see the notch as “taking away part of the screen”, see it as “giving you extra space either side of the camera that would previously have been an unusable bezel area”. Worst case – make that part of the screen black and ignore it.
I don't know; people already have Linux booting on M1 Macs. If you want to, you should be able to in the future, since Apple isn't going to get rid of this design for at least a few years.
There are extensions to move the clock to the right.
This makes me even more curious and excited about the M2. This isn't their next-gen CPU; it's basically an M1 with more cores and memory bandwidth.
At least we can kind of see the strategy Apple is going to be using with their high-end SoCs.
So the M2 will presumably use the same binning to create the tiers, probably on 3nm (or whatever TSMC's next node turns out to be).
They already have a 1 year lead on anything AMD/Intel can put out. So I don’t see that changing.
I expect their Mac Pro to remain Intel for at least a couple of years. Perhaps by then the M2 will scale up to 20+ cores.
A great piece from Steve Sinofsky (of Windows 7 / Office fame) that I thought was quite insightful:
https://medium.learningbyshipping.com/apples-long-journey-to-the-m1-pro-chip-250309905358?gi=e2284a7fb78b
Different people have different needs. For me personally such an Apple device is a no-go, because I can't install GNU/Linux on it and leverage the hardware to its full potential for tasks that rely on CPU/GPU parallelization. But I like to see progress, as the market will adapt and there will be more ARM-based solutions available for GNU/Linux in the future. Still, I doubt we will move away from the current situation regarding PC architectures anytime soon. Apple can afford this up to a point due to the way they do business. It will likely be hard even for them, in the long run, to stay ahead of the competition.
Finally, Apple makes something compelling. It seems Apple is making all the right hardware moves by bringing ports back and ditching the Touch Bar. And their own SoCs are undeniably fantastic.
But I see nobody complaining about the eye-watering prices. 2,000+, and these are only starting prices! Almost certainly not enough RAM and storage for the professional; it will be closer to 3,000 when that's added in.
Am I the only one not having enough money? That's twice the amount I spend on a car! So still not for me.
Maybe a 2nd hand Mac Mini M1 in 2 years time when I can wipe MacOS for Linux??
Well, these are their pro models. These machines are not really that expensive if you’re making money with them. They’re a waste of money if all you’re doing is consuming content, there are far cheaper options like the 13 inch or Air models. Or just get an Ipad pro.
At $3K for the high end models, these machines are clearly not for the consumer market.
I can see the appeal for a graphics designer, publishing or CAD guy. But indeed even there: wouldn’t you buy a desktop? What do you recommend to buy if you’re a developer, you know IDE + some local compile jobs? Maybe a VM for server emulation? The 1500+ Air (after adding RAM and storage) or the 3000+ MBP?
I don't know what kind of development you do, but the 13-inch M1 MacBook Pro seems to be the sweet spot.
Well, I think for the scenario above (IDE+compile jobs+VM), a MBP is vastly overpriced at 3000 euro/pound/dollars
Similar performance for 1000 less with a nice Lenovo or Dell
Well, then the high end MBP is not for you.
I can see the need for some use cases, but in my org we now use cloud services for heavy graphics processing, for things like 3D capture and imaging used in our field. There's really not a need for more power at the workstation level. Even older hardware on the x86 side is barely getting used by our users, even with all the bloat and software inefficiencies present. Apple is also expensive and doesn't support most solutions used in the automation and electrical engineering industries to begin with, so we don't have any deployed and that's not changing. I'm sure it will be useful for some, and for the brand-conscious who have a lot of money to spend.
Personally I haven't had an Apple product since a Mac clone I had as a kid. The continued decision to avoid Apple only gets easier knowing the ethics of supporting a giant company like Apple today. I feel better spending money with Microsoft, which is crazy thinking back to years ago and how far things have changed.
Agreed, my organisation is the same; the cloud is the trend for heavy lifting and nobody is going backwards. Having a supercomputer in your armpit has become a bit passé!
I can understand that there is a push from some legacy software providers to make traditional desktop software perform like the cloud, but it’s a pointless race because no matter what you do on the mobility side the cloud just improves at the same or faster rate. The very fact that traditional desktop suites are moving to the cloud should be enough of a tell!
The security argument doesn't cut it either, because the people suffering violations are unlikely to maintain a personal device with sensible updates and settings anyway. They slink around like a spy with the device cloaked under a jacket, but it's useless unless they are invested in the technology enough to learn to protect it; spending more for faster and better is a smokescreen of security.
So I'm a bit with @Alfman on the concept of a private HPC client/server setup. With the increasing ubiquity and reliability of high-speed broadband, it's becoming self-evident that a responsive, lightweight, all-day thin client connected to a private HPC is the way to go. The very rare network downtime makes little or no difference; it's not even a consideration anymore for 99.999% of users, because reliability is so high!
There is also some irony in a company that pushes a utilitarian, clean design approach shipping hardware basically designed to make its bling zip. They are going to render a 1mm continuous-tone semi-transparent bevel on my frames at 450 dpi come hell or high water! Your requirement to 3D-render the next heart-repair gadget was never a consideration, but it is a side effect. It's just a fundamental contradiction that is never questioned by the devotees!
Good luck trying to edit a multi gigabyte 8K pro-res project on “the cloud.”
There have been solutions for doing just this for many years now, most of them developed for medical or scientific imaging that routinely deals with TB-scale data, and the next generation of such systems is being readied for PB-scale image data from facilities like the SKA or LSST. Pixel arrays far beyond a little bit of video!
They don't serve the raw image data in its totality; you only have to serve the data you can see on your local display.
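That's essentially how tiled/pyramid image servers (the deep-zoom style viewers used in medical and astronomy imaging) work: the client only requests the tiles that intersect its viewport at the current zoom level. A minimal sketch of the idea; the tile size and function name here are illustrative, not any particular product's API:

```python
# Minimal sketch of viewport-driven tile selection, the idea behind
# deep-zoom / pyramid image servers. Tile size and names are illustrative.
TILE = 256  # pixels per tile edge at a given pyramid level

def tiles_for_viewport(x, y, width, height):
    """Return (col, row) indices of the tiles covering a pixel viewport."""
    first_col, first_row = x // TILE, y // TILE
    last_col = (x + width - 1) // TILE
    last_row = (y + height - 1) // TILE
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 4K viewport into a 100-megapixel (10,000 x 10,000) source image only
# needs about 135 tiles (~8.8 MP) from the server, not the whole frame.
print(len(tiles_for_viewport(2048, 1024, 3840, 2160)))  # 135
```

The server-side pyramid does the heavy lifting; the client just stitches a screenful of tiles.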
Nobody is doing video editing on the cloud. It would take longer to upload it to the cloud than to process it locally.
For SaaS use cases, or apps where you generate most of the data in the cloud, sure, it makes sense to process it there.
But there's still a need for powerful clients, especially for stuff like video production and 3D modeling/content creation. That's why there are still mobile workstations, and Nvidia makes a pretty penny with Quadro in the pro market. These are just the Apple equivalent for that sector of their client ecosystem.
Yep, thankfully my org is located in a city that has amazing municipal fiber internet. The speed is far beyond what you can get from the private sector options, so the data transfer time is reasonable, and the work is then done with a cloud editor. Our use case is probably different from content creators', as it's 3D imaging based on point clouds created with laser scanning devices. That said, even if we wanted to do it locally, or when we use Autodesk Revit, Apple is not supported by these vendors.
We do use Apple mobile devices for some users, as some find it next to impossible to operate a mobile device they are not familiar with.
kepta,
Yeah, it is remarkable how much of a gap there is in internet service between areas where there's competition and areas where there isn't. Of course, some rural areas in the US are under-served with no broadband at all. If they're lucky they can use mobile data, but even the "unlimited plans" are quite limited there. My parents had no choice but to use mobile service for internet. Despite the unlimited plan, they faced severe throttling and paid an extra $10/6GB/month for tethering. Even unwanted video ads can cost real money.
At least things aren't that bad for me, but our provider is a monopoly and it's $80/mo for 60 Mbps service. There are faster packages, but they come at a price.
Any other laptop will be just as expensive? Ie. $1999 or more? What absolute crap. You can get laptops for a couple of hundred dollars. Thom should give up and let someone competent run the site.
It’s pretty obvious he meant any laptop in the same class as the Apple devices. That $200 laptop isn’t going to come anywhere near the performance, battery life, screen quality, and light weight of even the least expensive Apple laptop.
My bad, you’re just trolling. Carry on.
Cloud or client-server, nobody I know doing heavy image or video editing does it on a single powerful desktop or workstation; it's an absurdity. They nearly all use a client-server model, cloud or otherwise, built on HPC. 8K is somewhat trivial compared to industrial and scientific requirements; I have associates making what some might describe as 12K redundant: imaging systems with 100-megapixel sensors that capture 10um-resolution data (2540 DPI) in proprietary image formats that can be a stack of single-wavelength frames or calibration frames 4, 5, or up to 9 layers deep, which can then be tessellated to form a larger image, animation or movie. All done client-server with the client anywhere on the planet; you could do your work on any OS, even a Chromebook sitting in a shack in Lapland, given a broadband connection. Nobody in their right mind is going to do this stuff on a powerful workstation or MacBook Pro; the suggestion that they need to is an absurdity!
Tons of video editing is done using FCP on the mac.
Little Jimmy’s 30s TikTok video or any other Final Cut Prosumer content is not really what Pro is meant to mean!
For years I was a Mac Pro advocate; one of my side gigs was managing a room full of professional operators and engineers producing reference-grade scientific and industrial content. Apple basically discarded that sector for more than a decade, in fact almost two decades, then tried to win us back a few years ago with a new offering. We also had a fleet of MacBook Pros we used in meeting rooms for review and approval meetings, with content delivered off a local NAS. Back then, way back then, the content was already way too big for a laptop to serve, and nothing has changed.
But it’s all too late for Apple, nothing they can offer in a single workstation is going to displace the rack of Linux servers that now deliver massively parallel processing performance at a fraction of the price.
A MacBook Pro type laptop followed around by a suitcase-sized external drive is useless to professionals and not much more than a rich kid's toy, like a fancy watch or a nice fountain pen!
Your word salads are fascinating; please tell me more about all those Linux racks running FCP/Premiere/Media Composer…
There is a world outside of the Apple store!