“Codenamed Kentsfield (Core 2) and Clovertown (Xeon), Intel’s new quad-core processors will dramatically increase the amount of processing power you can have in a single system. Given that the Mac Pro features two LGA-771 sockets, you could theoretically drop two Clovertown processors in there and you’d have an 8-core Mac Pro. Without a doubt Apple will release a quad-core version of the Mac Pro, either by the end of this year or early next year, but are users who buy the Mac Pro today missing out? While we’re still a couple of months away from being able to test a retail Clovertown CPU in the Mac Pro, we wanted to see if the current engineering samples of the chip would work.”
Not every app takes advantage of the extra cores, but in properly multithreaded apps, the difference is HUGE:
http://anandtech.com/mac/showdoc.aspx?i=2832&p=11
Who out there owns a top-of-the-line 4-core Mac Pro right now and thinks they need a faster system right now?
> Who out there owns a top-of-the-line 4-core Mac Pro right now and thinks they need a faster system right now?
Ask the same question in a year and you’re likely to get a different answer from many people. It’s a truism in computing that applications will continue to consume all available resources over time. Sophisticated users of video editing, 3D modeling & rendering, and other CPU-intensive apps will jump on this capability as soon as it’s available.
True. What’s funny, too, is that multi-core processors are all the rage, and part of what’s making them possible is increased energy efficiency. Just as software developers “adjust” their programs to the performance level available to them over time, we’ve now made processors more energy- and heat-efficient so we can stuff more of them into one box. We’re going to land ourselves in the same spot: machines that perform only “OK” because software has caught up to the performance level, and systems that produce too much heat and use too much power.
Anyone doing a lot of 3D rendering or movie production (which is essentially rendering and converting raw images into compressed ones) will enjoy the benefits of the extra cores.
The colloquial term for the systems that process such things today is “render farms”, implying dozens to hundreds of machines all running in parallel.
The whole multi-core bandwagon just gives developers more incentive to multi-thread their applications.
Even something like a modern IDE can take advantage of multiple cores. One thread for handling keyboard input, one handling syntax formatting, one handling autocompletion, one compiling your code in the background, etc.
As we get more and more machines with multiple cores, those threads can become more and more powerful with less chance of stalling the primary editing thread of the IDE.
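To make that concrete, here is a minimal pthreads sketch of the idea; the “syntax check” job, its timing, and the echo loop are all made up for illustration, not taken from any real IDE. The point is simply that the expensive work lands on another thread, and on a multi-core box potentially another core, so the thread handling input never stalls.

```c
/* A minimal sketch (hypothetical IDE): the main thread keeps echoing input
 * while a worker thread runs a slow background job, a stand-in for syntax
 * checking or a background compile. Build with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical expensive background task (e.g. re-parsing the buffer). */
static void *background_syntax_check(void *arg)
{
    (void)arg;
    sleep(2);                                   /* pretend this is real work */
    printf("\n[worker] syntax check finished\n");
    return NULL;
}

int main(void)
{
    pthread_t worker;
    char line[256];

    /* Kick off the expensive job; on a multi-core box it can run on another core. */
    pthread_create(&worker, NULL, background_syntax_check, NULL);

    /* The "editing" thread stays responsive the whole time. */
    printf("type something (Ctrl-D to quit):\n");
    while (fgets(line, sizeof line, stdin) != NULL)
        printf("[editor] echoed: %s", line);

    pthread_join(worker, NULL);
    return 0;
}
```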
I agree with the sentiment: who needs a machine with 8 cores? Who needs a machine with 16GB of RAM? Who needs a machine with 2TB of disk space?
But, for me, that’s the beauty of a machine like the Mac Pro. CPUs are not getting dramatically faster; they’ve plateaued, and now we’re addressing performance with multiple cores. I, personally, do not see machines getting dramatically more powerful in the next 5 years compared to what we’ve seen happen in the last 5 years.
And I can easily see a modern Mac Pro, with its insane expandability, seriously being a contender as the “last machine I’d ever need”, save for a laptop.
Add in that I can double the CPU performance with 4 more cores in a year, and it’s even more insanely expandable.
I see glass ceilings on most other machines, particularly consumer ones: 2GB on iMacs (3GB on the 24″), etc.
I can see getting a better video card (or two) in the future, the only place modern consumer machines are actually readily advancing, but other than that, it’s hard to imagine a machine for Human Beings getting dramatically better than this one for a long time to come.
> Even something like a modern IDE can take advantage of multiple cores. One thread for handling keyboard input, one handling syntax formatting, one handling autocompletion, one compiling your code in the background, etc.
And ten extra developers per thread to debug the new, hugely complicated app that has problems that no one can seem to figure out.
Get ready for the buggiest software in computing history, courtesy of Intel’s and AMD’s need for a new marketing strategy to sell us new processors.
Personally, I think that while you can come up with places where multicore can help desktop performance, most desktop apps don’t really lend themselves to multi-threading. You’re going to end up with one thread doing the work, and all the other (7?) threads just doing a bit here and there that the user will never notice… until the app crashes due to some synchronization problem.
What I really don’t understand is why people here, who should know better, are seemingly just eating this multi-core marketing propaganda up, uncritically.
Most desktop apps don’t even use up a single core at full speed, so there is no reason to make them multithreaded. The only apps that should be are those that are easily parallelized and take huge amounts of computing power, like rendering, etc., and apps that have intensive tasks that can block the UI for a while. These are often I/O-bound activities, since a lot can be done very quickly in the CPU.
Sure, you could make Notepad multithreaded, but what is the point? This way you can run 8 separate apps and they each get their own core, instead of running 8 apps with 100 threads all vying for attention on 8 cores.
You’re thinking too much like an engineer and too little like a sales guy. While your point is correct, I can see many companies adding multithreading to their apps simply so they can brag about it in their sales literature. Sure it will add bugs, but it will also let you sell more upgrades to people who don’t really understand the pros and cons of multithreading.
> Most desktop apps don’t even use up a single core at full speed, so there is no reason to make them multithreaded. The only apps that should be are those that are easily parallelized and take huge amounts of computing power, like rendering, etc., and apps that have intensive tasks that can block the UI for a while. These are often I/O-bound activities, since a lot can be done very quickly in the CPU.
Oh, au contraire.
If the routine that you need to call is, essentially, more expensive than the creation of the thread, and can run independently (i.e. it doesn’t have to stall while waiting on some other component), then why the heck not fork a thread and run it in parallel?
There’s obviously some reasonable level of granularity where this will occur, but with multi-core machines becoming more common, multi-threaded operations gain just that much more over not using them.
Because now, even for small tasks, you get the benefit of code running simultaneously, and your code performance scales far better than on a single CPU.
That means, in this example, IDEs can now be more responsive. There’s no reason that you have to only thread CPU intensive tasks. If I have a single meta task that requires, say, 4 independent steps, and those steps are expensive enough to warrant threading, then I’m going to get those 4 steps done twice as fast on a 2 core machine, and 4 times as fast on a 4 core machine. Regardless of clock rate.
Certainly multi-threaded programming is more difficult than single-threaded programming. Threads have their own overhead, which boils all the way down into the kernel’s scheduler. You have to deal with synchronization issues, etc.
But, before, when the predominant machine was a single core design, the only reason to create a thread was for the potential abstraction benefits they provide, or simply to get “background” processing.
Now with multi-core machines becoming generally available and common, there is a pure performance motivation above and beyond the other benefits of threads.
Multi-core machines will make everyday programs perform better in the future as developers begin to adopt and leverage the architecture more and more.
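To put that “4 independent steps” example into code, here is a rough pthreads sketch; the step count and the busy-work loop are made up purely for illustration:

```c
/* Rough sketch of the "4 independent steps" argument: each step is expensive
 * enough to be worth a thread, depends on nothing else, and is joined at the
 * end. With 4 cores the steps can genuinely run at the same time.
 * Build with: cc -pthread steps.c */
#include <pthread.h>
#include <stdio.h>

#define STEPS 4

/* Made-up CPU-bound busy work standing in for one independent step. */
static void *do_step(void *arg)
{
    long id = (long)arg;
    volatile double x = 0.0;
    for (long i = 0; i < 100000000L; i++)
        x += (double)i * 0.5;
    printf("step %ld done\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[STEPS];

    /* Fork one thread per independent step... */
    for (long i = 0; i < STEPS; i++)
        pthread_create(&threads[i], NULL, do_step, (void *)i);

    /* ...and wait for all of them before continuing with the meta task. */
    for (int i = 0; i < STEPS; i++)
        pthread_join(threads[i], NULL);

    puts("all steps finished");
    return 0;
}
```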
To answer your question very specifically, users of Newtek’s Lightwave 3D. When you go into the render options, you can select the number of threads created to do the render work. I believe the max is 16.
Although you don’t need to set this value in a 1-to-1 ratio with the number of cores your system has, you’ll obviously only see the benefit with more cores.
For a lot of amateur animators at home, this would be more feasible than a render farm.
I’d say it’s a good thing that Apple has gone with the 771 standard. It makes me more inclined to buy one in the future, as I can upgrade it as and when I need to.
The system would be a great server!
Yup, running Combustion would be nice on one of those. Or the next generation of Shake, when it comes out to compete with Autodesk’s high-end tools.
Hi,
I just wonder how well Mac OS X supports 8 processors. The maximum number of processors Mac OS X has had to support so far is 4, and we’ve seen how much effort the Linux guys had to put into their OS (the transition from kernel 2.2 to 2.4) to make it scale and remove the locks that prevent all processors from being used in parallel at the same time. Sure, Mac OS X is capable of using all 8 cores and probably more; the question is how efficient it is.
Regards,
Anton
From the article, right on the front page: it uses them just fine. There’s even a graphic for those disinclined, or without enough time, to read.
The 4-processor limit is not the maximum OS X can leverage. It is the ceiling imposed by Apple to meet their design criteria.
When they revise the design requirements, the CPU limit will go up.
The real question, I think, is whether daily applications (web browser, antivirus, email, text editor…) will really get a boost from multicore.
These kinds of applications usually already use many threads…
In Word or OpenOffice… the syntax check runs while you write…
I don’t know, but I would love to see the machine able to always set a core aside for “emergencies”, when an app locks up and eats CPU, so the system is always responsive.
I don’t know if it’s possible to do that (I don’t own a dual-core machine, so I don’t know if they generally do that), but to me this is one of those “holy grails”: a computer that always responds no matter what you throw at it.
Sure, there is also disk swapping and waiting for other devices behind bottlenecks, but this would be great.
If that is your daily use, then no.
My daily use is compiling and debugging large software projects, and compiling on 8 cores vs compiling on 4 cores will make a huge difference. In fact, I’m already using Xcode’s distributed builds to use my coworker’s 4-core Mac Pro for compiling.
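If it helps picture why builds scale so well, here is a bare-bones sketch of the skeleton behind any “make -j”-style parallel build: independent compile jobs launched as separate processes and waited on before linking. The file names are hypothetical, and this is not how Xcode’s distributed builds actually work.

```c
/* Sketch of a parallel build driver: independent compile jobs run as separate
 * processes at the same time, and the driver waits for all of them before it
 * would link. File names are hypothetical. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NJOBS 4

int main(void)
{
    const char *jobs[NJOBS] = {      /* hypothetical translation units */
        "cc -c parser.c -o parser.o",
        "cc -c lexer.c  -o lexer.o",
        "cc -c ast.c    -o ast.o",
        "cc -c main.c   -o main.o",
    };
    pid_t pids[NJOBS];

    /* Launch every job at once; the scheduler spreads them across the cores. */
    for (int i = 0; i < NJOBS; i++) {
        pids[i] = fork();
        if (pids[i] == 0) {
            execl("/bin/sh", "sh", "-c", jobs[i], (char *)NULL);
            _exit(127);              /* only reached if exec failed */
        }
    }

    /* Wait for all compile jobs, the way make does before linking. */
    for (int i = 0; i < NJOBS; i++)
        waitpid(pids[i], NULL, 0);

    puts("all compile jobs finished");
    return 0;
}
```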
It’s not really a Joe Sixpack daily use, though.
Joe Sixpack won’t be buying a Mac Pro and swapping out CPUs by hand either.
> In fact, I’m already using Xcode’s distributed builds to use my coworker’s 4-core Mac Pro for compiling.
Does he know??
I don’t see quads becoming too popular on desktops. There are diminishing returns as complexity increases. Quad SLI is sort of an example: even ignoring how it brings nothing unless you run at ludicrous resolutions, the added complexity gave nVidia a bit of trouble with the drivers. There’s the whole added heat thing too, just as we were moving away from Presler and its ilk.
Some apps will benefit, but for me on the desktop, I don’t see myself doing much more than encoding a video while playing a game. Also, most games will have enough fun just getting themselves to run on two cores.
Then again, with 4 cores, I could play a game that’s good at using two and still encode a video or two!
That would hardly be a typical usage pattern though. For most people, having 4 cores would simply mean fewer spyware processes per core.
3D content providers will love this, and they should see these chips in the Xserve first, where render farms would enjoy speeding up deliveries.
The Mac Pro could be more useful in Final Cut Studio, especially since Motion requires more power than other applications.
I suspect there will be some techno-geek who will have to have an 8 core system just for bragging rights, even if compiles won’t go much faster.
Imagine how fast you can receive e-mail.
First we had the Megahertz myth, now we seem to get the number-of-CPUs myth …
The only ones who really benefit from this are the companies that supply us with electricity …
That doesn’t even begin to cover it, but one doesn’t expect much from people who drop $2500 on what is basically less than $1200 in hardware, especially since that’s with what amounts to a $70 bargain-basement video card, a $70 bargain-basement SATA drive, and an optical drive that’s not as good as what I can get for $35… and that’s before you even tack monitors onto the deal.
I’m still trying to figure out how in blazes they can justify charging a full grand more than the hardware is worth. Is the Ikea-like cultish appearance REALLY worth that kind of money to people?
Wait, dumb question, never mind. P.T. Barnum was right: there’s a customer born every minute.