Intel is expected to release on Monday development tools designed to help programmers at software companies take advantage of the added computing power available on multicore systems.
The article doesn’t cover what TBB is, how it works, or what it looks like, but you can find that here:
http://www3.intel.com/cd/software/products/asmo-na/eng/perflib/buil…
Reminds me of Java “synchronized”. Or maybe sort of like a Boost-style threads lib, but with profiling tools so one wouldn’t need other instrumentation utilities to monitor MT perf issues. Very cool.
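To give a rough idea of what it looks like in code: below is a minimal sketch of a TBB-style parallel loop (a body functor applied over a blocked_range), written from the documented parallel_for interface rather than tested against the shipped headers, so treat the details as approximate.

    #include "tbb/task_scheduler_init.h"
    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range.h"

    // Body object: TBB hands it sub-ranges of the iteration space on different threads.
    struct Scale {
        float* a;
        float factor;
        void operator()(const tbb::blocked_range<size_t>& r) const {
            for (size_t i = r.begin(); i != r.end(); ++i)
                a[i] *= factor;
        }
    };

    int main() {
        tbb::task_scheduler_init init;   // starts the worker thread pool
        const size_t N = 1000000;
        float* data = new float[N];
        for (size_t i = 0; i < N; ++i) data[i] = float(i);

        Scale body;
        body.a = data;
        body.factor = 2.0f;

        // TBB splits [0, N) into chunks (grain size 1000 here) and load-balances them across cores.
        tbb::parallel_for(tbb::blocked_range<size_t>(0, N, 1000), body);

        delete[] data;
        return 0;
    }

The interesting part is that you never create or join threads yourself; you describe the work and the range, and the library decides how to carve it up.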
I took a peek at the link and the supplied documentation at the Intel site. Anything easing the development of threaded software is welcome, as I have gone through this hell myself. It’s a pity it is not free or open-sourced to speed up the adoption rate. But the library seems enormously versatile and powerful if used in conjunction with a profiler, so it is worth its money for sure. I would like to see some of these things included and used in the standard C++ library in the future.
I wonder how many hacks people will layer onto C++ before they realize the language sucks for MT programming.
What do you mean by “layer onto C++”? The software mentioned is an ordinary library.
Because it’s not included in the already bloated standard, which means you have even less guarantee that it’s portable.
Of course, I suppose portability from Intel isn’t something you should really expect, and maybe not something you should even ask for in this case.
People already know that C/C++ suck for multithreaded programming, but everything else sucks in some different way.
Threaded programming gets a bad rap and it shouldn’t. It’s something you have to think about from day one, or _very_ carefully architect it in.
You, OBVIOUSLY, can’t just spawn threads “willy nilly” and expect things to work like you hope.
I think writing it into the language is probably the way to go. But for the languages that didn’t choose to do that, which hundreds of popular applications use and can’t migrate away from overnight, Intel is, apparently, attempting to provide a helpful tool.
It does? I don’t think so myself. Like everything else you do in C and C++, you need to think about what you’re doing, but writing threaded code in C or C++ is simple once you understand the concepts, and it’s not as if there is a lot to understand. I really don’t understand the big deal about using threads.
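For example, here is about as small as a correct multithreaded C++ program gets using plain pthreads: two threads bumping a shared counter under a mutex. Nothing from Intel’s library, just a sketch of the basic concept.

    #include <pthread.h>
    #include <stdio.h>

    // Shared counter protected by a mutex -- the whole "concept" in miniature.
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    static void* worker(void*) {
        for (int i = 0; i < 100000; ++i) {
            pthread_mutex_lock(&lock);    // only one thread touches the counter at a time
            ++counter;
            pthread_mutex_unlock(&lock);
        }
        return 0;
    }

    int main() {
        pthread_t t1, t2;
        pthread_create(&t1, 0, worker, 0);
        pthread_create(&t2, 0, worker, 0);
        pthread_join(t1, 0);
        pthread_join(t2, 0);
        printf("counter = %ld\n", counter);   // 200000 with the mutex; anything at all without it
        return 0;
    }

Drop the mutex and the result becomes nondeterministic, which is really all there is to understand: shared data needs protection.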
Hacking in C++, now that’s funny.
The provided building blocks are good and welcome but more high level constructs such as futures ( http://en.wikipedia.org/wiki/Future_%28programming%29 ) would be nice. Someone wrote about futures in C++ here ( http://resonanz.wordpress.com/2006/08/23/c-futures/ ). Looks interesting. Herb Sutter and the Concur project are also examining high level concurrency in C++.
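To illustrate the idea (not the Concur syntax, which I haven’t seen): a future is a handle to a result that is still being computed, and get() blocks only if the result isn’t ready yet. A toy hand-rolled version on top of pthreads might look roughly like this; it is purely a sketch, not how a real futures library would be written.

    #include <pthread.h>
    #include <stdio.h>

    // Toy future: start() launches the computation on another thread,
    // get() blocks until the result is available.
    class toy_future {
    public:
        toy_future() : done_(false), value_(0) {
            pthread_mutex_init(&lock_, 0);
            pthread_cond_init(&ready_, 0);
        }
        void start() { pthread_create(&worker_, 0, &toy_future::run, this); }
        long get() {
            pthread_mutex_lock(&lock_);
            while (!done_) pthread_cond_wait(&ready_, &lock_);
            pthread_mutex_unlock(&lock_);
            pthread_join(worker_, 0);
            return value_;
        }
    private:
        static void* run(void* self_) {
            toy_future* self = static_cast<toy_future*>(self_);
            long sum = 0;
            for (long i = 1; i <= 1000000; ++i) sum += i;   // the "expensive" work
            pthread_mutex_lock(&self->lock_);
            self->value_ = sum;
            self->done_ = true;
            pthread_cond_signal(&self->ready_);
            pthread_mutex_unlock(&self->lock_);
            return 0;
        }
        pthread_mutex_t lock_;
        pthread_cond_t  ready_;
        pthread_t       worker_;
        bool            done_;
        long            value_;
    };

    int main() {
        toy_future f;
        f.start();                  // computation runs concurrently...
        /* ...do other useful work here... */
        printf("%ld\n", f.get());   // ...and we only block here if it isn't finished yet
        return 0;
    }

The attraction of a proper futures library is that all of this plumbing (mutex, condition variable, join) disappears behind one call that returns the handle.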
While we are on performance, what is this I hear about the FB-DIMMs used in the Woodcrest setup that Apple has supposedly killing performance? Apologies for the rather off-topic post, but I thought that since we were talking about performance, maybe it should be mentioned. I don’t have much info about Woodcrest, but surely not all systems use FB-DIMMs?
-1 for off-topic
Might be interesting, but there are enough Apple stories to post this comment to.
Multithreaded programming is IMO too interesting to be hijacked by your comment.
Sorry if I seem rude, but multicore is the present and the future, and I’m “waiting” for a technical discussion to appear in the comments section because I know next to nothing about it.
Maybe what also needs to be done is to encourage people to use VM languages which already have multi-CPU support inherently, such as the JVM or CLI.
I remember the good old days when I was doing multi-threading using Allegro in C. There was no protection scheme, nothing to screw your machine up. If something went wrong it was because your workflow was badly setuped. Nowadays there is so much restriction. People tend to get stuppier and programming language have to become stricter and there is need for protection everywhere because people do more error.
s/setuped/setup
s/stuppier/more stupid
s/language/languages
s/need/a need
s/do more error/make more errors
and down with Joy?
How much has Intel contributed to making GCC better, compared to Intel CC?
“How much has Intel contributed to making GCC better, compared to Intel CC?”
Who cares; what’s the alternative? AMD and its refusal to open-source its video drivers.
Don’t give me the “oh, but patents” crap. There is NOTHING stopping them from, if not opening up the source of the drivers, at the very least providing the relevant specifications to allow the opensource community to implement the driver.
“There is NOTHING stopping them from, if not opening up the source of the drivers, at the very least providing the relevant specifications to allow the opensource community to implement the driver.”
Because some device drivers, by definition, may need to know a lot about the device they are abstracting. And maybe, just maybe, the manufacturer does not want you to know how the device works. Whether the OSS crowd wants to realize it or not, they are too marginal a group of customers (very vocal, that is for sure, but still marginal) to warrant the effort and investment that may be needed to opensource some drivers.
I find these headlines funny, as if we should be surprised that we need to start building apps that use multiple threads if we want to take best advantage of new processors.
It was obvious for decades that we would eventually need to move to parallel processing, that we would reach a point where we just can’t make single processors deal with instructions any faster. Deeper pipelines and multiple execution units have led us to increasingly complex processors; we shouldn’t be too surprised that we’ve reached a point where this approach can’t really yield any more results. Indeed, the costs of that approach have been increasingly complex chip designs, lower yields, growing performance penalties for branch mispredictions, smaller performance gains, and more expensive chips.
I’d imagine that once heavily threaded software is the norm, smaller, simpler processing cores with short pipelines and single execution units will make a comeback. These simpler cores should be able to run at a significantly faster clock speed than the current monsters, since the problems of synchronisation across a large core will be greatly reduced. You’d also be able to pack more cores onto a chip if they’re smaller and simpler.