“It’s cool to say you have a dual-core machine, but in the grand scheme of things, having it doesn’t make much difference in performance. If, however, your software can share the computational burden across your collection of processor cores, it’s a different kettle of fish. Then you actually get more work done in a given amount of time. The technique is called thread-level parallelism, more commonly known as threading, and it requires some mental adjustment on the part of the developer.”
> You also have to make sure that the threads aren’t stomping on each other’s data
> (avoid global variables, Reinders says), and that communication between them
> is disciplined. In some ways, it’s like the old days of DOS programming; you have to
> pay careful attention that threads aren’t intruding on each other’s memory space, or
> if they share, that the appropriate locks and other checks are in place. Otherwise,
> you could set up race conditions, or create deadlocks if each thread thinks it
> is waiting for the other.
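To make the quote’s “appropriate locks” point concrete, here is a minimal sketch (not from the article; plain C++ with std::thread, purely for illustration): two threads bump a shared counter, and the lock around the increment is what keeps them from “stomping on each other’s data”. Drop the lock_guard and the unsynchronized increments become exactly the kind of race condition being warned about.

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    long counter = 0;
    std::mutex counter_mutex;

    auto work = [&]() {
        for (int i = 0; i < 1000000; ++i) {
            std::lock_guard<std::mutex> guard(counter_mutex);  // protect the shared counter
            ++counter;                                         // without the lock: data race
        }
    };

    std::thread t1(work);
    std::thread t2(work);
    t1.join();
    t2.join();

    std::cout << "counter = " << counter << '\n';  // always 2000000 with the lock in place
    return 0;
}
```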
At least the article tells us about the dangers of threading. However, relying on “disciplined communication” and nothing else will be a death blow for performance, not to mention the inevitable zoo of bugs introduced by overlooked race conditions. The article never mentions the key to high performance in threading, namely the required lack of data dependencies between the different threads.
– Morin
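A rough sketch of Morin’s point about data dependencies (again assumed C++, not anything the article shows): each thread sums its own half of a vector and writes only its own result, so there is nothing to lock and both cores can run flat out; the partial sums are combined only after the joins.

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(10000000, 1);
    long long sum_lo = 0, sum_hi = 0;   // each thread owns exactly one result slot

    auto mid = data.size() / 2;
    std::thread t1([&] { sum_lo = std::accumulate(data.begin(), data.begin() + mid, 0LL); });
    std::thread t2([&] { sum_hi = std::accumulate(data.begin() + mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    std::cout << "total = " << (sum_lo + sum_hi) << '\n';  // combined only after joining
    return 0;
}
```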
I thought it was an empty commercial stating the obvious. Use threads so the multiprocessor machine can run in parallel. Use inter-process communication so they don’t trash each other. Don’t use global data. Look both ways when crossing the street. Don’t run with scissors. Stay out of the rain.
You can easily avoid all problems mentioned by simply designing the program before you code.
How many people run two or more applications at the same time? There’s your benefit to dual-core.
I’m not saying multi-threaded apps are useless, but the blurb makes it out to seem like they’re a necessity.
But if you want to increase the performance of a *single* app, it will have to be more aggressively multi-threaded.
Agreed.
But I can still play Half-Life 2 and encode a DVD at the same time, even if both apps are single-threaded.
Agreed. Not sure I want all my apps taking advantage of multiple cores. One of the things I look forward to with dual core (I’m a little behind on upgrading) is the ability to do some processor-intensive task (like video encoding) and still have my other programs respond well.
I’d also love to have a movie piped to one monitor (or projector) for the rest of the family to watch while I do work on the other monitor, without worrying about causing skips in their show.
Granted, for batch jobs and stuff to be done while I’m not at the computer, apps using both cores would be cool, getting things done faster and all. I guess as long as encoding apps and the like give me an option whether or not to use multiple cores, I can have it both ways.
In the old days, before a computer’s memory (RAM) was virtual (a virtual address space on top of real memory), all software had to worry a lot about memory. Today it just works, mostly. A good kernel and programming language runtime (and reliable hardware) will let you (the user) mostly forget about “managing” your software (how much memory it needs, where in memory to live, how much CPU it needs, etc.).
Let’s hope for a new generation of systems that makes our lives easier. (Imagine that! In many ways everything’s gotten much more complex.) With good realtime properties, so we don’t have to worry about skips in music, ever.
Wow, I don’t know if it’s just me, but partitioning your software seems a little obvious, no? I think more on ways to split your data, and on different approaches to communication between threads, would have been beneficial. Most importantly, the differences between threading on a single CPU and on multiple cores: many people are used to threading on a single CPU, so the changes would be nice to hear about.
For example, on a single core you don’t want too many threads running at the same time; spinning up another thread only pays off when one is waiting or taking too much time. On multicore this is not always the case, since two or more threads really can run at the same time. The requirements are very different. That kind of talk would have been more beneficial, I think.
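To put that single-core vs. multicore difference in code (a hedged sketch in assumed C++, not something from the article): on a multicore box it can make sense to size the number of compute threads to the core count, whereas on a single core extra runnable threads mostly add switching overhead. Each worker below writes into its own slot, so no locks are needed.

```cpp
#include <iostream>
#include <thread>
#include <vector>

int main() {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;                 // the value is only a hint; fall back if unknown

    std::vector<long long> partial(cores, 0);  // one result slot per worker, nothing shared
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < cores; ++i) {
        workers.emplace_back([i, &partial] {
            for (int n = 0; n < 1000000; ++n)
                partial[i] += n % 7;           // stand-in for real per-core work
        });
    }
    for (auto& w : workers) w.join();

    long long total = 0;
    for (long long p : partial) total += p;
    std::cout << "workers: " << cores << ", total: " << total << '\n';
    return 0;
}
```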
Man, I think I need to take my brain in to a repair shop after reading some of that stuff! Talk about complicated!
It would be nice if in the future virtual machines could execute code in parallel automagically for us.
Threading is always good, especially in apps that do a lot of I/O reads and writes.
But I think this article kinda stated the obvious; nonetheless, it was a good read.
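A last sketch of that I/O point (assumed C++ again; “input.dat” is just a placeholder filename, not anything from the article): one thread blocks on a file read while the main thread keeps computing, so the waiting and the work overlap instead of running back to back.

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <thread>

int main() {
    std::string contents;
    std::thread reader([&contents] {
        std::ifstream in("input.dat");                      // placeholder file name
        contents.assign(std::istreambuf_iterator<char>(in),
                        std::istreambuf_iterator<char>());  // blocks on disk I/O
    });

    long long busy = 0;
    for (long i = 0; i < 50000000; ++i) busy += i % 3;      // computation overlapped with the read

    reader.join();
    std::cout << "read " << contents.size() << " bytes, busy = " << busy << '\n';
    return 0;
}
```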