“While 64-bit support is now considered common for both Intel and AMD processors, many Linux (as well as Windows) users are uncertain whether to use a 32-bit or 64-bit operating system, as there are advantages to both paths. With this being the last Phoronix article for 2006, we decided to take this opportunity to look at the common question of whether to use 32-bit or 64-bit software. In this article, we will be comparing i386 and x86_64 performance with Ubuntu 6.10 Edgy Eft and Ubuntu 7.04 Feisty Fawn Herd 1 to see how the numbers truly stack up.”
As the results of this article illustrate, system-level performance actually has little to do with raw CPU cycle speed.
Common bottlenecks such as front-side bus speed, memory access speed and I/O rates throttle throughput as much as, if not more than, CPU math processing. Once you have to go off-chip, all bets are off. For throughput, I’d take a honking big L1/L2/Ln cache over wider CPU datapaths.
That being said, the 64-bit model cleverly avoids the 3GB per-process VM limitation of the 32-bit world. Users needing large application spaces would do well to look to 64-bit machines.
…on my 64-bit Turion laptop is that it’s a bit of a pain to configure win32 video codecs + Flash to work (since they don’t have 64-bit versions). Sure, you can do it by setting up 32-bit chroots (and that in itself shows the power of Linux), but I didn’t get enough of a performance boost to justify the hassle, personally.
Indeed. I’ve got an Athlon64 and I’ve never used a 64-bit version. I’ve always given it a try whenever Ubuntu releases a new version of its OS, but the problems that the proprietary software brings to the table are just too much trouble for such a small return.
I recently switched from 32-bit to 64-bit Debian, and haven’t had many problems this time. At first I missed Flash for YouTube, then I discovered http://gwenole.beauchesne.info/en/projects/nspluginwrapper which lets you use 32-bit plugins in 64-bit Firefox. It’s a bit of a hassle to get it to work, but it does work. Quake 3 32-bit works as it is. What more do you need?
Most videos play fine without the win32 codec pack.
Ah, the sad nature of closed-source applications. And people wonder why I only use free software.
Yeah… there’s always an absolute guarantee that there would be a 64-bit version if it were open source.
If a program is open source and there is demand for a 64-bit version, then it is likely that a 64-bit port will be created.
“As the results of this article illustrate, system-level performance actually has little to do with raw CPU cycle speed. ”
How did you get that conclusion from the article??
It’s interesting to notice how Linux got slower between the 2 releases.
You mean Ubuntu, not Linux, right?
You should be aware that Feisty Fawn isn’t out yet. It’s still in pre-beta, which means that there are bound to be some performance issues.
Well, I do think he’s right. The biggest part of the performance difference is (I think) the kernel – 2.6.19 seems to be slower than 2.6.17. There are other possible explanations, but to me it seems the kernel is to blame. Compiling the kernel got >20s slower, and encoding an MP3 took a little longer. Of course, the compile could be due to a slower GCC, and the encoding because LAME got a bit slower. And the mem benchmark is faster, as is gzip. So it’s not decisive at all…
Zenja wrote:
It’s interesting to notice how Linux got slower between the 2 releases.
RTFA:
it is also worth noting that at this time there are no real speed increases between Edgy Eft and Feisty Fawn; however, this was only the first Alpha (Herd) release of many to come before the April 2007 release of Ubuntu 7.04.
If you had bothered to look at the performance graphs (those funny-looking pictures with bars and numbers), you would have seen that overall there was virtually no difference in performance. Some tests ran a little faster on Ubuntu 7.04 Feisty Fawn Herd 1 than Ubuntu 6.10 Edgy Eft and some a little slower.
I would appreciate it if you refrained from making ignorant and ill-informed comments of limited veracity.
It’s like waving a red flag to a bull
FTFA we get the following benchmarks (Edgy vs Feisty):
Unreal Tournament is slower: 24.46/24.30
GZIP is slower: 69.19/68.13
RAMSpeed is slower: 2840/2834 and 2818/2792
But hey, it’s more fun to insult the original poster than to actually look at the coloured graphs you’re referring to.
Yeah, there is a definite, although tiny, performance regression. Upvoting people who are wrong and downvoting people who are correct won’t change that.
However, Feisty isn’t released yet.
FTFA we get the following benchmarks (Edgy vs Feisty):
Unreal Tournament is slower: 24.46/24.30
That’s a 0.65% difference. I would call it “noise”. Do you agree?
GZIP is slower: 69.19/68.13
That’s a 1.5% difference. Again, I would call it “noise”.
RAMSpeed is slower: 2840/2834 and 2818/2792
That’s 0.21% and 0.92%.
So it’s hard for me to say whether this is just noise, an artifact of using a pre-beta version, or a release that really is getting 0.X% slower.
It’s interesting to notice how Linux got slower between the 2 releases.
Good thing Windows Vista performs just as well as Windows XP does on the same hardware.
Who cares.
Then Jimbo woke up.
Nice try though.
I think Jimbo was being sarcastic….benefit of the doubt???
“Good thing Windows Vista performs just as well as Windows XP does on the same hardware.”
Actually, you’re mixing things up here.
First of all, they’re comparing the same applications on different systems; what you’re comparing is two completely different systems (which have different applications). Then again, it depends.
For example, the graphics performance of Vista is better than XP’s on the same machine if you have a mid- to high-end card.
The desktop GUI at least feels faster than its XP counterpart, even when transparency is enabled.
Games also give higher FPS because of the optimizations done to the DirectX core (the new version uses better data paths, which result in less data being moved around).
But if you’re talking about general application performance, you may see an obvious decrease. This is because Vista uses more RAM at idle. (Yet it will probably run faster than XP if you happen to have >1GB of RAM.)
While you’re at it, try to look at: http://en.wikipedia.org/wiki/Features_new_to_Windows_Vista#System_p…
Oops, sorry… I shouldn’t have fed the troll…
Interesting… Seems like defending Windows on a Linux article gets you modded down. I shall be more careful then.
(I know, this one will probably become -1 too)
“Interesting… Seems like defending Windows on a Linux article gets you modded down. I shall be more careful then.
(I know, this one will probably become -1 too) ”
Although some people don’t like pro-Windows posts, they don’t mind people complaining about their moderating iron (penguin) hand. Same on Slashdot.
All this benchmark shows is that running 64-bit Linux doesn’t matter yet for Joe Sixpack, because most user-space software doesn’t utilise the bigger address space. You can’t generally conclude anything about Ubuntu or Linux itself, nor can you claim that speed has nothing to do with raw CPU power. If a test crunching numbers through some geo data set had been included, you would have seen the difference, though definitely not 2x.
I’m not surprised. Outside of specialist environments, the aforementioned memory boundaries and the wider registers don’t play a large role. OTOH, the benefits of SSE also apply to i386 software. Once software starts to utilize the advantages of 64-bit registers in larger numbers, we will see AMD64 software outperform i386 software, but this is probably still a good few years down the pipe.
When you move from 32 bit x86 to 64 bit x86, two things important to performance change:
1. Words become twice as large. When you have to swap registers out to memory, you have to move twice as much data. This makes the system slow down.
2. You now have twice as many general-purpose registers (16 instead of 8). On a register-starved architecture like x86, this means you have to spill registers to memory much less often, reducing demands on the bus and decreasing the number of instructions. This makes the system speed up.
Whether (2) counterbalances (1) enough to make your system actually faster depends a great deal on the program you are running, as these benchmarks show.
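A quick way to see point (1) for yourself is to build the same trivial program with gcc -m32 and -m64 (just a sketch, assuming you have the 32-bit toolchain installed):

    #include <stdio.h>

    int main(void)
    {
        /* Typically 4/4/8 when built with -m32, and 8/8/8 with -m64. */
        /* Pointers and longs double in width, so register spills and */
        /* pointer-heavy data structures move twice the bytes.        */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(double) = %zu\n", sizeof(double));
        return 0;
    }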
“1. Words become twice as large. When you have to swap registers out to memory, you have to move twice as much data. This makes the system slow down.”
The words are twice as wide but so are the data paths.
Net result is no difference.
Well, it does if you have to save them to main memory (context switches, for instance). Inside the CPU, as you say, it makes no difference.
I should also have noted in my original post that 64-bit architectures vastly speed up computational work involving 64-bit integers or very precise floating-point values. This is really noticeable in scientific applications (physics simulations, for instance).
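A minimal sketch of the integer case (a hypothetical example; the point is that a 64-bit multiply is a single instruction on x86_64 but a multi-instruction 32x32 sequence on i386):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t acc = 1;

        /* 64-bit multiply-accumulate: roughly one imul per iteration */
        /* when compiled as x86_64, but several 32-bit ops per        */
        /* iteration on i386. Time it with gcc -O2 -m32 vs -m64.      */
        for (uint64_t i = 1; i <= 100000000ULL; i++)
            acc = acc * 2862933555777941757ULL + i;

        printf("%llu\n", (unsigned long long)acc);
        return 0;
    }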
There should be no impact on floating-point operations, since the x86 FPU registers are 80 bits wide and are capable of handling the common 64-bit double-precision floating point with ease (the extra bits provide extended precision). This is why you rarely see any improvement in FP-heavy code when moving from x86 to x86-64.
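You can check this from C on either arch (a sketch; the storage size of long double differs between the two ABIs for alignment reasons, but the underlying x87 format is the same 80 bits):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* With x87 long double, LDBL_MANT_DIG is 64: the 80-bit      */
        /* format's significand. It is the same on i386 and x86_64,   */
        /* which is why recompiling as 64-bit doesn't buy any extra   */
        /* floating-point precision by itself.                        */
        printf("long double: %zu bytes, %d mantissa bits\n",
               sizeof(long double), LDBL_MANT_DIG);
        return 0;
    }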
The move from 32 bit to 64 bit doesn’t affect floating point.
Net result is no difference.
Not quite true. 64-bit programs use more memory than 32-bit ones. As such, less memory is available for the system to use, more bandwidth is used shuttling the program around in memory, and since less memory is available, fewer things can be cached, etc.
Actually, x86 chips have used larger datapaths under the hood for years. They will read/write-combine multiple 32-bit accesses into larger chunks (e.g. 64-bit), so that part is moot. The first post is right – on one hand you get more registers, on the other you need to throw more memory around if it contains pointers of any sort.
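To put a rough number on the pointer cost (a sketch; the exact sizes depend on the compiler’s padding, but the shape of it holds):

    #include <stdio.h>

    /* A typical linked-list node: mostly pointers.                   */
    /* On i386: 4 + 4 + 4 = 12 bytes. On x86_64: 8 + 8 + 4 plus 4     */
    /* bytes of padding = 24 bytes, so the same list eats twice the   */
    /* cache and memory bandwidth.                                    */
    struct node {
        struct node *next;
        char        *name;
        int          value;
    };

    int main(void)
    {
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }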
Absolutely.
Because everything that Ubuntu has is also available in Debian and SuSE and Fedora.
This doesn’t really apply to most Ubuntu users, but if you do a lot of compiling, it is beneficial to go with 64-bit. I know from personal experience that in some cases I see up to 30% faster compile times with 64-bit over the same installation in 32-bit on the same Athlon64 box.
So basically, the findings of the tests reveal no performance boost going from a 32-bit to a 64-bit OS. I would have liked to have seen Windows XP Pro and XP Pro x64 thrown in there for comparison. Still, the frame rates for Unreal were terrible on all versions. I’d attribute that to the low-end video card they used.
My Core 2 is running 32-bit simply because I’m fearful of the 64-bit drivers that I’ve heard had problems. I just like the fact that the 64-bit architecture allows for hardware DEP (the NX bit, or whatever the actual non-Microsoft term is).
That 64-bit can be faster has been proven, when software is properly written for it. Testing games that have not been written to support x86_64 is useless; Far Cry has an AMD64 patch that adds more content with no frame rate loss.
As soon as people start writing apps/games to take advantage of 64-bit, we will see the difference.
Theory is nice, but in reality most of the 32-bit vs 64-bit desktop benchmarks I have seen show 32-bit equal to or slightly better than 64-bit, with occasional areas like game performance where 32-bit is much faster because the 64-bit drivers are not mature yet.
Additionally, native 64-bit versions tend to use more RAM to store the same data.
There are many reasons to hold off on 64-bit for the desktop currently, and the only major reasons I can think of to make the jump to a 64-bit system are if you need more than 4GB of RAM on your desktop or need to develop 64-bit apps.
Most people are better off using a 32-bit system for the next couple of years.
I wonder whether the author realises that compiling the same vanilla kernel tarball on i386 and x86_64 produces completely different results because a lot of kernel features are specific to the arch it’s being compiled on. Any attempt at comparing compilation times is moot. Just run a “make allyesconfig” command on both archs for the same kernel version and compare the resulting .config files to get an idea.
In other words, the two builds aren’t even compiling the same code. You could make a better comparison by cross-compiling the same target from both hosts instead.
I had some serious problems getting the recent 64-bit (K)ubuntu versions to install at all; something is broken in the disk partitioner on 64-bit.
The 32-bit versions work, though I still have to replace “ati” with “radeon” in the /etc/X11/xorg.conf file.
Unreal Tournament 2004 is 32-bit, so why compare it on 64-bit? Crazy people…
Just for fun:
For personal use, running the 64-bit variants seems a workable option. For specific (!) tasks, a 64-bit version also has its use at companies, but up to now we install the 32-bit versions every time, regardless of the hardware platform, unless our customers seriously ask us to do otherwise.
For now it is of almost no use in our work.
Videos run fine under VLC.
Audio: can use it fine.
OpenGL and WINE: check, all good.
Quake 4: no problem; haven’t bothered with UT2004.
DVDRip works fine.
OpenOffice: check.
JACK and Ardour: working, but I need to learn more about them.
Deluge (torrent client) is running fine as well.
The only thing I miss is Opera with mouse gestures, but I can run the static 32-bit version – I just hate the font rendering when doing that.
More success with 64-bit Linux than 64-bit Windows, which I’ve only tried under Vista – drivers are very spartan for that. I can’t get audio either, but that’s what you get when relying on hardware companies to produce the goods. That being said, Nvidia’s 64-bit Linux driver is very good; I just need to look into a little vertical tearing on fullscreen TV-out with fast-paced low-light scenes.
What I would like to see is these same tests performed on a recent Intel CPU. Right now, I’m not sure how Intel’s x86-64 implementation stacks up against its own IA32 implementation.
If you look at Unixes that have been 64-bit for the last 10 years or more, you will notice that they tend to have a mix of 32-bit and 64-bit software. The kernel would be 64-bit, but all the basic tools (shell commands etc.) run as 32-bit – after all, what difference does a 64-bit version of “cp” make?
On IRIX, for example, there are 3 runtime ABIs: n64, n32 and o32. n64 is 64-bit, using the latest and greatest of the underlying CPU architecture (MIPS III or MIPS IV). n32 is basically just the 32-bit version of n64, so you get the improvements in the underlying CPU arch but 32-bit binaries are produced. o32 is for old binaries that were compiled on a MIPS I system.
Now if IRIX, an OS that hasn’t seen a major new release in over 8 years, can do this, how come Linux systems tend to get it arseways?
Do I really need a 64-bit version of Firefox after all? And don’t complain that you have to install a 32-bit version of glibc etc. – with disk space being so cheap, that’s hardly a valid argument anymore.
Unlike Debian based distros, not all Linux distros have a problem. For example, Fedora and Red Hat do multilib junk just fine.
The main reason to switch from 32-bit to 64-bit is NOT performance. 64-bit is the future, let’s move on.
It’s too early to tell how effective 64-bit is. Yes, Ubuntu has a 64-bit distro, but how many applications have actually been rewritten to take advantage of the 64-bit processor? I think the test is premature.
Why are the memory tests done with integers? The differences will be greater when you’re moving around double-precision floating-point numbers, not so much integers. This was a poor test: let’s test the 32-bit processor against the 64-bit processor in a 16-bit test so as not to show any of its weaknesses.
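Something along these lines would be a fairer memory test, I think (just a sketch of a STREAM-style copy over doubles; the array size and repeat count are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (8 * 1024 * 1024)   /* 64 MB of doubles, well past any cache */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        if (!a || !b)
            return 1;

        for (size_t i = 0; i < N; i++)
            a[i] = (double)i;

        clock_t t0 = clock();
        for (int rep = 0; rep < 10; rep++)
            for (size_t i = 0; i < N; i++)
                b[i] = a[i];              /* pure 64-bit streaming traffic */
        clock_t t1 = clock();

        printf("copy time: %.2fs (check: %f)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC, b[N - 1]);
        free(a);
        free(b);
        return 0;
    }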
In Unreal Tournament, did anyone really think that using the 64-bit processor’s registers would make as much difference as buying a better video card?
The LAME compilation was also a poor test. What kind of floating-point math is used in compiling a program?
If you want to test the 64-bit extensions, I want to see 2D and 3D CAD tests, I want to see large-spreadsheet number crunching, I want to see science programs tested, such as astronomy charting software, fast Fourier transforms, computational fluid dynamics, and fractal generators, all rewritten for 64-bit. It’s not enough to recompile 32-bit code for 64-bit.
We are still living in a world where we may not care what 4195835/3145727 is, but as time goes on, we will.
If you have called in the jury on 64-bit processors, you haven’t heard all the evidence.
Which desktop apps will benefit most from 64-bit? Games and video. Unfortunately, those aren’t strong points for Linux yet.
The tested programs were 32-bit programs compiled as 64-bit… this does not change the actual programming in any way.
If you have an 8-bit algorithm, try building it as 32-bit or 64-bit… it is still an 8-bit-limited program with an 8-bit algorithm.
However, some applications can use and afford 64-bit programming… then you will find the performance more than doubled (in the parts of the program that use that code). So we get a mix… that will be near double, or less.
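A concrete illustration of the point (a hypothetical sketch): summing a buffer byte by byte is an “8-bit algorithm” and gains nothing from wider registers, while a reduction written to chew through the same buffer a word at a time actually uses the 64 bits:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* "8-bit algorithm": one byte per step; the wider registers of   */
    /* x86_64 buy it nothing.                                         */
    uint8_t sum_bytes(const uint8_t *buf, size_t len)
    {
        uint8_t s = 0;
        for (size_t i = 0; i < len; i++)
            s += buf[i];
        return s;
    }

    /* 64-bit rewrite: eight bytes per step (assumes an aligned       */
    /* buffer whose length is a multiple of 8). It computes a         */
    /* different checksum; it is shown only to contrast the data      */
    /* rate with the byte-at-a-time version above.                    */
    uint64_t sum_words(const uint64_t *buf, size_t len_words)
    {
        uint64_t s = 0;
        for (size_t i = 0; i < len_words; i++)
            s += buf[i];
        return s;
    }

    int main(void)
    {
        static uint8_t buf[1 << 20];   /* 1 MB test buffer, aligned   */
        memset(buf, 1, sizeof buf);
        printf("bytes: %u\n", (unsigned)sum_bytes(buf, sizeof buf));
        printf("words: %llu\n",
               (unsigned long long)sum_words((const uint64_t *)buf,
                                             sizeof buf / 8));
        return 0;
    }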
It all depends on the data you process and the algorithm. Period.
Expecting magic is plain nonsense.
You get what you tested. If you do not know what you are doing or how things work… you get “amazing” results that only reveal your ignorance.
Sorry to be rude, but “articles” made of vapor deserve to be denounced as a bad service to all and a spread of trash. They misinform people. No good… you’ll agree.