“What Intel giveth, Microsoft taketh away. Such has been the conventional wisdom surrounding the Windows/Intel (aka Wintel) duopoly since the early days of Windows 95. In practical terms, it means that performance advancements on the hardware side are quickly consumed by the ever-increasing complexity of the Windows/Office code base. Case in point: Microsoft Office 2007, which, when deployed on Windows Vista, consumes more than 12 times as much memory and nearly three times as much processing power as the version that graced PCs just seven short years ago, Office 2000. Despite years of real-world experience with both sides of the duopoly, few organizations have taken the time to directly quantify what my colleagues and I at Intel used to call The Great Moore’s Law Compensator (TGMLC). In fact, the hard numbers above represent what is perhaps the first-ever attempt to accurately measure the evolution of the Windows/Office platform in terms of real-world hardware system requirements and resource consumption. In this article I hope to further quantify the impact of TGMLC and to track its effects across four distinct generations of Microsoft’s desktop computing software stack.”
While I agree about the bloat, this phenomenon is not exclusive to Microsoft. OS X and its apps consume RAM and CPU power like mad, too.
Some bloat may be explained by advanced features of certain apps or OSes; some may simply be a lack of optimization in the code or in the overall design.
Indeed, the “bloat” is universal and not only due to advanced features or lack of optimization, but often due to frameworks that decrease development time.
Of course everything ran fast when developed in C or assembly, but when you start using Java or Python or whatever, development costs go down and speed decreases (which is offset by hardware improvements).
Hopefully the decline of Moore’s law will bring back the great coders of the ’80s, who could write a real-time 3D game for a monochrome 8-bit microcomputer with 64 KB of addressable memory.
If people cannot just upgrade to a new computer, fast programs will be profitable again.
Linux, Apple and Microsoft will have to change their ways.
Unlike OS X and Vista, Linux is not “one size fits all”.
Linux runs on mainframes and all the way down to mobile phones, photo frames and even wristwatches.
http://www.freeos.com/articles/3800
http://www.top500.org/charts/list/30/osfam
As for speed and lack of bloat on desktops running current versions of applications:
http://www.puppylinux.org/user/viewpage.php?page_id=1
http://www.zenwalk.org/
But that’s Linux the kernel, not Linux “as we know it”, which includes X, at least Qt and KDE or GTK+ and GNOME, and a bunch of other tools, including a distro’s package management scheme. You need all that to run modern Linux apps.
http://www.puppylinux.org/user/viewpage.php?page_id=1
Xvesa is lighter than Xorg … Puppy has both!
http://en.wikipedia.org/wiki/Puppy_Linux
Puppy Linux fits in about 100 MByte for a full distribution.
“The standard edition uses AbiWord as the word processor and is 68 MB; a live-CD ISO file with Mozilla Firefox is 52.4 MB; with the full Mozilla suite it is 55.3 MB; with Opera it is 49.6 MB. A 96.1 MB ‘Chubby Puppy’ version includes the OpenOffice.org suite as well.”
That is just 0.1 GByte. One seventh of a data CDROM.
Strange, you must be misinformed… I know Linux only as the kernel. What you are talking about is a GUI-centered distribution bundled with a Linux kernel, such as Elive or Ubuntu.
Clustick – people have been calling Linux-based operating systems just ‘Linux’ for years. It’s simpler, amongst other things.
Small correction: Apple and Microsoft will have to change their ways.
A Debian 4.0 Linux install (the current stable release) runs quite satisfactorily on my 1.2 GHz single-processor Athlon (bought cheap in 2003) with 512 MB RAM.
At my workplace I have to use Windows XP on a dual Dual-Xeon machine with 3 GHz CPUs and 3 GB of RAM, which should have approximately 6 to 8 times the performance. But in reality it “feels” only about 50% faster. It is easily outdone by a Linux computer I can also use at work, a dual 2.5 GHz P4 setup.
No, Linux already has a different way, and that is: Choose your level of bloat!
You want 3D effects and all the graphical whizbang you can think about? No problem, if you have the machine for it, go for it.
You want to have a decent low-cost Office+Internet machine with limited RAM? Get a stripped down, optimized for speed Distro.
Today I can put together a fairly quiet PC with a processor about as fast as a 1 GHz Celeron, 1 GB of RAM and a 100 GB hard disk for as little as EUR 250 (including software).
Just because something is written in C or assembly doesn’t make it fast. One can write crappy, slow C/ASM just as easily as crappy, slow Java code.
Case in point: GNOME
True, but one generally has to go out of one’s way to make C or assembly code run slow – at least once one gets past the learning curve (and even during the learning curve it can be hard to do).
Oddly enough:
That said, Linux is not immune to the ‘bloat’ either. In fact, F/OSS can sometimes be worse at bloat because every developer chooses their preferred system to build upon (Qt, KDE, Gtk, WxWidgets, SDL, Motif, X11, to name a few) so you end up with a lot of duplicated effort.
Granted, under Windows you have everyone and their brother shipping the same copy of the same library all over the place (because of how Microsoft handles library versions – e.g. mfc42.dll, mfc60.dll, mfc71.dll, etc. – or rather, the lack of proper support for it).
Sadly, this is true for most modern Linux distributions and modern Linux/UNIX applications in general. In some cases you can say: you cannot take advantage of faster hardware and greater resources because they are squandered by the underlying OSes, libraries, or the grown requirements of said applications.
Furthermore, I agree with your last statement. I think some developers don’t care about optimizing their own code (“Hey, it runs fast enough on my machine!”), or they simply rely on the functionality provided by a library they just include and call into, so existing bloat is “inherited” and they don’t care about it.
Another thing is the tendency to integrate as much functionality as possible into one application or OS.
As a result, one can imagine a quotient:
hardware resources
---------------------------------- = overall usage speed
application requirements
Due to technical development, the numerator increases, and due to bloat, the denominator increases, too. The quotient seems to stay the same over the years. Yesterday’s applications are as fast on yesterday’s machines as today’s applications are on today’s machines. To benefit from the faster hardware of today, you seem to need to run older software on it. Simple math.
There’s a difference between utilising new hardware resources as they become available, for example, Compiz, the new KWin and various 3D enhancements that have been made, and wholesale requiring said resources and then eating them. I don’t know of any distribution that has the graphical hardware requirements of Vista.
Vista basically requires at least a couple of gigabytes of RAM if you want to run any applications at all, especially Office 2007. I don’t know of any Linux distribution that has that requirement to run well. If XP ran well on it then a Linux distribution certainly will.
Heck, I’d go further than that. I have an ancient Dell P3 laptop, 500 MHz with 128 MB RAM, that originally had Win98 but is now running a plain Ubuntu 7.10 installation. No trimming of unnecessary startup items or config options to turn off. I left font smoothing on, have CPU, Mem & Net meters and a weather applet running at boot, and it is still highly usable. I can run about three applications at a time before it starts chugging.
As far as the rest of what you said, yup. Totally agree with you.
Same here.
Using CentOS5 to power my 12 (?) y/o Dell Inspiron 7000 (PII366/256MB) machine.
FF2 (custom build) is a bit sluggish, but this problem should be solved once FF3 hits the mirrors (RHEL/CentOS 5.2 update).
OpenOffice takes ~30 seconds to start, but works just fine once loaded.
Granted, I’m using a low-weight DE (IceWM + iDesk) – but that’s the beauty of Linux!
– Gilboa
You’re hitting the nail on the head. There are other issues as well: most programmers find it difficult to take advantage of multi-core CPUs, which seems to be the direction CPU manufacturers have to go in, as clock speeds are not increasing like they used to. Hopefully the problems programmers are having will change, but it’s going to take some effort on the coders’ part. Some will get better, and they will probably see improvements in market share because of it. I think users are at the point where they have had enough of simply adding more features and would like the OS and application developers to work more and more on making the product more responsive.
Actually, each major release of Mac OS X has become more efficient and, for most people, has run faster than the previous release on the same equipment. Leopard (10.5) seems to be the exception for me, as it runs significantly slower on my not so current machine than Tiger (10.4) did though the Intel-based machines are doing much better with Leopard.
RAM usage has been pretty consistently high, but Leopard seems to be higher than previous releases. You could always get decent performance out of a Windows machine running applications with only 384 MB while a PowerPC machine would need double that.
Bloat is inevitable as people have forgotten how to write efficient code and depend solely on the compiler’s optimisations to make things better. There was a time when that worked, such as IBM getting better performance out of Win31 within OS/2 by using the Watcom C compiler. Microsoft’s version 5.1 C compiler often had to have optimisations disabled to produce correct code.
Apple is a hardware company, so here’s my take on what happened. First of all, Mac OS X started out very slow. They were still on PPC at the time, and it was looking really bad for them, as that CPU just couldn’t keep up with Intel. So they had a huge push to optimize their code on the one hand while claiming that the PPC was somehow better than x86.
Now, Mac hardware is basically like PCs, so it is apples to apples. They don’t need to worry about being slower than Windows anymore, so they add features to show off their stuff.
And that is good software engineering in practice. You ship a product that gets refined over time instead of shipping a product that gets more and more bloated and worse over time.
My own anecdote:
Microsoft Office 2003 under the latest Wine starts waayyyy faster than the latest OpenOffice on my Ubuntu Hardy system. Not only that, but it’s just a much more responsive and effective piece of software.
Something to think about.
My anecdote:
Any part of OpenOffice under Linux opens in roughly the same time as any part of MS Office under Windows XP on the exact same dual-boot machine, and faster than any part of MS Office under Windows Vista.
Any part of KOffice under Linux opens waayyyy faster than any part of MS Office under Windows XP on the exact same dual-boot machine.
Any part of GNOME Office (Abiword+Gnumeric) under Linux opens waayyyy faster than any part of MS Office under Windows XP on the exact same dual-boot machine.
Yes, because everyone knows that “opening” is the authoritative benchmark for operating system platforms.
What you’re talking about is disk cache. Nothing more.
Have you tried OpenOffice.org Quickstarter? Or other configuration options to make OpenOffice work faster? See, for example, here: http://www.zolved.com/synapse/view_content/28209/How_to_make_OpenOf…
Or, if your system has a good amount of memory and you aren’t using it to capacity, you could enable preload.
http://www.osnews.com/story/19385/Preload:_the_Linux_SuperFetch_Pre…
http://www.techthrob.com/tech/preload.php
That will speed everything up, not just OpenOffice.
I’d like to know where you are getting your facts from.
Seriously…
I run OS X Leopard Server on a 9-year-old Apple G4 450 MHz Cube with 1.5 GB of memory.
It runs like a dead dog. Yet it still runs better than 10.4 Tiger Server did (and even that was usable as a server).
Sure, I can’t use the 3D graphics stuff… but then I only VNC into it anyway. Also the transparency effects are disabled… but hey *shrugs*
Point is, it still runs faster than Tiger. Yes, it does have the Alex voice bloat (a 600 MB single voice library), but what is a CD amongst friends?
I’d LOVE to see how Vista ran on equivalent hardware… oh wait, it won’t!
Vista running on almost 10-year-old hardware? Just not going to happen, is it?
I suspect OS X is on par with Windows XP, and perhaps uses less than Vista, in terms of memory consumption – a non-scientific feel I get from using the two systems. I also figure that, at least with XP and Mac OS X, most new systems have more than enough memory for the majority of users. I assumed Mac OS X would be hungrier, but they have made great leaps in memory management since 10.0 (they HAD to, it was awfully slow), and I keep an eye on memory consumption in Mac OS X. I have a gig of RAM in my G5, and I have not used all of it while doing development work in Xcode plus the usual things like checking email and the web, so I am pretty happy with it. It seems to hang around the 500 MB mark or so at most. I think Windows feels about the same.

I do notice that the Mac double-buffering its windows may make it seem a little slow or unresponsive on occasion, but I like the way it draws its stuff (especially the PDF quality of the graphics), and I am willing to accept that versus Windows, which seems like a mishmash in comparison. I don’t see that as eye candy, which I often hear people call it; it’s more of an elegant way of drawing the windows compared to Windows. Some people couldn’t care less. Vista may have improved that, but I had so much trouble with drivers that I went back to XP. I also like Linux as much as Windows, so I use that whenever I can.

The biggest problem I see with Windows is viruses/malware and all that. I am not a fan of having to run all this stuff to protect my machine. That problem may come to the Mac in the future, but right now it is not as in-your-face, so that always feels like a relief and is a plus. I do have problems with task management on the Mac versus Windows: sound work in particular seems to have fewer hiccups and problems on Windows. This may have more to do with Mac OS X not getting the best drivers, but I can do more sound work on the Windows box where the Mac seems to hiccup.

I really don’t like flamewars and arguments about which system is better. I do think what is out there gives much more than the old systems which used less memory/CPU. I do agree with the feeling that all of this could be MUCH faster and more responsive, but I suspect that making programming more reusable and general-purpose is at least one factor that led to bloat in code. Concurrency in programming is going to become a more and more important tool to combat this, versus trying to get higher clock speeds.
It’s a lack of love.
When only a certain number of hours can be spent on a piece of software because money dictates that so, then it will only aim to be mediocre.
No programmer personally wants to write awful code. It is within us to naturally try our best. Whilst we may not always do our best, we do learn.
If you’ve had that love bleached out of you from too many deadlines and “meetings”, then software is clearly going to suffer.
Vista is the product of a mismanagement nightmare; and it shows. Firefox 3 is a product of love, and it shows.
If Firefox 3 was born from Firefox 2, perhaps there is hope for Windows 7.
Lack of love comes close, but I think it is more a lack of pride. Nobody is proud of their software or hardware nowadays (except maybe in the game console area).
In the old days people would show off how cool their Amiga/BeOS/C64 was; nowadays people say: “it works well enough”.
I think it has mostly to do with the fact that small teams don’t cut it anymore and more managers are involved. Therefore dream projects and visionary leaders are a thing of the past – except for Apple with prophet Steve.
And, quite frequently, lack of skill. Many of those coders couldn’t do it, even if they had the opportunity.
I empathize with you, I sort of miss the competition with different hardware. There are benefits to one hardware platform choice, but I am afraid that innovation is not one of them.
While I was reading your post I was wondering when the mandatory Vista bash would come.
It’s almost like a “PS: Vista sucks” appended to every one of your comments.
Glad to see you didn’t disappoint =).
Well, how about I say Internet Explorer instead? It’s just as true is it not? Vista is a disappointment, such a disappointment that this Windows user for life switched to a Mac. I think the “bash” is well founded in the three years of following Vista to within an inch and then finding it out to be a giant bloated piece of crap.
There’s another reason for bloat in applications written in object-oriented languages. You’ve got a lot of ready-made functions at your disposal. Let’s say each one is O(n). Combining them makes an elegant, but slow, piece of code – O(n^2) or so.
I have seen Smalltalk one-liners that calculated something in O(n^2) when it would be trivial to do the same in O(n), but in 5-10 lines.
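To make the point concrete, here is a hypothetical Python illustration (not from any of the posts above): each ready-made operation is fast on its own, but composing a linear membership test inside a linear loop quietly yields O(n^2), while a slightly longer version stays O(n).
=====
# Each 'in' test on a list is O(n); running it inside the loop makes the
# whole function O(n^2), even though every individual call looks cheap.
def dedupe_slow(items):
    seen = []
    for x in items:
        if x not in seen:      # linear scan of 'seen' every time
            seen.append(x)
    return seen

# A few more lines, but O(n) overall: a set gives O(1) membership tests.
def dedupe_fast(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
=====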
Sorry no. That’s not how it works at all. In general, existing functions are going to be more optimized and less buggy than what you write yourself. Yes it is possible to optimize for specific cases, but that has absolutely nothing to do with algorithm complexity. Read up: http://en.wikipedia.org/wiki/Big_O_notation
That’s because most of the time these sorts of optimizations don’t matter. Premature optimization and all that. Optimization needs to be weighed against code complexity and development time. Most of the time it’s not worth the effort. The only proper way to profile is with a profiler. Futzing around optimizing random steps of your code is a huge waste of time.
Usually, but not always. Some string algorithms in PHP are naive.
Maybe such situation is not very common, but possible. I know what O notation means.
That says a lot less about using existing functions than it does about how sh*t PHP string algorithms are.
That’s for sure. Anything written in a language that is >20x slower (Perl, Python, PHP) than C/C++ should be instantly rejected by users on those grounds alone. And a language as slow as Ruby shouldn’t have any code outside of toy programs written in it. A couple extra language parlor tricks (which as an end user, you don’t benefit from anyway) aren’t worth turning your 3Ghz P4 into a 300Mhz PII.
See for yourself: http://shootout.alioth.debian.org/
What an absolutely silly and narrow minded point of view. There are plenty of situations where the difference in speed is completely unnoticeable, unmeasurable, or unimportant. There are also situations in which I could blow the doors off of the performance of existing C code with a Python replacement.
And, of course, performance is only one of many factors for users and programmers to consider.
Well sure, if the original C code used a slow algorithm. But the same code will always be faster in C than in anything higher level. (Aside from the occasional academic example that exploits some property of the interpreter/JIT compiler).
Think about what you mean by “the same code”.
Read in 1,000,000 pairs of floating point numbers from disk. Sort them in ascending order by the second number in the pair, and then write the pairs back out to disk.
I could write that up in a few lines of Python and you would have a *very* hard time matching my performance in C, C++, or any other language you want to try. And that is *without* a JIT. With psyco, I’ve been able to get within 90% of C’s performance (gcc -O3) for finding prime numbers using exactly the same algorithm, which of course involves a tight loop, where C should shine.
The original claim in this thread was that users should always reject Python applications because Python is over 20x slower than C or C++.
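For reference, the kind of tight-loop prime counting described above might look roughly like the sketch below. The function and limit are made up for illustration, and psyco.full() is the period-specific JIT call the poster refers to; it is simply skipped if psyco isn’t installed.
=====
# Trial-division prime counting: a tight pure-Python loop of the sort
# a JIT like psyco can speed up considerably.
def count_primes(limit):
    count = 0
    for n in range(2, limit):
        d = 2
        while d * d <= n:
            if n % d == 0:
                break
            d += 1
        else:                  # no divisor found: n is prime
            count += 1
    return count

try:
    import psyco               # optional Python 2-era JIT
    psyco.full()
except ImportError:
    pass

print count_primes(100000)
=====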
Equivalent code with the only difference being language constructs.
Lines is completely irrelevant. Higher level languages are generally more compact, that’s a given. Has nothing to do with speed.
Nonsense. Of course writing the code would be more effort in C, but the performance would be better (by some factor > 1). Since your example is mostly disk-limited, it is a terrible benchmark for language speed anyway.
Couple things here.
With a JIT, you’re still slower than C, which pretty much proves the point.
The JIT is good at optimizing for toy problems like finding prime numbers. Real life code has nothing in common with that example, and real life performance has nothing in common with what you’re measuring.
Well that’s silly of course. Python will be slower, but always by different amounts. And in many cases it doesn’t make a damn bit of difference if its 100x slower because the speed is still fast enough.
You are really missing the point. Here is something concrete which may make things clearer. In C, C++, or assembler, beat the Python snippet below for reading in one million pairs of random floats and sorting by the second member of each pair. I believe this simple exercise will make my point far more clearly than any amount of theoretical “back and forth” in this thread.
=====
import cPickle, gdbm, operator

# Open the gdbm file created by the generator script below.
dbmIn = gdbm.open('float_pairs_in.pickel')
print "Reading pairs..."
pairs = cPickle.loads(dbmIn['pairs'])
print "Sorting pairs..."
# Sort in place by the second element of each pair.
pairs.sort(key=operator.itemgetter(1))
print "Done!"
=====
You can use this to create the input file:
=====
import random, gdbm, cPickle

print "Creating pairs file..."
# One million (float, float) pairs.
pairs = [(random.random(), random.random()) for pair in range(0, 1000000)]
dbmOut = gdbm.open('float_pairs_in.pickel', 'n')
# Pickle protocol 2 is the compact binary format.
dbmOut['pairs'] = cPickle.dumps(pairs, 2)
dbmOut.close()
print "Done!"
=====
BTW, any comments from Python programmers regarding my code, above, are welcome. I’m not a master Python programmer.
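A minimal way to put numbers on the claim (assuming the same Python 2 / gdbm setup as above, and that the generator script has already created float_pairs_in.pickel) would be to time the two steps separately:
=====
# Rough timing harness for the read-and-sort snippet above.
import time, cPickle, gdbm, operator

start = time.time()
dbmIn = gdbm.open('float_pairs_in.pickel')
pairs = cPickle.loads(dbmIn['pairs'])
load_time = time.time() - start

start = time.time()
pairs.sort(key=operator.itemgetter(1))
sort_time = time.time() - start

print "load: %.3fs  sort: %.3fs" % (load_time, sort_time)
=====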
Vista + Office 2007 + IE7 is slow and resource-hungry on *any* machine – that’s the bottom line. I remember discussions *cough* on here about GNOME vs. XP’s mythical “fresh install” and how they run in 256 MB. The bottom line is that alternative products simply are less bloated than Microsoft’s offerings, or even than earlier incarnations of its own products… it’s not even news. The reality is there is very little for the user to gain from a Vista + Office 2007 + IE7 machine versus one with Windows 2000 + Office 2000 + IE6. The only thing up for discussion is whether the features added between versions actually make enough of a difference to warrant the resource increase.
The bottom line is that those who are technically able, or are able to use alternatives, are voting with their money by buying Eee PCs and Apple computers, and the trend is set to continue. That’s ignoring the many hundreds of millions who are holding off upgrading, either because they don’t have the money, want to make better use of what they have, or are simply locked into an earlier version due to technical limitations – oddly, one of the few reasons these products sell… that, and the crapware (Microsoft’s slang, not mine) installed on Vista.
The answer that a new smaller version may appear between 1-4 years time is simply not a good enough answer.
XP needs more memory to run smoothly, this I won’t deny, but if you double it you will notice that XP is doing something with that “bloat”. You must disable the useless themes and eye candy for both, of course. Also, don’t use (anti)virus software; it’s malicious code, in case you didn’t notice.
A real case: XP SP2 vs. a hand-picked Debian Sid XFCE desktop on a 2.6 GHz no-HT P4, 512 MB of RAM, ATI R200 (OSS drivers available). With XFCE the speed is comparable, yet the Linux system, using the same OSS applications, feels consistently slower and less responsive. This also applies to text-mode apps such as GCC vs. MinGW.
The disease in Vista is called bloat and Linux and the open source world in general have caught it as well – even if it is in a less severe degree.
Sun’s Writer 2.3 is at Word 6.0 level in functionality, yet runs slower than Office 2003 because it was written in 2008.
Don’t get me wrong. I criticize OSS products because I want them to get better. I am amazed at how far the Linux Desktop has come, but I hope every large OSS product gets an Optimization, Redesign and Rewriting department to clean up the mass destruction that the creeping featurism has brought.
1) Disable CPU throttling; almost all distros do it to conserve energy/heat, while Windows only throttles if you use a laptop profile. BIG SPEED DIF.
2) While out of the gate on day one they are certainly comparable – XP SP2 is very nimble out of the gate – the problem comes with config rot. It normally goes like this:
Day 1: Fast as hell!
Week 1: Fast as heck!
Month 1: WTH!
Windows is highly sensitive to config rot, and uninstalling/reinstalling and reconfiguring eventually take their toll; over a few months your machine runs a third as fast as before, even if you’re careful.
So while I agree that XP SP2 vs. Linux out of the box makes a VERY impressive statement, a virus or a few bad installs later and you’re headed for reload land.
I’m calling bullshit on this one. Ever used something like ‘git’ on Windows? Or any other heavy shell script under MinGW? It’s f—ing glacial. The way Windows is designed, processes are expensive to create, and shell scripting creates hundreds of processes.
On Unix, according to a benchmark I did a while back, fork()ing a new process from a warm cache is only slightly more expensive than creating a thread (and in the old days of LinuxThreads, they were actually 100% equivalent). On Windows, threads are orders of magnitude cheaper than creating processes.
Simply based on design issues, it’s impossible for MinGW programs to be noticeably more responsive than the equivalent programs on Linux.
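As a rough illustration of that kind of comparison – a sketch in Python rather than the poster’s original benchmark, and one that assumes a Unix system, since os.fork() isn’t available on Windows:
=====
# Compare the cost of creating short-lived processes vs. threads.
import os, time, threading

N = 200

start = time.time()
for i in range(N):
    pid = os.fork()
    if pid == 0:
        os._exit(0)          # child does nothing and exits immediately
    os.waitpid(pid, 0)
fork_time = time.time() - start

start = time.time()
for i in range(N):
    t = threading.Thread(target=lambda: None)
    t.start()
    t.join()
thread_time = time.time() - start

print "fork: %.4fs  thread: %.4fs" % (fork_time, thread_time)
=====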
huh?
– MINGW is the Windows-native version of GCC.
– MSYS is the “good” shell replacement derived from Cygwin to ease compilation of “portable” apps.
– Cygwin is the slow and cumbersome POSIX emulator you need to run git and to compile and run some other “”portable”” apps. Git is slow on Windows because it needs POSIX functionality that isn’t available on Windows.
– If you read the git FAQ, you will learn that the scripts are being translated to C so that git can be a native app too. So, if you wait, it will be at least as fast as the linux version.
finally,
– The make process shouldn’t be spending more time in scripts than in actual compilation.
Talking here from a fresh install of SMGL with GNOME 2.22 and Firefox 2; currently the entire system is clocking in at 197 MB of used memory. But then this isn’t any ordinary system: all software was compiled and optimized correctly for my machine.
The same setup on Ubuntu would use about 400MB, and my other computer with Paldo Linux starts up with about 300MB used (with Firefox).
If distros wanted to/had time to, they could optimize their releases a lot better.
Edit: I should also mention that the SMGL install is blazingly fast.
A friend of mine bought a new Vista Home Premium laptop that had no office suite on it.
I suggested OpenOffice; they also tested the latest Microsoft Office and said it was too bulky and slow, with only eye candy.
They took me up on the OpenOffice suggestion and are very happy with the snap and responsiveness. They are even looking forward to the upcoming release.
The bloat is gone,
Like snow in spring,
Haiku there is.
http://www.haiku-os.org/
Haiku, while lacking most apps and everything, is truly a glimpse of hope. At least I can dream a little bit about it. Man, I was so disappointed when BeOS went belly up. It ran like a champ on my 200 MHz PPC 604e Mac.
It still runs (Zeta 1.2) perfectly on my laptop, much faster than XP to boot, and I didn’t need to add 512M of RAM unlike for installing Ubuntu.
[q]and I didn’t need to add 512M of RAM unlike for installing Ubuntu.[/q]
Ubuntu 7.10 installs and runs nicely in 256MB. 512MB is noticeably faster. But 256MB is quite adequate.
You’re kidding me, right?
I get -horrendous- performance out of Ubuntu 7.10 on my dual P3 1 GHz, 512 MB, plenty-o’-disk-on-UWSCSI box at home. Oh, with a GeForce FX5200, compiz enabled.
If I disable compiz, I can -watch- the windows draw. Enabling compositing at least delays the update until the offscreen buffer is drawn to then does a quick blit operation. There’s still a lot of lag, but at least it’s a fixed amount of lag and I don’t have to watch dirty regions on the screen redraw.
BeOS and Haiku on that box both perform better. Go figure.
Heck, WinXP runs better.
I’m an avid OS junkie, but the claims that Linux (especially Ubuntu) is happy as a clam on old hardware are bunk. I’d rather have my old 400mhz G4 with Panther back than have to run Ubuntu on my dual 1ghz box.
[q]I get -horrendous- performance out of Ubuntu 7.10 on my dual P3 1 GHz, 512 MB, plenty o’ disk on UWSCSI box at home. Oh, with GeForce FX5200, compiz enabled.[/q]
That’s a LOT better hardware than some of the machines I have here, and my Linux installation runs just peachy without any hiccups or such… but then again, I don’t use Ubuntu. You might, just out of curiosity, try some other distro; I have gotten to like Mandriva these days, but there are other good distros out there, too.
PS. Just to give some pointers as to what hardware I have: I mostly use a P4 Mobile laptop clocked at 1.4 GHz, 512 MB RAM and an integrated GeForce4 Go 4200. I did once time how long it took the laptop and my server, a 1 GHz Athlon with 256 MB RAM, to compile Firefox followed by encoding a video, and the 1 GHz Athlon beat my laptop quite handily. Still, I can run Compiz with all the nifty features enabled on the laptop.
Everyone has a different definition of performance. In your case, there is evidently something seriously wrong with your graphics drivers. Your painting performance is very poor, and that makes you think everything is slow. Try switching to the other driver (nvidia->nv or vice versa).
I’ll agree here – screen refresh/repaint performance is a *huge* perception issue. It makes just about anything seem slower.
Personally, I run Xubuntu on my older machines – which gets rid of some of the glitz and improves the perceivable speed of the graphical system.
For raw performance, however, Linux is definitely fast. Compile times are very quick, running intensive computational programs yields excellent results while keeping the system mostly usable for other things.
I run it on a variety of P3 and newer machines – mostly as an environment to build and install Haiku for testing But I also do some basic web browsing, IRC chatting, etc, and find it quite usable.
A bad video driver is an absolute disaster – getting some of the older video chips to work has been challenging and painful.
Yes, Ubuntu is quite slow. I can’t quite put my finger on where the problem is, but damn, that thing is slow! Debian runs circles around it, so I wonder what they change so much from Sid that makes it that slow.
And GTK+ probably has something to do with it, as I could literally watch it draw widgets on the screen on the old PC that I have here, and there is lots of tearing when moving windows around and scrolling in GTK applications (I’m not referring to Firefox, as Gecko has its own performance problems, too). But there’s always someone out there who swears that Ubuntu is the quickest distro on the planet and blames the hardware and/or the drivers, never mind that the same hardware/drivers play ball just fine with other systems… Go figure!
Having said that, there must be something odd with your system, as these delays are a lot harder to notice with a reasonably fast rig like yours. On a single sub-1 GHz processor system with an onboard video chipset, or something along those lines, all that slowdown is practically taken for granted, but in your case I am really inclined to believe it must be a driver issue of some sort.
Something sounds a little strange there (DMA? video drivers?). I used Ubuntu for years on a 500 MHz laptop with no graphics acceleration, and it worked better than that. The parent poster, as has been mentioned, obviously has something terribly broken.
Debian is faster, though. At least the desktop is, for certain; when using GNOME with Ubuntu, I feel noticeable lag when opening menus (maybe they use .svg icons?) which isn’t present in Debian.
‘Course, I don’t use unnecessary bloat like menus except during a reinstall.
256MB maybe, but not 192MB.
I had to use the alternate CD and it was slow even without the useless 3D effects.
Coincidence? I think not.
I was thinking of the bare system booting and then navigating through folders, using the Tracker, using NetPositive and the other stuff.
By “lacking most apps” I meant the apps I would like to have on BeOS, like Photoshop/Quark/FreeHand, which were my main apps back in the day.
But compared to my Apple, BeOS was faster and more fluid, for example when using the Tracker. The whole experience was fluid compared to other OSes back in the day.
Yes, I remember BeOS too. It was wonderful how fast it was.
Like nails on a chalk board,
Haiku posts get old fast.
This is not Slashdot.
You two do not know
How to write a real haiku.
This makes me quite sad.
Excuse me, does “real” count as one syllable? Otherwise you have one too many…
I’m American, so the answer to that would be yes.
OK, thanks!
The rules for haiku apply to Japanese anyway; they cannot fit transliterations or haiku in other languages.
Wirth’s law rules!
I thought this article seemed familiar..
http://www.osnews.com/story/18931/What_Intel_Giveth_Microsoft_Taket…
Looks like a re-publishing of the blog entry that was posted on here late last year.
Yeah… I thought so too. It’s the same people with their same Devil Mountain Software Clarity Suite that they want to sell.
I don’t actually know what the point of their strategy is.
I just look at (say) the Gnumeric spreadsheet compared to Excel. Gnumeric is lightning-fast and has all of Excel’s functions plus 150 that Excel *doesn’t* have.
About the only thing that Excel has (that Gnumeric doesn’t) is pivot-tables. Even they may be taken care of in the latest SoC – see here –
http://live.gnome.org/Gnumeric/GSOC2008
As for MS Word, that’s pretty bloated too. Abiword is a nice lightweight replacement.
I agree, however, with those who say OO.org is bloated too. The code for that could indeed do with some really serious pruning…
Yes, Gnumeric is great and has been so for many years. Sadly, AbiWord, its word-processing counterpart, never caught up with it. Otherwise they would make a great duo on the GNOME desktop and make OpenOffice unnecessary for all those who do not need presentations. I for one don’t.
My first laptop was a Compaq 1200XL (1999) with a 450 MHz Celeron, 256 MB of memory and a 40 GB hard disk (upgraded) – hopelessly underpowered for its task. I still have it; modern Linux is really too heavy for it, so I use it as a jukebox.
My second laptop was an HP nx9105 (2004), Athlon 64 3000+ with 1 GB of memory and a 120 GB hard disk (upgraded). I still have this one and use it. In fact, my employer gave me a new one even though I can’t see this one being underpowered for my tasks in the next 3 years. The strangest thing is that over the last year the memory in this machine has become more than enough: by replacing Beagle with Tracker and Firefox 2 with Firefox 3, the average memory consumption hovers around 512 MB while doing my tasks. And everything under Compiz, very smooth.
So now my current laptop, a Lenovo T61p: dual-core 2.2 GHz, 4 GB of memory, 100 GB 7200 rpm hard disk. If I couldn’t see my previous laptop becoming underpowered in the next 3 years, how can I even imagine this one being underpowered?
Unlike Windows Vista, the Linux desktop can make a 3-year-old laptop a very usable machine, with Compiz.
I don’t really see any significant changes from Win 2k. I mean, the only reason I use XP is ClearType, and because a lot of the newer apps don’t work on Win 2k, since it is now too small of a target for developers to write for.
I’m still struggling to find a reason to switch to Vista and replace all my hardware. I’d rather replace my hardware with a Mac and try something new in the process, even if there is a chance that I might not like it, or might not be able to work with it.
It’s the same story with Office. I see no innovation from Office 97, at least none that I or any other average Office user would use.
So, what is mind boggling is that while features have stagnated over the past 10 years in both products, requirements have grown exponentially. And people still buy into their new lines of products.
So the question … can Microsoft actually EVER screw up so badly that it loses its monopoly?
They are doing it right now. That’s why Linux and OS X are gaining on Windows, Firefox is gaining on IE, etc.
So, what the heck have Apple and the Linux alliance been doing all these years? They’ve spent nearly 10 years, and each has its own technical advantages, but they still can’t overtake Microsoft (at least in market share)…
OpenOffice is much slower than Microsoft Office. It is also very slow on Linux. Sad but true.
You need to buy a new stopwatch.
Bullshit.
It’s highly configuration/memory/distro dependent.
E.g. (Tested on two of my workstations at work)
Windows:
Dual Xeon 2.8Ghz (4 threads using HT), 2GB, IDE RAID, Windows XP/SP2.
Memory usage: 640MB (Cache: 300MB, Visual Studio 2003, SSHD server, Norton AV):
Starting winword from cmd.exe.
(Times measured with a stop watch)
Cold start: 5.1s
Second run: 2.7s.
Third run: 1.9s.
Linux:
Dual Opteron 275 (4 x 2.2Ghz cores), 8GB, SATA RAID, Fedora Core 8/x86_64, SELinux, KDE 3.5.9.
Starting oowriter from konsole.
Memory usage: ~5GB (~1.8GB buffers, vmware, firefox, bunch of vim windows; number of network servers)
Cold start: 1.8s [1].
Second run: 0.5s [2].
Third run: 0.6s [3].
Granted, the Linux machine is twice as fast (and has faster drives to boot), but starting an application (a bad benchmark to begin with) doesn’t really scale beyond one core.
– Gilboa
[1]$ time oowriter
real 0m1.844s
user 0m0.103s
sys 0m0.068s
[2] $ time oowriter
real 0m0.570s
user 0m0.083s
sys 0m0.043s
[3] $ time oowriter
real 0m0.611s
user 0m0.092s
sys 0m0.046s
I’d say the I/O situation is important too, and whether or not you had the OpenOffice prestarter loaded. I think Office used to have a preloader too, but they got rid of it by 2003 or maybe 2000.
On a pretty slow laptop HD under Vista, the Winword launch time is 3 seconds. This performance is mostly due to SuperFetch. But none of this matters, because you typically only launch the software once, and if you’re doing any significant work you care about the continuous performance of the editing and paging tasks once the software is running.
“It’s capitalism, stupid.” That could be the motto. Vista, OS X, Intel, bloat, horsepower and all the rest are, in my opinion, symptoms of the basic principles of capitalism. First you produce artificial “needs”, then you sell “wants”, and voilà – a never-ending spiral of consumerism. I don’t expect Apple, Microsoft or any other company to change their ways. The only alternative that could break the spiral – though not in the short run – is Linux. That is my philosophical reason for using it – or at least part of it.
Linux is great for capitalism! There are many new products and services that have become available as a result of Linux: TiVo, mobile phones and devices, reliable servers, etc. There are many people who use it for free, but that helps those who want to make money from it.
Even ICQ wastes more memory on my notebook than Word 2007.
….
Software bloat is one of my major pet peeves. As software consumers, buying or using bloated software is like telling software development companies that it’s OK for them to throw our money in the trash. This is effectively what’s occurring when you have to go buy a new machine every 2-3 years just to run your apps at the same speed. A mere 5 years ago, it would’ve sounded insane for any office productivity application to use the amount of resources they do today. Now it takes 256 MB of RAM and a 3 GHz CPU just to store a list of email addresses.
Windows, OSX, and most of the popular *nix GUI apps are all guilty of this, so apart from illustrative purposes, it can be misleading to focus on just MS. In the Windows world, 3rd party developers are just as guilty (e.g. Nero, Norton, etc).
Of course, all it would take is users intelligent enough to stop throwing their money (and time) away on bloatware. There’s more than enough minimal apps out there for all OSes, so that if you know what you’re doing, you can reap the benefits of modern hardware. Meanwhile, you can sit back and watch the masses run the upgrade treadmill, squandering any gains their hardware upgrades might grant by simultaneously upgrading their software versions. The fact that these users keep buying faster hardware may be helping to drive hardware development, after all.