“The Intel 965GM Express Chipset represents the first mobile product that
implements fourth generation Intel graphics architecture. Designed to
support advanced rendering features in modern graphics APIs, this chipset
includes support for programmable vertex, geometry, and fragment shaders. Extending Intel’s commitment to work with the X.org and Mesa communities to continuously improve and enhance the drivers, support for this new chipset is provided through the X.org 2.0 Intel driver and the Mesa 6.5.3 releases. These drivers represent significant work by both Intel and the broader open source community.”
Intel is doing a really good job of convincing me that my next laptop should have VGA made by them.
Amen to that. I installed Ubuntu Feisty on my friend’s Centrino Acer, and it is like a breath of fresh air to use on that machine. ACPI, Compiz, everything, out of the box.
Yes. I just saw HP release a new 17″ laptop with the Intel graphics chip, and it supports 1680×1024. It’s on my to-buy list.
I don’t need the power and the heat of nVidia, and ATI can bite me with their driver policy (where’s the FreeBSD support?) and quality (it would probably suck anyway).
nVidia does a great job for FreeBSD support and I use them on my desktop.
VGA? Damn all the overloaded TLAs in IT. I’m sure everyone can figure out what I read that as the first time.
This shows that open source graphics drivers are possible. In fact, Intel is at a disadvantage in the graphics field, yet they still don’t mind showing the code.
ATI, NVIDIA, f–k off. ATI drivers are just crap; nvidia drivers are fast but often unstable. And both have stopped supporting old hardware: I can’t use anything but open source drivers for my ATI 9200 because the proprietary ATI drivers have dropped support for it. Not only on Linux, but on Windows as well. In fact, I can only install Vista by using the crappy and slow VESA driver.
Next time I’m buying Intel. They also release open source drivers for SATA, network cards, wireless… and their CPUs right now are faster and cooler than AMD’s. And if AMD releases better CPUs, a CPU that’s a bit slower and hotter will be a price I’m willing to pay to get great open source driver support.
Actually, based on some recent research I needed to do, Intel has roughly a 40% market share in graphics chips.
Compare that to both nVidia and ATI at about 23% each.
I too have become frustrated with nVidia. I have had Dell laptops with nVidia cards running Linux since RH 8.0.
The now-lingering (>6 months) issue of black windows when running Beryl has generated an increasing level of frustration on the nVidia forums and elsewhere. This is due to a TurboCache bug in the current generation of their drivers.
The official word is that ‘it is being worked on’, but even the recently released beta versions of their Linux drivers were largely to support new cards, not to fix open bugs.
In the meantime, their official resolution is to disable Beryl. Some kind community folks have spent a fair amount of time trying to debug the issue using the various settings in the Beryl manager, with some success, but most of the solutions don’t seem to work for cards with modest amounts of VRAM (i.e. <128 MB).
nVidia seems to be more focused on supporting Linux on new cards, rather than doing what they need to do to keep existing customers happy.
When time comes to replace my current laptop, if nVidia’s attitude and the status of these open issues have not changed, I will be buying a laptop with Intel graphics chips.
The now-lingering (>6 months) issue of black windows when running Beryl has generated an increasing level of frustration on the nVidia forums and elsewhere… Some kind community folks have spent a fair amount of time trying to debug the issue using the various settings in the Beryl manager, with some success…
FOSS is great at rapidly addressing these kinds of frustrating issues. While enabling the sales of new hardware is at the top of the priority list for nVidia, providing a satisfying experience on existing hardware is the top priority for the community. Imagine if they could work together…
The problem for nVidia is that part of enabling new sales is to ensure that your existing clients become repeat clients.
If you focus on new sales at the expense of existing clients, eventually there will be no new clients to purchase your products, as prospective purchasers will become disenchanted and look elsewhere.
Of course the fundamental problem for Linux users is that our market share gives us no financial influence compared to Windows purchasers.
Thus, we have no fundamental ability to influence nVidia’s priorities or their allocation of technical resources in this situation.
So they focus on sales to OEMs, to ensure that folks like Dell can sell XPS-type systems with high-end cards, or on the aftermarket for high-end cards for gamers who are upgrading existing desktop HW.
When it comes to Linux, they are now focusing on supporting new high-end gamers’ cards at the expense of supporting the rest of us.
None of which frankly, do I care about.
I disagree; as ‘technical users’ we’re the ideal customers – we don’t ask for technical support, we tend to purchase the cards with good margins, and we evangelise when good support is provided for our operating system of choice – free PR for the company.
More importantly, we have a large influence over end users by virtue of being the ‘go to guy’ when people want advice over what hardware or software to purchase – we give advice, we install it for them, we configure it for them.
All I say to the likes of ATI, Nvidia and Intel is: fear the geek, because we have great influence over our friends’ and families’ purchasing decisions – piss us off with an anti-UNIX stance by failing to provide drivers for our operating system of choice, and face people getting advice not to purchase hardware from your company.
“and their CPUs right now are faster and cooler than AMD’s. And if AMD releases better CPUs, a CPU that’s a bit slower and hotter will be a price I’m willing to pay to get great open source driver support.”
Yeah I hate the closed source cpu drivers…
“CPU drivers”? What are you talking about?
I was talking about choosing Intel because that’s the only way of getting an “Intel system” with an Intel SATA/network/graphics/wireless controller.
With AMD CPUs you’re stuck with ATI or nVidia. Basically, buying Intel is the only way to get a hardware platform with open source drivers. That’s what I was talking about.
I think my attitude is going to change. Although I consider Intel a monopolist, their support for FLOSS seems to be a few orders of magnitude greater than AMD’s. Since the AMD/ATI merger they have promised to open up their drivers, yet to this day they haven’t even gotten their binary drivers right. In contrast, Intel provides decent support for their wireless drivers, video chipsets, etc., and they do it almost on par with the Windows equivalents in both quality and time.
AMD/ATI have never promised to open up their drivers.
Intel has had some business practices in the past that arguably constitute predatory pricing. But this is not the reason why they outperform AMD through most of each product generation. Intel is 6-12 months ahead of any other vendor in semiconductor fabrication. It took an ill-conceived processor architecture to allow AMD to have a window of sustained performance advantage, during which time they were unable to capitalize on demand due to limited production capacity.
I’ve wanted AMD to succeed for some time, but their long-term prospects don’t look so hot. They currently enjoy a bandwidth advantage in the multi-socket market, but that won’t last through 2008 as Intel moves to a serialized bus architecture and an on-die memory controller. They’re ramping up production capacity for 2008/9, but they needed this badly in 2004. If AMD continues to take a beating in performance per watt, they run the risk of having excess capacity burning a hole in their pocket.
AMD is failing to execute in spectacular fashion in the graphics space, with the ATi acquisition causing a hiccup in the release cadence that this market doesn’t tolerate. Where’s the DirectX 10 part? With lack of competition comes lack of pricing pressures, which nVidia is using to redefine the price point for high-end graphics. If AMD was doing their part, we wouldn’t have $850 graphics cards today. If this continues, the lucky few will be getting $1000 GeForce cards for Christmas.
Intel can compete with nVidia–eventually. That’s more than I can say about AMD going forward. They have discrete graphics in the works, and they have the industry clout to make the form factor adjustments necessary to account for GPUs with power envelopes that dwarf those of CPUs. They also have the process technology to do something about runaway GPU thermals.
Finally, Intel is going to leverage OSS as they take on the midrange and high-end of the graphics market. It’s public knowledge that these GPUs are glorified stream processors with a few special-purpose hardware accelerators and highly evolved drivers. Letting the OSS community participate in the development of these drivers will bear fruit for Intel. With an OSS starting point from Intel and full hardware specs, we can make better drivers than nVidia or AMD and establish leadership in the graphics market.
What makes you think that OSS people have any idea what to do with a stream processor or how to write a graphics driver? The kinds of people who are really good at this stuff are few and far between. And I’m willing to bet that they want the big bucks.
The high end gaming graphics market that’s out there grew up around a highly proprietary synergy between Microsoft and NVidia/ATI. OpenGL was not going anywhere in the 3D gaming market until after DirectX had been established. 3dfx was the exception, and it seems like they did a lot to get OGL working on hardware.
OSS seems to be a means for corporate success only when one is selling hardware. Otherwise, open-sourcing the software is just a tool for companies that are going out of business or simply can’t compete to sell their software. The Linux kernel is a big exception to this, but I’m not so sure it’s doing a great job when it comes to things that require more centralized design and careful planning, such as suspend/resume.
What makes you think that people don’t take advanced computer graphics courses in school and then decide to do something else with their careers? There are community projects reverse engineering graphics cards and rewriting the drivers from scratch. That’s much harder than exploring a fully-functional OSS graphics driver and finding room for improvement.
Some people just don’t know how to think like a programmer. But for those who do, picking up new specialties just isn’t as difficult as one might think. Programmers with different backgrounds can add a great deal of value to a team. Someone with a background in filesystems might have different insights than someone who has been doing graphics since high school.
One of the challenges facing any software vendor is figuring out what their customers want. Proprietary vendors have various strategies for doing this: Competitive analysis, focus groups, customer interviews, and beta programs. OSS vendors provide SVN repositories, nightly builds, download mirrors, bug trackers, mailing lists, IRC channels, forums, wikis, and blogs. Which method do you think best addresses the challenge of staying in touch with the needs of the userbase?
However, vendors that throw their code over the wall but fail to build a community and engage it in the development process will not realize the potential of the OSS model. Ask the OpenDarwin folks about what happens when OSS goes horribly wrong. Sun has positioned themselves as an OSS powerhouse, but while they have many millions of lines of OSS, their communities are still in the incubator phase. They’re gonna need a lot more eyes to cover that voluminous software library.
But back to graphics. OpenGL was 3D gaming up until the third or fourth release of DirectX. The first few versions of DirectX were somewhere between a joke and a good try. Most of the functionality to support OpenGL or DirectX is in the driver stack, and this is becoming more true over time. The hardware might be optimized for the kinds of instruction mixes that these programming models tend to produce. But most of the reason why nVidia tends to have an advantage over AMD/ATi in OpenGL is because their drivers do a better job of code generation for OpenGL. As I said before, most of the magic that makes those chips render 3D scenes–as opposed to solving some other kind of vector computation problem–is in the drivers.
The kind of “highly proprietary synergy” you refer to is the reason why we still use these chips predominantly as graphics accelerators as opposed to using them for accelerating any code that can be vectorized. If the hardware wasn’t closed, we would have optimizing compilers that generate code for “graphics” chips wherever possible. But instead we rely on the graphics vendors to implement specialized routines in their drivers. nVidia can now offload some video codec operations if you use their driver interface. But if I want to do some slightly different block transform, I’m out of luck, because they didn’t account for that.
As for things like suspend/resume, this is part of a class of systems programming problems where the basic approach is to have callback hooks for which various components can supply routines. When the system suspends, it runs through the list of registered callbacks, and the drivers (for example) are responsible for doing whatever magic they need to do in order to support suspend. An analogous process happens when we resume. This takes discipline, and it involves little pieces of code scattered across various parts of the source tree. But I’m not sure this is inherently more challenging for OSS than it is for proprietary vendors. For one thing, third-party proprietary kernel modules mean all bets are off. There’s no way to tell if the module will properly suspend and resume, or if it might barf all over other parts of memory if we try.
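The callback-hook pattern described above can be sketched in a few lines. This is an illustrative model only; the class and function names (`PMCore`, `register`, etc.) are invented for the example and are not the real kernel API:

```python
# Toy model of a suspend/resume callback registry: each driver
# registers a pair of hooks, and the power-management core walks the
# list in registration order on suspend and in reverse order on
# resume, mirroring how nested dependencies must be torn down and
# brought back up.

class PMCore:
    def __init__(self):
        self._hooks = []  # list of (name, suspend_fn, resume_fn)

    def register(self, name, suspend_fn, resume_fn):
        """A driver calls this once at load time."""
        self._hooks.append((name, suspend_fn, resume_fn))

    def suspend(self):
        order = []
        for name, suspend_fn, _ in self._hooks:
            suspend_fn()          # driver-specific "magic" happens here
            order.append(name)
        return order

    def resume(self):
        order = []
        # resume in reverse, so dependencies come back before dependents
        for name, _, resume_fn in reversed(self._hooks):
            resume_fn()
            order.append(name)
        return order

core = PMCore()
core.register("disk", lambda: None, lambda: None)
core.register("gfx", lambda: None, lambda: None)
print(core.suspend())  # ['disk', 'gfx']
print(core.resume())   # ['gfx', 'disk']
```

A driver that fails to register correct hooks (or a binary-only module the core knows nothing about) simply breaks the chain, which is the point made above about third-party proprietary modules.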
I wouldn’t worry about whether the OSS community has enough skills or the discipline to deliver high-quality software. We’ve got plenty of talent, probably more than any one of those big proprietary vendors. They’ve got managers to decide who’s to blame when deadlines are missed, requirements are overlooked, or teams aren’t on the same page. OSS developers rely on each other, and they pick up the ball if someone else drops it.
Software has more value in how it can be reassembled and extended to solve new problems than it has in and of itself. That’s why OSS makes sense.
I’m sorry, I think I was excessively hateful toward OSS people with my last post. I had just come off a really annoying argument with a zealot which I should have avoided after the first post.
The reason that I think an open NVidia or ATI graphics driver will be hard is because the interface between one of the fast chips and the software that runs them appears to be complicated, non-obvious, and messy. And this level of graphics hardware design does not seem to be something one learns in school. It’s easy to imagine a large OSS developer community building up around a kernel, or a database, or a compiler, because these are all topics that every CS student learns and for which there are a lot of educational materials.
I’ve never done any 3D graphics myself, so perhaps this knowledge of how the chips work is out there and learnable too.
The reason that I think an open NVidia or ATI graphics driver will be hard is because the interface between one of the fast chips and the software that runs them appears to be complicated, non-obvious, and messy.
And it’s something that a number of OSS developers have been able to figure out for the r300/r400 ATI cards simply through reverse engineering (and that others are working on for nvidia cards through reverse engineering). Imagine how much more they’d be able to accomplish with specs or source code to already fully functional drivers.
“””
I’m sorry, I think I was excessively hateful toward OSS people with my last post. I had just come off a really annoying argument with a zealot which I should have avoided after the first post.
“””
I’m not aware of that exchange. But it sounds like a perfect example of how well meaning, but overenthusiastic people employing bad advocacy can do far more harm to their cause than good.
Try to remember that *most* of us try to respect others’ points of view, exercise a reasonable amount of courtesy, and don’t go out of our way to offend.
I’d rather agree to disagree than make an active enemy.
{What makes you think that OSS people have any idea what to do with a stream processor or how to write a graphics driver?}
What makes you think they don’t?
{The high end gaming graphics market that’s out there grew up around a highly proprietary synergy between Microsoft and NVidia/ATI. OpenGL was not going anywhere in the 3D gaming market until after DirectX had been established. 3dfx was the exception, and it seems like they did a lot to get OGL working on hardware.}
History revisionism. DirectX is strictly second-rate compared with OpenGL, to the extent that Microsoft had to eventually buy some OpenGL patents from the death throes of SGI, and then make OpenGL run on top of DirectX in Vista to kill its performance in order to try to kill OpenGL.
Still OpenGL rules in high-end visual systems.
The high end of graphics in PCs is not PC games, it is more like this sort of gear:
http://www.es.com/
“I’m not so sure it’s doing a great job when it comes to things that require more centralized design and careful planning such as suspend/resume.”
Suspend/resume problems are all due to failures of the motherboard & BIOS to stick to the ACPI specification. Even Windows has problems, and most BIOSes are written with Windows in mind.
Suspend/resume works perfectly in Linux (say on Ubuntu Feisty) on any machine with a correct ACPI implementation.
While they seem to have made a habit of providing “free” drivers, they also seem to have a habit of relying on non-free firmware. Has this changed?
Firmware doesn’t really matter to me. Sure, I wouldn’t mind having it free, but it’s not as important, as it’s something I consider part of the device, and it also runs on the device. However, their choice for the region-controlling daemon was really poor; luckily they are rectifying that.
I’ve always bought AMD CPUs and Nvidia graphics, but this is going to change: I’m getting Intel for both graphics and CPU now. Why? I want freedom, and I want stability; my life just isn’t long enough that I care to put up with the crap nVidia or ATI would have me do.
How many vendors ship free firmware? Right now, as far as I know that’s limited to a couple of SCSI controllers. Your hard drive contains non-free firmware. If you have a laptop, the embedded controller contains non-free firmware. Non-free firmware is probably loaded into your CPU at startup. Almost all wireless cards require non-free firmware. While it would certainly be preferable to have free firmware, Intel are hardly especially evil here.
As long as they have the docs and allow the firmware blobs to be freely downloaded it’s not a problem. OpenBSD even has no issue with firmware blobs as long as they’re redistributable.
I know that their network and wifi cards have required this. The graphics chips, though (at least my Intel 945), haven’t required non-free firmware, as I run gNewSense and get good 3D performance. I’m interested in seeing whether it’s the same with this card, as I’m wanting to get a new laptop with this more powerful graphics chip.
Okay, I realize that conspiracy may be a harsh word for this.
I recall reading quite a while ago that one of the reasons nVidia hasn’t opened their drivers was due to Intel stating that they would sue them for some IP with the AGP code for their chipsets.
Now could this be because Intel all along was planning on trying to be the favored child of the Linux community?
I’m not saying that they’re evil for this, just that I think they were smart to start on the bandwagon first and hey, if they can also prevent others from being a favorite of the upcoming Linux users, then so be it.
ATI’s drivers on the other hand just suck. We know that they only release specs on their cards that are quite old and that’s only because they completely drop support for them.
I wish more of them did something similar to what Matrox used to do with their Gxxx cards. The driver was open source, except for some of their code for doing dual-head, which they more or less pioneered.
I do agree that installing Ubuntu on a laptop with an Intel graphics chip is so very nice. It worked straight out of the box, but of course my Matrox G400 always has as well.
I don’t think that what Nvidia says is actually the real reason. If they had the goodwill to support FLOSS they could simply provide GPU specs to, e.g., the nouveau developers. All the specs would have to provide is what values to put into which GPU registers to trigger certain functionality, which performs the calculation in hardware and returns the result. They don’t need to reveal IP in order to support FLOSS drivers.
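A toy sketch of what such register-level documentation amounts to: the spec tells you which value to write at which offset, and the driver does exactly that. Every register offset and bit value below is invented for illustration; none correspond to any real GPU:

```python
# Hypothetical register map, as a spec sheet might document it.
REG_CMD    = 0x0000  # command register (invented)
REG_STATUS = 0x0004  # status register  (invented)
CMD_FLUSH  = 0x1     # "flush pipeline" command bit (invented)

class FakeMMIO:
    """Stand-in for a memory-mapped register window on a device."""
    def __init__(self):
        self.regs = {}

    def write32(self, offset, value):
        self.regs[offset] = value
        # Simulate the hardware side effect the spec would describe:
        # writing the flush bit makes the status register read "idle".
        if offset == REG_CMD and value & CMD_FLUSH:
            self.regs[REG_STATUS] = 0x1

    def read32(self, offset):
        return self.regs.get(offset, 0)

mmio = FakeMMIO()
mmio.write32(REG_CMD, CMD_FLUSH)       # spec: "write 1 to offset 0x0..."
assert mmio.read32(REG_STATUS) == 0x1  # "...then poll 0x4 until it reads 1"
```

The point is that a driver author only needs the register interface, not the internal design of the silicon behind it.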
I think at one point the nvidia-glx drivers did work and were open source. Then about the time the Geforce 2 or 3 came out, the project stopped.
It definitely will be nice once the nouveau project matures. Though at least the nVidia drivers, while closed source, are quite good performance and feature wise, unlike the ATI ones.
I just wish Intel or Matrox or someone besides nVidia and ATI would get out some high end 3D hardware with opened drivers for Linux.
XGI provide open source drivers. It would also be logical for S3 to provide open drivers, as they’re hardly making a killing in the market right now and it couldn’t do any harm. I don’t really think you can call either of these two companies’ hardware “high end”, though. Higher than a GeForce 4 or Radeon 9250, certainly, but they can’t compete with nVidia and ATI at the top end.
They don’t need to compete on the high-end, they just need to be well documented enough that developers can make them perform like a high-end card…
–bornagainpenguin
I recall reading quite a while ago that one of the reasons nVidia hasn’t opened their drivers was due to Intel stating that they would sue them for some IP with the AGP code for their chipsets.
They did say something like this. That garnered some understanding, but they could have worked around it had they wanted to. They could have buddied up with AMD and had the gfx card communicate over hypertransport or something. Of course, then AMD/ATI happened so that went right out. Perhaps AMD will do something like that now. Hasn’t happened yet though. I worry a bit about nVidia, with AMD/ATI on one side and Intel on the other working on their own solutions (larrabee looks interesting).
I was enticed by the idea of the open gfx, so I got a mobo with the Intel GMA X3000, the GA-965G-DS3, even though I also got an nVidia card. Figured for 5 bucks more it couldn’t hurt to send a message that I liked the openness. It differs from the GA-965P-DS3 by the presence of the onboard graphics only. It was a slight mistake though; my version of the board (with the Intel graphics) doesn’t overclock at all.
Mesa GL doesn’t even come close to working properly with OpenGL games like Quake and DoomIII. Sorry but open source video drivers are just junk!
Nothing comes close to Nvidia’s driver packages for Solaris, FreeBSD and Linux.
Mesa GL doesn’t even come close to working properly with OpenGL games like Quake and DoomIII.
Mesa is software rendering. Why would you want Mesa? You want good drivers that keep you from having to fall back on Mesa rendering.
Sorry but open source video drivers are just junk!
Most of them, but not Intel’s. Intel is doing their *OWN* drivers and releasing them. They have the specs, they’ve got the money to hire programmers… their drivers are fast and feature-complete, unlike other crappy open source drivers.
Nothing comes close to Nvidia’s driver packages for Solaris, FreeBSD and Linux.
Are you joking? Installing the Nvidia proprietary drivers is a pain in the ass. Nothing comes close to the open source Intel drivers that get compiled and included along with the rest of the open source drivers.
Are you joking? Installing the Nvidia proprietary drivers is a pain in the ass. Nothing comes close to the open source Intel drivers that get compiled and included along with the rest of the open source drivers.
Sure they are a pain in the ass on Linux – but for that you need to blame Linus for his infinite wisdom about no stable kernel APIs.
FreeBSD and Solaris do not suffer from any such problems.
On all of these *nix systems, the majority of the driver support lives in userspace and interacts with the driver interface in the X server, which is under a non-copyleft, non-share-alike license. The part that involves the kernel is the direct rendering code, which interacts with the DRI framework. To get around the linking problem with the Linux kernel, this part of the nVidia driver lives in userspace and talks to a GPL shim layer that they load into the kernel. If the DRI changes, they only need to update their shim.
FreeBSD and Solaris face challenges because Linux DRI is advancing beyond their capabilities, mainly due to contributions from Intel. FreeBSD and especially Solaris are mostly locked into the frameworks they have. This is the double-edged sword of stable interfaces. As nVidia works to keep their shim in sync with Linux DRI, it becomes a hassle to continue to support these other kernels.
In his “infinite wisdom,” Linus decided that eventually people’s patience with–and trust in–proprietary drivers will run out, making stable in-kernel interfaces a non-issue. The Linux kernel development model works great for OSS drivers, and most hardware will eventually be supported by OSS drivers. Our suffering is temporary and for a good reason.
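The shim arrangement described above can be sketched abstractly. All class and method names here are invented stand-ins, not the actual nVidia or DRI interfaces; the point is only that the blob targets one small stable surface while the shim absorbs kernel-side churn:

```python
# Two versions of an in-kernel interface that changed between releases.
class KernelDRI_v1:
    def submit(self, buf):
        return "v1:" + buf

class KernelDRI_v2:
    # The kernel developers renamed and extended the entry point...
    def queue_buffer(self, buf, flags=0):
        return "v2:" + buf

class Shim:
    """The only piece that must be rebuilt when the kernel interface
    changes; it exposes one stable method to the proprietary blob."""
    def __init__(self, dri):
        self.dri = dri

    def submit(self, buf):
        if hasattr(self.dri, "submit"):      # old interface
            return self.dri.submit(buf)
        return self.dri.queue_buffer(buf)    # new interface

def proprietary_driver(shim, buf):
    # The blob itself is unchanged across kernel revisions: it only
    # ever calls the shim's stable submit() method.
    return shim.submit(buf)

print(proprietary_driver(Shim(KernelDRI_v1()), "cmd"))  # v1:cmd
print(proprietary_driver(Shim(KernelDRI_v2()), "cmd"))  # v2:cmd
```

The trade-off discussed in this thread follows directly: the shim keeps the blob working, but someone has to keep updating it for every kernel the vendor chooses to support.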
Pardon – you do realise that DRI exists in both Solaris and FreeBSD; Solaris has the Intel driver merged, the DRI merged and Mesa is included with Solaris as well.
There is no double edge to a stable interface – what it does mean is this: when you design an interface, it’ll take longer. Rather than throwing ideas at a wall and seeing what sticks, it’ll require going away, designing, arguing, debating, analysing, then implementing. Sure, it’ll take longer, but what it means is that the API has been properly thought out before being merged – and better still, it’s modular enough that new features can be added without breaking compatibility.
A prime example of this would be Solaris’s new network driver API, which covers all the facets of networking; when features are added, such as WPA support, all wireless drivers support them by virtue of being in the actual driver API rather than an afterthought.
I don’t want to bash Linux, but to me, it always seems that people just keep throwing ideas and see what sticks rather than sitting down and logically thinking out what the requirements of the API are and all the various issues that surround it.
Of course Solaris and FreeBSD have their own DRI implementations. I won’t argue that. But Linux DRI is receiving contributions from Intel and others that aren’t going into the other kernels.
Yes, these are two very different development models, and I would hesitate to argue that one is clearly better than the other. They both have their advantages and disadvantages, and although Linux has had some success, it’s too early to pass judgment on whether their development model is appropriate for the mature and widely-deployed OS that it aspires to become.
The pattern for Linux driver frameworks starts with organic development followed by a unification effort or two. SCSI, Ethernet, and USB reached their current state of maturity through this process, and 802.11 is undergoing its reconstructive surgery right now. The end results are generally very good, and that might have something to do with the number of ideas that get hurled at the wall, so to speak.
If Solaris is to compete with Linux head-to-head, then a stable driver interface is going to be a strong selling point. By that point, though, it may be that most of the Linux driver interfaces will have already reached a state of de facto stability. We’ll see.
True, but for me, I think it’s going to be interesting to see where Linux ends up. If Sun really pushes hard for Solaris in the enterprise, and FreeBSD will always hang around as a bastion for university students, where is Linux going to sit?
I have a feeling that Linux will be *the* operating system for embedded devices – and honestly, I don’t see why some view this as a bad thing. So what if we have Solaris/FreeBSD/Windows/MacOS X with a small amount of Linux on the desktop, when there are millions upon millions running Linux every day on their mobiles, MP3 players and PDAs?
For me, I think it’s good – the future shouldn’t be about the infighting between the different *NIXes, but instead about working together to fight a common enemy – that enemy being the promotion of closed formats and the likes of Microsoft.
Pardon – you do realise that DRI exists in both Solaris and FreeBSD; Solaris has the Intel driver merged, the DRI merged and Mesa is included with Solaris as well.
I can’t speak to the Solaris situation, but I should point out that FreeBSD is using an older DRI interface, and that AIGLX is currently broken on Intel and Radeon cards under FreeBSD.
Adam
“Sure they are a pain in the ass on Linux – but for that you need to blame Linus for his infinite wisdom about no stable kernel APIs.”
A stable in-kernel API does nothing but hinder further development in terms of new features and bugfixes. Please read Greg KH’s response on this for more info, called “stable_api_nonsense.txt” in the kernel source’s Documentation directory. (It’s also available online at his site: http://www.kroah.com/log/linux/stable_api_nonsense.html )
(Sorry, I meant to vote you down but clicked the wrong button. Would someone do so for me please? )
Last time I checked, you weren’t supposed to mod people down just because you disagree with them. You’d be -5 if that were the case. No wonder so many posts get modded negatively even though they are not breaking the rules (off topic/personal insults.) Learn to moderate…
That said, a stable kernel API doesn’t hinder development. Just because one person, obviously biased, says so doesn’t make it true. I’d say DTrace is light years ahead of anything Linux has to offer. Same with ZFS. Same with Zones. The list goes on.
A stable API just means the development model is different. More time planning, less time hacking together code that semi-works. There’s a reason why drivers in linux tend to be flaky, especially ones that are newly introduced. Personally, I’ll stick with stability. You can have all the bleeding edge functionality that doesn’t/barely/halfway works, I’ll happily wait for a stable solution that always/fully works instead.
Different models, different methods, both eventually get to the same point. One is a little more rapid at the expense of a lot of stability, one is a lot more stable at the expense of a little bit of rapidity.
That said, a stable kernel API doesn’t hinder development.
That depends. Solaris is doing a pretty good job at it, but you only have to look at Microsoft’s current situation to see what happens when things don’t work out quite so well. More to the point, Linux is developed under a different philosophy – they have to rely on community contributions rather than paying people. So while Sun can pay people to work on a stable api the same situation in Linux may lead to a stall in development or a fork when developers get frustrated about having to worry about keeping everything stable all the time. Then again, maybe not. None of us really know.
There’s a reason why drivers in linux tend to be flaky, especially ones that are newly introduced.
Except it seems like a stable API would only help long-term stability. Why would it negatively affect new drivers written specifically for it more than the old ones? I think the reason the drivers are flaky is that no one is paying testers to make sure everything is QA’d.
Different models, different methods, both eventually get to the same point. One is a little more rapid at the expense of a lot of stability, one is a lot more stable at the expense of a little bit of rapidity.
I do pretty much agree with you here.
Mesa is software rendering.
Actually, it’s not. Mesa, built with the linux-dri target (as is done on most/all Linux distributions these days), provides the various direct rendering 3D drivers, including those for Intel GPUs.
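A quick way to tell whether you’re on a DRI driver or Mesa’s software rasterizer, as a sketch, assuming the glxinfo tool (from mesa-utils or your distro’s equivalent) is installed:

```shell
# "direct rendering: Yes" plus a hardware renderer string means the
# DRI driver is active; a "Software Rasterizer" renderer string means
# Mesa is falling back to CPU rendering.
glxinfo | grep -E "direct rendering|OpenGL renderer"
```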
Adam
How is it a pain in the ass to install them? If your distro doesn’t have them in its repository, just download them and run the script! Make sure you have the kernel headers installed, and it’s just fine!
On the other hand, I haven’t experienced this sucking of nVidia drivers that some say… I think they work really well. They have legacy drivers for Linux, which at least provide hardware 3D for older cards. Configuring dual-head without editing a line in xorg.conf is pretty nice; I was actually surprised it worked that well with ‘nvidia-settings’.
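For the curious, the dual-head setup that nvidia-settings writes out for you corresponds roughly to an xorg.conf fragment like this (the identifiers and modes below are only illustrative; TwinView and MetaModes are options of the proprietary nVidia driver):

```
Section "Device"
    Identifier "nvidia0"
    Driver     "nvidia"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "nvidia0"
    # TwinView drives both heads from a single X screen
    Option     "TwinView"  "true"
    Option     "MetaModes" "1280x1024, 1280x1024"
EndSection
```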
Mesa GL doesn’t even come close to working properly with OpenGL games like Quake and Doom III. Sorry, but open source video drivers are just junk!
I don’t think open-source drivers are junk. For example I’m using open-source drivers with my Radeon 9200, and they work just fine, and yes, Quake does run just fine! Beryl works and all such, no problems there. They’re very stable, fast and so on, and they support _almost_ everything the closed-source ones do. The only thing that needs improving is the TV-out support, but even that works in 800×600 resolution with the correct patch..
I use the open source Radeon driver too (for the 9200), and the advantage of open drivers for ATi (when available) is that they support 3D unlike the presently available open nVidia drivers.
On the other hand, I don’t really think the Radeon driver is very fast at all. I turned off all Compiz 3D stuff because it’s slow and doesn’t feel solid. On OS X I use the same type, ATi Radeon 9200 and the (closed, of course) driver for OS X is way better: faster and feels more solid.
True, I have never experienced any X crash or anything of the kind on Linux using the open driver.
There has been some news about ATI now aiming to improve their drivers and their commitment to open source:
http://www.phoronix.com/?page=news_item&px=NTc2Mg
http://www.0xdeadbeef.com/weblog/?p=288
I don’t know; I have a feeling I’d heard statements like that from ATI before, though. Many people do claim that ATI’s open source drivers might be better than the nVidia open source drivers, and maybe so? But the only ATI video card I ever had, many years ago, I immediately swapped for a higher quality Matrox card that worked way better with both Windows and Linux. Then, after Matrox recently almost completely stopped supporting the Linux customers with their newer cards, I said goodbye to Matrox and chose nVidia because of their well-working, although proprietary, drivers.
After reading news like this, my next GPU might indeed be Intel too. But let’s hope that ATI will also stay interested in supporting open source operating systems.
“after Matrox recently almost completely stopped supporting the Linux customers with their newer cards ”
Actually they are not good with Vista either; to this day they don’t have working Vista drivers for the QID low-profile graphics cards, which cost $800.
One of our customers’ workstations had this card and he wanted Vista, so we contacted them to ask for the release date of the Vista drivers. They told us next week, and every week they say next week!!
I don’t like how they treat their customers, especially ones who paid $800 for their card.
Installed, resolution under Gentoo now equals that under Windows.
Very happy I specified the Intel chip when I bought from http://www.emperorlinux.com
nVidious and bATI can defenestrate themselves.
I applaud Intel for creating open source drivers for their chipset video, but that doesn’t mean the drivers are perfect either.
My Dell 700m, with the Intel 855GM chipset/video, running Xorg 1.3 and the intel 2.0.0 driver, still has a lot of problems:
1. beryl/compiz/opencomp are slow (and with older 1.7.4 drivers and Xorg 1.2 they were much faster)
2. Only X11 video output works; everything else is prone to crashing or not displaying. Basically all video is rendered by the CPU (even though the Windows XP drivers render through VMR9 with GPU assistance).
3. random crashes upon resume
The biggest problem is that anything older than the 915GM is basically considered outdated, and I don’t see any immediate plans from Intel to fix these problems.
I would say nVidia and ATI (and even Intel’s closed source Windows drivers) work a lot better.
I’m confused over the Nvidia bashing that is going on – sure, their drivers are closed source, but for me, I’d sooner have quality closed source drivers than no drivers at all.
I’ve used nVidia’s drivers on FreeBSD, Solaris and Linux, and quite frankly, they’re very stable and reliable. Compare that to ATI, who flat out refuse to support Linux in a substantial manner, let alone Solaris or FreeBSD. To make matters worse, even after the AMD buyout, things still haven’t improved.
When I raised this, I got bashed with “Oh, they’ve just merged”. I don’t buy it. If I were to buy out a company, I would lay down the law: this is what I want from the new acquisition. What is so hard about sending an order down to the relevant manager saying “this is what I want you to focus on; I want better drivers for Linux, Solaris and FreeBSD, and failure to do so will result in the termination of your contract”?
Unfortunately too many companies pussyfoot around the edges instead of making their demands known and then backing them up with robust action if they are not carried out in a timely manner.
It might be all right as long as you use a supported graphics card on one of the lucky supported platforms. Otherwise you’re fecked, because no specs are revealed, not even under NDA.
If and when nVidia chooses to drop support for your platform and/or your hardware, you’ve got a major problem that only reverse-engineered open source drivers can patch up a bit.
Let’s wait and see… but it’s good, in general.
I’ve been using AMD CPUs, nVidia chipsets, and nVidia Quadro 2D/3D graphics in my two networked, beowulf.org-style clustered commodity desktops (they are Linux-only, with win2k and QNX on two small partitions) for many years, ever since the K6-2 500 MHz began selling, and have been quite pleased with it. No Intel chips for years.
Lately I’ve been reconsidering these choices for my new desktops, mostly because of chipset driver support for Linux (ACPI suspend and the ASUS automatic fan control aren’t reliable on Linux) and power consumption (nanometers).
I think Intel will be my next chipset/CPU, and I will finally buy a pricey Mac mini as well, to run both Linux and OS X on a small partition (AFAIK Mac minis use the mobile Intel chipsets with onboard graphics, and will surely be upgraded now).
There’s a lot of competition going on in AMD notebooks for Vista users (I am/was an AMD fan).
http://www.reghardware.co.uk/2007/05/09/nvidia_rolls_out_amd_igps/
and this for desktops
http://www.reghardware.co.uk/2007/05/08/nvidia_guns_for_integrated_…
(I know this isn’t a hardware aficionados’ forum but, alas, we all like our hardware sharp).
___________________________________
I still feel suspicious about corporate involvement in open source joint efforts, for some reason that I can’t really explain.
Vista?
What is this Vista you speak of?
–bornagainpenguin
“””
Vista?
What is this Vista you speak of?
“””
http://upload.wikimedia.org/wikipedia/en/6/6a/Mordor.png
So this is the source of global warming. Mount Doom… it all makes sense now!
Also, sorry that you encountered a zealot. I personally still use Windows, but my next computer is going to be a dual-boot Mac, and then I will probably turn my current computer into a play machine to test Linux/BSD on.
Since I’m buying a Mac, I guess I will be using Intel chipsets/graphics in all likelihood, because I’m probably going to buy a MacBook after the release of Mac OS X 10.5.3.
“””
So this is the source of global warming. Mount Doom… it all makes sense now!
Also, sorry that you encountered a zealot.
“””
I’ve been entirely Linux since 1997. But I’ve been a Unix admin since 1988, so I had a bit of a head start.
Actually, it was PlatformAgnostic who encountered that zealot. Which is not to imply that I have *not* encountered them. I’ve even *been* one upon occasion in moments of excessive zeal. Though I try not to overdo.
But you are right that it would be best to focus upon the real issues… like how in the world are we going to get Sauron to cut down on the release of greenhouse gases and unburned hydrocarbons? And considering the air conditioning requirements for that (no doubt poorly insulated) tower of his, fluorocarbon emissions are probably an additional issue; he’s likely still on R-22.
You wanna take him the notice of noncompliance?
but that won’t last through 2008 as Intel moves to a serialized bus architecture and an on-die memory controller.
AMD already has an on-die memory controller now (2007), and it will also be included on quad-core AMD processors.
They also already have the serialized bus architecture – that was his point, that Intel is removing the advantages in technology that AMD currently has over them.
that was his point, that Intel is removing the advantages in technology that AMD currently has over them.
You are right. I had understood it the other way around. Thanks.
My next laptop is going to have Intel graphics
Has Intel leaked any specs on these chips yet? What will they compare to?
If AMD makes better and/or free drivers, I would see that as new management at work. I wouldn’t hold up the old management’s mistakes and rub their faces in them.
I’m still looking forward to the Open Graphics Project, though development feels slow.
One can dream… a Sun laptop with an open mobile SPARC, an open BIOS, and an open-spec graphics card, all at a fair price.
Beyond the hardware quality, Intel is showing how directly connected they are with free software and how much they support that philosophy. This is quite interesting for free software users, who can look forward to buying hardware 100% compatible with their software.
I hope to see other hardware manufacturers working for users’ freedom. Keep up the good work, Intel!