Alex Ionescu, one of the lead developers behind the ReactOS project, has published a detailed article on the XP/Vista application compatibility system. “For the last few days, I’ve been intimately becoming acquainted with a piece of technology in Windows XP and Vista that rarely gets the attention it deserves. It has raised my esteem and admiration towards Microsoft tenfold, and I feel it would be wise to share it, publicize it, and then of course, find (positive) ways to exploit it to turn it into a powerful backend for various purposes.”
..that ReactOS will soon be developing a system along these lines. This could quite possibly do wonders for app compatibility in ReactOS if properly reimplemented.
It is possible that this will slightly improve app compatibility, but after reading the article, I don’t think that I would expect miracles.
The compatibility database will not inherently allow things to work with the OS that wouldn’t work before (although perhaps certain “shims”, if implemented, might do that). From what I understand, they are already doing many of the operations that the ACS would be doing anyway: the only difference is that they are hardcoding them. Such a system will allow ReactOS to automate the process by removing it from the main code and adding it to a compatibility database, thus making the code cleaner. Given this, I suspect that the implementation of this system will be fairly transparent to the end user (no additional features to start, but better code). Then again, any boon for the Coder is a boon for the User eventually.
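To make that contrast concrete, here is a minimal C sketch (purely hypothetical: the rule structure, function and application names are invented for illustration and are not ReactOS or Windows code) of the difference between hardcoding a per-application workaround and driving the same workaround from a compatibility database:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical compatibility rule; the field names are illustrative only. */
    struct compat_rule {
        const char *exe_name;          /* application the rule applies to   */
        int         lie_about_version; /* pretend to be an older OS release */
    };

    /* Hardcoded approach: the workaround lives in the main code path. */
    static int needs_version_lie_hardcoded(const char *exe)
    {
        return strcmp(exe, "OLDGAME.EXE") == 0;   /* one special case per app */
    }

    /* Database approach: the same knowledge lives in data that can be added
     * to or removed without touching (or re-testing) the main code. */
    static const struct compat_rule rules[] = {
        { "OLDGAME.EXE",   1 },
        { "LEGACYAPP.EXE", 1 },
    };

    static int needs_version_lie_db(const char *exe)
    {
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (strcmp(rules[i].exe_name, exe) == 0)
                return rules[i].lie_about_version;
        return 0;
    }

    int main(void)
    {
        printf("hardcoded: %d  database: %d\n",
               needs_version_lie_hardcoded("OLDGAME.EXE"),
               needs_version_lie_db("LEGACYAPP.EXE"));
        return 0;
    }

Either way the same workaround is applied; the second form just keeps the special cases out of the mainline code, which is the clean-up being described above.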
Compatibility hacks for 5000 applications? Sounds like a giant turd of a system…
I’d hate to be the guy stuck maintaining that monstrosity…
Oh yeah, because a modular compatibility system that isn’t hard-coded is such a mess :rolleyes:
Oh yeah, because a modular compatibility system that isn’t hard-coded is such a mess
It’s by no means modular. It does various specific, and probably unspeakable, things right down to a per-application basis – and it seems to modify the system in ways which make an application think it is running under another version of Windows. DOSBox does a better job of this.
It’s not as if this is merely a compatibility library that makes previous versions of components available to applications that need them.
..and it does it in a modular way. Geez.
..and it does it in a modular way. Geez.
Dude, it has a list of over 5000 individual applications that it uses to make apparent changes to the system for each individual application. That is by no means modular in any way, shape or form.
Now, if all that were required to run an older application here were a set of compatibility libraries, like on Unix and Linux systems, that represented Windows 95, 98, 2000 etc. then that would be modular. As it is, it’s not, because for each application there is no concept of Windows 95, 98, 2000 or another version of Windows. The system has to make individual tweaks based on each individual application.
Yes, it is in a modular way. Why? Because it’s not hard-coded. There is a database that holds the rules for any given application and the compatibility layer applies those fixes as prescribed. To take those hacks out, they simply disable the database. How is that not modular?
You don’t think it’s modular because you don’t want it to be. You want Windows to have more hacks on top of it so it can further justify your disdain.
Yes, it is in a modular way. Why? Because it’s not hard-coded.
You have a poor definition of what being modular is about.
There is a database that holds the rules for any given application and the compatibility layer applies those fixes as prescribed.
If you simply had compatibility libraries for Windows 95, 98, 2000 and XP and applications could reliably run on them then yes, it would be modular. However, those are then being hacked on an individual, per-application basis losing any modularity you ever had in the whole thing. In effect, those things actually are hard coded – for every application.
You don’t think it’s modular because you don’t want it to be. You want Windows to have more hacks on top of it so it can further justify your disdain.
Whatever.
Ah, since it’s per application, it’s not modular. Yeah, that makes complete sense.
Why is it modular? Because they could take it out and it would have no effect on the system itself (internal sources), only things that depend on it (external sources). It is designed in a way where you can easily remove a rule for an application w/o affecting functionality of anything but the application that depends on it. You can add rules that have no effect on anything but the applications the rules are designed for. It applies hooks into the system, unbeknownst to it, for very specific cases, which make it easier to test. Of course, you always need to do regression testing. But there is already plenty of that being done for Windows.
I don’t understand. What are you not getting here?
Ah, since it’s per application, it’s not modular. Yeah, that makes complete sense.
You don’t know what modular means, or certainly what it implies.
Why is it modular? Because they could take it out and it would have no effect on the system itself (internal sources), only things that depend on it (external sources). It is designed in a way where you can easily remove a rule for an application……..
No, you’re not looking at the way the actual system is designed. If you’re designing a bunch of modules (Windows 95 compatibility, Windows 98, Windows 2000, Windows XP etc.) that are effectively a bunch of compatibility modules to run all applications under those systems then that can be said to be quite neat and modular because you’re dividing your functionality into things that can be grouped.
Once you start individually changing the behaviour of those modules on a case-by-case basis then you’re losing control over the whole thing, and the functionality of one runs into another. Modularity implies strict dividing lines between units of functionality.
I don’t understand. What are you not getting here?
It’s a case of what you’re not getting. It’s just a very, very badly designed way of doing things. Microsoft should be asking themselves why they have to do it.
Wow, you didn’t actually address any of my points. Just said “No, you’re wrong.”
Nice job.
No, because a system that changes behavior based on heuristically-identifying specific clients is broken by design.
Why is that? If it’s accurate, then what’s the problem? I really don’t understand why you’re being so negative about this.
Changing behavior based on client identity is a really ugly hack. It might be necessary to meet Microsoft’s goal of 101% application compatibility, but that doesn’t make it any less of an ugly hack.
A well-designed system should have the fewest modes possible. It should have highly predictable behavior. Special-casing certain clients goes entirely against those design principles.
Even if the mechanism is highly accurate, it still makes maintenance of the system much more complex. Special case code has a way of doing that, and we’re talking about 5,000 such special cases here. I don’t doubt that crap like this is one reason Microsoft needs a literal army of programmers (tens of thousands) to get each new release of Windows out the door, while Apple makes do with a fraction of that number.
I’m convinced at this point that Microsoft’s competition has much better code to work with than they do. GNOME, KDE, Qt, and X11 are very clean pieces of code (it’s open-source, look for yourself!) despite their size. Based on how quickly Apple is able to get new features out the door, I’m guessing that their stuff isn’t half bad either. Most of what I’ve heard of the Windows code* suggests that it’s no fun to work with. That’s going to bite Microsoft in the end.
*) I’ve heard good things about the kernel, and my guess is that the CLR has the benefit of being a coherently-designed new development. One way out for Microsoft might be to transition Windows as much as possible to new code based on the CLR. Of course, none of this will help unless they abandon their culture of embracing complexity.
Firstly, it’s the customer’s requirement that forces Microsoft to include backward compatibility. I’d be of the mind that they know the value of removing it.
How it’s implemented really is Microsoft’s business. It’s their investment, so rightfully it should be their choice in this instance. They have their reasons for doing it a particular way and unless one has access to the source code and the time to thoroughly analyse it, then one really is in no position to criticise something he/she has no valid knowledge about.
If Microsoft chooses to implement these so-called “hacks” to get the software working and it works accurately, I really don’t see a reason to criticise it, especially when one is purely guessing about exactly how it’s done.
There is a very big difference between “hearing” about Windows source code and actually having experience with it.
They have their reasons for doing it a particular way and unless one has access to the source code and the time to thoroughly analyse it
You’re just making a cop-out statement there.
If Microsoft chooses to implement these so-called “hacks” to get the software working and it works accurately
The problem is, it doesn’t work accurately and can never work accurately. The only way that this can work reliably is after the event. Vast numbers of people will have to test each individual application in the betas for the final release, or in the Gold release, so that Microsoft can create compatibility profiles in the final release or in the form of service packs.
…especially when one is purely guessing about exactly how it’s done.
We’re not guessing. The subject of this article is someone describing how the whole system works, and we can see that that is how it works.
There is a very big difference between “hearing” about Windows source code and actually having experience with it.
There’s no need to see any source code. We can see quite clearly how it works, and this series of articles gives us a description.
You’re right… someone needs to make the compatibility database. The individual applications are tested and shimmed up to work before Windows is released. New programs don’t get the shims, so devs have to make them work. Big enterprise customers can likely get MSFT to make shims for their internal apps on an as-needed basis.
What do you mean that it doesn’t work accurately? How the heck would you know? If it’s working, then all you see is the application running, despite all the unspeakable things happening in the background. Have you ever seen an application shimmed when it doesn’t have to be? What other sort of inaccuracy are you afraid of?
Most importantly: are you just bashing because you hate Microsoft?
What do you mean that it doesn’t work accurately? How the heck would you know? If it’s working, then all you see is the application running
I’m somewhat astonished that I’m having to explain this, and that these comments get modded up, but here goes.
As I said in that comment, if you have a proper, modular system where all that’s required for compatibility for older applications is a set of compatibility libraries, then this dramatically reduces the amount of application testing that needs to be done – and makes it infinitely more reliable. As for DOS, run DOS applications in something like DOSBox – that seems to manage quite well over and above Microsoft’s much vaunted application compatibility system. Chasing after each application to fool it individually about what system it’s running under is just never going to work reliably, and you’re going to need a ton of time and resources to do it. It also just isn’t feasible in the long run if you want to add new features and maintain backwards compatibility.
This kind of approach also ensures that more obscure applications that haven’t been tested explicitly are more likely to actually work. That’s what backwards compatibility really means. Fancy that, eh?!
Most importantly: are you just bashing because you hate Microsoft?
Are you continually dredging this up because you feel that I might have a point, and just want to believe that I’m bashing Microsoft?
Ok… I think you’re missing what’s wrong here. Microsoft publishes APIs in Windows and documentation to go along with those APIs. There are some cases where people get behavior out of Windows that is not documented. The reason certain things are not documented is that Microsoft wishes to have the freedom to change details behind the applications’ backs without changing the documented semantics of the APIs. It is worth doing this in some cases to enable new features or even to increase the performance of applications without requiring them to be relinked or rewritten.
It’s not really the ISVs’ fault for using undocumented internals… usually it’s something discovered accidentally that happens to work under all extant versions of Windows. Or it’s a bug in the program that just happened not to surface until some change was made to Windows (like those SimCity free()s). Okay, sometimes it is if they do something like walking up the call stack and searching for some internal Windows structure to munge (which apps have done in the past).
The modular system you’re describing doesn’t seem to solve this problem. Your proposed solution goes after a far easier problem. Do you still have a point?
Microsoft publishes APIs in Windows and documentation to go along with those APIs. There are some cases where people get behavior out of Windows that is not documented.
That’s Microsoft’s problem. Other systems have managed it.
The reason certain things are not documented is that Microsoft wishes to have the freedom to change details behind the applications’ backs without changing the documented semantics of the APIs.
It’s not the documented APIs that matter, but the actual APIs people can access. If developers can go behind the backs of these APIs then it is just very badly done.
It is worth doing this in some cases to enable new features or even to increase the performance of applications without requiring them to be relinked or rewritten.
Well obviously. It’s called ABI as well as API stability, and it means that a set of interfaces that an application uses from one OS to the next should be the same. The implementation of it shouldn’t affect the application, but in the case of Windows, it obviously does.
Or it’s a bug in the program that just happened not to surface until some change was made to Windows (like those SimCity free()s).
SimCity is actually a DOS problem. Microsoft should have had something like DOSBox to remedy this, as DOS is just a completely different system.
The modular system you’re describing doesn’t seem to solve this problem. Your proposed solution goes after a far easier problem. Do you still have a point?
No, actually, the aim and the problem are exactly the same when it boils down to it – backwards compatibility for applications. You’re getting your thinking bogged down in what Microsoft has to do to achieve this. It’s still Microsoft’s problem and it doesn’t make what they’re having to do any less bad.
Clearly your “other systems” have not really solved this problem, or we would be running some form of Unix instead of Windows.
Let me give you a really concrete example that comes up all the time. People call the GetVersionEx() API to find out what version of Windows they’re running on. Then they do a switch statement deciding which behaviors to enable on different versions of Windows. When the newest Windows comes out, GetVersionEx() gives the app a number that doesn’t agree with its switch statement and the program crashes or politely refuses to run.
The most common Appcompat shim applied by Microsoft is called VersionLie. It changes nothing other than the numbers returned by GetVersionEx. Programs work, customers are happy, and no one is the wiser (except Microsoft and the ISVs who find that the next version of their EXE fails since it doesn’t get shimmed like the old one). Can you tell me how you’d solve this sort of problem in concrete terms?
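As a rough C illustration of the failure mode described above (this is not code from the article; the version handling is a made-up example of the pattern), here is the kind of version switch that breaks the moment a new Windows release reports a major version the program never anticipated, and that the VersionLie shim papers over by reporting an older number:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        OSVERSIONINFO vi = { 0 };
        vi.dwOSVersionInfoSize = sizeof vi;
        GetVersionEx(&vi);   /* the call the VersionLie shim intercepts */

        /* Fragile pattern: enumerate the versions known at release time
         * and refuse to run on anything else. */
        if (vi.dwMajorVersion == 4) {
            puts("Using the NT 4.0 / Windows 9x code path");
        } else if (vi.dwMajorVersion == 5) {
            puts("Using the Windows 2000 / XP code path");
        } else {
            /* A later Windows (major version 6, i.e. Vista) lands here,
             * and the program "politely refuses to run" unless the shim
             * lies about the version. */
            puts("Unsupported Windows version");
            return 1;
        }
        return 0;
    }

With a version-lie shim applied, the reported numbers simply match what the old binary expects, so it keeps taking its tested code path.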
I’m not arguing that the cost/benefit of this feature does not justify having it in there. What I’m saying is that from a software engineering point of view, it’s a horrible hack (no matter how its implemented!) and admiring it is really twisted. It’s an “ends do not justify the means” argument here.
Most of what I’ve heard of the Windows code* suggests that it’s no fun to work with. That’s going to bite Microsoft in the end.
I concur with this view now. You only need to look at the features list for Vista, and how much was actually achieved. In the meantime, Apple and even Linux desktops are forging ahead with some of what Vista didn’t deliver, and in the case of Apple they’ve done it with fewer developers and in less time. This has nothing to do with maintaining backwards compatibility. It seems that an awful lot of areas in Windows have direct or indirect dependencies on each other, where there really shouldn’t be any, and the inevitable conclusion is that it can’t be done.
I’ve heard good things about the kernel, and my guess is that the CLR has the benefit of being a coherently-designed new development.
The problem is that not everything can be programmed within the CLR, and Microsoft isn’t even developing much within it themselves. It’s something they expect the hoi polloi to use, and to pick up the fall-out from. Things are likely to get worse, unfortunately.
I wouldn’t place too much hope on the CLR and the .Net framework solving all of these problems. There is already a proliferation of .Net framework versions, a lot of things which don’t even work from one to the other, and the MSDN lunatics seem to have taken over now.
Every MSDN magazine is resplendent with new and innovative ways to blow your legs off with two barrels’ worth of explosive dependencies to ship with your application (or deal with once you get it there) such as SharePoint(!), COM+, MSXML etc. etc. No matter how hard Microsoft tries, most of their customers still have Win32 API, Visual Basic 6 (sorry, you can’t open that in VB.Net!), VC++ 6 and possibly ASP code to maintain – and none of the new stuff gives them anything other than pain, heartache and a re-write with no benefit whatsoever to deal with.
The problem with .Net and the new and cool development APIs and libraries Microsoft are encouraging everyone to use is not that they are coming up with badly thought-out ways to maintain backwards compatibility. They’ve just about forgotten the backwards compatibility bit altogether.
“No, because a system that changes behavior based on heuristically-identifying specific clients is broken by design.”
When you are supporting apps going back 15 years, ones that rely on broken OS behavior, undocumented tricks and bugs, would it be better to carry all that crap forward, or provide a workaround that doesn’t affect the overall stability of the new system? I’ll go with the app compatibility db, I think.
Sorry, but I’m not impressed. This is nothing more than a poor substitute for a more clever and foresighted solution to the problem.
Microsoft did this already under DOS, with the infamous “setver” command. They created the need for setver themselves, using undocumented OS hooks in their own applications to gain an advantage over the competition.
Windows 95 then ushered in the era of application-specific behaviour of the OS: to get SimCity to work under ’95, it had to be allowed to re-use previously freed memory.
They have had this mess the whole time and apparently never learned from it…
“Windows 95 then ushered in the era of application-specific behaviour of the OS: to get SimCity to work under ’95, it had to be allowed to re-use previously freed memory.”
And if SimCity hadn’t worked under Windows 95, people like you would be accusing Microsoft of breaking applications on purpose to get a competitive advantage.
What is the more clever and foresighted solution?
Backwards compatibility, even if implemented as an ugly hack, is the single biggest reason why Windows is so dominant today.
No, I wouldn’t have. And I guess it would’ve been patched, too.
The more clever and foresighted solution is to have a versioned API with an indication of which version breaks previous versions. Old versions can live side by side with new ones.
You’re right that backwards compatibility is one of the key factors for Windows’ success. It’s just that those times are over. MS should have taken the opportunity to break with all the old stuff with Vista, but now, things are still getting worse and worse.
I’m guessing you have little to no experience programming for Win32 throughout its existence, based on your whines. You’re guessing some old game would have been patched? Yeah, right! No, they might have eventually (after a fairly lengthy time) come up with a new version people wouldn’t get for free, because (as with this SimCity example) the game was very old by that time, and very nicely perfected in terms of stability, features and implementation, at least on its prior platform. There are a HUGE number of applications that will never be updated for similar reasons, and Microsoft does their best to account for them.
Your statement about the API versioning issue indicates you’re unaware that Microsoft HAS been going in that direction for many things in the Win32 API. Granted, not all the API is like that, but they haven’t completely ignored it. Even then, that ends up cluttering up the codebase, because bugs that exist in older versions and that applications relied on must be kept intact in later versions of Windows, lest people bitch (like yourself) that they’ve gone out of their way to break applications again. Microsoft can’t win on the PR front when it comes to this sort of thing: people bitch horribly if anything that ran on previous versions of Windows doesn’t anymore, and yet they bitch that Windows doesn’t really progress with new things, or that it’s bloated, etc. Well, you can’t have it both ways, and Microsoft can’t really win, but this solution is, sadly, the best compromise that’s likely to be remotely practical and achieves both ends.
Besides, surely you don’t think Windows is the only existing OS that applies patches to its executables, do you? It’s not!
Right. Backward compatibility was arguably the most important factor in Windows’ success, in a world where most companies didn’t care about that (and many don’t care right now, just consider Linux).
That huge amount of backward-compatibility work helped Windows compensate for its instability until Windows 2000. If Microsoft bothers to keep enlarging that database, it’s only because they learned their lesson. While others didn’t.
That is incredibly inaccurate. Backwards compatibility allowed companies (and people) to upgrade to newer versions of Windows whilst maintaining access to their business-specific, mission-critical (to them) applications.
Microsoft have invested millions into backward compatibility alone. They didn’t do it just to be a nice company. They did it because of the very large demand for it.
Err.. In my experience, and based on logical thinking, it actually increases the surface area for potential instability.
Microsoft have invested millions into backward compatibility alone. They didn’t do it just to be a nice company. They did it because of the very large demand for it.
Of course. Who wrote anything different? Backward compatibility helped spread Windows in a world where new OS versions would usually require new application versions. The key factor is that they were able to provide quite good backward compatibility while other OSes usually couldn’t. I really can’t see where our opinions differ here.
Err.. In my experience, and based on logical thinking, it actually increases the surface area for potential instability.
Actually, back in the days when stability was a major Windows concern (3.1, 95, 98, Me), many people (including myself) were a bit pissed off by such problems, BUT the ability to run our applications without the need to upgrade them was a selling factor. We were able to upgrade to better Windows versions without the need to buy our software again, meaning less money to waste. It was a huge benefit.
Think about it: if Windows required you to buy new apps with every new version (i.e. every 2-3 years), would you have stuck with it? I’m not sure, actually.
That also multiplied the available software base because each piece of software remained usable for years! Think about an app from a dead software company. If you bought it in 1996 AND you still need it, chances are high you can still run it!
I’m not interested in PR here. Sorry, that’s just not my focus.
I don’t think I got my point across: the problem is not backward compatibility itself, but rather the real reasons why there has to be a table with 5000+ applications. For example, the use of undocumented functionality, with MS itself being an example of that; the hidden knowledge needed to use the Win32 API, which results in hacks that go wrong if the inner workings change; not being transparent about system and API changes in many respects; letting developers get away with nearly everything…
And what comes after that? MS has to clean up.
A “big picture” break would be the right way to make this history. And the press/people don’t like it if apps break for “no reason”. Mac OS apps broke for a really visible reason, and the press welcomed that move instead of condemning it. MS should have done the same, if only because they had the power to do so and make the whole industry take a step forward.
They have the staying power; they wouldn’t need to worry about customers not upgrading at first because of breakage, only some years later.
So I hope this clears up a little what my take on this issue is. I’m with you on many of the things you said, too.
As the #1 provider of operating systems, with a 95% market share on desktops, MS simply cannot go the Apple route and do large breaks of backwards compatibility.
The market would kill them for that – a little shim system like this is a pretty good solution to allow old apps to run on newer, more solid OSes, and is WAY less expensive than breaking backwards compatibility.
> What is the more clever and foresighted solution?
Patching the app! Why the hell would you make a change like that to the OS when you can just ship a directory called “patches for old apps” with your install CD? It’s not rocket science: you patch the app, not the OS, for weird bugs.
“Patching the app! Why the hell would you make a change like that to the OS when you can just ship a directory called “patches for old apps” with your install CD? It’s not rocket science: you patch the app, not the OS, for weird bugs.”
Microsoft should patch SimCity?!? Does that really make sense?
The reason is that a shim system that just catches and translates API calls is way easier and more reliable than actually patching binaries.
It also has no performance cost on apps that don’t use the shims, since they are simply not loaded.
With all the recent talk about patents, I wonder how safe ReactOS is from Microsoft…I know there seemed to be issues a few weeks ago, but I didn’t get the latest info on that. What’s the current status?
On another note, has anyone tried to run ReactOS in VMWare? How did it fare?
With all the recent talk about patents, I wonder how safe ReactOS is from Microsoft
From what I know, ReactOS is based in Europe … so the patent crap doesn’t apply.
“””
From what I know, ReactOS is based in Europe … so the patent crap doesn’t apply.
“””
… yet.
ReactOS is currently going through a self-audit.
It is 98.7% complete.
Hopefully they have kicked out any “illegal” code, if there is any.
Code auditing will not save ReactOS from patent issues. Patents are about copying ideas, not about copying code.
I accidentally moderated you up instead of hitting reply. Oops. Anyway, patents are about specific implementations of ideas, not generalities.
Is an API patentable?
And an ABI?
Yes, you can patent an API and ABI; in fact, Microsoft actually made some veiled threats to Wine and other ‘compatibility’ projects (such as Samba) relating to this issue – parts of Win32 are actually patented. Not everything, as many parts of Win32 are just implementations of existing ideas (Win32 threads, for example), but there will be Windows-specific features they have patented.
The problem is there isn’t someone in these projects with enough brain muscle to articulate what needs to be said without sounding like a 13-year-old with a chip on their shoulder.
One only needs to look at the ‘show us the code’ website, which appears to be nothing more than a childish knee-jerk reaction that sounds like someone chucking a paddy.
This is another great example of how much work goes into Windows, even if people usually don’t know about it.
This part of Windows is one of the most obscure to common users but well known among developers. In fact, what Ionescu forgot to mention is that these hacks are for applications which needed them in order to work. But in many cases, Microsoft actually debugged third-party applications and then told their authors how to make them work! That’s an impressive amount of work! Examples include Norton’s line of apps and SimCity 2000 (if I remember right). Microsoft developers have provided hundreds of patches to third-party developers in order to correct their bugs.
Though the shim db is mostly for applications which were using undocumented features or APIs or were relying on bugs to perform their tasks. And these are the components which sometimes show the well-known message “This application might not properly function with this version of Windows” (or the like).
The huge amount of work by which Microsoft provides backward compatibility even to badly written apps is impressive.
The huge amount of work by which Microsoft provides backward compatibility even to badly written apps is impressive.
If it wasn’t about maintaining an iron-gauntleted grip on the “naughty bits”, I could probably see past the agony to be impressed.
This has been your mildly tasteless imagery installment for the day.
You are welcome.
I’m very impressed by the scope and level of detail put into the shim system, as well as Microsoft’s commitment to provide backwards compatibility to even the most poorly written and esoteric of applications.
That being said, however, I’m rather unimpressed that they had to implement something like this in the first place. Ford Prefect explained it better than I can.
Just for fun I did a strings on a win2000 “shim” database. First 100 lines:
sdbf
EXE.1hq
EXE.2
EXE.G2
EXE.32t
XLDJB103
.PUTESD3Nu
23VDADR3
EXE.DACA,v
PUTESMCA
ZIWLPDDA”y
AD RETFA
.OMEDRIAzz
XE.MUBLA
ECNAILLA
.PUTESLA
EXE.05MA
EXE.LOA
E.DPUPPA
17PAMTUA
ESABOTUA
YALPOTUA
.NUROTUA
.MASOTUA
EXE.PVAR
AETNSWVA
AJTNSWVAn
LLABESAB
EXE.CBZ
E.ELTEEB
XE.HCNEBV
EXE.EKIB
RTS’EULB
E.ELGGOB
E.ESWORB
EXE.SB
EXE.CSB
EXE.3CSB
EXE.4CSB
XE.PIHSB
E.LPIHSB”
EXE.WB
.4ONISACD
E.ELTSAC
E_ELTSACd
XE.23TAC
.GOLATAC
2GOLATAC
PUTES7CC
E.TSNIDC
.TRATSDCp
.TNKCEHC
E.TNEILCr
E.RJEULC
SODNAMOC(
.BEWVNOC
PUTESXOCB
23RTAERC
.RVSGPRC,
23PTSWRC
TNMSNISC(
TNMESUSC
XE.NURTC
XE.ESRUCj
EXE.2GWC
E.EMAGWCb
XE.23DBD
LOPZUPBD6
EXE.OMED
E.23OMED*
.RTNGSED
E.GFCDIDf
LLATSNID
7XTCERID”
.59EROLD
E.59MOOD
E.NAKARD@
EGNEVERD
XE.EFSRD
XE.23_WD
.GNE07XD
.GNEA7XD
.AIDEMXD@
.PUTESXD
EVOMERAE
EXE.OMLEv
SALGOMLE
REROLPXEJ
.4NOCLAF
OMEDMRAFH
E.99AFIF
.GNIHSIFv
.REDAOLFR
.12ECROF
ROTIDEPF
OMEDDERF
W.IDDERFH
.REGGORF
.YAWETAG:
EXE.3KG
Microsoft’s attention to compatibility is one of the redeeming qualities of Windows. After all, I don’t like replacing applications when the old ones are fine with my needs. Or worse yet, doing without an application when there is no suitable replacement.
As an example: the latest version of Mac OS X on Apple’s latest hardware won’t even pretend to run CorelDRAW (it’s a classic application, and Corel hasn’t updated it in years). I snagged an old copy of CorelDRAW for Windows, selected Windows 2000 for the compatibility mode, and have been running it happily ever since.
As for Linux: if it’s not packaged by your distro and it hasn’t been updated for a while, don’t bother. If it is a binary, chances are that you’ll have the wrong C libraries (or maybe the wrong version of something else). If you are building it from sources, chances are that you’re going to have to rewrite a bit of code just to get it to work.
Now will the non-evil empires please listen up, set aside their elitist nonsense about “writing better code” and “avoiding kludges”, then just make sure that old stuff will work.
I disagree. I think it’s a *good* thing that Apple decided to throw away backwards compatibility with OS X. They ended up with a more solid version of their OS, and were able to produce it in record time.
One of the reasons Vista took so long to come out is (among others) because they needed to carry all that compatibility baggage. Apparently, they dropped some of it along the way too.
If you want to run old stuff, get a VM with the old OS on it. After all, not all DOS apps still run in Windows command line environment, and you don’t hear (too many) people complaining.
I disagree. I think it’s a *good* thing that Apple decided to throw away backwards compatibility with OS X. They ended up with a more solid version of their OS, and were able to produce it in record time.
I’d say that Microsoft WAS able to produce a solid OS while STILL assuring most of its backward compatibility. I believe Apple didn’t try or wasn’t able.
If you want to run old stuff, get a VM with the old OS on it. After all, not all DOS apps still run in Windows command line environment, and you don’t hear (too many) people complaining.
I’ve never seen a Windows application fail to run on a newer (XP+) system as long as it wasn’t an application which was VERY tied to hardware (and hacks). Of course, Windows cannot assure 100% backward compatibility, but I believe it can provide something near 90%, which is far better than the 0% which is usually what others provide.
Trust me: if you pay for your apps, backward compatibility IS a huge factor.
I disagree. For something five years (and billions of dollars) in the making, Vista is underwhelming, to say the least.
Well, which is it? Never or 10%?
I spent two hours at a friends’ place trying to make an old game she had work with XP, and ended up throwing in the towel.
To a point. I bought a copy of Adobe Photoshop 3.0 back in 1995…I’m sure it still works today, but would I still want to use it? Not really.
I can understand this for old games (i.e. nostalgia), but for productivity apps? I agree *to a certain degree*, but not to the ridiculous extent which MS has pushed it – and I do believe it has made their codebase much harder to maintain.
Something to keep in mind is that the home user is an afterthought for MS; their primary focus is business and government.
Businesses are penny-pinchers, and will use old software until a few years AFTER they actually needed to upgrade for necessary features. Not only that, but a lot of companies have custom software made for them, either in-house or subcontracted out. This software has an even higher cost, and they will do anything they can to not have to replace it. I have a friend who works for IBM here in Montreal, and he says that they actually need to use these ancient terminals to enter their timesheets. At the last company I worked at, we were more advanced: WE used terminal emulators (however, this advance was offset by the fact that if you typed past the end of the line, the whole line would be cleared). Management could have asked any one of us to take a week, and we could have put together something a billion times better. However, the mentality of management is if it ain’t broke, don’t fix it, no matter how archaic it is.
You are right, of course. However, in this day when virtualization has come of age for PCs, what seems archaic is hacking the OS code to allow for all these compatibility workarounds. It seems better to upgrade the OS and keep virtual machines for the legacy software…
Oh well, personally I don’t mind that much: my PCs all run OSes that have abandoned backward compatibility, and I feel fine.
Well, while I do understand what you are saying, it’s a lot easier to sell operating systems by saying “We still support all your old applications” than to say “Well, VPC is now free!!”.
Backwards compatibility is only a bad thing when it gets in the way of new development. The shift MS did with Vista to the .NET API is something they should have done long ago.
Sure it’s wonderful to say VPC is free, but there are two problems with using VPC:
If you don’t have the original operating system any more, you do not have the rights to use it under VPC. Also, you do not have the rights to transfer an OEM license to my knowledge.
Second, running one OS on top of another OS chews up more system resources. This can take a couple of forms. The most obvious one is that you will need additional memory and disk space to manage OS data. You will also chew up additional CPU cycles due to the overhead of the OS and virtualization. The overhead of virtualized peripherals also adds up. You may also lose the ability to perform certain tricks. IIRC, Rosetta can translate library calls from PPC applications so that they are suitable for native Intel libraries. Virtualization cannot really do that since it only has knowledge of the host hardware and not the host and guest OS. Virtualization also requires the host and guest to use the same CPU architecture.
Yeah, I know that there is some breakage in Microsoft’s compatibility layer. That’s going to be particularly true for games, because they tend to depend upon timing and hardware (e.g. I’ve seen XP-aware games that clearly worked on one computer fail under XP on another computer, because they depended upon features in the video card). But something tells me that Microsoft’s biggest concern is business users. Incidentally, business users are the ones most likely to be running custom applications that would be expensive to replace if they broke. If a game broke, who cares? Microsoft may lose a few sales at the beginning, but would get them back as soon as a new hot game hits the market.
Lol, my comment on VPC was actually sarcasm; I actually agree with you pretty much across the board. Look at the posts earlier in the thread. The original poster was saying backwards compatibility wasn’t that important, I said it was, he said that virtualization offset that, and that’s where my last comment came from.
Then there are a lot of older computer games you haven’t tried, many of which don’t work on XP. Even a significant number of, for the time, fairly recent games (99-00) failed to work. I kept a Win98 partition for quite a while because of that.
I have a few games I sometimes wish I could try again, but for that I have to wait until VMware or the like can run 3D-accelerated games. Until then I at least have DOSBox whenever I’m feeling nostalgic.
VMWare does support 3D acceleration…kinda. It does require a bit of tinkering, and performance is average at best (though for older games that’s less an issue), but it works.
http://ubuntuforums.org/showthread.php?t=84344
But why is this any better, for example, than the system implemented (theoretically, assuming lib authors or app authors aren’t idiots) in Linux/UNIX/whatever?
Supposedly, in Linux an application links against the ABI version of a library it works with, and library authors keep supporting a stable ABI for that particular interface. If there is a huge API change, the library is usually renamed, for example from libgtk to libgtk2.
They tend to be named like libname.so.<major ABI version>.<minor ABI version>.<release version>, for example, libgailutil.so.17.0.9
If the ABI compatibility is broken, the major release number is changed. That means applications compiled for libname.so.1 will not link to libname.so.2.
If you add something to the ABI, but not break any existing compatibility, the minor version is changed, so an application linked to libname.so.1.5 will link with libname.so.1.6 but not libname.so.1.4.
If you add something to the library like a bug fix that doesn’t affect the ABI, you can change the release number. The release number isn’t always used, and everything from the minor version onwards is optional. The minor and release numbers are mostly unimportant in a package-managed system.
Libraries can coexist if you still have applications that depend on the old version of the ABI. The library dependency system on Windows seems to be lacking compared to this system, which is extremely simple but rather effective. The nice thing about the Linux linker is that multiple conflicting ABI versions of the same library can coexist not only on the system but even in the same process.
The main problems with the Linux linker seem to be related to things like glibc or where the libraries have been improperly named. Some libraries seem to change their major interface number every damn release, even when nothing has changed, or something has been added, though with things like GTK, this never happens.
It would seem that if Windows had used this type of system and just incremented their incompatible ABI number with each version of Windows but included older libraries, then there wouldn’t be a need for any of this, barring really old 9x and DOS applications which relied on certain things like lack of memory protection, but they were mostly broken from the start and I doubt this would need a huge ABI compat db. Maybe I’m missing something? Just a case of 9x/DOS being so hackish and badly thought out in the first place?
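As a small C sketch of the coexistence point above (libfoo is a hypothetical library name; on a real system you would substitute an actual versioned library, and the link editor normally records the soname for you at build time), the soname encodes the major, incompatible version, so two majors can be resident at once:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* An old application keeps resolving against major ABI version 1... */
        void *old_abi = dlopen("libfoo.so.1", RTLD_NOW);

        /* ...while a newer one resolves against major version 2; both sonames
         * can be installed, and even mapped, side by side. */
        void *new_abi = dlopen("libfoo.so.2", RTLD_NOW);

        printf("libfoo.so.1: %s\n", old_abi ? "loaded" : dlerror());
        printf("libfoo.so.2: %s\n", new_abi ? "loaded" : dlerror());

        if (old_abi) dlclose(old_abi);
        if (new_abi) dlclose(new_abi);
        return 0;
    }

(Build with -ldl on glibc-based systems; if neither version is installed, the program just prints the loader error instead.)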
I think you’re missing something small: these appcompat hacks are not there to deal with APIs changing incompatibly with regard to their specified behavior. The meanings of existing Windows functions do not change between releases (or are at least not supposed to), aside from the addition of new flags.
The Shimming system is for dealing with apps that make mistakes or invalid assumptions (sometimes this is not a mistake, but something undocumented which worked in one version but not going forward). It is entirely orthogonal to DLL versioning, which may or may not be more advanced on Linux.
The advantage of the ShimEng is that it allows Microsoft to retain old behaviors where apps need them without cluttering up the mainline code of Windows or slowing it down. The shims actually overwrite the API calls in the Windows core DLLs to detour into the shim code. It’s a well-engineered solution to a nasty problem (by necessity the solution will have to be slightly nasty).
This is the kind of engineering I really like: rather than taking an ivory-tower view of what the world should be like, the appcompat people have to deal with horrible, horrible problems without letting the inelegance spread too far.
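To show the detour idea in miniature, here is a hypothetical, self-contained C sketch (not the actual ShimEng mechanism: the “import table” is reduced to a single function pointer and all the names are invented). A per-application rule is consulted at load time and the pointer the application calls through is swapped, so the call lands in shim code that preserves the old behaviour while the mainline implementation stays untouched:

    #include <stdio.h>
    #include <string.h>

    /* Current, "correct" implementation that newer applications should see. */
    static int get_answer_current(void) { return 42; }

    /* Shim preserving the old behaviour one particular legacy app relies on. */
    static int get_answer_shimmed(void) { return 41; }

    /* Stand-in for an import address table entry: the application only ever
     * calls through this pointer, so redirecting it detours every call. */
    static int (*iat_get_answer)(void) = get_answer_current;

    /* Stand-in for the per-application database lookup done at load time. */
    static void apply_shims_for(const char *exe_name)
    {
        if (strcmp(exe_name, "LEGACYAPP.EXE") == 0)
            iat_get_answer = get_answer_shimmed;   /* detour into the shim */
    }

    int main(void)
    {
        apply_shims_for("LEGACYAPP.EXE");
        /* The legacy app still gets the answer it has always relied on. */
        printf("application sees: %d\n", iat_get_answer());
        return 0;
    }

Applications that are not in the database never hit the detour at all, which is why the mechanism costs nothing for programs that don’t need it.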
Hmm, I see.
What we need is something that app developers can check their application with that will try to figure out any undocumented usage, particularly common mistakes such as providing non-NULL values to reserved parameters, and so on.
Things breaking from undocumented usage is a real pain, and it might be good from MS’s point of view too, since in upgrades they won’t be blamed so much for breaking compatibility.
You’re in luck:
http://www.microsoft.com/technet/prodtechnol/windows/appcompatibili…
Too few people know about it and use it, though.
While I’m not a fan of MS, there are a lot of useful nuggets available to their developers that nobody knows about or uses.
This sort of stuff needs to be built right into the debug functionality of Visual Studio.
It likely can’t be, because that might run afoul of some anti-trust rulings. There is a “Chinese wall” between the VS people and Windows due to a 2001 court ruling.
Sounds like the most horrendous security loophole to me. Modification of the in-memory process? Controlled through registry settings? Oh my!
If a facility like this is as powerful as it sounds, the only way to make it secure is to lock it down to the extent that it’s useless.
If a facility like this is as powerful as it sounds, the only way to make it secure is to lock it down to the extent that it’s useless.
Not at all; this is speculation. I’ve never heard of problems related to the AppCompat subsystem. Plus, these settings are not meant to be accessible to users; they are only deployed by Microsoft itself.
There is no plug-in mechanism to deliver your own changes and, AFAIK, only Microsoft adds stuff to this database. Plus, this is not intended to correct your bugs but to provide a way to honour the behaviour of previous Windows versions, even if your calls were undocumented (and hence you were dumb to use them…).
It’s a static thing, not a dynamic one. Plus, there’s no point in hiding yourself in such an area, because if you could access it, you could already do whatever you wanted to.
I mean, as hard as they stuff their new OSes down the public’s throat, they would have to ensure some sort of compatibility with the apps that keep people stuck on Windows.
IMHO Vista and Office 2k7 are wretched. Hell, it doesn’t even support our core application where I work!
-nX
This is a perfect example of turd polishing. Sure it’s a somewhat shinier, but it’s still a turd.
These kinds of hacks are a non-issue if you have the source code to your apps.
I can see MS needing this from a business standpoint. Their customers are not generally computer savvy enough to understand the difference between a broken application and a broken OS, so blame will be assigned randomly.
But it’s still (as many have said here) an ugly hack.
They should have made some effort to make that clear. They could have made it separate so that it’s not enabled by default. You install an application, and if it doesn’t work you go to the control panel and click on “Fix Broken Applications”, which presents a list of installed broken applications. When you click on one, it enables the needed shims for that one.
That way MS would be pointing the finger of accusation at the guilty party but still allowing the customer to have a relatively easy ugly hack. And software vendors would have motivation to get their apps off the broken list.
What does that accomplish? MSFT is not in the business of making ISVs look bad. They’re in the business of selling OSes and trying to make ISVs look good (so that they write Windows software and help MS sell more OSes).
I think it’s a pretty elegant mechanism to deliver a bunch of ugly hacks. But beauty is quite a relative thing. Some of the most powerful tools we use are ugly, brittle hacks that are made to work through careful engineering. I personally find this more exciting than finding elegant solutions to easy problems and simply ignoring the hard ones.
“Some of the most powerful tools we use are ugly, brittle hacks that are made to work through careful engineering.”
Careful engineering is not the same as good engineering though.
“I personally find this more exciting than finding elegant solutions to easy problems and simply ignoring the hard ones.”
Personally I prefer to find simple, elegant solutions to real problems, hard or not.