A few weeks ago, Ars published part one in a series called “From Win32 to Cocoa: a Windows user’s conversion to Mac OS X”. In this series, Peter Bright details why he believes “Windows is dying, Windows applications suck, and Microsoft is too blinkered to fix any of it”. Part one dealt with the history of both Windows and the Mac OS; part two deals with .Net, the different types of programmers, and Windows Vista. In part one, Bright heavily criticised the Win32 API, saying it was filled with legacy baggage and hindered by 15-year-old design decisions. In part two he explains that, as an answer to the complaints, Microsoft introduced the .Net framework, which was supposed to replace the Win32 API as the API of choice for Windows; in fact, the next release of Windows, Longhorn, would make heavy use of .Net. “It could have provided salvation,” Bright writes.
But it didn’t. According to Bright, .Net was fine technically, with a “sound” virtual machine, “reasonable” performance, and an “adequate” language (C#), but the library – “used for such diverse tasks as writing files, reading data from databases, sending information over a network, parsing XML, or creating a GUI” – is “extremely bad”. Bright explains that this is due to the target audience of .Net.
.Net had to roughly appeal to three groups of developers:
At one level, you have people who are basically business analysts; they’re using Access or Excel or VB6 to write data analyzing/number crunching applications. They’ll never write the best code or the best programs in the world; they won’t be elegant or well-structured or pretty to look at. But they’ll work.

At the next level, you have the journeyman developers. Now these people aren’t “business” people – they are proper programmers. But it’s just a job, and they’ll tend to stick with what they know rather than try to do something better. They might be a bit more discerning about their tools than the business types, but they’re not going to go out of their way to pick up new skills and learn new things.
At the final level, you have the conscientious developers. These are people who care about what they’re doing. They might be writing business apps somewhere (although they probably hate it, unless they are on a team of like-minded individuals) but, probably more likely, they’re writing programs in their own time. They want to learn about what’s cool and new; they want to do the right thing on their platforms; they want to learn new techniques and better solutions to existing problems. They might be using unusual development platforms, or they might be using C++, but they’ll be writing good code that’s appropriate to their tools. They’ll heed UI guidelines (and only break them when appropriate); they’ll use new features that the platform has to offer; they’ll push things to the limit. In a good way, of course.
Because .Net had to appeal to all of these groups at once, it had to make various concessions. Before .Net, this was not a problem; the first group had Access and Excel macros, the second group used VB6, and the final group used C++ or whatever else was the cat’s meow at that point. Win32 catered to all of them because it was designed for C. C is relatively simple and ubiquitous; “As a consequence of this, pretty much every other programming language created in the last couple of decades can, one way or another, call C APIs.”
Bright explains that .Net doesn’t really do that. .Net can of course still use different languages, but in the end, they all use .Net’s APIs for things like drawing windows on screen, or saving files, or querying databases, or whatever. .Net is trying to be a one-size-fits-all solution, which made it “good enough” for group one and two programmers – but not for the final group.
The third type – well, just ignore them. They’re too demanding anyway. They’re the ones who care about their tools and get upset when an API is badly designed. They’re the ones who notice the inconsistencies and omissions and gripe about them.
And this is the main problem with .Net, he states. It’s too “dumbed down”, making it frustrating to use for the type 3 programmers. Win32 shines through everywhere in .Net, making what should be a brand new API into something resembling Win32 much more than a new platform should. Bright gives an example involving Windows Forms, showing how heavily it depends on Win32.
Bright then moves on to Win64, the next big opportunity for Microsoft to clean up the Win32 API. And again, the opportunity was missed. He gives a few examples, such as OpenFile, a Win16 function to open a file (you didn’t see that coming, did you?). It was deprecated in Win32 and replaced by CreateFile, so you’d think it would have been removed from Win64. Except it wasn’t. It’s still there, even though no one is supposed to use it.
There are other problems too, such as the name of the system folder. It was called system in 16-bit Windows, system32 in 32-bit Windows, and, well, system32 in 64-bit Windows, because many programmers had hard-coded the system32 directory instead of using the dedicated API call (GetSystemDirectory) to determine the correct system directory. Oh, and on 64-bit Windows, 32-bit files go into a directory named syswow64.
And then Microsoft showed us Longhorn. It was built using .Net, had a consistent look-and-feel, and we’ve all seen the demos of what Longhorn could look like according to Microsoft – it looked stunning. But it got cancelled. “What we got was Windows Vista,” Bright grumbles.
Bright gives a whole list of inconsistencies in Windows Vista and Office 2007, some of which nearly made me cry and curl up in the foetal position – I never knew how deep the crisis actually went. Did you know there are now three completely different ribbons? One from Office (not usable by other programmers), another one written in C++ (not for .Net), and a new one for Windows 7 (not for .Net either). Right.
Microsoft has had good opportunities to do something about this, but they have been systematically squandered through a combination of ineptitude, mismanagement, and slavish adherence to backwards compatibility. The disillusionment I feel is incredible. I enjoy writing programs, but I don’t enjoy writing for Windows. And while once it made sense to stick with Windows, it just doesn’t any more. There’s now an attractive alternative: Mac OS X.
Stay tuned for part three.
The author indeed has some valid points, and I can’t argue with him, but if he’s really looking for UI consistency and code reusability I’d advise him to look at KDE(4). The level of code sharing and reusability is astonishing, and the look is consistent across all KDE apps – except a few, of course.
The OP is correct. QT, and by extension KDE, has a very nice, clean API. I can’t compare it to MacOS X as I am not familiar with theirs, but let me say this: If you’re programming on Windows and don’t want to deal with the platform nonsense, you can escape a lot of it by using QT. You gain some portability in the bargain.
But probably the article’s author would not find QT a useful answer. His complaints mention win32 often, but he’s not so much complaining about the strange/horrid GUI APIs as he is about the platform as a whole (e.g. file locations).
At one point I seriously considered moving to Qt; I’ve even read the C++ GUI Programming with Qt book. There’s one thing that stopped me in the end: the lack of 3rd party components. I couldn’t find any.
It doesn’t matter if you prefer .NET, Delphi or MFC, there are a dozen high quality visual component vendors out there, and hundreds of freeware ones. For example: a fancy tree view, a collapsible panel, report builders, shell controls that look like Windows Explorer, charting, image viewers, just to mention a few.
I was unable to find anything for Qt.
I still think Qt is fantastic if your GUI is simple and basic, and when portability is important. But if you want to get the most out of a platform, nothing can beat a native front end + portable back-end combination.
Uh… Hello. You have this big ass giant “3rd party high quality visual component vendor” called KDE4 offering all kinds of fancy visual components to add on to QT4. For free. (KDE is more than just a desktop manager)
Except companies have an issue when they’re not willing to open source the product – it makes using KDE prohibitive. So, no that is not a very good answer.
However, it also makes for a good case for a company to come along and provide such for Qt. But, just to note – Qt makes it far easier to create such a platform compared to say Win32/MFC/etc.
It’s actually a very good answer, since the add-on functionality you usually want to use from KDE resides in the libraries. And the KDE libraries have always been LGPL- or BSD-licensed, making it unnecessary to open source applications using them. That makes KDE a very good alternative.
However, there already are several high quality extensions for Qt. They even used to be listed and linked from the Trolltech site, but it seems they have updated their website since the last time I looked and I’m unable to find the page now.
Anyway, one of the extensions that has existed nearly forever (since the days of Qt 1) is Qwt.
http://qwt.sourceforge.net/index.html
From that page there are links to a few more:
http://qwtpolar.sourceforge.net/
http://qwtplot3d.sourceforge.net/
And you can look at the qt-apps site, there is a growing collection there:
http://www.qt-apps.org/index.php?xsortmode=new&xcontentmode=4298&pa…
Well, I can see why the guy doesn’t like .NET if he likes coding in C/C++. Personally, I’d rather 69 with a grizzly bear while having king kong shoved up my ass than use either of those two languages.
LOL…. I completely agree with you. What I find extremely frustrating with OS X is that it is stuck in Objective-C and C++. I mean, how ’70s is that?
I started out writing on the Commodore 64, so I have been around the bend a bit. I suffered through the memory model fiasco. BUT .NET is amazing. Actually, even Java has it right that way. C++ and C are useful for low-level system coding. As general programming languages, forget it.
I read the article and I thought what a piece of garbage. All fluff no stuff…
I can’t believe I have to ask, but could you please actually link to the article? I’m on EDGE and it required like 3 extra 45+ second pageloads to find it myself.
The link is right there in the item. First paragraph, even.
The link in the first paragraph is to part 1, which is only mentioned as a leader. Part 2 is the main focus of the article here and is inexplicably linked in paragraph two even though there was every opportunity to link in paragraph one (“part two”).
The exact same thing was done in:
http://osnews.com/story/19712/Microsoft_Withdraws_Proposal_to_Acqui…
http://osnews.com/story/19710/Interview:_Jonathan_Schwartz_CEO_of_S…
http://osnews.com/story/19708/Slackware_12.1_Released
http://osnews.com/story/19706/Planet_GNOMEs_Lack_of_Love
http://osnews.com/story/19707/Bringing_Down_the_Language_Barrier_-_…
http://osnews.com/story/19705/McBride:_Linux_Is_a_Copy_of_UNIX
…
Some of these do not even feature any links on the main page. It certainly seems systematic; link past coverage or past events to give context to the actual news item, but bury the link to the news item and any blurbs about that in the following paragraphs so users have to ‘read more’ to see what’s being commented on.
Yup. I do this to prevent people from skipping the “read more”. The “read more” on articles that we have been doing the past few weeks is a new element on OSAlert, and I need to coerce people into seeing them, so they know it’s there. It also prevents people from missing the whole picture; the new frontpage items on OSAlert are far more general and less information-dense than they used to be – with information meaning information from the actual news article at hand. In other words, they are less meaningful, so you actually NEED the read more to gain a better understanding. The new style teasers generally contain nothing but a little history or background information on the topic at hand, but relatively little from the actual newsworthy article.
OSAlert had turned into a glorified RSS feed with three lines teasers all over the place, no background information, no nothing, and an endless list of pointless test releases of Linux distributions, and meaningless items like “here’s a review of GreatBSD 2.3.6.1.1.1a. Go read it.” I’ve changed that. It’s certainly not perfect yet, but give us time.
This is the new style for OSAlert, which allows us to do more thorough items, with more information, and maybe an opinion or two. We’ve heard nothing but good responses, so everyone better get used to it.
So OSAlert is the Thom Tech Blog now? Seriously, how is it different from a blog now? I like link aggregation, and I’m sure I’m not alone. To be frank, I really don’t care about your take on the subjects. I’ll form my own, thank you. I come for links. I also don’t care that you offer your take, but it’s very presumptuous to so openly try to force it on us.
I looked at a few of the recent posts and discovered to my surprise that the My Take was nowhere to be found. I am glad that you listen to the criticism of your users, and the new style of OSAlert is in my opinion improved in many ways. Thanks! I have been against the My Take concept since it was introduced. It is unprofessional and irrelevant in that context. I have been reluctant to criticize it openly as OSAlert is driven by volunteers and I don’t want to demotivate an effort which I appreciate. But of course the editor’s take is still justified and absolutely relevant in the forums. Anyway, keep up the good work!
I think you’re overreacting a little. We’re going to make sure the main links are in the teaser going forward; we listened to feedback and made the appropriate changes.
That said, there’s nothing unprofessional about having a “My Take” section. First off, you don’t have to read it. Secondly, the basis of this site is good discussion. All editorials are opinions, and I don’t think it’s over the line to start that discussion with legitimate, sincere reaction. Thirdly, most people like having a little more insight into many of the stories (or backstories). If they bother you, kindly opt out of the “Read More” section. If not, you get some extra stuff to read. Everybody wins.
And you don’t see anything wrong with that? I rely on osnews to let me know when something related to my interests is happening. When it does I expect a link to the actual information. The “extra details” you provide are either a subset of the linked article, and thus not useful to me because I will be reading the article anyway, or comments that should be posted in reply instead.
Do you not see what’s wrong with that? The summary *should* provide information, hopefully enough that I can decide whether I want to peruse the article in full. Forcing me to click once to decide whether I want to click again wastes my time. Give me sufficient information *and* a link up front!
That’s fine, even good. When something is posted that I am not familiar with I like the extra background describing it. But why is this a reason to remove the link to the article?
It’s fine that the system “allows” you to do more thorough items, but I am not seeing that. I am seeing “requires readers to waste their time” on perfectly routine items where details are *not* required.
You’re hearing a negative response now. Please stop hiding article links! Add “read more” pages to your heart’s content, if you must. You could just post a comment like everybody else, but power is meant to be abused. Doesn’t bother me. Just don’t let this fancy new feature of yours allow you to lose perspective on what you’re doing here: I don’t come here to read your commentary, I come to find out what’s going on in the OS and related world. If it’s not original reporting then I *just* want the link to the source material.
I agree with sorpigal’s post.
Noted.
Like I said, we’re testing the waters with this stuff – it’s just as new to you as it is new to us. I’m trying out different methods, different formats, and such things take time. You can’t expect us to get everything right in one go, especially seeing how small a team we actually are, doing everything in our free time.
I’ll see what I’ll do with the article links. I’m talking to a lot of readers on the ‘new style’ all the time, every day, to see what they like and don’t like. Your considerations will be taken into account just like everyone else’s.
Thom, how much time does it take to insert the relevant link in the summary? Do you need to run it by a committee? How is it that these things take time?
If content is not original, put the link to the original content in the summary – why is that so hard? If most of your content is not original, you ARE an aggregator, don’t pretend to be something you aren’t.
If you want to be more like Ars (and I can tell you do), work on creating more original content, which you have been doing lately – and I appreciate it – but don’t dork around creating lengthy summaries that people are “coerced” (your word) into reading because the links are buried in them.
If people want to just click through to the links, that’s what they are going to do. If you try to stop them, they are just going to leave. Your attitude toward your users is very wrong-headed.
If you want people to read the full article text, do something simple like making a simple “Read more” link that takes you to a combined full article and comment listing – like just about everybody else does it (well, except for Ars – and I really don’t like the way they do it, but at least they do place the relevant links in their summary)
That’s actually something more people have been saying, and it sounds like a darn good idea to me. There are issues though, such as how to handle ‘real’ articles, as they can be much longer and multi-paged, making placing the comments somewhere more difficult.
I’ll bring this up on the crew m-l!
Where are the facts that the .NET api suffered as a result of catering to these imaginary 3 groups of programmers that we all somehow fit into?
Marshalling events onto the UI thread? Oh, how can we cope. It’s all Win32’s fault and the sky is falling. Windows Forms in 1.0/1.1 wasn’t the best, but these days it’s pretty awesome. It’s far from being a BAD API. In fact, the sheer scope of things you can do in a consistent manner without ever having to leave the standard .NET libraries is pretty ridiculous.
The worst part is that Java Swing does the exact same thing…
Using non portable code is retarded.
Use some toolkit like Qt or something. Direct calls to Win32 or Cocoa are retarded, unless you confine them to very well-defined sections of your project that are small and easy to change when you need to port.
XCode?
Is XCode free with OSX? I know if you’re making commercial software QT can be expensive. $4k per developer.
Although, if you’re making commercial software that doesn’t generate at least $4k revenue per developer you have problems anyway.
Last time I dl’d it, it was free. Check here.
http://developer.apple.com/tools/download/
Be warned, it is a 1GB+ dl.
I believe this is intentionally meant to be sarcastic, but yes Xcode is free. So are all the APIs. If you want Support you can buy contracts or register as a Select or Premiere Developer and get your focus noted.
Generating an additional $4k revenue can be difficult if you’re a small shop or a solo developer. Generating an additional $4k per developer if you’re a large software house can be really prohibitive.
If you already have millions of lines of Win32 code, it can be very difficult to embark on a rewrite. For better or for worse, Win32 is like COBOL. It’s there, it’s hard to remove, and most people just end up dealing with it.
Let’s do some math. Assuming a uniform salary of $60,000 per annum, one developer costs as much as 15 Qt licenses. If you make the salaries more realistic, the ratio is even higher. So for large developer teams, if Qt makes your programmers 1/15 more productive, you break even at worst.
So to me it looks like $4,000 is cheap for large teams.
Where I live, a developer costs between €60 and €100 an hour ($90 to $150). For $4000, you have one cheap developer for one week, or less if the developer is not cheap (that is at most 1/52 of the year). If the developer has a salary of $60,000, that is not what he costs; there are a lot of other costs, like training him on new technologies, etc. Moreover, check Monster: a developer in New York, for instance, is more like $110k to $130k.
Sure, you can have developers in India for half the price, but then, you can buy QT licences for half the price in India.
It is not as expensive as porting your code from Win32 to Cocoa or from Cocoa to Win32, unless your project has 100 lines of code or less. $4k is one developer for one week. For any project bigger than 30 man-days, $4k for a truly portable toolkit is well worth it, for your code will be far less expensive to maintain.
I agree totally. I’ve been using Qt 4.x for the past several months for a few side projects, and it’s wonderful how consistent it is across the three big platforms with little-to-no effort on my part as the programmer.
This inconsistency shines through in all their products, not only for developers.
I can’t wait to see them eat it.
– Kevin
I’d really like to know what kind of programs the author of the article develops. If his problem is that he can’t make vain animations or pretty programs, then hey, use WPF.
Now, for someone who is writing layered data applications, whose customers don’t give a damn about having the prettiest program but do care about reliability and scalability, then excuse me, but MS solutions and tools are excellent.
I see the author is just biased toward the “make it pretty” side and not the “make it reliable” side.
His issue is consistency, which should be an issue for anyone who has to program for Windows. He gave plenty of examples of this, and even gets literal and shows a pic of how inconsistent MS is just in their UI design alone, so you can imagine their API isn’t much better.
You mean consistent on looks?
consistent in API?
Cause the API is pretty much consistent. Now, if his problem is the look, then big deal; there are tons of priorities before you get to looks.
I am a Java developer, and not an OS nor a desktop app writer at that. I work on enterprise applications, and I can see where this author is coming from. As much as I like my work, programming in Java can be frustrating because of its verbosity and boilerplate code. Granted, that is not what the author was talking about directly, but I definitely understand him on the deprecation stuff; Java is chock full of those. Still, it’s getting faster and faster, and better, so that’s all I can really ask. Too bad Java is not as performant as C/C++ when it comes to fast desktop apps, or else it would really be a nice language for most developers, I think – though I believe Java bytecode can be compiled to .exe nowadays with some third-party tool.
I do not want to start a flamewar, but IMHO, the Win32 API is one of the best APIs ever written; to develop it, they considered a lot of very important things, like API backward compatibility, ABI backward compatibility, error handling, etc.
Ok, it has inconsistencies, but they are the result of many years in production.
I agree the Win32 API is hard to use, but that does not make it a bad product; having a C API of such huge size brings its own complexities.
I would agree that it is a very impressive C API, but the author is comparing it to Cocoa, which is incredibly consistent and logical, object oriented, much more full featured than Win32, protects the programmer from himself in many many cases, runtime-typesafe, and produces native object code. The two really don’t compare.
It looks like you’re in the second group.
This is probably the first time I’ve seen anyone defend the Win32 API as being well-designed and clean. I guess if it’s the only API you’ve ever used, you might think that. Then again, there was a time when it was the only API I ever used and I still thought it was a pile of shite.
<sarcasm>
Do you code in Visual Basic?
</sarcasm>
The Win32 API is a structured, C API, period.
You will not find object-oriented, polymorphic, or generic constructs in that API (though the UI functions use ‘classes’), because it was not conceived in an OO world.
Being a C API, it is one of the best C APIs ever; you can run on Windows Vista that old HelloWorld.exe compiled on your Windows 95 box.
The fact that Win95 programs (some of them at least) can still run on Vista is not a testament to the greatness of the Win32 API, but rather to the huge amount of work Microsoft spent making sure things are backwards compatible.
I don’t think an API that has at least three methods of memory allocation that all exist because “well, that’s how things were in 1988” can ever be considered a good API.
If an API defines several data structures, it is very common practice to provide allocators and “releasers”, one pair per data structure; that is how things work in C.
These are really good things about the win32 api:
* Backward compatibility
* Consistency (being consistent in Java or C# is much easier than in C, because part of the consistency comes free with the class hierarchies)
* Good error handling (ok, exception handling is better, but C does not provide such a thing, and I consider the Win32 error handling good)
* Stable (it is very hard, almost impossible, for the API itself to crash)
* Scalable (ok, you can argue here about a lot of decisions taken to keep backward compatibility, and about the new data structures added to extend the current functionality, but come on – marking a method as @deprecated has been a very common design decision in the OO world too).
* Good security (a lot of functions are designed with granting or blocking permissions in mind).
* Seamless UNICODE support. In the front-end, you just use one function, and depending on your project configuration, you will be using the Unicode version FunctionW or the non-unicode FunctionA version of Function.
Ok, you cannot compare the Java or C# object models to this huge set of functions, but IMHO, all the weaknesses in the Win32 API come from the implementation language (C) and not from the API design (programming using GObject on the bare metal is as tedious as programming in Win32).
I am a Java guy and I like the Java APIs a lot (Swing is the most beautiful API found out there), but talking about Windows and its Win32 API, the engineers that created it deserve a lot of respect.
That can be a good or a bad thing. Why is WPARAM 32-bit and not 16-bit? I thought WORDs were 16 bits and DWORDs were 32 bits? Oh wait, WPARAMs and LPARAMs go all the way back to Win16, and they’re still here. They made sense in a Win16 world, where WPARAM was 16-bit and LPARAM was 32-bit. They make no sense today, when both parameters can be 32-bit or 64-bit, depending on what OS you’re running.
Consistently verbose and over-engineered. How many functions do you come across that pass a series of NULLs, where the documentation tells you to always pass NULL because the functionality will be added in the future? Twenty years down the line, you are still passing NULL.
That’s nothing to do with the API. When people talk about API stability, they mean an API that doesn’t change much. Runtime stability is unrelated to API stability.
You’re joking right? It’s such a verbose API. A simple task like creating a new window takes lines and lines of code. The code size of a Win32 project grows tremendously fast. I’d hardly call that scalable.
uh … what?
True, that’s a good thing about the Win32 API.
I code in Win32 everyday (ATL/WTL specifically). Not a day goes by when I do not wish that we went Qt or some other more sensible library.
I am not saying that Windows is not a truly horrid mess, but Apple also lacks perfection.
Unless you have only just started using OS X, you will have noticed that Apple have consistently broken their own user interface guidelines over time, and they also support some pretty vile event-driven legacy APIs – they even write key components like iTunes and the Finder using them. And why does QuickTime seem like its own world?
And what is going on with Java?
It seems less than perfect on either side of the fence.
Maybe the grass is just always greener on the other side.
good article
But the library – the .NET “API” used for such diverse tasks as writing files, reading data from databases, sending information over a network, parsing XML, or creating a GUI – is another story altogether.
The library is extremely bad.
And then he never explains what exactly is wrong with it.
The guy is 100% crazy. Or simply ignorant. Like, what exactly is wrong with reading data from database in .NET?
The author is probably a weak programmer, at first-year student level, and he misses three points:
– Because .NET is based on the Win32 API, Win32 is naturally visible in its behavior. That does not matter much for behavior, and the file and database APIs are much cleaner than their predecessors, so there is no need to argue about it if you compare with similar technologies up to that moment, like COM or the raw Win32 API. And it is comparable with Cocoa in that even an integer is an object (so in .NET you can write something like 64.ToString()).
– 64-bit programming mostly means a bigger address space, not bigger arithmetic. 64-bit integers are rarely used in 64-bit programming. Why? Because SSE3 can process 8 integers of 32 bits in one step, but only 4 integers of 64 bits. 64-bit programming is used to break the 2GB limit of most memory allocators, and that’s all.
– Backward compatibility is kept for the same reason GTK1 still exists today and is still patched: some software still uses it. So deprecation doesn’t necessarily mean something will be removed in the next version; it means it is bad practice to use it. One more thing: deprecated functions are typically implemented using the recommended function, so you will take a performance hit when you use them. Otherwise, there is no real waste in having an extra function around to let old code still compile.
This kind of comment doesn’t make you a better programmer, nor does it give more credit to your post.
What are you talking about? The fact that .Net was based on Win32 is exactly what the author thinks is its flaw. And I don’t see your point about 64-bit; it’s completely unrelated to the article, and it adds nothing to support your claim.
No! Backward compatibility was kept because the API was so badly designed, and poor programmers picked up so many bad habits from it, that there was no other choice.
Your argument about GTK1 is the perfect example of what the author is talking about: break compatibility for the greater good. You CAN’T use GTK1 code against the GTK2 library; you have to explicitly import the GTK1 libraries. But if GTK were .Net, they would have incorporated GTK1 inside GTK2 for full backward compatibility – at the very high cost of inconsistency, complexity, and a maintenance nightmare.
I agree 100%, but I will add on to that
1. .Net 3.0 came out in early 2007. Why is he still talking about WinForms a year and a half later? The reason is that he is trying to prove a point, and WPF kicks Cocoa around the block when it comes to flexibility and power in UI design.
2. he talks about the inflexibility of the framework, but doesn’t give any supporting reasons for that belief. The whole thing is heavily based on the provider model, and I have yet to find a situation where I have not been able to implement an interface and completely change the behavior of the higher level apis.
3. He calls the framework inflexible compared to Java when it comes to enterprise architecture. Someone must not have told him, but the Java API BLOWS for enterprise architecture; that is why the Java community has come up with so many replacement frameworks. LINQ to SQL may not be perfect, but it kicks the living crap out of EJBs any day, and EJBs don't even fit in well with modern ideas (like SOA).
4. He slams Microsoft's handling of 64 bit, but fails to mention that .Net will compile an optimized version of native code for your specific CPU at install or first run, compared to Apple's solution of just bloating your executables by stitching together binaries for multiple architectures.
Now, let's look at Cocoa.
Obj-C is ancient. That doesn't make it bad, but the entire industry has moved in a completely different direction. It is basically a less popular version of Smalltalk brought back and called modern.
Cocoa changes radically from version to version, which is a developer's nightmare, especially for large software.
There is an attempt at a grand unified one-size-fits-all API (which he slams in .Net); it is just nowhere near as complete OR consistent. Show me how to do web services with Cocoa, and I'll show you how to do them with WCF (hands down the best service platform at this point in time).
Maybe the author will come up with this in part 3.
But it looks like you are also trying to prove a point. WPF kicks Cocoa around the block? I don't have any experience with WPF beyond reading the documentation, but I fail to see where you explain or back that up.
Kicks the living crap? Seriously, in point 2 you blame the author for not backing up his claims; why don't you back up yours?
You're right about Java, but aren't they both equal?
He slams the result of a whole inconsistent development philosophy. Optimization is nice, but completely useless once you start including C++ libraries in the mix. The author mentions the Ribbon as an example, and if they open the door with this, many others will follow.
As for Apple's packages, a universal binary is a package containing both versions of the libraries, so it will not load everything into memory. Linux does the same with /usr/lib32 and /usr/lib64, with a /usr/lib link pointing at the system default.
But maybe SysWOW64 is a better place to hardcode future links for 32-bit libraries…
1. WPF uses an incredibly powerful XML-based content model. A silly example of this would be something like <Button><TextBox /></Button> to make a button with a textbox in the middle, but it illustrates the point well. This is possible in other UI frameworks, but is typically a lot more difficult.
2. WPF uses an incredibly powerful databinding model. It also takes a bit to wrap your head around, but once you do, you don't really want to go back to anything else. If you want to see a good example of this, here is a blog post on how to turn a listbox into a to-scale representation of the solar system using data templating: http://www.beacosta.com/blog/?p=40
3. The tooling for WPF blows Xcode out of the water. The designer in Visual Studio is fantastic; unlike the WinForms designer, which only caters to people who don't want to know what is going on, the WPF designer accommodates people who want more control over the code. AND there is a tool for designers to directly modify the UI called Expression Blend. It allows the designer to work in parallel with the developer, and actually make changes in the actual code file, as opposed to emailing a mockup in Photoshop.
There is a lot more, but those are the three big ones that, to my knowledge, do not exist on any other UI platform.
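To make points 1 and 2 above concrete, here is a small, illustrative (untested) XAML sketch. The element names are standard WPF, but the bound property names (Planets, Name) are made up for the example:

```xml
<!-- Point 1: a Button is a ContentControl, so its content can be any
     element tree, not just a caption string. -->
<Button>
  <StackPanel Orientation="Horizontal">
    <TextBlock Text="Search:" VerticalAlignment="Center" />
    <TextBox Width="120" Margin="4,0,0,0" />
  </StackPanel>
</Button>

<!-- Point 2: databinding plus a DataTemplate turns a plain ListBox into
     a custom rendering of each bound item. -->
<ListBox ItemsSource="{Binding Planets}">
  <ListBox.ItemTemplate>
    <DataTemplate>
      <TextBlock Text="{Binding Name}" FontWeight="Bold" />
    </DataTemplate>
  </ListBox.ItemTemplate>
</ListBox>
```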
I did back this one up. The author praised Java while bashing .Net. I brought up that doing things the Microsoft way (WCF and LINQ) makes SOA a joy to work with, as opposed to doing things the Sun way, which is only really appropriate for the absolute largest of enterprise apps. I also brought up the provider model, which gives you very high-level, easy-to-use APIs (like user authentication) out of the box, while allowing a great deal of flexibility in the implementation details if you choose to roll your own provider.
The difference between .Net and Java is that the Java Community Process moves at the pace of a sloth, while MS has shown a very good history so far of rolling modern ideas into the framework and its languages (extension methods, lambdas, real generics, WCF, etc.).
Where Java shines is that its community has a good decade or so on the .Net one, and they have already solved many of the issues we are still figuring out. It does not shine on technical merit.
You are right, but since when is shipping two binaries better than compiling an optimized one on the fly? Linux's /usr/lib32 and /usr/lib64 are a cleaner version of Program Files and Program Files (x86), but that speaks more to how horrible the MS deployment story used to be than to the way it is now. If you are using .Net (the standard for almost a decade at this point), the whole thing becomes moot.
I am not defending Microsoft's legacy 32-bit-on-64-bit story; it is an ugly hack. I am pointing out that you shouldn't throw stones in a glass house, because Apple's story is just as hacky.
I just ordered a book on XAML, mostly because of that link. I don’t do much .NET programming, but I loved the early Channel 9 videos on XAML, and I’m incredibly interested in it as a UI framework.
Yeah, I’m a web guy and I have never been interested at all about UI before XAML.
The Chris Sells O'Reilly book is pretty much the standard "Hello, world" book at this point. Beatriz Costa's blog (where that link came from) is an absolute goldmine on databinding and data templating. Once you have the basics down, I would highly recommend starting at the first post and reading them all if you want to do anything even remotely data-centric.
For me personally, the big thing has been unlearning a lot of concepts I picked up from other UI frameworks I have used. It is closer to XHTML than anything else, but only at the most basic of levels.
It seems that .Net expert Kevin Hoffman doesn't think like you. He just said at the last WWDC that Cocoa and Xcode simply blow away .Net and Visual Studio.
And as we said, he wrote the books ….
You should also take a look at the 'MVC Design Patterns in Cocoa and .Net' section…
http://dotnetaddict.dotnetdevelopersjournal.com/roughlydrafted.htm
This guy may have been a big MS guy, but any bias he may once have had has completely shifted, and that is evident in his talk. I have seen that before, and a lot of it has to do with the overall quality of the OS rather than the actual dev tools.
That being said, he does have some points.
MVC
Obj-C by its nature pushes you towards very clean architecture; .Net by its nature pushes you towards VB6-style RAD "everything in the code-behind". Any .Net developer worth his chops is going to use Model-View-Presenter for Win/WebForms, or Model-View-ViewModel for XAML, but some of the tools actually encourage sloppiness. On this point, I think he is bang on (which is why I mention it first).
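The MVP split described above can be sketched language-neutrally. Here is a minimal, hypothetical Java version (all names invented for the example) showing why it keeps logic out of the code-behind: the view is a thin interface, so the presenter can be exercised against a fake view with no UI toolkit at all.

```java
// Minimal Model-View-Presenter sketch.
interface LoginView {
    String userName();
    void showGreeting(String text);
}

class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) { this.view = view; }

    // All presentation logic lives here, not in the form's code-behind.
    void onLoginClicked() {
        String name = view.userName();
        view.showGreeting(name.isEmpty() ? "Hello, stranger" : "Hello, " + name);
    }
}

public class Main {
    public static void main(String[] args) {
        // A console-backed fake view standing in for a real form.
        LoginView fake = new LoginView() {
            public String userName() { return "ada"; }
            public void showGreeting(String text) { System.out.println(text); }
        };
        new LoginPresenter(fake).onLoginClicked();
    }
}
```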
Designer
He said that he was unable to design an interface, and because of that he liked the Xcode designer, because it spat out better UIs by default. I say that any developer who does UI design in any way needs to read a few books on the subject, because if you don't, you are simply not doing your job well. Sure, a professional usability expert is going to do a BETTER job than you, but that doesn't mean you can just abdicate all responsibility as a UI developer.
As someone who is not a designer but a programmer who takes all aspects of his work seriously and has learned the fundamentals of good design, I would take the WinForms designer over the Xcode one any day, and I am not even talking about WPF, which really takes things to the next level (as I have mentioned elsewhere).
He does have a point about Core Animation, though; it is nice to have some canned, standard effects to work with that people can get used to seeing. Since we are still in the early days of WPF (it only came out last year, and Visual Studio 2008 was the first version with tools for it), I still have hopes.
Data Access
from the site
That is misleading. Ignoring LINQ (which is out now, and it rocks), we have had OR/Ms on .Net for years, and I don't know of anyone who has used raw ADO.Net in a very long time. My personal favorite (pre-LINQ) has been SubSonic http://subsonicproject.com/, but the most popular is probably NHibernate http://www.hibernate.org/343.html.
Also, .Net has pretty good XML APIs and very good XML serialization APIs, so using XML isn't quite the nightmare that was implied.
Anyways, I have said in a whole bunch of places now that I don't think Obj-C/Cocoa is bad. I am really not trying to say that. If I were doing audio apps or image manipulation apps for a living, I would probably have gone the Apple way. But it is far from modern, and it has a hell of a lot of liver spots that drive me around the bend. To quote Rory Blythe:
Really? The entire industry has moved away from MVC/KVC modeling, which are inherent strengths of ObjC and Cocoa?
Please. They've moved towards it. I laughed at CORBA back when it was bludgeoning the industry into using it while NeXT and their "worthless" MVC/DO/PDO models for OOA/OOD were being ignored.
Where are we now? Truly amazing that every language, especially C# and Java, acts as if it invented the whole MVC paradigm.
C++ with Trolltech uses it extensively, as does KDE.
Do you give credit to NeXT/Apple, specifically ObjC/Cocoa, or its elder Smalltalk?
Never.
What a joke.
Obj-C didn't invent MVC; Smalltalk did. I never implied that anyone else came up with that pattern. Smalltalk was a great language in its time; if it weren't for the huge performance issues, we would probably be using things very similar in syntax.
The fact is we are not.
I am talking more about how much of the syntax and the ideas are either just really old and outdated (method prototypes, header files, multiple inheritance) or just odd and quirky (messaging, posing, categories, forwarding, etc.).
Now, odd and quirky are far from the same thing as bad; I specifically said I wasn't calling it bad. But it is being passed off by Mac guys as modern, which it is not. You can trace a natural evolution in syntax both for the statically typed languages (C++ – Java – C#) and the dynamically typed languages (Perl – Python – Ruby). A lot of the syntax in the language hasn't really been used in anything serious for about 20-25 years now.
And I'm just talking syntax; I'm not even going to bring up things like (proper) GC or sandboxing.
No s*** Sherlock. Clearly you have a reading deficiency, as I pointed out that MVC came from Smalltalk. NeXT extended it and added Delegates and later Bindings via KVC.