There are plenty of programming languages around. David Chisnall points out the various factors that determine what makes a “good” language. But note his caveat: These principles don’t always apply in any given set of circumstances!
Writing a useful program using just the C language is almost impossible.
Hmm, the book is probably targeted at PCs.
Hmm. I was looking at all the binaries on Linux, such as ls, tail, ps, top, etc. They’re all written in C. I just never realized they aren’t very useful (sarcasm).
’nuff said. Blender 3D was, up until a couple of years ago, written entirely in C (except for the Python interface).
So, just to ask the same question you did: exactly how is C not useful?
Who on EARTH is saying C isn’t useful?!? It sure as hell isn’t in the article.
Jesus, I couldn’t agree more!
Why do people always jump to defend their favourite programming language instead of reading the article _carefully_?
Anyway, though the article does not offer any breathtaking insights, it’s still pretty decent and at least doesn’t contain loads of BS.
I’m very much interested in programming language trends, but even after reimplementing one of my programs in Python (neat language, btw), C(++) is still my all-time favourite.
The Python version is about half as long (3xx vs 5xx LOC) and about ten times slower, which is quite acceptable.
Personally, I’d always choose a powerful language over a popular one IF
– and that’s really important –
the powerful language is still popular enough so that I can buy a decent book about it.
Now let me get back to my sweet wxPython book
… and my homework for university – damn it!
When the author says that “Writing a useful program using just the C language is almost impossible”, he means that you can’t do anything useful using the language by itself. For example, there is no way to do I/O using just the C language. All useful C programs use at least the C standard library, and usually many other libraries.
It also applies to most other languages; you cannot find a high-level language that can do this without some sort of library.
C++, C#, Java, PHP, and everything else use their respective standard libraries.
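To make that concrete, even the classic first program leans on the library. A minimal sketch: everything interesting below happens inside printf(), which is declared by stdio.h and implemented by the library, not by the language.

    #include <stdio.h>   /* printf() comes from the standard library */

    int main(void)
    {
        /* The language defines the function-call syntax; the library
           defines what printf() actually does with these bytes. */
        printf("hello, world\n");
        return 0;
    }

Strip the #include and the library, and the language itself gives you no way to get those characters out of the program.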
The “goodness” of a language also depends on the IDE and API. For example, Swing is ugly as sin (those funky event listeners and stuff), while Windows.Forms is just a pleasure.
I use C#, C++, and C for writing real applications, depending on the target platform. In C you have full control of resources, memory and stack usage, and other stuff. C# provides a very good API and a garbage collector, and Visual Studio 2005 roxx (even more with Visual Assist X). C++ is somewhere in the middle between C# and C.
A programming language is good when you like it.
There’s no way to do I/O using the C standard library either. That’s done via system calls, which can change from system to system (although I don’t know that I’ve ever heard of a system with a different ‘read’ call).
I’m surprised he talks about the C library; it’s pretty much a big joke today. It doesn’t define much for you other than a way to work with strings.
But you are correct. He was trying to say that C without a library (and system calls) is pretty much useless. He wasn’t saying C is useless.
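For illustration, here is roughly the layer stdio sits on top of: a POSIX-only sketch (read(2) and write(2) are kernel entry points wrapped by unistd.h, not part of ISO C).

    #include <unistd.h>  /* POSIX read(2)/write(2) wrappers, not ISO C */

    int main(void)
    {
        char buf[256];
        ssize_t n;

        /* Echo standard input to standard output using raw file
           descriptors, bypassing stdio entirely. */
        while ((n = read(0, buf, sizeof buf)) > 0)
            write(1, buf, (size_t)n);
        return 0;
    }

On a different system the calls underneath stdio may look different, but some such implementation-specific hook is always there.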
That’s not true. The standard library part of the C99 specification takes up 250 pages. It covers functionality ranging from date/time functions to I/O and wide-character handling. It’s not as comprehensive as Java’s or even Common Lisp’s, but it’s hardly just strings.
Some of these features are new in C99, but stdio has been there forever.
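A small illustration of that breadth, using only standard headers (nothing platform-specific):

    #include <stdio.h>
    #include <time.h>    /* date/time functions, standard since C89 */
    #include <wchar.h>   /* wide-character support, added in the 1995 amendment */

    int main(void)
    {
        time_t now = time(NULL);
        printf("%s", ctime(&now));                   /* prints the current local time */

        wchar_t w[] = L"wide";
        printf("%zu wide characters\n", wcslen(w));  /* %zu is itself a C99 addition */
        return 0;
    }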
There’s no way to do I/O using the C standard library
Functions such as printf(), fopen(), fwrite(), etc. are all defined as part of the standard library. That’s what “stdio.h” is all about: standard I/O.
That’s done via system calls
The standard doesn’t define the mechanism by which these functions work, and it says nothing about “system calls”; you could be working on a system that doesn’t have a kernel, for example.
I’m surprised he talks about the C library; it’s pretty much a big joke today. It doesn’t define much for you other than a way to work with strings.
As the poster above me pointed out, the library section of the current C standard covers 250 pages, and “way[s] to work with strings” are a tiny part of it (string.h, specifically). That you think this suggests that you either haven’t worked with C very much, or you are confused about which parts of C are the language and which parts are the standard library.
That’s not a universally true statement (“For example, there is no way to do I/O using just the C language”), because there are many platforms where memory-mapped I/O is how it’s done: the Apple II series, for example. The x86 has dedicated I/O instructions, sure, but even on an x86 system it is possible to do memory-mapped I/O, which can readily be done in straight C with no assembly language or libraries written in another language.
Sure, you could argue “What about timing loops?” and a few other things, but that’s just hair on the rabbit: sooner or later, the low-level access (depending on the platform) can be done in C, assuming you don’t need to step out of straight C to do things like clearing interrupts, and the system doesn’t force you to use dedicated I/O CPU instructions.
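As a sketch of what that looks like on a freestanding target (the register address below is made up; a real one comes from the platform’s datasheet):

    #include <stdint.h>

    /* Hypothetical memory-mapped UART transmit register. */
    #define UART_TX ((volatile uint8_t *)0x10000000u)

    /* Output a string with no library and no assembly: each volatile
       store pushes one byte at the device, using nothing but C. */
    static void uart_puts(const char *s)
    {
        while (*s)
            *UART_TX = (uint8_t)*s++;
    }

Note that the integer-to-pointer conversion here is itself implementation-defined, so even this route leans on implementation-defined semantics.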
Fine. There is no way to do I/O in the C language that doesn’t depend on implementation-defined semantics.
From TFA:
Languages such as Smalltalk and Lisp were traditionally run in a virtual machine (although most current Lisp implementations are compiled into native machine code).
Might be a nitpick, but Lisp wasn’t traditionally run in a virtual machine. It was traditionally run on actual, not virtual, hardware designed specifically for the task:
http://en.wikipedia.org/wiki/Lisp_machine
A bad nitpick: have you read the numbers in the article you reference?
It says about 7,000 Lisp machines were sold; that’s not a lot. I’d say that even in the old days, far more people used Lisp on ordinary computers than on Lisp machines.
That’s the crux of much of the discussion. For example, the “can’t do I/O in C” meme is demonstrably false, since even the standard library that most folks rely upon is written in C. Stdio is written in C, for example, but it relies upon the read(2) system call.
C can certainly be written without a standard I/O library à la the one provided by Unix systems. Many embedded applications use C libraries that are not stdlib-compatible.
But then if you look at things like Fortran and COBOL, both of these have had I/O since the dark ages. The I/O statements are actual syntax structures within the grammar. C, clearly, does not have any file syntax within the grammar (you could argue #include from cpp, but really, that’s a nit).
Java likewise has no formal syntax for I/O, relying again on a library support structure. But you’ll notice that with Java, the Java language specification is separate from the Java class library. Witness Microsoft J# for a “Java” implementation that does not support the Sun Java class library. But as far as I know, the language is in fact pretty much identical.
Many languages rely upon a library mechanism to support I/O. But many don’t. BASIC is another example with language-specific I/O.
Most modern languages rely upon built-in extensibility to provide external services.
But scripting languages or application-specific languages certainly don’t need that kind of flexibility or extensibility. They need to perform their limited task as well as practical and push all of the internal power and flexibility into the implementation.
The issue is that if you have to express things in general ways, you soon end up with a lot of boilerplate, redundant code just to convey the simple message of the specific task at hand. Functions, subroutines, and class libraries can help hide some of that cruft, but they only go so far.
Systems like Common Lisp, Scheme, Dylan, even C help mitigate that through macro facilities. C++, through its ability to abuse operator overloading (as well as C macros), gives you an option to create mini domain-specific languages.
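For instance, here is the kind of mini-DSL the C preprocessor makes possible: a hypothetical FOR_EACH macro (not from any standard library) that grows a “foreach” out of plain macros.

    #include <stdio.h>

    /* A hypothetical FOR_EACH macro: a tiny domain-specific construct
       built from the preprocessor instead of waiting for the language
       to grow a native foreach. */
    #define FOR_EACH(item, array, len) \
        for (size_t i_ = 0; i_ < (len) && ((item) = (array)[i_], 1); ++i_)

    int main(void)
    {
        int xs[] = {1, 2, 3, 4};
        int x;

        FOR_EACH(x, xs, sizeof xs / sizeof xs[0])
            printf("%d\n", x);
        return 0;
    }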
In the end, what makes a good language is a language that basically stays out of your way in communicating your desires in the domain of the implementation. There’s nothing more frustrating than hitting some limit in a language that won’t let you express some concept the way you want to express it. Like a painter running out of canvas.
But that doesn’t mean that languages without limits are necessarily “better”. As always, it depends on the domain. I’d rather write some apps in a 4GL that lets me do 90% of my application in far less time than a “conventional” language, than fight a language that is simply too general-purpose for what I’m doing, coddling it and showing it the way every time.
While full-power general-purpose languages are very good, in-the-trenches application developers should not have to be language designers to get their jobs done. They should simply be competent at taking a reasonably powerful toolbox and applying it to their domain efficiently.
Technically, the “language” specifies a set of constructs, and the semantics of those constructs. “The library” specifies standardized functionality, but functionality that is built upon the constructs defined in “the language”.
Thus, it is not false to say “you can’t do I/O in C”, because C, the language, has no semantics for I/O. Yes, the standard library is written in C, but almost every C standard library has a part written in assembly, because there is, for example, no way to invoke the kernel (via sysenter or a software interrupt) in standard C.
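To see why that layer cannot be pure standard C, here is a sketch of what hides under a libc write on one particular platform: Linux/x86-64, using the GCC/Clang inline-assembly extension (system-call numbers and register conventions differ on every other platform).

    /* Linux/x86-64 only; inline asm is a compiler extension, not ISO C. */
    long raw_write(int fd, const void *buf, unsigned long count)
    {
        long ret;
        __asm__ volatile (
            "syscall"                 /* trap into the kernel */
            : "=a" (ret)              /* return value comes back in rax */
            : "a" (1L),               /* rax = 1, the write(2) syscall number */
              "D" ((long)fd),         /* rdi = file descriptor */
              "S" (buf),              /* rsi = buffer */
              "d" (count)             /* rdx = byte count */
            : "rcx", "r11", "memory"  /* syscall clobbers rcx and r11 */
        );
        return ret;
    }

    /* usage: raw_write(1, "hi\n", 3) writes three bytes to stdout */

Something like this, or a hand-written assembly stub, is what every hosted C library ends up carrying somewhere below the portable C.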
That said, I don’t think defining “language” and “library” so rigidly is particularly useful, and statements like “you can’t do I/O in C”, while technically accurate, aren’t very useful either. Practical implementations of languages have built up three layers that define the language in the larger sense. The lowest level is the semantic layer, which specifies the constructs of the language and their behavior. The next layer is the set of “magic” libraries. Such libraries do not present new constructs at the programmer-visible level, but provide functionality that could not be implemented solely based upon the constructs presented in the lower level. The C standard library is an example of such a layer, as is something like a Common Lisp FFI, or Java JNI. Lastly, there is the “standard” library layer, which consists of libraries that can be implemented entirely in terms of the language’s constructs, but are part of the “language” for standardization reasons. C++’s STL is a primary example of such a layer.
That said, I don’t think defining “language” and “library” so rigidly is particularly useful, and statements like “you can’t do I/O in C”, while technically accurate, aren’t very useful either.
And it only makes sense in less-expressive languages like Java where you have to wait for the compiler guys to put a foreach into the language.
And it produces a confused debate.
The article starts off with an implied wrong assumption: that the size of a language is related to the number of keywords in the language. It then moves on to a classic mistake: treating the standard libraries of a language as not part of the language.
From there it goes downhill.
This is the best example so far of why a poorly written InformIT article shouldn’t be linked to.
I kinda agree… about the only thing to get from it is that a good programming language depends on:
a) the person(s) writing the code
b) the person(s) maintaining the code
c) the task at hand
Stating the obvious, I guess, but then people do put down languages despite the fact that they can be good for something.
It’s like saying what makes a good tasting food.
Not really. You can state general principles about what makes good tasting food, like:
– fresh ingredients
– attention to compatibility of flavors
– careful control of cooking time and rate
Yeah, sure, taste is a factor, but conveniently, tastes among human beings aren’t completely unpredictable.
Not really. You can state general principles about what makes good tasting food, like:
– fresh ingredients
– attention to compatibility of flavors
– careful control of cooking time and rate
It all comes down to individual taste and the process is pretty much irrelevant to the taste buds.
The article was so bad, though, that I just skimmed it, and I don’t even know if the author ever gave a conclusion as to what he thinks makes a good language.
Individual taste isn’t so individual. The process is relevant to the taste buds to the extent that everyone’s taste buds are tuned to recognize certain things the same way. For example, when ingredients sit around for too long, their chemical structure changes, and humans can recognize those changes.
Most people dislike the taste of, for example, fish that isn’t exactly fresh. Large groups of people find particular combinations of flavors appealing (e.g., sweet/sour). Not many people like the taste of overcooked food. These are the principles on which cuisine is based, and why restaurants can exist at all. If individual taste were that individual, we’d have no national cuisines, and we’d never talk about “the great Mexican place on the corner”.
If you attend a top culinary institute and they teach you the state of the art in cooking, you have a “better” chance of creating food that people like, but at the end of the day some people just aren’t going to like that foie gras dish.
Take Lisp and Scheme, for example. Many programmers are turned off by S-expressions and prefix notation. Now, given enough time, Lisp “can” be an acquired taste, but there is no guarantee, and many people will always prefer an alternative. “Some people will always prefer chicken over foie gras.”
And of course, in the case of programming languages, there’s a lot more to liking a programming language than just syntax and semantics. Some people will prefer a language just because they have a better chance of using it in a job, or because they prefer the tools or community surrounding it.
Really, that was just plain silly. I want my two minutes back.
One of the interesting things about Java is that people’s big speed objection to Java seems to have little to do with CPU utilization.
It seems instead to stem from the need to compile the program and any libraries it uses at launch. This burst of activity does a bunch of disk and memory I/O that blows out disk and CPU caches and generally makes the machine sluggish for a while afterwards. Combine that with an often sluggish UI, and the language ends up rarely used for client-side applications.