Today’s commercial-grade programming languages – C++ and Java, in particular – are way too complex and not adequately suited for today’s computing environments, Google distinguished engineer Rob Pike argued in a talk at the O’Reilly Open Source Conference. Pike made his case against such “industrial programming languages” during his keynote at the conference in Portland, Oregon.
I think more recent versions of Java have added syntax and constructs to make it easier and cleaner. I had thought the next version[s] were going to continue the process… at least until Oracle bought Sun.
Time will tell.
On the other hand, it certainly is easier to develop in RoR than in Java, Struts 2 (which is vastly improved over Struts 1) and the whole J2EE engine… For that matter, I whipped up my own HTML/HTTP libraries for Euphoria in a matter of a couple of days, and they are easier (to me, anyway) to use than Java for web development.
“Google Executive”? It’s Rob frickin’ Pike. While he was at Bell Labs he worked on Unix, Plan 9 and Inferno, and has co-authored two extremely influential programming books with Brian Kernighan (The Unix Programming Environment and The Practice of Programming), among many other achievements.
He also helped design Go, and was probably explaining why it was created.
Basically, he’s pushing Google’s “Go” language as the answer. Which has interesting elements to me, but it’s not something I’d consider as an alternative to Java for ‘enterprise’ development.
Java has deficiencies, no question, but it’s a language I can get work done in – it has all the third-party libraries you could want, and a wide variety of tools for developing, building, testing. For the time being, Go lacks those things.
There are two different ways of coding: prototyping and developing. The first requires fast and easy languages, normally fault-tolerant ones. The second requires design, rules and vision. These styles should not be mixed. A prototype stays a prototype until it dies. You cannot convert one into a nice, future-proof program, even if the code is cleaned up and partially rewritten. Look at Lua: it is widely used in the game industry and other prototyping-heavy industries. It has a really nice syntax for handling nil values:
local myVar = myObject:myFunction() or array["key"](arg) or array.key2(var) or 10
The first non-nil value will be used. You can declare a new “class” function simply by writing it and passing the object as the first argument. Then:
myObject:myNewFunction(arg2, arg3)
will work! You can also define new object properties on the fly and change almost everything “from now on” in your code. No formal definitions or restrictions. How nice! Is it? For a prototype, yes; for anything else, no. This results in spaghetti code: you lose the original “way of doing things” and it soon becomes a mess of hacks. Solid, well-designed declaration syntax, as painful as it is to write, helps a project survive the prototyping stage. You can then debug and maintain the code, which is not the case with these loose languages. Those who have had to maintain (Visual) Basic {6, .NET} or PHP code know that very well.
And what about Java is so necessary? I often find that type safety can be a hindrance, forcing me to write several pieces of similar code, or code convoluted with extra syntax, just to express objects that are obviously similar but don’t fit into an obvious object hierarchy. I often find myself overloading methods for several default values and/or types, which results in several methods calling other similar methods; in a dynamic language I can often reduce half a dozen overloads to a single method that shows all the default values in one declaration. The reflection libraries and creating dynamic methods or inner classes in Java are plain ugly compared to languages like Python or Ruby. The rigidity of languages like Java has its place, and it can be useful in some scenarios. But in a lot of places it just becomes extra syntax that doesn’t really make the code any clearer. In fact, there are a lot of cases where you end up writing hundreds of lines of boilerplate code that become completely unnecessary when you remove the need for explicit typing; templates (or generics, if you prefer) and method overloading create hundreds of lines of extra code.
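To make the overload complaint concrete, here is a minimal sketch (hypothetical names; shown in C++ because Java has no default parameter values, which is exactly what forces the first style):

    #include <string>

    // The overload chain: each variant must delegate to the next,
    // and the defaults are scattered across the declarations.
    void connect(const std::string& host, int port, int timeoutMs) {
        // ... the actual work happens once, here ...
    }
    void connect(const std::string& host, int port) { connect(host, port, 5000); }
    void connect(const std::string& host)           { connect(host, 80); }

    // With default arguments (or in a dynamic language), a single
    // declaration shows every default at a glance:
    void connect2(const std::string& host, int port = 80, int timeoutMs = 5000) {
        connect(host, port, timeoutMs);
    }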
Dynamic languages allow you to choose when the best time to use a feature is. If type safety is important in a specific scenario, you can type check there. The fact is, dynamic languages tend to give a designer more freedom to choose the right path for the scenario, as opposed to taking the same route regardless of whether there is a better way. For the most part, if I can do in 100 lines of code what takes 500 lines in another language, the 100 lines are likely to be easier to follow and to have fewer errors than the 500.
There is nothing inherently wrong with dynamically typed languages. And the assertion that if you want to build real software you need Java or C++ has been proved incorrect time and time again.
Twitter is one of the fastest growing and most highly utilized systems in the world right now, and it’s written in Ruby. Check out Plone/Zope, a CMS written in Python that has been used in thousands of websites, including some real heavyweight systems like the Akamai, Novell and Nokia web sites. Or you could check out Django, which is used to develop several high-traffic news sites. And WordPress, RoR, Pylons/TurboGears and probably dozens of other frameworks provide robust, well-designed, tested systems that allow you to quickly and effectively build enterprise-class software.
I’m not denying that in certain situations (where performance is absolutely key, or when integrating with other software, or under company policy) Java, C++ or C# are better alternatives. But there is certainly a place for dynamic languages. They are not toys, and just as with Java, if you follow good design and testing principles you will be able to construct an enterprise-ready system with them. And sometimes, in fact more often than not, you can save time and have cleaner code when you can choose how to use the language you’re using, rather than have something like type safety imposed upon you as a developer.
A bad developer can shoot himself in the foot regardless of the language or the framework he/she is using. It may be that dynamic languages provide bigger guns that can be used to shoot off your whole foot more easily. But it all comes down to the developer/designer and their ability with the tools they have to use.
Twitter is one of the fastest growing and most highly utilized systems in the world right now, and it’s written in Ruby.
This is incorrect. Twitter was initially written in Rails, but the performance was dismal and it wouldn’t scale; plus they basically had to implement an entire type checking system in the test suite.
So they switched the entire back end to Scala, the type-safe functional language on the JVM. That saved them. You can read about it here:
http://www.artima.com/scalazine/articles/twitter_on_scala.html
a quote from this article:
Another thing we really like about Scala is static typing that’s not painful. Sometimes it would be really nice in Ruby to say things like, here’s an optional type annotation. This is the type we really expect to see here. And we find that really useful in Scala, to be able to specify the type information.
As our system has grown, a lot of the logic in our Ruby system sort of replicates a type system, either in our unit tests or as validations on models. I think it may just be a property of large systems in dynamic languages, that eventually you end up rewriting your own type system, and you sort of do it badly. You’re checking for null values all over the place. There’s lots of calls to Ruby’s kind_of? method, which asks, “Is this a kind of User object? Because that’s what we’re expecting. If we don’t get that, this is going to explode.” It is a shame to have to write all that when there is a solution that has existed in the world of programming languages for decades now.
I couldn’t agree more. Some languages are excellent at prototyping (Python, the BASIC family…), others are good at real software development.
However, it’s also true that ever since Pascal fell out of fashion, computer science has lacked a language that’s at the same time easy to learn and suitable for complex software development tasks.
Learning C is reasonably easy, sure, but it’s not suitable for big software development (headers quickly become messy due to the lack of object orientation, and you don’t reuse code as easily as in modern OO languages. And let’s not talk about security and memory leaks).
Learning C++ is a pain. Really. Seriously, can a language so poorly designed that you have to handle the class1 = class1 (self-assignment) case yourself still exist and be popular today? And let’s not even mention arrays that don’t store their own bounds, function pointer syntax, or the painful way dynamic memory allocation has to be managed when an exception occurs…
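To make the self-assignment complaint concrete, here is a minimal sketch (types hypothetical): a copy-assignment operator has to defend against class1 = class1 by hand, or it frees the very buffer it is about to copy from.

    #include <cstddef>
    #include <cstring>

    class Buffer {
        char* data;
        std::size_t n;
    public:
        Buffer& operator=(const Buffer& other) {
            if (this == &other) return *this;   // the self-assignment guard
            char* copy = new char[other.n];     // copy first: also exception safe
            std::memcpy(copy, other.data, other.n);
            delete[] data;
            data = copy;
            n = other.n;
            return *this;
        }
        // constructor, destructor and copy constructor elided for brevity
    };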
The little defects of C were acceptable for their time, but in C++’s core syntax (I’m not talking about the STL here) there are just too many failures, which makes it painful to understand and use.
I would like to see a real compiled OO language, in the Object Pascal and C++ tradition, that comes with C++’s popularity (and hence its libraries) and a much easier learning curve. It would make the compiler more complex, sure, but that’s not an issue: the compiler is written once and runs once in a while, whereas when a language has a crappy syntax it’s millions of future developers who suffer…
Many of the failures of C++ are caused by syntax. SPECS (http://www.csse.monash.edu.au/~damian/papers/HTML/ModestProposal.ht…) is a really nice redesign of the C++ syntax that fixes most of these problems.
I have been waiting for a language like this. See my post below for a description (all the features exist in separate languages, I just want to combine them).
However, I think a compiler for a language like this would actually be a lot simpler. C++ is probably the hardest language to write a compiler for. The grammar is ambiguous and not LALR.
I’m actually working on a language like this. The grammar is actually really simple because it is LL(1), so it has been really easy to just write a recursive descent parser in C++ and not waste time fighting Bison or ANTLR. I have found that, by far, the hardest part of writing a compiler is dealing with forward references.
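To give an idea of why LL(1) makes hand-written parsing so pleasant, here is a minimal recursive-descent sketch for a toy expression grammar (hypothetical, not my actual language; whitespace skipping and error handling omitted). One token of lookahead decides every rule:

    #include <cctype>
    #include <string>

    // Grammar:
    //   expr   := term (('+'|'-') term)*
    //   term   := factor (('*'|'/') factor)*
    //   factor := NUMBER | '(' expr ')'
    class Parser {
        const std::string src;
        std::size_t pos = 0;
        char peek() const { return pos < src.size() ? src[pos] : '\0'; }
        char next() { return src[pos++]; }
    public:
        explicit Parser(std::string s) : src(std::move(s)) {}
        double expr() {
            double v = term();
            while (peek() == '+' || peek() == '-')
                v = next() == '+' ? v + term() : v - term();
            return v;
        }
        double term() {
            double v = factor();
            while (peek() == '*' || peek() == '/')
                v = next() == '*' ? v * factor() : v / factor();
            return v;
        }
        double factor() {
            if (peek() == '(') { next(); double v = expr(); next(); return v; }
            double v = 0;
            while (std::isdigit(static_cast<unsigned char>(peek())))
                v = v * 10 + (next() - '0');
            return v;
        }
    };

For example, Parser("2+3*4").expr() returns 14: each function looks at one character and immediately knows which production to take, with no backtracking.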
Are there problems with Java and C#… of course. But is it worth introducing another language?
I think we’ll always need a low-level language to talk to hardware… that is ‘C’.
I’ll fully agree that C++ used beyond just C-with-classes is a bungled mess. I’ve had the displeasure of working on a C++ project recently, and I’d forgotten what a mess it is.
Java and C# can handle much of the application layer.
The best thing to do would be to make better libraries for these languages. This way you don’t need to deal with language interop issues.
I don’t think Java or C# are too complex. If they became the de facto standard and students learned them, they would learn to overcome and deal with the complexity. If instead we keep introducing new languages, people will never be able to settle in and learn to deal with the complexity.
It’s like our language. English has lots of flaws. Yet, thankfully no one decides to start designing new languages every few years. We learn to deal with its complexity.
Unless your language offers a significant break from the past… I think it’s best not to throw in something new.
Java and C# are not complex languages. Instead, both languages are very sweet. I love their syntax.
C, on the other hand, is NOT an intuitive language, and I’d say it is complex because there aren’t any classes. The OOP side of C++ is really good and sweet, but the C side of it is plain ugly. The OOP side makes things very intuitive and fun to work with, along with all the inheritance, virtual methods, constructors/destructors, operator overloading, etc.
I started programming in an OOP world and I always hate it when I have to deal with the native WIN32 API – a nightmare. My first “real” programming language was C++, not C. The book I got taught me everything about OOP.
Yeah, I don’t think they are complex languages. However, the syntax is not the greatest. IMO, a combination of Python-style indentation and SPECS-style declarations (see my post below for the link) would be cleaner and more intuitive.
No, C is actually much simpler than Java/C#. The average programmer can pretty easily “fit the spec inside their head” (or at least all the important and practical parts). C is nice because it hides nothing. It is also really easy to design nice C APIs.
I’d have to completely disagree here. The nicest things about C++ are templates (especially variadic templates, a feature I miss in every other language), RAII, and operator overloading. Everything else is terrible. (However, what mainly makes C++ hard is manual memory management, which is even worse in C, which doesn’t have RAII or smart pointers).
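To illustrate what RAII buys, a minimal sketch wrapping C’s stdio (not production code): the destructor releases the resource automatically, even when an exception unwinds the stack, which is exactly what plain C cannot do for you.

    #include <cstdio>

    class File {
        std::FILE* f;
    public:
        explicit File(const char* path) : f(std::fopen(path, "r")) {}
        ~File() { if (f) std::fclose(f); }   // always runs, even on exceptions
        File(const File&) = delete;           // non-copyable: one owner
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f; }
    };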
OOP in C++ is especially messed up. The main problem is that class objects are value types, and thus a variable can’t hold an instance of a derived class unless you use pointers or references. Of course, this is completely necessary and makes sense for C++, but it obviously makes C++ a lot harder than Java or C#.
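Concretely, a minimal sketch of the slicing problem (modern C++ shown; the types are hypothetical):

    #include <memory>

    struct Base    { virtual ~Base() = default; virtual int f() const { return 1; } };
    struct Derived : Base { int f() const override { return 2; } };

    void sketch() {
        Base b = Derived();   // slices: the Derived part is cut off, b.f() == 1
        std::unique_ptr<Base> p = std::make_unique<Derived>();
        int r = p->f();       // r == 2: pointers/references keep dynamic dispatch
        (void)r;
    }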
Also, designing good C++ APIs that go beyond C with classes is really hard. Qt and Boost are really the only two good C++ APIs I know of.
Yeah, WIN32 was bad… I feel your pain.
OOP is really not that special. It’s often not even useful.
I would suggest reading about “the expression problem”. In OOP languages, it is easy to add new kinds of things (new subclasses), but it is hard to add new operations on things (new virtual methods that have to be defined for each subclass). In functional languages, it is easy to add new operations on things (new functions that operate on an algebraic data type), but it is hard to add new kinds of things (new constructors on algebraic data types, which requires modifying every function that uses the type).
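Here is a minimal sketch of both halves of the trade-off in C++ (using C++17’s std::variant as a stand-in for an algebraic data type; the shape types are hypothetical):

    #include <variant>

    // OOP side: a new kind of shape is one new class, but a new
    // operation (say, perimeter()) means editing every subclass.
    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };
    struct Circle : Shape {
        double r = 1.0;
        double area() const override { return 3.14159265 * r * r; }
    };

    // Functional side: the variant plays the algebraic data type.
    // A new operation is one new visitor, but a new alternative in
    // the variant breaks every existing visitor until it is updated.
    struct Square { double side; };
    struct Rect   { double w, h; };
    using ShapeV = std::variant<Square, Rect>;

    struct Area {
        double operator()(const Square& s) const { return s.side * s.side; }
        double operator()(const Rect& r)   const { return r.w * r.h; }
    };
    double area(const ShapeV& s) { return std::visit(Area{}, s); }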
There is no full solution to the expression problem, and both approaches are equally useful and powerful. A lot of Java/C#/C++ programmers don’t understand this, and think that OOP is the only correct way to program complex systems. If you have never tried a functional language such as Haskell, I would highly recommend it. It will open your mind to all kinds of new ways to program.
That’s All I Want to Do!
I hate computer languages because they force me to learn a bunch of shit that is completely irrelevant to what I want to use them for. When I design an application, I just want to build it. I don’t want to have to use a complex language to describe my intentions to a compiler. Here is what I want to do: I want to look into my bag of components, pick out the ones that I need and snap them together, and that’s it! That’s all I want to do.
You can read the rest of the article at the link below if you’re interested in the future of computer programming.
http://rebelscience.blogspot.com/2009/08/why-i-hate-all-computer-pr…
That’s called model view controller (MVC)
Not at all. It’s called compositional programming using a graphical user interface and a tree component structure. It hasn’t been invented yet.
Then it’s called Quartz Extreme IDE and some other failed products. LEGO-style programming with a graph (a tree is too linear) is a failure; nobody has ever managed to create such a product that can produce a product with added value. The added value is what text-based languages provide.
You can render a C++ program as a graph, and it will become a mess of wiring, because advanced logic is just too complex to be represented graphically.
That’s what everyone wants to do. That’s what VB tried to do. That’s what Java Beans tried to do.
In short, creating versatile, reusable components that can simply be “snapped” into place in generic apps is hard as shit. If the app is of any significant complexity at all, it is probably going to require at least some custom scripting. VB probably did it best, and the main gripe that many programmers, myself included, had with it is that the inability to poke inside and tweak the ActiveX controls you are using in an app is in practice very constricting. They often do not do something that you need, and when one of these black-box components has a bug in it, the programmer just has to file a bug report with the author of the component and wait until the component is updated, which is not acceptable if you are facing a strict deadline.
One idea that has not been properly explored (in my opinion) is Donald Knuth’s idea of Literate Programming: http://en.wikipedia.org/wiki/Literate_programming
You might want to check it out.
And spaghetti code would actually look like spaghetti, nice… While I understand the use of a pseudo-language, I don’t think that every programming algorithm can be translated visually.
Of course, some basic structure can be built visually (like, say, UML), but would you rather spend more time making your visuals look nice or making your code work?
I could also argue that keyboard typing (plus any auto-completion and helpers you can get) is more productive than “looking up the proper construct, then dragging and dropping it with the mouse”.
Visual “programming” can be done with most database IDEs (for enterprise use), and then there are a lot of “click and play”-style programs for games.
I agree with you. Why do all these people think that they can write an actual program by simply adding stuff in a visual tool?
As Frederick Brooks said in his “No Silver Bullet” essay, tools can help make all the accidental complexity easier, but the essential thing (the actual running program) is intrinsically complex, and such complexity cannot be made easy.
So, what to do to have better programs? Having better programmers will do the magic.
You people are all wrong. Just because it has not been done and you have no clue as to how to do it does not mean it cannot be done. The human brain uses a tree-like control structure to manage astronomically complex knowledge. We will learn how to do the same soon, sooner than you think. When that happens, everybody and their uncle Phil will be a programmer and all of you elitist programmers will be thrown to the sharks. LOL.
I can’t wait.
It has been done, and it failed badly; do a little more research.
- Data can be represented visually (UML, DFDs (data flow diagrams))
- Physics can be represented visually (game engine frontends too)
- Entity interaction can be represented visually (Blender’s visual game engine)
- Interfaces can be represented visually (WYSIWYG)
- Algorithms cannot.
The “blocks” you’re talking about cannot be represented visually; they are math, logic or procedure. Even if every block you can think of were already done, just putting them together would not be sufficient to generate added value. You have to add logic and procedure.
Storing data can be done in a tree (it’s called XML), but computing on data cannot. I have seen many products over the years showing off drag-and-drop programming, but I just can’t remember their names, as nobody used them or talked about them. They were unable to generate something better than the sum of their blocks.
You don’t know what you’re talking about. What I am proposing is a non-algorithmic software model. See you guys around.
Yes. It seems like most people don’t understand what you’re advocating. However, that doesn’t mean it is a good idea.
It sounds really nice in theory, but I’m skeptical that it can be really useful. When you can show me a large, working application built that way, I’ll believe it.
The focus on non-algorithmic graphical programming looks like LabVIEW to me… I’ve seen it in action once, and I don’t want to see it again…
Looks like he’s in a “me against the world” dream…
I’ve already discussed that “non-algorithmic” way of programming, here on OSAlert, probably with @mapou. And honestly, I don’t see how one can compute anything without an algorithm. Even the very high-level declarative languages like Prolog require some thought for writing the clauses.
How you can determine whether a year is a leap year without an algorithm is a mystery to me. How you can know a UI element is being dragged and dropped without some ifs is also a mystery. Unless “application”, “program” or “algorithm” don’t mean what I’ve always thought they mean, this “non-algorithmic software model” is just not credible.
Agreed: nice, but is it realistic? I doubt it. I wouldn’t even believe it when I saw it. I would only believe it once the details of how it works had been exposed and I couldn’t deny any of them. Then I would gladly admit I was just being stubborn.
Actually, some things are inherently non-algorithmic in our current programming model: exceptions and interrupts, which together form the basis of event-driven programming. I’d even add that any serious asynchronous operating system requires heavy use of events, and hence a partly non-algorithmic programming model. As an example, for drag and drop you can have an OnDrop event for each UI widget that allows an action to be taken when something is dropped on that widget, with the event handler overloaded depending on what you’ve dropped on it.
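For instance, a minimal sketch of such an OnDrop mechanism (hypothetical names, modern C++): nothing in the program polls for the drop; a handler is registered on the widget and the asynchronous event loop calls it only when the event actually fires.

    #include <functional>
    #include <string>
    #include <utility>

    struct DropEvent { std::string payload; };

    class Widget {
        std::function<void(const DropEvent&)> onDrop;
    public:
        void setOnDrop(std::function<void(const DropEvent&)> h) { onDrop = std::move(h); }
        void fireDrop(const DropEvent& e) { if (onDrop) onDrop(e); }  // invoked by the event loop
    };

Usage: widget.setOnDrop([](const DropEvent& e) { /* react to e.payload */ });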
Event-driven programming can be very nice. Managing some things without it is overkill. On the other hand, the program/event interaction is the root of all synchronization and deadlock evil. A solution to this could be a purely non-algorithmic programming model, but I highly doubt that it would make things easier: again, LabVIEW is probably the ugliest programming tool in existence…
OK.
But does event-driven programming invalidate my previous question? It’s just a way of saying “don’t wake me up until there’s an event that I can handle”. Right? Handling the events still needs an algorithm. Implementing that non-polling behavior still needs an algorithm. In my understanding of “non-algorithmic…”, there is no algorithm involved.
Maybe the reason we don’t seem to understand each other here is that we don’t share the same grounds for discussion? What is an algorithm and what is not? Any description of the steps needed to solve a problem or accomplish something, using if-tests and sequences of instructions, is an algorithm. From cooking recipes to checking the brake fluid level in my car, any process of thought that results in a decision, a conclusion or a concrete realization is an algorithm, even if we usually reserve this term for the computing world and name it differently elsewhere. If so, how can you decide which window is the topmost window in a window stack if there’s no algorithm involved? How can you send a MouseDown event to a window if there’s no algorithm? That’s what I don’t understand.
As I said in the previous post, the intended meanings are probably not the same. I suspect that “non-algorithmic programming” should bear another name.
An algorithm is a linear sequence of instructions (in the broadest sense: branching is included).
If you want something non-algorithmic, take an analog electric circuit in a stationary state: the order in which you put the components does not affect the behaviour of the device. Electronic components interact with each other, but not in a sequential way, so we lose the “linear sequence” concept. Physics is full of non-sequential behaviours like this one. LabVIEW uses a non-algorithmic programming model based on an electric-circuitry metaphor (which fails miserably at its task of making programming noob-friendly), though the implementation of this programming model is probably algorithmic.
Also, I think that you’re slightly mistaken about interrupts: it’s not that the program falls asleep and transfers control to the operating system, which will wake it up when it’s interrupt time (all of that using algorithms). An interrupt is asynchronous: it brutally stops a program in the middle of whatever it’s doing and transfers control to a piece of code which saves the processor state and does what’s necessary to handle the interrupt. Events based on interrupts are hence not algorithmic. The program did not wait for the interrupt; it just happens. Only the event handlers are algorithmic.
I don’t know of anything else in the computer world that’s not algorithmic. That’s not astonishing: at the core level, processors are made to execute a sequence of instructions, or several sequences at once. Algorithmic programming is hence the most efficient and intuitive way of using them…
Better tools can work.
The GUI tools for example are much better. I’m certainly not manually coding pixel positions. We have nice GUI builders.
However, for logic and algorithm expression, I really don’t see things getting much better than our written programming syntax. If you can understand a flow-chart, you can understand an if statement.
Yes; actually you can draw flow diagrams to represent the actual logic of some algorithm or process… but I would rather write three lines of code than draw several rectangles, one diamond and a lot of arrows to represent what I want the computer to do.
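For instance, the leap-year test mentioned earlier in the thread is one diamond and two boxes on a flow chart; in code it is just (a trivial C++ sketch):

    #include <cstdio>

    // The flow chart's diamond and its two outcome boxes, in a few lines:
    void report(int year) {
        if (year % 4 == 0 && (year % 100 != 0 || year % 400 == 0))
            std::printf("%d is a leap year\n", year);
        else
            std::printf("%d is a common year\n", year);
    }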
I particularly agree with him re Java. C has its share of wrinkles but Java is worse IMO.
The D programming language shows some real promise. It’s still early days for it, but it has some really nicely-thought-out features (even including lazy evaluation, which is great for a systems-programming-level language).
Would you please elaborate on your comment? Why do you consider Java inadequate and badly designed?
Given that Java actually shares only its syntax and the basic algorithmic constructs with C, why do you compare it with C instead of C++?
As is mentioned in comments of the original article, Scala is more an answer to his concerns than Go is.
And this very reason is why I like programming in assembly. Code optimization? It’s already running at the lowest level! But seriously, when is someone going to start working on English++? How can a noun like “Google” also be a verb or adverb?
I don’t think there is too much complexity in languages like Java. C++, maybe.
However, Google’s Go language is silly. It has the stupidest syntax I have ever seen. And it’s not just that I don’t like it because it’s different. I have very few problems with the syntax of Haskell, Lisp, Python, and C++. Go’s syntax is just badly designed and inconsistent. A good example of a language with a really well thought-out syntax is SPECS (http://www.csse.monash.edu.au/~damian/papers/HTML/ModestProposal.ht…).
My main problem with current languages is that there are no languages that have all the features I want:
* an imperative language that makes functional programming relatively easy (e.g. C#, D2)
* static type system
* relatively fast execution, preferably compiling to LLVM
* algebraic data types (e.g. Haskell, ML)
* at least local type inference (e.g. Haskell, ML, C#, C++0x)
* something similar to Haskell’s type class system
* user defined value types (“structs”)
* unsigned integer types
* powerful templates/generics (e.g. C++, D)
* a nice, consistent syntax, preferably with Python-like indentation
* RAII
* built in support for unit testing (e.g. Cobra)
* garbage collection (and the equivalent language without garbage collection would be nice too)
Right now, my first choice would be Cobra (http://cobra-language.com/). However, I don’t want to have to rely on Mono. I don’t want to get into another pointless Mono argument, but my main concern is that if my software relies on Mono, a lot of people are not going to use it.
So the two languages I use the most are C++ and Haskell. I like both languages and they are pretty good for what they are designed for, but I want a more application-oriented language.
So, I’m working on my own language. Right now it only does a few very basic things and compiles to a byte-code format that is then JITed to LLVM. I don’t know if I will ever “finish” it, but it is definitely a good learning experience, much like when I wrote my own simple OS a few years ago.
Thanks for the link. I couldn’t help noticing how many of the constructs remind me of Pascal (the type declarations, the assignment operator (:=), the pointer operator (^) and the object declarations, to name a few). Their proposal puts its finger on some awkward constructs of C++ that I would not have thought about. A good read.
Big thanks to Google for providing Python and Django services. As always, Google leads the way.
Well, I hate this thread.
Since I read it and realized that most of the major failures of C++ come from its syntax, I’ve been losing hours trying to redefine it in a way that would make it much easier to learn and master, hence making it the perfect programming language (easy to learn, flexible, and compatible with existing libraries). Since no one will ever write a compiler for it, especially not me, who has some kernel development to do first, this is a waste of time in the purest sense of the expression. I could have spent all that time working on my paging management classes ^^’