With highly expressive syntax that is easy to read, write, and maintain, dynamic programming languages like Python and Ruby are extremely conducive to rapid development. Microsoft and Sun Microsystems have observed growing interest in dynamic programming, and plan to integrate more extensive support for dynamic language features in their respective managed language platforms. Elsewhere, check PHP for .NET.
With all these languages being ported to various virtual machines, I guess it won’t be too much longer before people stop writing native apps altogether. This is unfortunate for the end user, who will have to put up with apps that are probably never gonna run as fast, or integrate as well, as their would-be native counterparts. I don’t look forward to a future of running a desktop full of .NET and Java apps, or even worse, Web 2.0 stuff. In fact, I think I’d rather have my balls crushed by a wooden mallet.
I’m sure it’s fantastic for developers, who finally have an easy way to port their slow-ass apps across platforms, but surely the bright minds in the IT industry could come up with a better solution than this.
You can still write apps native to the OS with these. Just not native code.
But forgoing native code, while it usually produces a larger and slower app, is better in the long run for everyone.
Native code has a big disadvantage that interpreted code doesn’t. Even the very best C compilers throw away a lot of semantic information that exists in the source code, simply because there’s no way to exploit it in static machine code. Once the code is compiled, that’s it. Runtime optimization is relegated to complicated and expensive micro-op decoding and reordering in hardware.
The potential exists for interpreted code to take us to new levels of runtime optimization through the use of dynamic, adaptive runtime optimization. In particular, interpreters can take us to new levels of thread-level parallelism that far eclipse the limitations of hardware multithreading.
Native code need not be left out of the party. Through the use of code-optimizing firmware or hypervisors, one can optimize for a variety of different objectives. This has been done before: Transmeta achieved their impressive power efficiency with a firmware “code morphing” technology that performed just-in-time compilation of x86 machine code to produce highly optimized Very Long Instruction Word (VLIW) native machine code.
In short, native code isn’t necessarily faster than interpreted code. The present-day performance delta has more to do with the fact that static compilation has been more extensively studied than has dynamic compilation. The leading C compilers are pushing the limits of the technology, whereas the leading JIT compilers are barely scratching the surface.
If you’re at all interested in dynamic JIT compilation, check out Psyco (not my project or anything):
http://psyco.sourceforge.net/introduction.html
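For the curious, this is roughly how Psyco gets hooked into a program, going by the introduction at that link. A minimal sketch; the hot loop below is just an example of my own, not from the Psyco docs:

```python
# Minimal sketch of using Psyco (Python 2.x era), per the introduction above.
# Psyco observes the types actually flowing through a function at run time
# and emits specialized machine code for them.
try:
    import psyco
    psyco.full()            # ask Psyco to specialize everything it can
    # or, more selectively:  psyco.bind(sum_of_squares)
except ImportError:
    pass                    # no Psyco: fall back to the plain interpreter

def sum_of_squares(n):
    # A tight numeric loop -- the kind of code Psyco speeds up most.
    total = 0
    for i in range(n):
        total += i * i
    return total

print(sum_of_squares(10 ** 6))
```

The connection to the post above: the specialization is driven by types observed at run time, which is exactly the information a static compiler never gets to see.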
I doubt with all my heart that a JIT can outperform a compiler. Look at the instructions the compiler generates; it’s difficult to match that even with hand-written assembly unless you know the internals of the processor exactly.
A JIT, on the other hand, is code generating code. That code generator runs on the processor, so it can never perform better than a well-optimized compiled binary that doesn’t need to generate any code, because it was all pregenerated by the compiler.
JITting is necessary for virtual machines to perform better, but I fail to see how they could make a program perform better than a natively compiled one, unless the natively compiled one is poorly written or uses the wrong algorithms; but that applies just as much to code generated for virtual machines.
The idea is that a bytecode compiler on the client platform can generate machine-specific optimizations, such as using SSE2 or AltiVec extensions when the processor supports them, and expanding the same operations into non-SIMD equivalents on older machines that don’t. This gives processor developers greater flexibility to add extensions to the machine language of their processors and expect the new hardware to actually be used.
When MMX first came out nobody used it because most people didn’t have MMX on their systems. Nowadays practically everybody has some sort of extension to the native instruction set of their processor. Bytecodes that support these extensions will help them gain acceptance while boosting performance at the same time.
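You don’t strictly need a JIT to get the “use the fast hardware if it’s there” behaviour, though. Here’s a loose, user-level analogy in Python; the function names are mine, and a JIT would of course make this choice at the instruction level rather than the library level:

```python
# Loose analogy: pick the fastest implementation available on this machine
# at run time, with a portable fallback -- what a JIT does per instruction
# (SSE2/AltiVec vs. scalar code), done here at the library level instead.
try:
    import numpy                    # usually built with SIMD-capable kernels

    def dot(xs, ys):
        return float(numpy.dot(xs, ys))
except ImportError:
    def dot(xs, ys):                # plain, portable fallback
        return sum(x * y for x, y in zip(xs, ys))

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0
```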
“A JIT, on the other hand, is code generating code. That code generator runs on the processor, so it can never perform better than a well-optimized compiled binary that doesn’t need to generate any code, because it was all pregenerated by the compiler.”
In short-lived applications that may very well be true, but on long-running applications Java already beats native code in some cases.
A static compiler is limited by what it can see at compile time. It can’t make assumptions about the run-time behaviour of the code. Therefore it loses a lot of optimization opportunities.
For instance…
– A C compiler may inline small functions, but it won’t do it for larger functions that are called from multiple places. A JIT compiler may notice that 99% of the time it is called from a particular function, and inline it there.
– On code hot spots a JIT compiler may notice that some variables are used more than others, and keep them in registers, even if that means always doing a LOAD/STORE for others.
– A JIT compiler cooperating with a garbage collector may move deallocations out of an otherwise tight loop to improve cache efficiency.
Just like branch prediction and some other run-time optimizations by the CPU itself can provide big performance gains, so can a JIT.
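To make the “exploit what you observe at run time” idea concrete, here is a toy inline cache written in plain Python. Real JITs do this at the machine-code level and then inline through the cached target; this sketch (all names invented) only shows the principle of a call site learning its dominant case:

```python
# Toy "monomorphic inline cache": after observing which handler a call site
# actually uses at run time, shortcut straight to it.  A static compiler
# cannot do this, because the winning type is only known at run time.

def handle_int(x):
    return x * 2

def handle_str(x):
    return x + x

HANDLERS = {int: handle_int, str: handle_str}

_cached_type = None
_cached_handler = None

def dispatch(x):
    global _cached_type, _cached_handler
    t = type(x)
    if t is _cached_type:              # fast path: the common case in practice
        return _cached_handler(x)
    _cached_handler = HANDLERS[t]      # slow path: full lookup, then cache it
    _cached_type = t
    return _cached_handler(x)

print(sum(dispatch(i) for i in range(1000)))   # stays on the fast path
```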
Hey Butters, “you’re good at many jobs”, but I modded you up this time.
The whole runtime vs. compile-time thing is a farce. I sit at home (and so do you) with so many damn wasted cycles that the idea that we need to account for every one of them is ridiculous.
By the same token, even during computationally intensive tasks, compilation for instance, we spend so many cycles blocking on I/O requests. To answer sbenitezb, this is where the JIT compiler gets the cycles it needs to do its work.
Taking this into account, there are two situations where JIT compilation incurs a processing overhead. The only throughput overhead occurs when the interpreter is running enough CPU-bound, asynchronous threads that all threads are never blocking at the same time. This kind of workload is extremely rare. Although read-only transactional systems (databases and web servers) are massively and asynchronously threaded, they’re also I/O intensive, so we can hide the JIT cycles. The only workload that forces the JIT compiler to block its own work is high-performance computing, where programmers typically write in assembler, possibly inlined in C or FORTRAN, and carefully parallelize the algorithms.
The more common causes of JIT overhead result in added latency. This happens on process creation, when, at the very least, the JIT needs to compile some of the bytecode before the interpreter can spawn the process. This also occurs when a process wakes up due to an external event or function call, since it won’t have any queued code to execute.
That second point means that JIT compilation is inappropriate for many parts of an OS kernel unless you’re willing to trade performance for runtime security checks. This doesn’t mean that kernels can’t be written in interpreted languages, it just means that they should be statically compiled, like Microsoft’s Singularity project.
Mod up dude!
Except that currently the runtime optimisations are not saved, so they must be redone each time you start the application, making client applications slow to start.
That, plus the fact that the JIT must do its optimisation while the program is running, whereas a normal compiler can spend hours optimising if it wants to, means that the JIT cannot really take full advantage of this data.
IMHO a profile based compiler is the best of both worlds.
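A purely hypothetical sketch in Python of the “don’t throw the profile away” half of that idea (invented file name and decorator; real PGO toolchains and persistent JIT caches work differently): record which functions were hot in this run, and reload that record at the next start so the warm-up isn’t repeated.

```python
# Hypothetical sketch: persist a run-time profile between runs so the next
# start-up can warm up its hot spots immediately instead of re-learning them.
import atexit
import collections
import json
import os

PROFILE_FILE = "hotspots.json"      # invented file name

call_counts = collections.Counter()

def profiled(func):
    """Count calls; a real system would recompile/specialize the hot ones."""
    def wrapper(*args, **kwargs):
        call_counts[func.__name__] += 1
        return func(*args, **kwargs)
    return wrapper

def load_profile():
    """Names of the functions a previous run found to be hot, if any."""
    if os.path.exists(PROFILE_FILE):
        with open(PROFILE_FILE) as f:
            return set(json.load(f))
    return set()

def save_profile(top_n=10):
    with open(PROFILE_FILE, "w") as f:
        json.dump([name for name, _ in call_counts.most_common(top_n)], f)

atexit.register(save_profile)       # write the profile out on exit
hot_from_last_run = load_profile()  # e.g. pre-specialize these at start-up
```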
Transmeta’s ‘impressive’ technology was so impressive that they got hammered in every comparison I’ve seen…
But forgoing native code, while it usually produces a larger and slower app, is better in the long run for everyone.
Pardon? Given the diminishing returns on CPU performance, I doubt anyone will notice the difference between a well-written managed application and an unmanaged one.
Managed code is very immature at the moment, but I’ll put money on it: when 3GHz Core 2 machines are commonplace, with dual core considered ‘low end’, the difference in performance and snappiness to the end user won’t even be noticeable.
Heck, the latest Lotus Notes 7 is based around Eclipse/SWT. If IBM can get an application as large and complex as Notes running on that platform with reasonable performance, I’m sure any company can achieve it.
As a side note, the new Microsoft Office 2007 uses WinForms for its user interface front end; so even now there are applications exploiting the power of managed code without any noticeable effect on performance.
I never said the difference would be noticeable.
“As a side note, the new Microsoft Office 2007 uses WinForms for its user interface front end; so even now there are applications exploiting the power of managed code without any noticeable effect on performance.”
It’s about time MS eats its own .Net dog food. It’s been, what, six years since the inception of .Net?
True, VS started using .Net 2 or 3 years ago. But it’s easier to introduce a VM-based app to developers, who understand and appreciate the cost/benefit analysis of VMs and are more willing/patient to put up with slower start-up times and more memory consumption, than it is to end users, who want stuff to just work and not have their system slowed down.
It’s the same with Java/Swing/SWT. There are lots of excellent Swing- or SWT-based IDEs, like NetBeans, Eclipse, JBuilder, JetBrains, jEdit, etc., but not many commercial and/or open source Java/Swing/SWT-based end user apps (but that is now starting to change).
Bottom line is developers, and especially commercial software companies, have preferred native compiled code for their products for good reason. They don’t want to risk pissing off their end users with VM overhead.
But that is gradually changing. As older hardware gets thrown on the junk heap, and as Java/Swing/SWT and .Net/WinForms and Mono/GTK# become better optimized, and as JIT compilers get better, the difference in overhead and performance gradually becomes less and less noticeable, and less and less relevant.
It’s about time MS eats its own .Net dog food. It’s been, what, six years since the inception of .Net?
True, VS started using .Net 2 or 3 years ago. But it’s easier to introduce a VM-based app to developers, who understand and appreciate the cost/benefit analysis of VMs and are more willing/patient to put up with slower start-up times and more memory consumption, than it is to end users, who want stuff to just work and not have their system slowed down.
Office 2007 doesn’t represent the first time MS has used .NET in non-developer products.
Office 2003 Outlook Business Contact Manager
SQL Server Reporting Services
Windows’ Media Center interface
are a few examples. Many of their server products also use .NET.
Just as a side issue to that, IMHO the idea of calling it .NET quite frankly was a mistake, in that many *assume* it is just for web-based or network-based applications, when in reality it is a whole new framework/set of APIs which can actually replace Win32 as a platform to program for.
For me, I don’t look at .NET as a competitor to Java; instead, I view it as a replacement for Win32. If we look at it as a Java replacement, then sure, you’re going to be let down by virtue of the fact that Microsoft hasn’t fully implemented the .NET framework on other platforms.
As a Win32 replacement, however, it can be viewed as a great leap forward, given that, for example, WinForms will finally replace the myriad of different toolkits in Windows; instead there will be one resolution-independent, modern toolkit that’ll bring a level of consistency to the Windows platform.
“…when in reality it is a whole new framework/set of APIs which can actually replace Win32”
Actually, as I understand it, much of the .Net libraries are wrappers for Win32, with additional pure .Net libraries.
So it’s not a complete replacement for Win32, per se, but rather an extension.
But, on surface level, as a framework and API from the developer’s perspective, it is a replacement. It’s just that under the hood there is still some Win32.
Actually, as I understand it, much of the .Net libraries are wrappers for Win32, with additional pure .Net libraries.
This is partially true. Some libraries do wrap Win32 (e.g., parts of Windows Forms), others do not (e.g., System.XML). The API replacements formerly known as WinFX are likewise not just Win32 wrappers.
“Office 2007 doesn’t represent the first time MS has used .NET in non-developer products.
Office 2003 Outlook Business Contact Manager
SQL Server Reporting Services
Windows’ Media Center interface “
Those are not exactly bread and butter apps for MS, now are they?
But it is a start.
The point is, they’re gradually introducing .Net based end user apps when it’s appropriate and are not adopting it across the board (a sensible thing to do).
But their marketing keeps saying .Net is the be-all and end-all, just like Sun does with Java. That’s okay; the vendors spent big bucks on their respective initiatives, and they want as much adoption as possible.
But the bottom line is, in spite of the sugar coating marketing, it boils down to the best tool for the job. VM languages are very good for certain things, and compiled languages are very good for other certain things.
Don’t drink the big-vendor marketing (or OSS evangelism) kool-aid; use what best fits your given situation.
Those are not exactly bread and butter apps for MS, now are they?
But it is a start.
The point is, they’re gradually introducing .Net based end user apps when it’s appropriate and are not adopting it across the board (a sensible thing to do).
But their marketing keeps saying .Net is the be-all and end-all, just like Sun does with Java. That’s okay; the vendors spent big bucks on their respective initiatives, and they want as much adoption as possible.
The difference in the marketing of .NET vs Java is that MS’ emphasis has been on using managed code for new development or mixing managed and unmanaged code in legacy or (if necessary) performance-critical applications. It’s not an all-or-nothing situation and the amount of migration, if any, is dependent on individual project needs.
With that said, the big push towards using managed code is that, due to its verifiability, you are open to more usage scenarios while maintaining security. Plus, starting with Vista, the managed/unmanaged interop situation reverses for accessing many platform services. This situation will only grow as .NET replaces Win32. This is the bread and butter, as both end users and developers will be affected.
With that said, the big push towards using managed code is that, due to its verifiability, you are open to more usage scenarios while maintaining security. Plus, starting with Vista, the managed/unmanaged interop situation reverses for accessing many platform services. This situation will only grow as .NET replaces Win32. This is the bread and butter, as both end users and developers will be affected.
IIRC, large parts of Windows Vista have already been rewritten using the .NET Framework; the new WPF, for example, is based on it.
As much as I would love to see the legacy code from Windows purged from the tree, at the same time, I realise that the key to Microsoft success is the ability to run rickety and old applications on the latest copy of Windows.
I just hope that by the time Vienna ships, there will be a sufficient number of vendors who have made the big step of dragging their applications kicking and screaming into the 21st century instead of the ad-hoc crap they’re doing right now which is causing all manner of problems in regards to running under the UAC permissions.
It’s the craftmanship I’ll miss most.
You have C/C++, Qt/Gtk/WxWidgets. I will keep my responsive and fast KDE desktop with natively compiled applications for most of my tasks. The web is not for applications, it is for information. I hope this Web 2.0 thing quickly vanishes as just another attempt to complicate things and reinvent the wheel.
You have C/C++, Qt/Gtk/WxWidgets. I will keep my responsive and fast KDE desktop with natively compiled applications for most of my tasks.
Last I checked, KDE is built on Qt and its odd dual-licencing scheme that will keep its open-source licence bound to Posix-like OSs for eternity.
Personally I’d rather see a wxWidgets that uses compile-time polymorphism on an LLVM-based packager that will compile the bytecodes natively on the destination platform for maximum optimization and then store the native code on the hard drive at install time.
Last I checked, KDE is built on Qt and its odd dual-licencing scheme that will keep its open-source licence bound to Posix-like OSs for eternity.
Hasn’t the GPL Qt been available on Windows for some time now?
Now that I check Trolltech’s website they do have open-source versions for Windows and Mac as well as X11. That still doesn’t help everybody but it’s a step forward. I guess I’m still partial to wxWidgets though.
“Now that I check Trolltech’s website they do have open-source versions for Windows and Mac as well as X11. That still doesn’t help everybody but it’s a step forward. I guess I’m still partial to wxWidgets though.”
Well, that’s a problem, being partial. I’m partial to Qt. But I also give credit to Gtk and WxWidgets because both deserve it.
I would probably use wxWidgets if it weren’t for the flickering when resizing.
And I would probably use Qt 4 too if the library weren’t so damn huge. A minimal hello world application becomes enormous.
.NET is easily the best choice for small applications.
“And I would probably use Qt 4 too if the library weren’t so damn huge. A minimal hello world application becomes enormous.
.NET is easily the best choice for small applications.”
Yes, .NET is really small :p
.NET is easily the best choice for small applications.
With a whopping great runtime and overhead to go with it. Hmmmmmm.
“Last I checked, KDE is built on Qt and its odd dual-licencing scheme that will keep its open-source licence bound to Posix-like OSs for eternity.”
Last time you checked may have been 1999 or something like that. Qt has GPL license and works not only on Posix-like OSs, it works in Windows too.
Last time you checked may have been 1999 or something like that. Qt has GPL license and works not only on Posix-like OSs, it works in Windows too.
I know QT works on Windows and Mac as well as Posix OSs. I just hadn’t seen that they released those versions as open source. The previous time I checked was indeed over a year ago and at that time the only free versions for those platforms were closed-source 30-day trials.
I know QT works on Windows and Mac as well as Posix OSs. I just hadn’t seen that they released those versions as open source. The previous time I checked was indeed over a year ago and at that time the only free versions for those platforms were closed-source 30-day trials.
*shakes head* And you’re a programmer, I assume? That’s almost as terrible as the ‘programmer’ on ‘Who Wants to Be a Millionaire’ who didn’t know what LCD stands for.
Yes, the 4.x series is available, in source form, for Windows, Mac OS X and X11; in fact, the Mac OS X version has been available since the 3.x series, IIRC.
You can emerge that in gentoo.
You’re right – and it’s amazing how many people are starting to run their “native” software on virtual hardware in order to abstract it from the underlying OS/hardware. At least that way a poorly-written native app doesn’t crash the machine right?
If a virtual server falls in a virtual forest and there is no sound device…
We heard the same thing in the 80s regarding assembly vs. C. Nowadays, unless you’re Michael Abrash, you’re unlikely to globally out-optimize a fast C compiler.
At least C# (far from my favorite language) gives the ability to bypass bounds checking.
Don’t worry, we’ve seen it all before. I think John Harris (from Hackers book fame) has stuck to his guns and is still programming in assembler.
You must never have seen this:
http://produkkt.abraxas-medien.de/kkrieger
It’s 96K, give it a shot and tell me what you think about optimization.
That being said, I think virtual machine languages (platform independent applications) are the future. I’m sick of being stuck on “X” OS due to needing a certain application.
yeah, but that was written in C I believe. It uses Generative maps.
It’s beside the point anyway. Yeah, assembler is cool. All us old guys love bit-twiddling, but it’s stupid to push assembler.
I ran that demo. I’m thoroughly impressed.
Yes, I agree. I’m just saying optimization has its place, but it’s not language-specific. You can optimize anything. I’ve seen extremely quick Java applications (and written them). Java (and most VM languages) just have the startup overhead of the VM itself, the memory overhead of the VM, and right now (Mustang fixes this 90%) the Swing stuff is slow in Java. The processing itself is actually quite fast; lots of sites use Java to handle huge loads.
> It’s 96K, give it a shot and tell me what you think about optimization.
Most (All?) of the hard job is done by the 3D graphics hardware, not Java, at least that’s my guess (I can’t run the demo because I don’t have a graphics card that support pixel shaders and stuff like that).
Anyway, recently I wrote a Game Boy emulator in Java and then translated it to C++/SDL. The C++ code runs 3.5 times faster than Java. Not bad for Java, IMHO.
Oh boy. VM’s are mentioned and people immediately start screaming about how slow they are…
That’s just a load of crap! Neither Java nor .NET is truly slow.
If you want an example, just try http://www.map24.com . This stuff is a Java applet, in 3D and with live data from the net. You can hardly say that this is slow!
Integration with the OS gets better with every release. .Net especially does a good job (at least on the Windows platform). There really is no difference between running a native or a VM application there.
They don’t necessarily make an application easier to port to other operating systems, though. Having a version of the virtual machine for other OSes is only half of the problem. The developer has to use APIs that aren’t bound to any particular OS. Example: if a developer uses WinFX, it’s not going to work on any operating system but Windows.
The GREAT thing about virtual machines is that they can make the system more secure by sandboxing the application. They also allow the application to run on other instruction sets as long as there is a virtual machine (and the same libraries). Virtual machines allow the exploitation of instruction set extensions and the use of different architectures altogether. It would be nice to be able to move away from x86 eventually.
“In fact, I think I’d rather have my balls crushed by a wooden mallet.”
I couldn’t have possibly put it better.
Running a bunch of Java, .Net, and Web apps is an exercise in total frustration. Even modern, fast machines with tons and tons of memory can be brought to a crawl by all the VM crap.
Don’t get me wrong, I’m quite fond of both Java and .Net (particularly Mono), and I like the dynamic languages like PHP, Ruby, Perl, and Python.
But those mostly bring benefits to the developer (easier to use, faster to develop with). They bring virtually zero benefits to the end user, other than greatly reducing the memory leaks and buffer overruns that occur more commonly in C/C++ programs.
But the extra time and effort it takes to eliminate memory leaks and buffer overruns is worth it in terms of the much faster end result.
…will directly be addressable by the native instruction set of their host processors, maybe assembly programming will make a comeback. In the meantime we’ll need a way to switch our existing code to the new instruction sets and bytecodes are one way of doing that.
Sorry to be frank, but what brain dead moron would want assembler to make a comeback?
There are patterns that are universally accepted that were the result of high level languages in the first place. Crossplatform was not the reason for high level languages.
@Lambda
Assembly!=Assembly
I have programmed in MIPS and 68000 series assembly and can tell you that they are light-years ahead of Intel’s assembly in terms of ease of programming. I want THOSE assembly languages to make a comeback. Intel x86 can sit and rot for all I care.
I want assembly to be promoted to the point that it’s almost a high-level language itself. That would eliminate the need for compilers and interpreters and even some OS features such as device drivers.
My idea of a good bytecode is to implement those features in a portable fashion so that high-level languages would become simple macro-libraries and front-ends for the assembler. I can see LLVM doing something like that since it’s a lot like MIPS assembly and HLVM will help implement the dynamic HLLs in LLVM.
.NET seems to be keeping up financially with its backing from Microsoft but I think LLVM is better thought out and Java may find itself running on LLVM before too long via GCJ. The fact that Apple seems to be backing LLVM helps too.
I have programmed in MIPS and 68000 series assembly and can tell you that they are light-years ahead of Intel’s assembly in terms of ease of programming. I want THOSE assembly languages to make a comeback. Intel x86 can sit and rot for all I care.
Big deal. I programmed MIPS in college and programmed in 6809 at work when we needed someone to pick up the slack. Intel’s instruction set is irrelevant to the discussion.
I want assembly to be promoted to the point that it’s almost a high-level language itself. That would eliminate the need for compilers and interpreters and even some OS features such as device drivers.
Then it’s not assembler or you want micro-code to compile this assembler for you. In any case, it’s brain damaged thinking to want to reinvent the wheel for high-level constructs that are universally accepted.
Hell, we’re not even close to the high-level constructs that we really need. Why does anybody in their right mind want to dick around with going back to the machine level. Most of us aren’t masochists.
.NET seems to be keeping up financially with its backing from Microsoft but I think LLVM is better thought out and Java may find itself running on LLVM before too long via GCJ. The fact that Apple seems to be backing LLVM helps too.
Well, you finally make a bit of sense. There are disadvantages to targeting a runtime – mainly that you have to constrain yourself to the runtime. But there are costs and benefits to everything.
What’s the point of spazzing out and saying we should all go back to assembler? Nobody cares.
Then it’s not assembler or you want micro-code to compile this assembler for you. In any case, it’s brain damaged thinking to want to reinvent the wheel for high-level constructs that are universally accepted.
I want the bytecode to compile this assembler for me. Specialized hardware can often beat software in terms of performance.
Someone also mentioned in the ATI/AMD Open Source Driver thread that AMD is planning a dedicated HyperTransport bus that will allow dedicated coprocessors to be plugged into standardized sockets. This will allow assembly to go multilithic with its instruction set and become a higher-level language than it has ever been before. Machine language won’t be stuck with just one hunk of silicon, since it can be spread out across dedicated asymmetric cores.
Just imagine going from an engine with one or two cylinders straight to a V8! Sure, it will take some getting used to if you’re used to engines that power riding lawnmowers, but a V8 can push a really big car or truck by comparison.
Personally I think this is the next big break for the computer industry. OSs and runtime libraries will find themselves embedded in microcode ROMs with dedicated integer units, and the smaller the microkernel the better. This could mean software becomes the prototype for the next generation of hardware. Bye bye bloatware. Hello parallel instruction pipelines of increasingly varied types.
As for not making sense, sure I get ahead of myself every now and then, but if you keep prodding you’ll eventually get a straight answer.
I want assembly to be promoted to the point that it’s almost a high-level language itself.
You might be interested in HLA, aka High Level Assembly. Seems a contradiction in terms, but the basic idea is an assembler and extensive support for (custom) macros, and some programming shortcuts.
http://webster.cs.ucr.edu/
You might be interested in HLA, aka High Level Assembly. Seems a contradiction in terms, but the basic idea is an assembler and extensive support for (custom) macros, and some programming shortcuts.
Thanks, but I’m currently on multiple processor platforms already and HLA is x86 only. I’ll stick with Low Level Virtual Machine. http://llvm.org/
I’ve got another virtual machine called AmigaDE that tried to be what I’m looking for but it wasn’t good enough either. It only supported little-endian processors.
Since we’ve gone down the assembler route….
Listen, we all love the romanticism of hand coding instructions. It’s over though.
OK? Not? no, it is!
Us people in our mid/late 30s and beyond say it is. Give it up. Yes, we do assembler for bootstrapping, but that’s it.
Goddamn, C is dead except for systems programming
“Goddamn, C is dead except for systems programming”
That’s like saying building foundations is dead, except for building houses, offices, skyscrapers, warehouses, etc.
True, most programming jobs are for customized, internal corporate apps, where the big gains in productivity, improved security, and reduced memory leaks/buffer overruns are worth the price of the long start-up times and huge memory usage that VM languages bring. Here Java and .Net, and to a lesser extent dynamic languages, rule the day.
But all of that runs on some system. OS’s are written in C/C++, drivers are written in C/C++, hell, even most VMs themselves are written in C/C++.
C and C++ and assembly will always be relevant. They are the only viable tools for building computer system foundations (OS, drivers, VMs, etc).
Why not simply build an additional core into the CPU so that it executes bytecode natively? Of course it wouldn’t be a virtual machine any more, but one should get a speedup.
People have done that many times, e.g. CPUs specially designed for Java.
Didn’t work very well.
Sorry for that title, but the discussion went the usual OS route and that VMs are too slow to program a kernel.
Yes, it is true, but how many kernels do you program in your life? It is more likely that you will write a dozen DB apps or web apps than one line of kernel code. Nowadays, and it has been like that for years, most programs are written in JITted, interpreted or semi-interpreted languages.
C has been relegated to systems programming mostly (besides legacy maintenance).
Web apps, and end user apps generally, are where VM-based languages really shine. Imagine being able to program a program which does not crash for years within a very short timeframe (does not crash from day one); this has been possible in Smalltalk and Lisp forever, and is basically where this stuff really shines. 90% of all apps idle 99% of the time waiting for user input, so there goes the speed argument.
It is more important that a language is programmable (which C++ clearly is not), that the results are stable within a short period of time (which is where C utterly fails), and that the tools are in place and do not cost a fortune. As for the speed argument, VM-based languages are clearly not fast enough to host an OS on top of them, but they definitely are fast enough for most end user applications, and their scalability under parallel server loads has been proven for ages.
No sane person nowadays would even consider using C++, assembler or C for a typical server application unless there are serious technical constraints which enforce this stuff (certain parallelism requirements, certain real-time requirements).
> …Imagine being able to program a program which does not crash for years within a very short timeframe…
Java, .NET, etc. programs don’t crash, yes, but they throw runtime exceptions when something fails. That’s better than crashing the whole machine, but the application is still buggy no matter whether it’s written in Java, C, or assembler. The problem is the programmer or developer, not the language or platform. It seems that a lot of people think that because they are now using a VM, all their problems/bugs will magically disappear.
Caucho has a servlet engine that translates PHP scripts ….
http://www.caucho.com/resin-3.0/quercus/
Just in case anyone’s interested ….
“Oh boy. VM’s are mentioned and people immediately start screaming about how slow they are…
That’s just a load of crap! Neither Java nor .NET is truly slow.”
Sorry, but it is 2006 now, and using a VM for desktop apps is still a huge step backwards for everybody:
– pure speed at runtime is unessential for 90% of app usage time (user is looking at gui and moving mouse, or reading text anyway)
– slow app startup time, while loading and initializing the VM is otoh very noticeable and annoying
– VM runtimes/classes that cannot be shared/reused by concurrently running apps consume gigs of memory. Not everybody has a developer workstation just to read email while surfing the web and editing spreadsheets
– even if using only one runtime per technology, we could still end up running all of these at the same time: JVM, GRE, CLR/MONO, PHP, PERL, PYTHON/PARROT
– desktop integration sucks badly (and I’m not talking only about the app skin)
– while a VM should isolate the app from differences in the underlying OS, coders write bad apps that become tied to specific versions of a VM. This forces the complete VM to be packaged along when distributing the app. This means that I have about 5-10 JVMs on my disk, all eating up space and not benefiting from security fixes when I upgrade the only one that I installed by hand
My 2c…
These are all challenges, but none of them are insurmountable. Many of them haven’t been addressed because they’re desktop-centric issues that are not so important in the server-apps space. As Java and Dot Net move more into the desktop realm, I’m confident we’ll start seeing solutions to these issues.
– I agree that pure speed isn’t so important. Smart multi-threading can have a much bigger impact on perceived performance
– Sun has done some good work on addressing the startup time issue, but for some reason hasn’t released it. I’m not sure if it ever will, but it shows that it’s possible to address this issue:
http://java.sun.com/developer/technicalArticles/Programming/mvm/
– The approach referenced above (JVM Isolates) addresses the issue of classpath sharing as well
– There’s no reason that PHP, Perl, Python, etc. can’t run on either the CLR or on a JVM (in fact, many already run on the CLR). And, if you (or your IT organization) standardize on either a Java platform or a Dot Net platform, you may end up running only a JVM or a CLR environment
– Yes, desktop integration sucks badly. But it’s getting better, and there’s no reason it couldn’t be good. Plus, if you imagine a world in which most of your desktop apps run on a JVM (perhaps your desktop itself is running in a JVM), then the desktop integration problems go away. If Firefox can get Java and C, C++, etc. to talk to each other in the browser (with XPCOM), there’s no reason it can’t be done on the desktop.
– I agree that it’s inexcusable for an app to run on JRE 1.1 but not on JRE 1.5, just like I think it’s inexcusable for an app to only run on Windows 3.1 and not Windows XP, which is as inexcusable as an app running with libc5 but not glibc 2.1. Developers of successful apps make sure that they can run on a recent, common platform. BTW – often apps that “require” an older JVM may actually work fine on a newer one but just haven’t been tested. Generally, a new JVM release will not break an existing app.
I think that soon we’ll see developers using a high level language that translates into native code. This way everyone will be happy.
I’ve seen projects out there to do such things. One was a program that would translate Python code into C++, which could then be compiled to native code. The other day I saw a new project to develop Gnome apps in a C#-like language that would be compiled into C code or something (http://www.paldo.org/vala/).
So hopefully it won’t be necessary to choose between an easy/fast language for developers but slow for users, or a slow/difficult language for developers but fast for users. The future is to make the best of both worlds compatible.
I think that soon we’ll see developers using a high level language that translates into native code. This way everyone will be happy.
Personally, I see that as a very sensible suggestion. However, it should be pointed out that higher level languages are nothing without a good framework backing it up. C++ on its own, not great. C++ with Qt on the other hand works great, and C# would be nothing without the .Net framework.
GCJ has this with the option to compile natively through GCC, and I’m not sure why something like Mono didn’t do this, to be honest (apart from reasons of supposed ‘purity’).
Some high-level languages can’t be compiled to native code. Python is one such language, 90% of it depends on run-time magic. You can compile something that looks like Python, but everything that makes it interesting and productive cannot be used.
IronPython actually makes this very clear. It is only 1.5x faster than CPython on the standard Python benchmarks, even though the .NET VM is A LOT faster(*).
(*) The CPython VM does very little to optimize bytecode generated by the interpreter. Actually, it would be very interesting to see what would happen if the CPython VM were as smart as the JVM or the .NET VM.
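For readers wondering what that “run-time magic” looks like in practice, here are a few of the features that make whole-program ahead-of-time compilation of Python so hard. This is ordinary Python, nothing exotic:

```python
# A few of the dynamic features that defeat ahead-of-time compilation of
# full Python: the "shape" of classes and even the code itself can change
# while the program is running.

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
p.z = 3                                # attributes added per instance, at run time

def length(self):                      # methods bolted onto the class after the fact
    return (self.x ** 2 + self.y ** 2) ** 0.5
Point.length = length

code = "p.x + p.y + p.z"               # code built and evaluated at run time
print(eval(code))                      # -> 6

print(getattr(p, "length")())          # names resolved by string lookup
```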
Some high-level languages can’t be compiled to native code. Python is one such language, 90% of it depends on run-time magic.
Dynamic languages do not need JIT compilers or interpreters. Common Lisp is at least as “dynamic” as Python and runs in its own VM, yet it’s been compiled ahead of time for 2 or 3 decades now. It performs quite well too, according to the “Language Shootout”, without “hotspot/psyco” runtime optimizations.
(My opinion is that runtime optimizations can go at most as fast as a profiled, then optimized, statically compiled program.)
This doesn’t necessarily mean that Common Lisp is a better language than Python and Ruby, but the latter languages lack implementation maturity.
PS As others have suggested, if JIT compilers are fast enough to compile a program before/while it’s run, they might as well do it during installation and keep the start up time low.
“Dynamic languages do not need JIT compilers or interpreters.”
I didn’t say they *need* interpreters. However, at least in Python’s case, static compilation
won’t yield a significant performance gain. In fact, the dynamic nature of the language makes it much better suited for run-time optimizations only possible in an interpreted environment. Also, JIT compilers optimized for static languages such as C# don’t yield much of a performance gain either (otherwise, IronPython would smoke CPython in every test).
The post I was replying to implied somewhat that the performance problems of Python could be fixed by compiling it to native code. In fact, the opposite is probably the real way to go, improve the current interpreter.
I don’t know about Common LISP, but I would be interested to see the performance difference between interpreted and compiled code.
However, at least in Python’s case, static compilation
won’t yield a significant performance gain. In fact, the dynamic nature of the language makes it much better suited for run-time optimizations only possible in an interpreted environment.
That’s the point I’m trying to make – languages like Python and Ruby are not as special as they’d like to think (I’m a Ruby fan myself). They face certain challenges that have actually been solved for some time now.
Also, JIT compilers optimized for static languages such as C# don’t yield much of a performance gain either (otherwise, IronPython would smoke CPython in every test).
I don’t know about Common LISP, but I would be interested to see the performance difference between interpreted and compiled code.
According to my old officially unofficial, lousy benchmarks interpreted Lisp (by the “clisp” interpreter) performs in the scale of Python and Ruby and compiled Lisp (by SBCL) performs at least as fast (probably faster) than Sun’s Java.
It needs its own “special” VM and compiler tricks to perform its best (Lisp-to-C isn’t transparent enough) – but that’s what this thread is about, making existing VMs more friendly towards dynamic languages.
On the other hand, it took about 25 years before the first decent compilers appeared for Lisp. For all this time, Lisp was considered an interpreted language by its nature. Let’s hope that it doesn’t take the same time for Python and Ruby. Of course, they can take a shortcut and check out the existing technology and experience.
One thing that confuses me is that LISP is faster than perl but uses more memory.
I always thought Perl was trading memory for speed wherever possible, or am I wrong?
There was some discussion going on about whether or not these design decisions needed to be reconsidered for Parrot, if I remember correctly…
I’ve seen lots of arguments for and against JIT runtimes but what about AOT compilers for bytecodes? LLVM supports both JIT and static compilation thus making the bytecode possibly part of a packager that would finish up the optimization at install time. Would that be the best of both worlds? I think it would be.
One thing to remember about intermediate codes is that any compiler (such as GCC) that can be retargeted to different processors uses a bytecode internally to do its non-processor-specific optimizations. Letting the processor-specific backends run on the client side would only slightly slow down the install, which is already slow anyway (due to disk access), and would still be faster than recompiling the whole app from source.
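Python’s standard tooling gives a much weaker taste of the same install-time idea: packages are routinely byte-compiled when they’re installed, so the parse/compile step isn’t repeated at every start. The sketch below only goes down to Python bytecode, not native code, so treat it as an analogy for the LLVM-at-install-time proposal rather than an implementation of it (the path is made up):

```python
# Install-time byte-compilation with the standard library: an analogy for
# finishing code generation on the target machine rather than at run time.
import compileall

# Typically invoked from a package's install step; this path is invented.
compileall.compile_dir("/usr/lib/myapp", quiet=1)
```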
Had the same idea a while ago…
But I don’t really know if compile time would only slightly slow down the whole process.
g++ is not exactly fast at compiling stuff.
At least for some of my small programs (~500 lines) it took about five seconds, and the slowdown seemed to grow worse than linearly with the number of lines.
Of course you could always choose a faster compiler, say tinyC, that might compile ten times faster at best but then the generated code will be slower.
It would probably approach the speed (or lack thereof) of a JIT-compiler as you increased compile speed further and further…
Thanks, RandomGuy, for your reply. Your comments made me realize that I had failed to fully indicate the method of compilation I was describing.
Here’s the process I’m suggesting: GCC compiles to a bytecode on the developer’s platform, and the bytecode is compiled to native code at install time. GCC is slow because it does a lot of macro-optimisation internally using its internal bytecode. The only compilation necessary on the client side would be the final back-end code-generation stage of the compiler.
Hmm, I think you explained it quite well.
It’s just that I don’t know all that much about the compiling process so I didn’t know where most of the time was spent.
I often heard guys complain that GCC throws away type information only to recover it incompletely (e.g. autovectorisation). Do you know at which point exactly this loss of information happens?
In other words: Is all of the type info still available after compilation to bytecode?
I like the idea of having a high level VM type language that can be compiled to native machine instructions.
True, you can lose some runtime optimization (one of the benefits of VM languages). But you gain much faster start-up times and greatly reduced memory usage.
But, as someone has already alluded to, C++ with QT is very much like programming in a high level language.
Heck, C++ with the STL, and boost, and smart pointers, etc, is like programming in a high level language.
It doesn’t completely eliminate the need for manual memory management, but the library containers and smart pointers make it much, much, much easier.
And you get a nice fast, great looking app (that blends great with the native environment) that uses minimal memory.
Really, IMHO, VM languages are best suited for server-side web stuff, where you can’t be mucking about with memory management, you need the huge APIs of Java/JEE or .Net, and you need the built-in distributed computing capabilities of a VM.
But for desktop stuff, C/C++ with Qt, GTK+, GTKmm, wxWidgets, etc., simply can’t be beat. With the good libraries (Qt, GTK, wxWidgets), you get high productivity and higher-level constructs, and with C/C++ you get native speed and native look and feel.