“Oracle and IBM today announced that the companies will collaborate to allow developers and customers to build and innovate based on existing Java investments and the OpenJDK reference implementation. Specifically, the companies will collaborate in the OpenJDK community to develop the leading open source Java environment.”
…Step 3: Profit???
I, for one, hope that IBM dominates Oracle and takes them over.
Oracle has some terrible software in its stack (PeopleSoft, for example) that needs to be burned with fire and started over. Maybe Java will be the tool that does it, but I highly doubt it. If there were a clear migration path from PeopleSoft to WebSphere or something decent, that would be a step in the right direction.
Actually, PeopleSoft is not Oracle-developed software. PeopleSoft also works with WebSphere as the app server (as it does with BEA/WebLogic). PeopleSoft is just an ERP system where the database model makes up 90% of the application itself. If you know the model, all you need is an SQL client to maintain the app. Sure, you may need some reporting stuff (Crystal, SQR, etc.) and maybe some PeopleCode, but that is really not mandatory. They are very straightforward.
Which do you want? WebSphere? Or something decent? Because IBM WebSphere is about as far from “something decent” as you can get. We’re a week away from finally pulling the plug on our WebSphere experience.
Wow! I thought that, with the effort IBM should be putting into its software, it would be better than PeopleSoft. According to my sister, who has to deal with PeopleSoft constantly, it lacks fault tolerance and is incredibly slow. Another friend who used to be in the North Dakota college system (whose class signup process is also based on PeopleSoft) said that when the time came to sign up for classes, the software on the server would slow to a crawl.
Is there some intelligent server software anywhere that can cope well with heavy loads when dealing with online record-keeping?
Seriously,
Where is it? This place lit up for fear of Java being hosed after the merger.
I suppose you can continue to whine that it’s GPLv2 rather than GPLv3, but then again you would have to stop using Linux as well, to be consistent.
Was I one of those? I don’t seem to remember. I think I defended Oracle and gave them the benefit of the doubt.
IMHO, Oracle has acted similarly to Sun under McNealy. I really liked Jonathan Schwartz, the guy who opened everything up. I wouldn’t have been surprised at anything Oracle is now doing if it had been done under him.
This Java collaboration is more about sustaining Java than it is about open source. OpenJDK was the official Java source code, and pretty much still is.
Yes, he opened everything up without a business plan and broke the company. I’m sure all the people who lost their jobs think he is just a swell guy.
Sun had problems long before everything got opened up. One did not necessarily cause the other.
John’s plan:
Step 1: Open-source everything, pay $1 billion for MySQL.
Step 2: ????
Step 3: Profit
They would have made more money if they had taken the billions he spent on open source acquisitions and put them in low-risk mutual funds.
His plan sank the company. He even stated early on that having a business model was not important.
I can’t always write from everyone’s perspective, so I try to stick to my own. I know less about running a company than I do about what makes life easier, more enjoyable, and more productive for developers not working for Sun. As the other commenter noted, it’s easy to criticize a failed business, point at a move we disagree with, and blame it for the failure.
I’d suspect (with the previous caveat that I know very little about business) that, much like in medicine, some treatments will cure a man with one disease that would kill a man with another. But that hardly means the treatment is wrong for every patient.
Money can be made from open source; Red Hat is the obvious example.
However, Red Hat has a clear and working business model: selling corporate support.
Schwartz didn’t know how they would make money by open-sourcing OpenOffice, or how the company would recoup the $1 billion purchase of MySQL. Most software cannot be sold using the support model, and Schwartz admitted that he didn’t have a business model that supported his actions.
The guy was a douchebag who walked away with millions while a lot of people lost their jobs. Don’t defend his incompetence just because he open-sourced a lot of software.
I agree that buying MySQL and VirtualBox was rash, and really can’t be explained in the context of a business plan, because the rather obvious purpose of those moves was to garner more interest in a Sun buyout. If you realize this, then it becomes clear that the purchasing streak was a smart move, assuming you supported the idea of Sun being bought out, that is.
I unfortunately was not a big supporter of the idea, and for me it was this change in strategy (starting with the MySQL buyout) which marked Schwartz’s downturn as a leader. However, it is important to realize that this is not the same thing as “not having a business plan”. Schwartz had a business plan the entire time: first, open-sourcing everything and making money off the hardware and the support, and then doing everything possible to get bought out by a bigger company. If he had just stuck with the first plan, I actually think Sun could have still succeeded… but sadly, it’s impossible to say now.
…usually where there is a reason for outrage?
Is this like the “what’s in my pocket” riddle from The Hobbit?
No, but really
What is in my pocket? And will fiddling with it make me myopic?
IBM *shifted* its focus from the Harmony project to OpenJDK, which is great for OpenJDK but not for Harmony. I guess that in light of the recent lawsuit (Oracle vs. Google), IBM saw that there was no viable commercial future in the Harmony project, given its legal status and the impossibility of getting a Java validation kit from Sun/Oracle.
Too bad; the Harmony project is a great project for an alternative JRE that was mostly compatible with the official one.
I love the way “Harmony” seems to be a universal project name for discord:
http://en.wikipedia.org/wiki/Harmony_(disambiguation)#Computing
They can reuse their expertise and code to create or improve a decent D 2.0 compiler. For my taste, that is a good alternative. They could also provide a GUI toolkit and packages for Google Go. However, D is closer to Java.
…but I still don’t trust Oracle.
I just don’t see what all the fuss is about. Why is everyone so hung up on Java? It’s as if Java is the only savior of the world wide web and nothing else could ever replace it.
Some of the reasons for avoiding Java:
1. It’s owned by a large corporation, which is more than happy to sue you over patents, etc. when it doesn’t like the competition.
2. The motto of Java: “Write once, run anywhere”. Well, that’s just marketing nonsense; how many times have you seen problems with different JVMs or JDKs?
3. Java is bloated and a memory hog. There is no secret about that; if your aim is to develop fast, responsive software, then do not use Java.
These days Java is mainly used for web applications. How many C libraries are out there for developing web applications? Hardly any. Is that because C is so much inferior to Java? Nonsense; it’s because there are too many inferior programmers who think that way. Given the right set of libraries, you could develop a web application in C in the same time it would take you with Java.
In case some of you were asleep during your CS lectures, you can develop reusable software in plain C using the object-oriented paradigm; just take a look at the Solaris or Linux kernels.
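[To illustrate what this style looks like, here is a minimal sketch; the counter type and its function names are made up for illustration, not taken from any real codebase. The classic C idiom is an opaque struct whose fields only one .c file can see, with plain functions acting as the “methods”:]

#include <stdlib.h>
#include <stdio.h>

/* In a real project this part would live in counter.h: clients only
 * ever see the opaque type and its "methods". */
typedef struct counter counter_t;

/* And this part would be counter.c, the only place that knows the fields. */
struct counter { int value; };

counter_t *counter_new(int start)
{
    counter_t *c = malloc(sizeof(*c));
    if (c != NULL)
        c->value = start;
    return c;
}

void counter_add(counter_t *c, int n)        { c->value += n; }
int  counter_value(const counter_t *c)       { return c->value; }
void counter_free(counter_t *c)              { free(c); }

int main(void)
{
    counter_t *c = counter_new(10);
    counter_add(c, 5);
    printf("%d\n", counter_value(c));   /* prints 15 */
    counter_free(c);
    return 0;
}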
You were awake when they said that C is not object-oriented, right?
Such arrogance and no knowledge to back it up.
I have enough knowledge to know that you don’t need object-oriented language in order to use objects.
I agree; GNOME and the Win32 API (and cuh… cuh… Motif) are very clear object-oriented designs implemented in C.
Yes, and they are also ugly hacks.
I’d suggest you take a couple of basic C courses, as your comment was -way- off the mark.
Nothing stops you from implementing objects in pure C. Come to think of it, more-or-less any language that supports functions, complex storage [e.g. arrays], and callback functions can implement objects.
Heck, I can even implement objects (including inheritance) in BASH or assembly!
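[A minimal C sketch of that claim; the shape type and its functions are invented for illustration. Bundle data with function pointers and you get dynamic dispatch:]

#include <stdio.h>

struct shape {
    double w, h;
    double (*area)(const struct shape *self);  /* the "method" slot */
};

static double rect_area(const struct shape *self)
{
    return self->w * self->h;
}

static double tri_area(const struct shape *self)
{
    return self->w * self->h / 2.0;
}

int main(void)
{
    struct shape r = { 3.0, 4.0, rect_area };
    struct shape t = { 3.0, 4.0, tri_area };
    /* "virtual" dispatch: the call site doesn't know the concrete type */
    printf("%.1f %.1f\n", r.area(&r), t.area(&t));
    return 0;
}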
Now, whether -you- consider that a true object-oriented language (as it requires some additional footwork to implement objects) is irrelevant.
… I’d think twice before calling someone arrogant.
(Keep in mind that I don’t really agree with the OP)
– Gilboa
Nothing stops me from implementing objects in pure C?
Well, of course I knew that, but what’s your point?
The OO paradigm is a superset of procedural programming, thus it is obvious that the same functionality can be achieved in a procedural language.
In a broader sense, any currently used paradigm can be implemented in assembly, but does that mean we should use assembly for everything?
The question is why would you bother to use C to emulate C++ (unless you have to for some reason, e.g. for backward compatibility or to work around compiler limitations)?
It’s very relevant because it is not a matter of opinion. C is not object-oriented. Period.
Besides, that OO-C thing is just the bit that caught my eye. When someone claims that C is the way to go for web development, I naturally assume that they don’t have a clue. Call it a prejudice, but I’m rarely wrong.
In a world where web apps are barely secure even though they are written in high-level languages like Java, the last thing we need is a return to buffer overflows, dangling pointers, etc.
If you are constantly having problems with buffer overflows and dangling pointers, then maybe you should go back to college.
I’ve written huge amounts of C code: stacks, queues, linked lists, concurrent hash tables, multithreaded network servers that can handle thousands of concurrent connections using I/O multiplexing (poll, kqueue, epoll, etc.). I’ve also written a fair amount of string/text-parsing code, e.g. an HTTP parser, and I didn’t really have any major issues with buffer overflows or dangling pointers.
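[For readers who haven’t seen I/O multiplexing, a toy sketch of the poll() pattern; this is not the poster’s code, and a real server would keep one pollfd per connection and use non-blocking I/O rather than this one-shot echo:]

#include <netinet/in.h>
#include <arpa/inet.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    sa.sin_port = htons(8080);
    bind(lfd, (struct sockaddr *)&sa, sizeof(sa));
    listen(lfd, 16);

    struct pollfd pfd = { .fd = lfd, .events = POLLIN };
    for (;;) {
        if (poll(&pfd, 1, -1) < 1)   /* block until a connection is ready */
            continue;
        int cfd = accept(lfd, NULL, NULL);
        char buf[512];
        ssize_t n = read(cfd, buf, sizeof(buf));
        if (n > 0)
            write(cfd, buf, (size_t)n);  /* echo it back */
        close(cfd);
    }
}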
When someone says C is insecure, or results in buggy code, it is usually the case of an incompetent programmer who blames his tools.
If you think it’s impossible to write reusable software in C, using object-oriented interfaces, then get a book by Eric S. Roberts “Programming Abstractions in C”, you may learn a great deal from it.
Who said that I had problems with my C code? We are talking about the mainstream here, and the majority of people who write code do have these problems.
Besides, many people who do know about these problems and generally write quality code occasionally make these errors, and the reason is simple: do something enough times and you’ll make a mistake eventually. Why should we do something manually if it can be done automatically?
Well, good for you… and bad for you! While it is great that you know C inside and out, from what you’ve said it is likely that C is the only language you know so well. Don’t get me wrong, C is a great language, but you seem to think that one size fits all.
Also, you said that you did not experience any major pointer problems, but if you had used a managed language, you wouldn’t have had any problems at all.
Either that or they are talking about the statistics and not about any specific person.
I never said it was impossible. I am saying, however, that it is impractical. That’s why we have C++. I will check the book out, though.
Personally, I prefer C to C++, because IMO C++ has a rather cumbersome syntax, but if I wanted OOP, C++ would be the logical choice for me.
Your post mostly implies that I am incompetent, while you are the experienced expert. I take my part of the blame for that, because I made it personal in the first place, and I am sorry for that. However, I still disagree with you.
Please, defend your position and convince me that I should use C for OOP instead of C++ with some solid arguments.
Ever heard of memory debugging tools like Valgrind? Using them in testing is just as safe as using an interpreted language, but you don’t need to perform slow checks *every time* a program is run.
Removing errors is part of the debugging phase. Doing it once the program is fully cleaned up is just unnecessary. If bugs go past the debugging phase, it’s the debugging phase that is faulty for not stress-testing all aspects of the product.
Valgrind is cool, even VERY cool, but unfortunately it is not a solution to every memory problem. The reason is that Valgrind only checks heap-allocated memory; accesses to the stack or to static memory won’t be checked. Usually when I have very weird, undebuggable memory problems, I have to change my stack data to be dynamically allocated… sometimes that reveals stuff, but it is still a great inconvenience. Managed languages may have an advantage here, but C/C++ still rock.
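[A quick illustration of that limitation; this is a made-up toy program, both writes are out of bounds (undefined behavior), and the point is what Memcheck reports when you run it under valgrind:]

#include <stdlib.h>

int main(void)
{
    int  stack_buf[4];
    int *heap_buf = malloc(4 * sizeof(int));

    heap_buf[4]  = 1;  /* heap overrun: "Invalid write of size 4" under valgrind */
    stack_buf[4] = 1;  /* stack overrun: default Memcheck stays silent */

    free(heap_buf);
    return 0;
}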
Can you go into more detail about those problems? I’m curious.
You misunderstand me; I’m not trying to convince anyone to use C. You can use whatever you’re happy with. My original point was about Java and how it’s perceived to be the only viable solution in the area where it’s deployed, when in fact you could use plain old C and achieve much better performance with a smaller memory footprint.
The era of hackers has ended. This is the era of Sunday programmers and web developers… You think those can do quality C?
The most common misconception is that the market needs more programmers. Nothing could be more wrong… it’s the bad programmers themselves who create the need for more programmers: an endless loop of bugs, incompetence, and shitty design.
Well, I guess most people couldn’t use C with the same productivity, but, yeah, it could be done.
Once again, I apologize for jumping to conclusions, because the whole thing might be misunderstanding on my part. I am not 100% sure, because it’s 3am over here.
Anyhow, there are plenty of more suitable Java alternatives for web dev. C# comes to mind, for example. C wouldn’t be high on my list of alternatives. Just because it can be done, does not mean it should be done.
I withdraw from this conversation and I hope that, ranting aside, I made some valid points…
Thanks for taking the time to respond.
I would say that is rather a matter of the mindset with which you design an application or a library.
Hmm… no. See pure functional languages.
Actually this question is pretty valid. One can code in C++ in a way that is very clean, avoiding all the shitty complicated syntax while also making use of stricter type checking and const correctness. C++ is not even slower than C in a pure sense: if you do OOP in C and do parallelism, it will be the same speed as virtual methods in C++.
However, what I have heard about C++ is that it adds so much to the syntax landscape that it encourages very shitty design. I think it’s the same argument as with Perl / Python: Python encourages one clean syntax, while Perl programs can take the form of ASCII art.
Well… it’s a strange situation then: C is a language designed with no object-oriented design in mind, yet it allows you to design your programs in an object-oriented way.
While C would not be perfect for web development, one could imagine a library that could even do garbage collection and be relatively easy to use. I think writing web apps in C would be somewhat achievable given a good library.
With C++, however, the possibilities for convenient web dev are endless… all the language features you can utilize.
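[Garbage collection in C is not even hypothetical: the Boehm-Demers-Weiser collector (libgc) bolts conservative GC onto plain C. A tiny sketch; the header is <gc.h> on most installs (sometimes <gc/gc.h>), and you link with -lgc:]

#include <gc.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    GC_INIT();
    for (int i = 0; i < 1000000; i++) {
        char *s = GC_MALLOC(64);          /* collected automatically, never free()d */
        strcpy(s, "request-scoped data");
    }
    printf("heap size: %lu\n", (unsigned long)GC_get_heap_size());
    return 0;
}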
Of course I meant polymorphism… Forgive my “something” that made me make that mistake. :p
Off the top of my head:
1. You only need a small subset of CPP’s OO capabilities (objects), but you’d rather not get hit by the CPP compiler’s limitations (e.g. strong types).
2. Memory allocation under CPP is abysmal, at best. (Even if you take the time to replace the new implementation with your own, you still cannot get informative error codes…)
3. Class constructors (and destructors) are poorly designed (lack of informative error codes), forcing you to use C-like initialization functions.
4. I work in environments (such as kernels) that do not look kindly on linking against the CPP runtime libraries, let alone using exceptions (!!!!!) as an error-handling mechanism…
In short, if you only require objects and limited inheritance, a good C implementation is far superior to a full-fledged CPP implementation. (And don’t get me started on the STL!)
I fully agree. As I said, I didn’t really agree with the OP.
– Gilboa
All valid points.
Well, Stroustrup himself said that there’s no single favored way to use C++, and that not using every feature is not a crime and does not indicate incompetence.
In my case (kernel development), I currently use it as a very nice superset of C with classes and (very important) function and operator overloading. I also use references from time to time. That’s it.
In that case, the sole runtime requirement is to run the constructors before the kernel. Not a big deal: the assembly snippet required to do that is only a few lines long.
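[In C rather than assembly, one common shape of that startup code looks like the sketch below. The __init_array_start/__init_array_end symbols are assumptions: a GNU-style toolchain puts global constructors in the .init_array section, and your linker script has to define the two bracketing symbols.]

/* Provided by the linker script, e.g.:
 *   __init_array_start = .; KEEP(*(.init_array)) __init_array_end = .;
 */
typedef void (*ctor_fn)(void);

extern ctor_fn __init_array_start[];
extern ctor_fn __init_array_end[];

/* Called once, before handing control to the kernel proper. */
static void run_global_ctors(void)
{
    for (ctor_fn *f = __init_array_start; f != __init_array_end; f++)
        (*f)();   /* run each global constructor in order */
}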
… Then again, I must wonder, what’s the use?
If you cannot use:
– operators (kernel code that cannot be debugged by reading it is a big no-no in my view).
– virtual functions (as above; plus, they may hide a major performance hit due to unknown vtable depth/size).
– constructors and destructors, forcing you to write your own per-class init and close functions (due to the lack of constructor/destructor error codes).
– the default new, forcing you to re-implement it (over, say, kmalloc, with some type of fancy error-code management).
In the end, you’re only left with objects and partial inheritance; in short, you more-or-less get an automated “this” pointer.
Call me crazy, but IMO, it’s far easier to implement objects in pure C. (As it’s already being done, quite successfully, in the Linux kernel)
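[For instance, here is roughly how the Linux kernel does it: a simplified sketch with made-up device/nic types, not actual kernel code. The base struct is embedded in the derived one, and the derived object is recovered with the container_of idiom:]

#include <stddef.h>
#include <stdio.h>

/* The classic container_of: step back from a member to its enclosing struct. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct device {            /* the "base class" */
    const char *name;
};

struct nic {               /* the "derived class" */
    struct device dev;     /* base embedded as a member */
    int irq;
};

static int nic_irq(struct device *d)
{
    /* "downcast" from base pointer to derived object */
    struct nic *n = container_of(d, struct nic, dev);
    return n->irq;
}

int main(void)
{
    struct nic eth0 = { { "eth0" }, 11 };
    printf("%s irq=%d\n", eth0.dev.name, nic_irq(&eth0.dev));
    return 0;
}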
P.S. Which kernel are you using?
– Gilboa
It makes code a lot easier to read and work on, for various reasons, and enforces separation of the various components.
I use operator overloading heavily myself, since I implemented a cout-ish class for debug output (much easier to implement than printf, and more comfortable to use, IMO).
You can still debug with the usual techniques, like moving a breakpoint forward to see where things go wrong. If the code crashes after the instruction using the operator, you know who’s at fault…
In a kernel, if the initialization of a basic component fails, I think it’s time to panic() anyway. But if for some reason you really need an initialization/closing function to return an error code, you can just go the C way and use an “errno”-style global variable.
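[A sketch of that C-way pattern; the module, its error codes, and the stubbed resource call are all invented for illustration:]

#include <stdio.h>

enum { MYMOD_OK = 0, MYMOD_ENOMEM, MYMOD_EBUSY };

static int mymod_errno;                     /* module-local "errno" */

static int claim_resource(void) { return -1; }  /* stub: pretend it fails */

int mymod_init(void)
{
    if (claim_resource() != 0) {
        mymod_errno = MYMOD_EBUSY;          /* caller can inspect the detail */
        return -1;
    }
    return MYMOD_OK;
}

int main(void)
{
    if (mymod_init() != MYMOD_OK)
        printf("init failed, mymod_errno=%d\n", mymod_errno);
    return 0;
}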
Well, new is only as complicated as you want it to be. If you want it to be an automatically sized malloc, you can just write it that way. You’re certainly *not* forced to reimplement new each time you write a new (duh) class.
Again, it’s essentially a matter of readability. In my experience, C code gets messy and unreadable a bit too easily, while with C++ it’s easier to keep statements short and easy to read.
Everyone has the right to their own opinion.
In my opinion, code readability is more important than ease of implementation. And my experience with Linux code is that it doesn’t exactly take readability too far.
Couldn’t get satisfied with the existing stuff (notably because I’m allergic to POSIX, which limits the range of possibilities quite a lot), so I rolled my own.
While it is possible to remotely debug the Linux kernel or the Windows kernel, doing so will produce weird, if not unusable results due to timing and scheduling changes.
In short, in most cases I either debug by eye or add a couple of log messages.
Now, as you said below, you’ve rolled your own kernel (nice!), which negates the “I need to debug code I didn’t write” problem that makes operators a -huge- no-no in most cases.
Try finding bugs in STL without using a debugger and you’ll understand what I mean…
As long as you’re alone and you’re not running any type of three-nines/five-nines application inside the kernel.
Let’s assume that I monitor traffic on 10 different network cards and the 10th NIC is flaky. If I panic, I lose all the traffic on the other 9 NICs, and nobody will know why I failed.
A far more sensible approach would be to continue working, disable the flaky NIC, and send a log message to the admin.
Plus, if the machine has RAS features, the admin will be able to replace the NIC without rebooting the machine or restarting my system.
As with the debug option above, everything depends on what you’re doing and inside which kernel. A production system cannot simply panic every time something minor (or even major) happens.
What I meant was that unless you’re writing your own kernel (as you do), in any C-based kernel you’ll have to re-implement new on top of the native allocation function(s), and by doing so you’ll lose a lot of features. (E.g. in the Linux kernel you pass fairly important flags that would require some fancy footwork to emulate under CPP.)
I usually say the same… about CPP
IMHO a good C implementation is far more readable than CPP code due to the cleaner code/data layer separation. (Let alone the “what you see is what you get” effect, which cannot be achieved under CPP due to operators and virtual functions.)
Actually, I find Linux easier (as messy as it is) to read than Windows (DDK) or *BSD. But as you said, it’s a matter of opinion.
As I said above, nice!
OSS project, or pastime / proprietary?
– Gilboa
In my case, when debugging things, I like to use a combination of breakpoints and log messages. Also, for kernel code debugging, I just love emulators like Bochs and QEMU. I couldn’t imagine kernel work without those. Having the ability to instantly check code changes on a freshly compiled kernel using a command line is just a priceless ability.
Hey, that’s cheating! I can’t live without my breakpoints. If I don’t have any at hand, I just hard-code them using a while(1) ^^ (And for relatively high-level code like the STL, you can also use more subtle and powerful debugging tools, like Valgrind for memory leaks.)
Indeed, panic is obviously not, by any means, a universal solution to every single problem. I just recommended it in case initialization of a vital part of the kernel fails, because at that stage there’s obviously nothing else to do than display an error message and die.
At later boot stages, however, the situation is quite different. It’s not the kernel that can’t do its job; it’s a process, or a module in the case of a monolithic kernel (in this case the NIC driver). So that process/module can die or be unloaded alone and leave the rest of the system in peace. Just leave a message on the admin’s desk, as you said, after all attempts at self-healing have failed.
That’s more of a problem with using C and C++ together than a problem with C++ itself, I think.
OSS project, in my spare time. I read Tanenbaum’s wonderful “Modern Operating Systems”, and it gave me the will to scratch some itches I have with current desktop operating systems, even at a very low level. It’s an experiment in the design and implementation of a desktop operating system from the ground up. I don’t know how far it will go, but I just feel the need to do it, “just to know”.
You can still use object oriented concepts in a language which doesn’t (strictly speaking) support objects. C is a great example of one of those languages.
If you really want to, you can even create a sort of object in C, using a structure with data and pointers to functions.
Granted, if you’re doing so, it’s probably best to switch to C++…
It was my understanding that most Java development nowadays is middleware, backend, and mobile, rather than web (or desktop, for that matter).
Is that not the case? I don’t do any Java work any more (not for several years now) so I genuinely don’t know.
I think they could be right, based on what I’ve seen in my professional experience. Web, mostly.
Java is used quite heavily in mobile, yes: both the “official” Java ME and Google’s Dalvik variant (which has different but mappable operations in its virtual machine). You might have heard of those newfangled BlackBerry and Android phones.
Java is also used *very* heavily for developing corporate applications, many of which use web clients but many of which use Java fat clients. Either way, the business-logic layer is commonly Java. (Its illegitimate child C# is doing pretty well, too, or so I hear.) And, of course, as mentioned elsewhere, Java is very popular for server-side web programming.
By the way, Java is back in the top spot this month on the TIOBE Programming Community Index (a ranking of programmer community interest in a language, more or less), after briefly being matched by C. The fastest-rising languages, however, are Objective-C (used by Apple’s iProducts), Python (my personal favorite), and C# (Steve Ballmer’s personal favorite ;-)).
I wish people would stop mentioning the TIOBE index, as the whole thing is really quite meaningless. All they do is google every programming language name and count the total number of hits. Hardly an accurate way to gauge interest in a language.
Bet THAT wish doesn’t come true, especially since you don’t offer a better alternative. Hint: there’s no generally more meaningful method of gauging interest in a computer language that’s also practical to implement than a survey of query volumes across 6 major search engines, AFAICT.
For example, lines of code written in each language in a given month might be more meaningful, but the only code an analyst can really see is open source, which wouldn’t be very representative of general language interest at all. Sales of compilers might be interesting, but of course some of the best compilers are open source and so proprietary languages would be badly over-represented. Languages taught at major universities would be even worse, just a sheep counter in effect.
And the TIOBE results generally track well with other measures of general interest, as well as with a lot of us old timers’ experiences.
So, TIOBE it is. As a general gauge of interest, it’s all we’ve got, and frankly, it’s not that bad as long as you’re not placing major financial bets on the results.
In my more cynical moments I’d guess your favorite language doesn’t elicit much search engine activity, so I’ll concede for the record that it’s “meaningless” and you shouldn’t worry about it.
It’s not web applications like you are thinking, or at least like I think you are thinking. We’re not talking about Java applets like in the old days; we’re talking about using Java to serve HTML pages, via Servlets (Java classes which read requests and write HTML directly to the HTTP response), JSP (Java Server Pages; think the Java version of PHP, where code is embedded directly into pseudo-HTML and interpreted on the fly), and Java Server Faces (a component-based UI framework written in XML, with HTML output and plain Java running the show in the background). Also, Google has its own quite popular Java-based platform called GWT (Google Web Toolkit), in which you program against a component-based UI framework in pure Java, and it gets “compiled” into an HTML/JavaScript web application.
On top of that, as you mention, Java is used a lot in middleware, e.g. to support legacy applications; and of course there is increasing demand for these old applications to get HTML front-ends, enabled by (you guessed it) Java. Java also provides some nice tools (e.g. Apache Axis) for creating and consuming SOAP-based web services (i.e. the de facto means of communicating between applications on the web using XML and HTTP).
(Disclaimer: This is all stuff that .NET does too, just in a monolithic, non-cross-platform, closed manner.)
Now does Java’s relationship to the web begin to make sense?
Java is extremely fast, and mostly used for backends today: the large servers, not the web anymore. Several of the large stock exchange systems are developed in Java. The world’s(?) fastest stock exchange system, at NASDAQ, is developed in Java.
You obviously don’t work with large enterprise servers with huge demands on performance and latency.
I obviously (like many others) don’t have infinitely deep pockets to run the small cluster of mainframes that Java needs to achieve the levels of performance you’re implying.
But hey, what the hell, these days it’s the taxpayer who bails out banks and other financial institutions, no wonder nobody gives a shit how much the IT infrastructure costs (amongst other things…).
Oh, you are totally out of touch with reality. I suggest you do some heavy catching up.
First of all, the new stock exchange systems are all trying to compete and be the fastest in the world. The fastest, with the lowest latency, earns the most money. There are billions of dollars involved, and they want the best and fastest systems. And NASDAQ is using, not C nor C++, but… Java! They can afford any amount of money or any investment, and they choose Java. Why is that, if Java is so slow? It isn’t. Wrong of you.
Second, your example program with hash tables is not relevant for real servers. You have merely done a basic for loop and tried to extrapolate how a big server will perform. That is pure fail logic, and wrong.
Third, NASDAQ runs their systems on Linux on x86 servers. Mainframes are DOG SLOW. Yes, they are. Wrong of you, again. Any modern high-performance x86 CPU is 5-10x faster than the best IBM mainframe CPU. Recently, IBM released their newest z196 CPU, which runs at 5.2GHz and has an enormous cache hierarchy (L1 + L2 + L3 + L4, with the L4 alone at 192MB per book); check the specs. That is a ridiculous amount of cache! And STILL, it performs like a single-core 2GHz Xeon CPU. Now THAT is a waste of GHz and cache.
The mainframe CPUs are dog slow. You would use a Linux cluster on x86 instead. Here are three links:
Here is a source from Microsoft: http://www.microsoft.com/presspass/features/2003/sep03/09-15LinuxSt…
“we found that each [z9] mainframe CPU performed 14 percent less work than one [single core] 900 MHz Intel Xeon processor running Windows Server 2003.”
The z10 is 50% faster than the z9, and the z196 is 50% faster than the z10, which means a z196 is 1.5 x 1.5 = 2.25 times faster than a z9. This means a z196 corresponds to roughly 2.25 x 900MHz ≈ 2GHz worth of Intel Xeon. But today’s modern server x86 CPUs have 8 cores, which means they have in total 8 cores x 2GHz = 16GHz. We see that x86 at 16GHz is plenty faster than a z196 at 2GHz. This shows that the z196 is dog slow.
Here is another independent source, from a famous Linux expert who ported Linux to the IBM mainframe, who says 1 MIPS == 4MHz x86:
http://www.mail-archive.com/[email protected]/msg18587.html
This shows that a z196, which has 1400 MIPS, corresponds to a 5.6GHz x86. But a modern x86 has 8 cores, which means it has in total 16GHz, which is roughly 3x faster than 5.6GHz. Again, we see that the mainframe is dog slow.
Here is another link, where the cofounder of TurboHercules says that an 8-way Nehalem-EX gives 3,200 MIPS using software emulation:
http://en.wikipedia.org/wiki/TurboHercules#Performance
But software emulation is 5-10x slower. This means an 8-way Nehalem-EX running native code should be 5-10x faster, that is, 16,000-32,000 MIPS. This big MIPS number matches a fully equipped z196 mainframe with 64 CPUs. Again, we see that the mainframe is dog slow.
In short, an 8-way Intel Nehalem-EX PC will be as fast as the biggest IBM mainframe with 64 CPUs in terms of CPU performance. Of course the mainframe has higher throughput, but the mainframe’s latency sucks. No exchange uses mainframes.
You need to do some heavy catching up.
This is a follow-up on how fast Java is compared to C. I did a quick benchmark: create a hash table, insert 100K integers, look up all those integers, then remove them from the hash table.
One program is Java, using Java’s HashMap. The other is C, using my own hash table implementation. Both tests were run on a dual Pentium 3 machine running NetBSD and native openjdk7-1.7.0.92.20100521nb1. Each program was run several times to warm up the CPU cache and get average time values.
Java program time:
time /opt/pkg/java/openjdk7/bin/java -hotspot test
0.58 real 0.39 user 0.14 sys
Here is the time for C program:
p3smp$ time ./test.exe
0.08 real 0.05 user 0.02 sys
The C program is about 7 times faster than the Java one.
Below is the source code for both programs:
import java.util.*;

public class test
{
    public static void main(String args[])
    {
        /* Create hash table */
        HashMap<Integer, Integer> htab = new HashMap<Integer, Integer>(100000, 0.75f);

        /* Create array of integer objects */
        Integer[] int_obj = new Integer[100000];
        for(int i = 0; i < 100000; i++)
            int_obj[i] = new Integer(i);

        /* Hash table insert */
        for(int i = 0; i < 100000; i++)
        {
            htab.put(int_obj[i], int_obj[i]);
        }

        /* Hash table lookup */
        for(int i = 0; i < 100000; i++)
        {
            if(htab.get(int_obj[i]) == null)
            {
                System.out.println("Error: htab.get() returned null");
            }
        }

        /* Hash table remove */
        for(int i = 0; i < 100000; i++)
        {
            htab.remove(int_obj[i]);
        }
    }
}
#include "htab.h"
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    int i;
    uint32_t *int_obj;
    htab_t htab;
    union htab_key key;
    union htab_val val;

    /* Create hash table */
    if(htab_init(&htab, 100000, 0, 0, 0.75, HTAB_UINT32_CMP, NULL) != 0)
    {
        printf("Error: line=%d\n", __LINE__);
        exit(1);
    }

    /* Create array of integers */
    if((int_obj = malloc(100000 * sizeof(uint32_t))) == NULL)
    {
        printf("Error: line=%d\n", __LINE__);
        exit(1);
    }
    for(i = 0; i < 100000; i++)
        int_obj[i] = i;

    /* Hash table insert */
    for(i = 0; i < 100000; i++)
    {
        key.uint32_key = int_obj[i];
        val.uint32_val = int_obj[i];
        if(htab_insert(htab, &key, key.uint32_key, NULL, &val, 0) != 0)
        {
            printf("Error: line=%d\n", __LINE__);
            exit(1);
        }
    }

    /* Hash table lookup */
    for(i = 0; i < 100000; i++)
    {
        key.uint32_key = int_obj[i];
        if(htab_lookup(htab, &key, key.uint32_key, NULL, &val) != 0)
        {
            printf("Error: line=%d\n", __LINE__);
            exit(1);
        }
    }

    /* Hash table remove */
    for(i = 0; i < 100000; i++)
    {
        key.uint32_key = int_obj[i];
        if(htab_remove(htab, &key, key.uint32_key, NULL, NULL) != 0)
        {
            printf("Error: line=%d\n", __LINE__);
            exit(1);
        }
    }

    return 0;
}
I think you should have included the full source code to gain more credibility, but I can give you the benefit of the doubt, mainly because the result is in line with my opinion (which is “Java sucks”). You could gain even more speed with C++ templates and inline functions, getting rid of those pointer dereferences and obtaining a more contiguous memory model with more local accesses.
Oh, well, of course if you factor in the time the JRE needs to start up, it will be slower. Why not do a test where the user is prompted to press “Enter” and the time to completion is measured programmatically? That would be a heck of a lot more meaningful.
Yes, OK, I’ve done it, and Java is still about 3.5 times slower than C on this simple benchmark.
My guess is it has to do with the fact that it is dynamically allocating space on the heap. Try allocating the space in advance by adding the -Xms command-line switch and see if that improves things.
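For example (the heap sizes here are made up, and -Xmx is pinned to the same value so the heap never needs to grow): time java -Xms256m -Xmx256m test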
According to whom? Wired & BYTE Magazine circa 1997?
4. Native integration
This is really why so many Windows developers switched to .NET. By refusing to use native controls and fonts, Java resigned itself to second-class citizenship. It took them forever to get Swing looking OK on XP, and now with Vista and 7 they are back to the same problem.
5. JRE distribution
They finally cut the JRE down to size, but it is too late. Now they have a new problem, which is that people do not want to install the JRE unless they have to. Those Java updates are freaking annoying.
Fuck Java, it’s a waste of time.
And Java GUIs on Linux are still barely responsive and butt ugly.
I have used Eclipse, NetBeans and various other Swing and SWT apps such as TV Browser
http://www.tvbrowser.org/
on Ubuntu recently, and they blended in great; might I say a fair bit better than many Qt apps, even. I guess some people are just overly nitpicky.
Could the IBM and Google lawsuits have been intended to get them to ditch their proprietary versions of Java? Possibly the Apple one was meant to open the iPhone/iPad/iPod to standard Java?
If so, I’m behind it, but that would confuse the hell outta me.
Is it the same OpenJDK that hasn’t posted any announcements on its website since December 2009?
http://openjdk.java.net/
Not closely related to the article itself, but has anyone noticed how slow the Java Platform Documentation site has become since the URL changed to oracle.com?
I am seriously wondering what Oracle’s plans for Java are. Lawsuits on one side, complete disregard for details (such as the one above) on the other. The future certainly doesn’t seem too bright, so seeing at least one other big player, like IBM, on the scene might actually be very welcome.