In 2017 NASA announced a code optimization competition, only to cancel it shortly after. The rules were simple: there was a Navier-Stokes solver used to model aerodynamics, and basically, whoever made it run the fastest on the Pleiades supercomputer would win first prize.
There were a few caveats though. The applicant had to be a US citizen at least 18 years of age, and the code to optimize had to be in Fortran.
Yes, Fortran is still a thing. It’s very good at certain computational tasks, and some important mathematical functions have been built, tested, refined, and made bulletproof in Fortran over the years. Case in point: if you want to build the Python ML library Scikit-Learn, you need a Fortran compiler. Why? Because a lot of the linear algebra libraries used within SKLearn are Fortran-based. No one wants to expend the time and effort to rewrite them in a different language and possibly re-introduce bugs that were eradicated long ago.
That, and Fortran compilers are probably better at optimization anyway.
Except… how do we know they were eradicated long ago? We don’t have a test suite, other than the implicit testing done by programs that use those libraries. If we had a test suite, not only could we be sure that the bugs really are eradicated and not reintroduced, we could actually do optimizations without breaking anything. Not to mention move to newer versions of Fortran.
In some applications the test suite is real life, not a simulation.
Exactly. And why doesn’t that scare people? How can people claim to be confident about those Fortran libraries when that is the case?
Believe it or not, mission critical software was written, used, improved, and relied upon before test-driven development.
The flip side to your concern about “no test suites = armageddon” is the more demonstrable mistake of rewriting code because a new group of developers feel they can do it better. Read Spolsky and/or Zawinski to see how some well known projects imploded badly by starting over. And besides, isn’t the holy mantra of OSS that “many eyes blah blah blah no bugs”? Expressing concern over the correctness and stability of OSS math libraries because there aren’t unit tests undermines a major tenet of the OSS philosophy.
These Fortran linear algebra libraries are trusted and have a long (decades long) pedigree. Not only have they been tested in mission critical applications they are written in a language quite specifically designed to be powerful in the exact problem domain for which these algorithms are used. Anyone advocating for starting over on them in a ‘modern’ language just to add test suites would (and should) be quite quickly dismissed.
What?
The thing is that it is almost impossible to cover all corner conditions in a test suite.
You will never get 100% certainty.
But using something that has been exercised in many use cases and had its bugs fixed decreases the chances of hitting a bug.
Also most older platforms / languages tend to be a lot simpler. So it is much easier to grasp what is going on and find issues.
Today’s technologies tend to be so complicated that it is almost impossible to get a complete understanding of what is going on in the code.
Because Gnome 3-using script kiddies like you are kept far, far away from them…
You lose the argument, dickhead.
Except… many of the libraries we use for numerical processing come with testing modules. It was done long ago in many of them, because being multi-platform, being correct, and testing Fortran compilers for good code generation have been concerns for years, even more so because of the idiosyncrasies of F77 and older standards, each corrected in subsequent revisions.
To test, in many cases, it is just a matter of issuing:
make
make test
and looking for any error indication.
Since the 2003 standard, most of the criticisms of Fortran don’t hold water anymore, and there are niceties:
– a true exponential operator;
– complex numbers have been a native type for ages;
– concurrency support as part of the language;
– very good attention to “C” interoperability;
Not the best for all things but, for what it is good, it is damn good.
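To make the “C” interoperability point concrete, here is a minimal sketch of calling the classic BLAS routine DDOT from C++. It is my own toy example: it assumes a BLAS library is linked and the usual gfortran-style lowercase-plus-trailing-underscore name mangling; with Fortran 2003’s bind(c) on the Fortran side you would not even need to guess at the mangled name.

```cpp
// Toy example: calling the Fortran BLAS routine DDOT from C++.
// Assumes a BLAS library is linked (e.g. -lblas) and gfortran's default
// name mangling (lowercase plus trailing underscore).
#include <iostream>

extern "C" {
    // Fortran passes all arguments by reference.
    double ddot_(const int* n, const double* dx, const int* incx,
                 const double* dy, const int* incy);
}

int main() {
    const double x[] = {1.0, 2.0, 3.0};
    const double y[] = {4.0, 5.0, 6.0};
    const int n = 3, inc = 1;
    // 1*4 + 2*5 + 3*6 = 32
    std::cout << ddot_(&n, x, &inc, y, &inc) << "\n";
}
```

This is exactly the kind of decades-old Fortran code that Python, R, and friends lean on underneath.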
Great.
So why can’t they be (or aren’t being) used to verify non-Fortran libraries? A number’s a number. A matrix is a matrix. A finite element model is a finite element model.
Hmm, let me guess: numerical simulation is not your main concern, is it?
The kind of tests you are going to see in numerical libraries, especially in Fortran but also in “C”, are mostly composed of cases with already known answers. It is more a validation of compilation, linking, and accuracy for the particular functions inside the library. Can you explain in more detail what you are asking for?
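To give a concrete flavour of the “already known answers” style, here is a minimal sketch of what such a test case typically boils down to; the routine, values, and tolerance are hypothetical placeholders, not taken from any real library’s suite.

```cpp
// Sketch of a "known answer" test: run a routine on input whose result is
// known analytically and check it to within a tolerance. A non-zero exit
// code is what a "make test" run would flag as an error.
#include <cmath>
#include <cstdio>
#include <cstdlib>

// Stand-in for a library routine under test: Euclidean norm of a vector.
static double norm2(const double* x, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += x[i] * x[i];
    return std::sqrt(s);
}

int main() {
    const double x[] = {3.0, 4.0};
    const double expected = 5.0;   // known answer (3-4-5 triangle)
    const double tol = 1e-12;      // accuracy requirement

    const double got = norm2(x, 2);
    if (std::fabs(got - expected) > tol) {
        std::fprintf(stderr, "FAIL: norm2 = %.17g, expected %.17g\n", got, expected);
        return EXIT_FAILURE;
    }
    std::puts("PASS");
    return EXIT_SUCCESS;
}
```

A real suite does the same thing for many routines and inputs, which is why “make test” plus a scan for errors is all there is to it.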
I’m asking that if those tests are good, surely they can be used to verify implementations in whatever development tool chain is being used. Julia. Python. Octave. Whatever.
So the argument for keeping some libraries stuck at Fortran 77, like the fear of introducing bugs, should be moot.
Sorry but, perhaps, there is a misconception in your understanding of my previous post. I never advocated keeping some libraries frozen on the F77 standard; I talked about the evolution of the language and the fact that the shortcomings of “Fortran”, the language, were addressed in newer versions.
Anyway, I dunno that the kind of people that create numerical libraries would instrument it the way software developers do, if that is what you were asking. “make” and then “make test” seems to be fine for them (as I already have said).
Test suites don’t prove bugs are eradicated either. And why do you assume there are no test suites for Fortran?
What I’ve said in other comments as well is that what test suites do exist are obviously not as extensive as assumed, otherwise you could use those test suites to similarly verify libraries written in much newer Fortran and non-Fortran languages.
If there is still fear of messing with the Fortran libraries because it could possibly introduce bugs, then that is an admission that the test suites aren’t extensive enough.
I used to work in an industry where the correctness and behaviour of code was critical to safety.
I asked why we/they used Sun Fortran 77. Not Fortran 90 or anything newer but 77. I think the tools were called SunWSPro but not sure after all these years.
The reason was that the particular compiler and the numerical libraries had been very extensively tested and validated. That amount of testing had not been done for more recent versions .. and wasn’t worth the huge effort as the gain from the newer languages was minimal.
Fast forward to today …
At various open source (un)conferences and meetups I sometimes hear about very exciting and theatrical uses of tools like Python and scikit-learn… for use in space missions, for example. I sometimes ask how they can trust the programming language, and indeed the scientific libraries… or rather, how they get sufficient confidence that matches their risks and impacts…
Answers. Come. There. None.
If that was really true, they’d have a test suite they can rerun with newer libraries and compilers and languages. If they don’t have the confidence to be able to do that, I’d argue that indicates they know deep down that their tests aren’t extensive or have complete-enough validation.
So the only difference between the Python folks and the Fortran folks is that the Python folks are not under a strong illusion about the reliability of their libraries and language.
kwan_e,
A few years ago I felt that I should try to embrace python because it was the in thing to do at one point, so I began using it as a glue language figuring it would work just as well as any other languages. Boy what a mistake, python’s concurrency was immature and unstable. I wasn’t even using it for complex code, but there would be race conditions inside python constructs that made it hang. The timeout code and exceptions that were supposed to handle failures would themselves fail (but not always). I had to set up external processes to kill python when it would lock up somewhere deep within the python stack.
It could just be a case of the wrong tool for the job. I know python has its fans, and it might deserve higher marks on more sequential workloads. However, given my experience with it, I worry about its reliability and am not likely to take it up again unless I were convinced the robustness of python has improved significantly since python 3.
Were you using threads? With Python, forking is the better option, and it does like sequential code better. That’s a much more well-worn path.
I like Python, but it was supposed to be used for prototypes which would then get translated into C/C++. It’s been expanded to many different areas since then, with some being better than others.
Scientists did run it to good effect on the cluster when I was working in HPC. The scientific libraries are fairly high quality, and of course, the code was sequential communicating over the network.
Flatland_Spider,
I could like python more if it improves. I’m actually considering trying node.js next, not because I particularly like javascript, but because I see value in consolidating server and client-side languages for web development and I don’t like the duplicate effort that using two languages can entail.
I was more talking about the reliability of the results of the mathematic/scientific usages of Python. No one has verified the Python usages to the extent that Fortran has been, but I argue neither has anyone really verified Fortran, otherwise we’d have extensive test suites that prove the results are reliable. And if we had those test suites, we wouldn’t be stuck on Fortran 77 for those libraries at the very least, because upgrading/porting/translating to newer Fortrans or languages would be easy to verify and would have already been done by now.
I haven’t seen a single important numerical python library that was not actually an interface to libraries coded in “C” or in Fortran.
Because while Python is great at making it easy to code, the standard implementation kind of sucks at memory efficiency, and is slower than native code in most cases. That, and it’s actually pretty easy to give existing, well tested, properly optimized libraries Python bindings.
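For what it’s worth, here is a minimal sketch of what “giving an existing native routine a Python binding” can look like, using pybind11 as one common approach (NumPy/SciPy themselves use other tooling such as f2py and Cython); the wrapped function is a hypothetical stand-in for an existing, well tested library routine.

```cpp
// Toy pybind11 binding: expose an existing native routine to Python.
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>   // automatic list <-> std::vector conversion
#include <cstddef>
#include <vector>

// Pretend this is the battle-tested native routine being exposed.
static double dot(const std::vector<double>& x, const std::vector<double>& y) {
    double s = 0.0;
    for (std::size_t i = 0; i < x.size() && i < y.size(); ++i) s += x[i] * y[i];
    return s;
}

PYBIND11_MODULE(fastmath, m) {
    m.doc() = "toy binding around an existing native routine";
    m.def("dot", &dot, "dot product computed in native code");
}

// In Python:  import fastmath; fastmath.dot([1, 2, 3], [4, 5, 6])  ->  32.0
```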
I was addressing the claim that native code is faster only “in most cases”.
I would argue that the only cases where interpreted code, or bytecode for that matter, is faster are when the original code fails to test for cases where computations could be avoided, and in such cases the right thing to do is to fix the original code. It is an argument I already used when a Java developer started to spread the message that the JVM could “produce code” that was faster than compiled code, which I have really never seen in my entire life. Compiled code is faster all the time, except when negligent or “stupid” implementations are used.
acobar,
I agree with you, but I did want to step in to clarify one point. I don’t think anyone should say the JVM was faster due to being a bytecode or anything of that sort, but rather because java uses garbage collection. Sometimes garbage collection can be faster than alloc & free because the costs of many “free” calls can be substituted for a cheaper pass.
Consider a model where every garbage collection cycle, the entire working set gets copied into a new memory region. This can seem inefficient, but what’s important in this context is that it’s generally a fixed expense. It has other benefits in that the memory is now defragmented, which allows the “alloc” function to become trivial. So for this example:
Alloc/Free cost = sum(alloc+free)
Garbage collected cost = sum(unfragmented alloc) + GC cycle.
So for some kinds of algorithms this can be true: sum(alloc+free) > sum(unfragmented alloc) + GC cycle. Of course it’s important to note that GC usually uses a lot more memory and introduces jitter, which are negatives.
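To illustrate why the “unfragmented alloc” term in that model is so cheap, here is a toy sketch in the spirit of a copying collector (illustrative only, not any real JVM implementation): allocation from a compacted arena is a pointer bump, and the collection cost is only copying the live set.

```cpp
// Toy copying-arena model: alloc is a pointer bump; "collection" copies only
// the live objects to a fresh arena, reclaiming everything else with no
// per-object free calls. Illustrative only.
#include <cstddef>
#include <cstring>
#include <utility>
#include <vector>

struct Arena {
    std::vector<char> space;
    std::size_t top = 0;

    explicit Arena(std::size_t bytes) : space(bytes) {}

    // "unfragmented alloc": a bounds check and a pointer bump.
    void* alloc(std::size_t bytes) {
        if (top + bytes > space.size()) return nullptr;  // would trigger a GC cycle
        void* p = space.data() + top;
        top += bytes;
        return p;
    }
};

// The "GC cycle" of this toy model: its cost is proportional to the live
// set, not to the number of dead allocations.
inline void collect(Arena& from, Arena& to,
                    const std::vector<std::pair<void*, std::size_t>>& live) {
    for (const auto& obj : live) {
        if (void* dst = to.alloc(obj.second))
            std::memcpy(dst, obj.first, obj.second);
        // A real collector would also fix up pointers to the moved objects.
    }
    from.top = 0;  // the old arena is reclaimed in one step
}
```

If the working set is small and the allocation count is huge, the sums in the model above tilt toward the GC side.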
Now back on topic, I think “native” static languages almost always come out ahead given the same algorithms, however garbage collection is often an overlooked difference simply because it’s hidden inside a language’s runtime libraries rather than in user code. In principle there’s no reason for GC to be exclusive to scripting languages. C’s lack of reflection support makes it impractical to implement GC generically for it, though I had a discussion with christian here on osnews, and he had given it a shot http://www.osnews.com/thread?662858 .
Alfman,
I’m not sure any GC does a better job than glibc malloc; I really don’t know whether any does. For a start, malloc does not issue a syscall every time: it uses the allocated space returned by a previous brk or mmap syscall, and even tries to reuse freed segments and consolidate them when possible. I know that buffer overflows and misuse of freed space have a long history of unleashing trouble in “C” lineage languages, and I understand the shortcomings associated with direct memory block references, but I suspect that the overhead associated with any GC implementation nullifies any possible advantage when the main concern is speed. Security is a different matter, of course.
acobar,
So, GC can win if the cost of a GC cycle is less than the cost of all the calls to “free”. Any algorithm that produces a large number of allocations with a relatively small working set can create conditions that favor GC. Well it’s easy to come up with a contrived example where this is true.
For example, a tiny daemon that handles 1M requests per minute. Assume each request requires 4 object allocations to do its work.
Assume that the daemon has an active working set of 10k objects at any given moment and the garbage collection runs every 5 minutes.
So the cost of a GC cycle is essentially the time it takes to copy the small 10k working set into a clean, unfragmented memory space. Meanwhile the cost of “free” in the same period is 1M/minute * 4 * 5 minutes = 20M operations. No matter how efficient “free” is, you can see how its costs add up linearly while the GC’s do not.
I actually prefer the deterministic behavior of alloc/free and I think it usually does better than GC in most cases, but when some people say GC can do better in certain cases, they’re not wrong.
I had a project where the goal was to have a hybrid solution using both models: normal alloc/free for large/infrequent data allocations and a GC for transient data with a short shelf life. I found this would be especially beneficial for temporary string processing where the persistent working set is nearly empty and the GC cycle cost is nearly free, but it kind of fell off my radar though, haha
But that kind of performance issue is already solved by things like memory pools and ring buffers and just plain better allocators.
In C++ parlance, if those 1M requests per minute make 4 allocations each, but the objects are “trivially destructible”, then using a ring buffer you don’t even need to really alloc and free. Or rather, an “alloc” would simply be moving a pointer, and a “free” would simply be what happens when the ring buffer starts reusing memory that is no longer claimed.
So you’d only have one real allocation and one real free for the ring buffer, and that would beat the pants off a GC used to alloc millions of objects individually.
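Roughly what I have in mind, as a sketch (the Request type, sizes, and the lack of any overflow policy are all placeholders; what happens when the head catches the tail is exactly the trade-off discussed below):

```cpp
// Ring-buffer "allocation" for trivially destructible request objects:
// acquiring a slot is an index bump, and "free" is implicit because the
// slot is simply reused once the head wraps around.
#include <array>
#include <cstddef>
#include <cstdint>

struct Request {
    std::uint64_t id;
    std::uint32_t payload_len;
    char payload[52];   // trivially destructible: nothing to run on "free"
};

template <std::size_t N>
class RequestRing {
    std::array<Request, N> slots{};
    std::size_t head = 0;
public:
    // The only per-request "allocation" is advancing an index into
    // preallocated storage; no heap traffic at all.
    Request* acquire() {
        Request* r = &slots[head];
        head = (head + 1) % N;   // wrapping reuses the oldest slot
        return r;
    }
};

// One real allocation for the ring itself, then millions of requests are
// handled without touching the heap again:
//   static RequestRing<4096> ring;
//   Request* req = ring.acquire();
```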
kwan_e,
Don’t forget just how trivial a GC allocator can be when the heap is defragmented. It’s a simple matter of incrementing a single pointer, which is practically identical to your ring buffer!
Of course the ring buffer would have an advantage in not having to eventually perform a GC cycle. But the ring buffer assumes that requests have a fixed lifespan and are always completed in FIFO order. If that’s not the case (ie some requests can block and live longer than others), then a ring buffer is not suitable.
Obviously you could create a more advanced structure that allows allocation and freeing of objects of arbitrary lifetimes, but it would come full circle as it would essentially become the alloc/free and GC models that we started with
No. The point is that when you’re done with your object, you just let go of your handle to the object. The (destroyed) object stays in the ring buffer, and the objects around it that are still live keep going as they are. You can have mixed-lifetime objects in a ring buffer. It just puts a limit on when you can reclaim the space, but that only needs to happen when the head of the ring buffer wraps around to meet up with the tail.
And even then maybe not. For example, what if your tiny daemon processing millions of requests is processing millions of UDP packets. By their nature, you can drop a request. The common case may be that the request processing finishes up on time and you’ll hardly bump up against the tail of the ring buffer, but in case it does, you can just drop the request processing for the tail and say “oops didn’t complete the request in time, too bad”, kill that processor and reclaim the space.
Again, that’s not hand optimizing allocation, since, depending on the problem being solved, all those decisions about when it’s okay to drop a request would have been made before choosing an allocator anyway.
kwan_e,
In cases where alloc/free loses, sure, you can use the power of C to implement something better; heck, you can even implement a custom garbage collector if need be. However, none of that refutes what I’m saying. Sometimes people assume that GC is always going to be slower than alloc/free for heap memory allocation, but that’s not true. We need to admit that sometimes a GC can beat alloc/free! I’m just trying to highlight scenarios that favor GC.
I feel like I’m taking on religious dogma in some kind of turf war, but software engineering shouldn’t be this way. Oh well, haha.
Well, no, because people are clearly talking about the average case. No one in computer science ever says one method is always better than another in all cases, but I don’t think the edge cases where GC is theoretically better than alloc/free is going to come up enough times to matter. And I actually doubt it will ever come up in real world scenarios.
Even your example, chosen to favour GC, doesn’t favour GC. Most alloc/free pairs happen in close proximity to each other. In some languages, the compiler is allowed to elide alloc/free in those cases. In cases where that doesn’t happen, the alloc and free would still mostly happen close enough together in time that the memory being allocated (and the bookkeeping data structures involved) would still be in the cache. Whereas in a GC system, the GC cycle runs long after the memory is no longer needed, meaning it cannot benefit from the implicit cache locality of alloc/free pairs that happen in short order.
So now that I think about it, no, your tiny daemon example doesn’t demonstrate a scenario that GC will be better than alloc/free either. There will be theoretical scenarios where GC is better than alloc/free, but that’s like saying there are theoretical scenarios where linked lists are better than flat arrays because of Big-O complexity.
Really, the points raised by Alfman make sense, but I side with you on this: there are theoretical cases where GC could do a better job, but in all of them it would be easily beaten by a planned allocation strategy.
I’m not saying that GC should not be used, it certainly has its value, but developers need to be aware of its shortcomings, and in languages that implement it the developers should have a “straight path around it” (yes, it sounds weird).
kwan_e,
I mean no disrespect, but your reactions are typical of what we can expect from many hardcore C/C++ programmers who have such a difficult time questioning their own dogmatic views. It’s frustrating because I really don’t get why skilled software engineers, and I have no problem calling you that, are so closed minded. You understand *why* it’s true that the GC does less work, and yet you remain highly biased and argumentative against any outcomes where GC might come out ahead. Can you explain that to me?
I really wish we didn’t have to argue so much, but fortunately there’s a different way to get the point across. It took more effort on my part, especially since I haven’t coded in java in a decade, but I went ahead and wrote two programs in C++ and Java that build the same recursive tree structure using the native memory allocation for both C++ and Java. Obviously the Java implementation is garbage collected whereas C++ is not.
Here is the code for each. C++ was compiled with -O3.
TestC.cc
https://pastebin.com/ssh27ubt
TestJava.java
https://pastebin.com/pqzdnE6Q
Both programs take two parameters: tree depth and repetitions. Because the tree is recursive, the number of allocations grows exponentially. I tested the depth from 1 to 14, which got pretty slow on the computer I used at 1000 reps. Both programs output min time, max time, and avg time. I’ve plotted these on the following screenshot, which shows Java in orange and C++ in blue. The X-axis depicts the number of objects counted in the tree. Note that the y-axis is logarithmic, ranging from extremely fast (0s) to very slow (10s).
https://i.postimg.cc/nL5zqpk0/gc-benchmark.png
The Java version exhibited a lot of jitter between min and max times. Also, because Java is compiled at run time, the first couple of iterations are much slower than C++; this is reflected in the “max time” at the beginning of the chart. While it wasn’t my intention to measure the JIT overhead, I decided against throwing out the first few slow iterations. 1000 reps was chosen to establish a steady state for Java’s JIT compiler.
Now for the big reveal… comparing the average performance between Java and C++. I shouldn’t be surprised because the data fits what I’ve been saying (more allocations compared to working set should favor GC), but I’m surprised at how much it favored GC. For large allocation counts C++ was 4-5X slower! I had not expected that much of a difference. By the end of the test at depth=15 the java version appeared to be hitting some JVM memory limits that I haven’t yet investigated.
Obviously it’s just one test for recursive data structures, but I hope it’s nevertheless eye opening.
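For anyone who doesn’t want to click through, the C++ side is roughly this shape (a sketch along the lines of the pastebin code, not the exact listing; names and details here are mine, and the Java version mirrors it with new and no explicit teardown):

```cpp
// Rough shape of the benchmark: build a binary tree of the given depth with
// individual heap allocations, count the nodes, tear it down, repeat, and
// report min/max/avg times. Compile with -O3.
#include <chrono>
#include <cstdlib>
#include <iostream>

struct Node {
    Node* left = nullptr;
    Node* right = nullptr;
};

static Node* build(int depth) {
    if (depth == 0) return nullptr;
    Node* n = new Node;
    n->left = build(depth - 1);
    n->right = build(depth - 1);
    return n;
}

static long count(const Node* n) {
    return n ? 1 + count(n->left) + count(n->right) : 0;
}

static void destroy(Node* n) {
    if (!n) return;
    destroy(n->left);
    destroy(n->right);
    delete n;   // the per-object "free" cost that the GC version never pays
}

int main(int argc, char** argv) {
    const int depth = argc > 1 ? std::atoi(argv[1]) : 10;
    const int reps  = argc > 2 ? std::atoi(argv[2]) : 1000;

    double total = 0.0, min_t = 1e300, max_t = 0.0;
    long nodes = 0;
    for (int i = 0; i < reps; ++i) {
        const auto t0 = std::chrono::steady_clock::now();
        Node* root = build(depth);
        nodes = count(root);
        destroy(root);
        const auto t1 = std::chrono::steady_clock::now();
        const double dt = std::chrono::duration<double>(t1 - t0).count();
        total += dt;
        if (dt < min_t) min_t = dt;
        if (dt > max_t) max_t = dt;
    }
    std::cout << nodes << " nodes  min " << min_t << "  max " << max_t
              << "  avg " << total / reps << " s/rep\n";
}
```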
No, because I already accepted your reasoning for allocation scenarios like this. I just have a hard time imagining seeing either version of the code in real world scenarios. It’s like “great, but how can I see similar results in programs I’m actually working on?”.
kwan_e,
Great. Choosing good high level algorithms is certainly one of the most important aspects of high performance code, but I’m not going to let you use this as an excuse to pivot from the fact that eventually most software is going to need to persist data on the heap, so it’s not unreasonable for developers to be more open minded about heap allocation overhead and what can be done about it.
I do realize that this discussion is going to be a dead end. It’s not that you don’t make some valid points at times, but you keep using the same diversion tactics rather than acknowledging the things that we should be agreeing on. I hope that you find these discussions a little educational at heart, even if you keep insisting on arguing with me publicly
In your Java program, after the line “count = t.Count();”, add the line “System.gc();”.
That was educational for me.
kwan_e,
This happens because the GC cycle has to go through and mark every accessible object on the heap. Objects that aren’t accessible don’t have to be marked in the GC cycle, so it goes much faster if you do the garbage collection when the objects are no longer referenced.
It might be possible to optimize the GC’s performance by hand tuning when it happens. I’m sure people who deal with java a lot would have something to say about it, but for the example I thought it was important to leave out any optimizations and just measure the performance out-of-the-box.
No, you’re not allowed to bring up hand tuning GCs, since I’m not allowed to bring up non-default heap allocation strategies. They’re your rules, after all.
What really shits me, which I realized just now, is you knowingly submitted a biased and downright deceptive example to “prove” your point, and then accuse me of bias and dogmatism.
Fudging the numbers and trying to disguise that by attacking the other person’s character is just dirty.
I hear Volkswagen has a few positions open.
kwan_e,
Nope, you don’t get to reset your own rules once they work against you. You wanted to compare allocators, and NOTHING ELSE. Well guess what? That means you’re not allowed, by your own rules, to depend on the rest of HotSpot. That means you are not allowed, by your own rules, to ignore the true cost of GC deallocation, which necessarily must include mark-and-sweep under non-ideal conditions.
You said you were tired before. You know what I’m tired of? People here not understanding the idea that proper comparison requires similar definitions and conditions on both sides. People setting up one-sided conditions and then going back on them, just for their own side, when it doesn’t work out for them. People not understanding that bad data is worse than no data.
—————————
btw, the memory usage of such a simple Java program, without the calls to System.gc(), is atrocious. I got around 4GB, in comparison to 300K for the C program. This may be beside the point of comparing mere time efficiency, but that, to me, is a terrible trade for the wishful thinking that a real Java program will have the same memory allocation profile of simple program with a bad allocation strategy.
If you’re interested in comparing allocators, why did you, from the beginning, set the terms of discussion to only one style of allocators then? And then have the gall to misrepresent me as negating the heap’s importance?
kwan_e,
Ah, I was hoping your other post meant we could move forward past this and finally get to more interesting waters, no? With this in mind, I’m going to ignore your last post. Wink
I don’t see how discussions about anything can move past when one side is putting words/arguments in other people’s mouths.
kwan_e,
Really, you seem bent on proving your point, which is understandable, but really, Alfman is a nice guy, as you can tell by reading his comments here about lots of subjects.
I think you were extremely unfair to suggest he was cheating; he was just using Java memory management the way it is meant to be used, and the standard C++ way of doing it too.
The fact that the standard memory management of C++ sucks for some scenarios is no secret to anyone, and it is the reason I sided with you about a planned memory allocation strategy.
Last, you too seem to be a knowledgeable and principled person; if possible, when things are not that important, being a bit more flexible is a nice trait.
Hey, I was the flexible one. But he was the one dogmatic about discussing only that one single class of allocators and testing only one thing, and then trying to go back on his own rules when it doesn’t work out for him.
kwan_e,
You seem unwilling to admit when there are facts that don’t align with your opinion. IMHO that’s what makes these discussions so difficult, because it makes it exceedingly difficult for the discussion to converge on the true nature of things.
So anyways, I’d like to stick to the facts and address your concerns in a reasonable way. To address your concern that the garbage collection is not being accounted for, I’ve removed all the timing from the code and instead switched to using linux “time” command to collect the timing data.
Also, I’ve tested the Java version both with and without System.gc() before exiting the process. It does seem to add a slight bit more time. It isn’t something any application would have a reason to do in production code, but I want to make sure it’s as fair as possible for you. Here is the code:
TestJava.java
https://pastebin.com/QfE4Qakh
TestC.cc
https://pastebin.com/N2Z7Zk8U
This changes the semantics a little bit because these times aren’t per rep, they’re about 1000X greater. Also it adds more overhead for the JVM load & compile time. Here are the results, remember the y axis is logarithmic!
https://i.postimg.cc/VvJyPS08/gcbenchmark.png
If you have a problem with this new data, then please let me know in a nice way
Continuing in new comment because computer crashed and can’t edit old one:
If I were to rewrite it, I wouldn’t be doing it for the allocator inefficiencies. I’d do it for data locality. Put all the tree nodes into a flat buffer so I don’t have to chase pointers and miss the cache as often. Sure, that would have the benefit of also removing allocator inefficiencies, but that goes to my point. These decisions are made long before I start tuning for allocators.
So a 4-5x improvement means nothing if I can’t leverage the same design without killing the performance of other tasks. Call that dogmatic if you want, but I fear you have a strange definition of dogmatism. To me, dogmatic would be trying to make that tree design work in complex programs just because it comes up good on an isolated benchmark, instead of accounting for the other properties of a tree that will be required.
And this happens in Java environments just as well as native code. Java engineers spend just as much or more effort to get other properties like data locality.
While I barely understand your examples (I’m not even a code monkey… ), I upvoted you anyhow.
zima,
Thank you. I suspect I would lose the popular vote though, haha.
All the tests would do is prove that the old code written for the old compilers still works the way it did, but the tests would do nothing in terms of testing the new features of the compiler, new language constructs, or anything else now offered that the old tests weren’t testing for in the first place.
At best they might be able to take advantage of better compiler optimizations on their old code IF their tests indeed checked for potential problems in the optimizers.
But, either way, you certainly wouldn’t have any confidence that the “new” Fortran 90 code behaved at all like the original “Not Fortran 90” test code without having to redo the entire suite.
It’s one of the languages I learned from 1980-1982.
FORTRAN IV
COBOL
RPG II
BASIC
And others.
BTW: I learned BASIC on an Atari 400 with 4K of RAM, with no floppy drive and no HDD, but only cassette tapes to save and load programs and data from. Those were not the good days.
Weird as it seems, in critical industrial applications I know developers still using Fortran, Cobol, and Pascal.
For what it is worth, I have programmed in both FORTRAN 77 and FORTRAN 95 for my job this year. I think FORTRAN still has a useful niche for scientific and engineering software that is written by scientists and engineers who are not computer scientists.
Fortran runs on everything. Even an IBM 1401: https://www.youtube.com/watch?v=uFQ3sajIdaM&vl=en
Not quite everything. It runs on all systems for which there exists a FORTRAN compiler, which is actually fewer systems than ANSI C.
Also, I would have been more surprised if there wasn’t a FORTRAN implementation for the 1401; it was one of the most ubiquitous computer platforms of its time, and it’s a popular target for historical computing because it’s got a really unusual design by modern standards.
Probably the NASA Fortran compiler is validated for correctness.
The same formal methods must be applied to any new software system before it can replace the old one.
Unit tests are a joke in this case.
Wiki is your friend:
“Optimizing compilers are so difficult to get right that we dare say that no optimizing compiler is completely error-free! Thus, the most important objective in writing a compiler is that it is correct.”[7] Fraser & Hanson 1995 has a brief section on regression testing; source code is available.[8] Bailey & Davidson 2003 cover testing of procedure calls.[9] A number of articles confirm that many released compilers have significant code-correctness bugs.[10] Sheridan 2007 is probably the most recent journal article on general compiler testing.[11] Commercial compiler compliance validation suites are available from Solid Sands,[12] Perennial,[13] and Plum-Hall.[14] For most purposes, the largest body of information available on compiler testing are the Fortran[15] and Cobol[16] validation suites.
For 4 years I was developing a 3D CAD system, and most of its performance-critical code was written in Fortran, while the rest was in C. Luckily, I studied Fortran at the university back in 1997…
The odd thing about FORTRAN is the discussions about why it should or should not be used.
COBOL exists because FORTRAN’s FOO = 5; syntax was too confusing to business analysts or “Non scientists”.
Now people ask why it’s used, and the answer is “It’s good at science!”
IMHO, it’s a perfectly crumulant language, but much of a language is the libraries written for it. There are linear accelerators that have FORTRAN libraries. But not many twitter api integration libraries.
Do you have a synonym for “crumulant”? Never heard this word and it isn’t inside any dictionary I searched.
I enjoy misspelling that particular word on purpose for these kinds of discussions.
https://en.wiktionary.org/wiki/cromulent
Nice.
twitter is crumulant.