The history of computing could arguably be divided into three eras: that of mainframes, minicomputers, and microcomputers. Minicomputers provided an important bridge between the first mainframes and the ubiquitous micros of today. This is the story of the PDP-11, the most influential and successful minicomputer ever.
A deep dive into the inner workings of the PDP-11, specifically how to use the machine to do actual computing tasks. I lack the skills to do anything with a machine like this, but they look and feel so incredibly nice.
??
All a PDP-11 is, is a terminal-based system, which means you have no graphics. That makes it pretty trivial to code for, since all you’re dealing with is characters and nothing else. If it’s a text-mode program, you can probably implement it on a PDP-11.
With the addition of C compilers and BSD UNIX, coding for it isn’t too much different from writing text-mode programs for any other UNIX-like system.
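To make that concrete, here is a minimal sketch of the kind of text-mode program being described: nothing but character I/O through stdio. (My own illustration in modern C; code built on a real PDP-11 under 2.11BSD would likely use older K&R-style declarations, but the shape would be the same.)

```c
#include <stdio.h>
#include <ctype.h>

/* Echo stdin to stdout, upper-casing each character.
 * Everything is just a stream of characters going to a terminal,
 * which is all a text-mode UNIX program ever deals with. */
int main(void)
{
    int c;

    while ((c = getchar()) != EOF)
        putchar(toupper(c));

    return 0;
}
```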
I think they mean they lack the skills to do anything interesting under those resource constraints, and that they like the look and feel of the hardware.
Since I don’t feel like creating an Ars account, I’ll just comment here.
As a Rust programmer, I find this blanket-statement phrasing makes my inner child want to respond with “Oh yeah? Then why have none of the ‘GC everything’ languages displaced C or C++?” (And I say that recognizing that “technique” in the singular could be a typo, and that reference-counted pointers are a form of garbage collection by the academic definition of the term.)
Assuming a non-superscalar architecture. With modern CPUs, it’s highly impractical to find a sufficient supply of people who satisfy the “assuming a programmer who understands the architecture as well as a compiler does” requirement.
ssokolow,
I haven’t read the full article yet, but I agree that paragraph is a bit silly.
I also object to the author’s use of “always” there!
It’s gotten pretty hard to beat an optimizing compiler. I have found cases where I could do better in assembly, but it often came down to the fact that the C language doesn’t expose all the opcodes available on the CPU. Since you couldn’t use the opcode, in C you’d be forced to write a high-level algorithm that performs worse. However, this has gotten better with the availability of intrinsics in GCC.
https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
(i.e. something like __builtin_clz)
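To illustrate that point with a hedged sketch of my own (not taken from the article or this thread): counting leading zeros with a portable C loop versus GCC’s __builtin_clz, which compiles down to the CPU’s count-leading-zeros instruction (e.g. LZCNT on newer x86, CLZ on ARM) where one exists.

```c
#include <limits.h>

/* Portable C version: no single opcode is exposed, so you end up
 * writing a loop the compiler may or may not recognize. */
unsigned clz_portable(unsigned x)
{
    unsigned n = 0;

    if (x == 0)
        return sizeof(unsigned) * CHAR_BIT;
    while (!(x & (1u << (sizeof(unsigned) * CHAR_BIT - 1)))) {
        x <<= 1;
        n++;
    }
    return n;
}

/* GCC intrinsic version: maps to the hardware instruction where one
 * exists. Note that __builtin_clz(0) is undefined, so the zero case
 * still has to be handled by hand. */
unsigned clz_builtin(unsigned x)
{
    if (x == 0)
        return sizeof(unsigned) * CHAR_BIT;
    return (unsigned)__builtin_clz(x);
}
```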
Several years back I was kind of disappointed in GCC’s ability to optimize SSE/XMM code, but once again intrinsics could help without the need to drop into assembly. Intel’s compiler had the more advanced optimizer, but it would be interesting to retest that today.
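For a rough idea of what using SSE through intrinsics rather than hand-written assembly looks like (my own sketch; the function name and the lack of tail handling are for illustration only): the compiler still does register allocation and instruction scheduling, you just name the XMM operations directly.

```c
#include <stddef.h>
#include <xmmintrin.h>  /* SSE intrinsics; supported by GCC, Clang and ICC */

/* Add two float arrays four lanes at a time in XMM registers.
 * Assumes n is a multiple of 4; a real routine would handle the remainder. */
void add_floats_sse(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);            /* unaligned 4-float load */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&dst[i], _mm_add_ps(va, vb)); /* packed add, then store */
    }
}
```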
It is not just that it is impractical to find a sufficient supply of people; the author also discounted one very important mantra of current programming practice: maintainability. These days it is very unlikely that you would sacrifice clarity for a modest speedup, even when writing assembly, which is very different from the days of yore. Compilers don’t have to worry about that and can use whatever tricks they have available.
Naturally, but it’s an article about a system from the era when the trade-offs were reversed, so I didn’t think that needed saying.
Good grief, Acobar. The context of the article is assembly programming on the PDP-11 in the ’70s and ’80s. There was no reference to current programming techniques or maintainability. C’mon, guys.
I wrote the original article, and I think both of those statements are pretty accurate as stated. If you disagree, please provide evidence to the contrary. Please note that I said nothing against Rust, which is quite excellent. The context was assembly language, which has zero memory management.
AndrewZ,
Just a nitpick, but “Programming today uses the modern technique of garbage collection to automate memory management.” seems to suggest that all modern programs use garbage collection. Garbage-collected languages have gained popularity, but there are still tons of projects using unmanaged languages like C/C++.
This one is more complex: “These are the key advantages of assembly programming over high-level languages—assembler code always runs faster and uses less memory.”
I’m not convinced about the “always” bit, but I am unclear on your semantics. Do you restrict “high-level languages” to mean garbage-collected languages only? GC programs tend not to be memory-efficient because they keep objects around longer, although performance-wise they can sometimes rival unmanaged code.
I’ve done quite a bit of x86 assembly, and IMHO it’s steadily gotten harder to beat a good compiler. The thing is, assembly used to be easy when micro-architectures were simple, but as time went on it turned into a game of chess where one change here can have ramifications several instructions down the line. A compiler with deep knowledge of the micro-architecture can analyze deep paths, which is hard for humans to do consistently across a large body of code.
Of course it would take a long time before compilers would produce good code! Early compilers often produced naive suboptimal machine code.
That’s basically my issue. Both of those statements imply that certain generalizations hold true about modern computing when they don’t.
For example: what is most machine learning code written in? C or C++. Why? Because Python (in practice, CPython) is the popular language for ML, CPython is slow unless you do the heavy lifting in a compiled extension, CPython has its own garbage collection using reference counting and a cycle collector, and garbage collectors are solitary beasts.
What is the Linux kernel written in? C. What is Windows written in? C and C++. What is XNU written in? C and C++. What did Google choose to write the Zircon kernel for Fuchsia in? C++.
None of those are going to be implemented purely using std::shared_ptr, so all of them are counterexamples to even the most liberal interpretation of “Programming today uses the modern technique of garbage collection to automate memory management.” They’re examples of “programming today” that doesn’t use garbage collection.
The irony being that modern CPUs are basically more parallel and less orthogonal reinventions of the PDP-11.
Except for some very well-defined numerical methods, I don’t think I have seen direct assembler programming beat the output of an optimizing compiler for the same routine. Unless someone is a bit of a masochist or working on deeply embedded systems, assembly is just not worth the effort.
At least it’s an interesting and non-political article; that’s hard to find on other FLOSS/Linux-oriented sites.