Dedication asks each of its adherents to have faith even as time and energy pass from one year to the next. Dedication brings with it a variety of challenges, but also rewards. Dedication is something most people claim to have, but few readily exhibit it in the face of adversity. As of today, Aug. 18, 2021, the Haiku Project is celebrating two decades of dedication, marking the 20th anniversary of the founding of the Haiku operating system and the start of its ride to save, maintain, and expand upon the BeOS legacy from which it sprang.
Congratulations to the Haiku project and all of its contributors.
I glanced through the O’Reilly book on BeOS. The first chapter really gets into explaining BeOS. Glancing through the BeOS code at the end of the chapter, it reminds me a lot of Borland’s C++ Builder.
It’s interesting to note from the discussion how a lot of developers wrapped existing application code in a single thread rather than rearranging their code to take advantage of BeOS. It was around this time that some developers would recompile a C application with a C++ compiler and use C++ as a marketing buzzword.
Erlang is pretty niche too.
“Glancing through the BeOS code at the end of the chapter, it reminds me a lot of Borland’s C++ Builder.”
IIRC it was a version of Metrowerks CodeWarrior.
BeIDE was a reimplementation of the CodeWarrior IDE.
Are you sure it was just a reimplementation? I seem to recall it was actually licensed from Metrowerks (including use of the mwcc compiler on BeOS PPC).
BeIDE was a reimplementation by Metrowerks. It used the BeAPI, whereas CodeWarrior at the time used the Mac Toolbox. The UI was reimplemented using the BeAPI, but it looked a lot like the CodeWarrior UI because Metrowerks made it.
The compilers were the same codebase as the ones for MacOS, but again, at the time the Mac had absolutely no standard terminal, so the ones for BeOS were built to use a standard Unix style terminal. The code the compiler generated was identical, bar the extra stuff BeOS used that was not strictly part of the PEF format spec (for example, resources that are not stored in a resource fork).
Well, isn’t that always the way with engineers? You don’t have the time/desire/belief to port code to the new thing, so you make the new thing look like the old thing. With exceptions around async, concurrent collections, and the like, engineers are still bad at writing parallel code.
It’s getting better slowly, but most people still just don’t think in terms of parallelism, and that gets reflected in the code they write.
jockm,
We write enough bugs before throwing multiple threads into the mix, haha.
While we can use multithreading in all sorts of software designs, like callbacks, handling blocking client sockets, producer/consumer algorithms, etc., personally I’m against using threads for micro operations like this. They don’t scale well, they make race conditions and concurrent access faults notoriously complex to handle, and thread synchronization can be a nightmare. If you need to cancel a thread operation, it’s very easy to create memory leaks. There are huge advantages to single threading, including not needing any synchronization primitives to protect data structures and not bogging the CPU down with cache coherency traffic. So I prefer single threaded event loops, and I only use threads when there are CPU bottlenecked loads that demand it.
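Just to make that concrete, here is roughly the shape I mean: one poll() based dispatcher owns all the I/O, and anything CPU bound gets handed to workers instead of running inside the loop. This is only an illustrative sketch; registration, timers, and error handling are omitted.

    // Sketch of a single threaded event loop using POSIX poll().
    // One thread owns every file descriptor and its callback, so no locks
    // are needed around the data they touch. CPU-heavy jobs would be
    // handed off to worker threads rather than run inside the loop.
    #include <poll.h>
    #include <functional>
    #include <map>
    #include <vector>

    int main() {
        // fd -> callback; sockets, pipes, timerfds, etc. get registered here.
        std::map<int, std::function<void(int)>> handlers;

        std::vector<pollfd> fds;
        for (const auto& entry : handlers)
            fds.push_back({entry.first, POLLIN, 0});

        while (!fds.empty()) {
            if (poll(fds.data(), fds.size(), -1) < 0)
                break;                      // error handling omitted
            for (const auto& p : fds)
                if (p.revents & POLLIN)
                    handlers[p.fd](p.fd);   // every callback runs on this one thread
        }
        return 0;
    }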
These days with GPUs being far more scalable than CPUs, I think that’s technically the better way to go for parallelism. Although I have some serious gripes about nvidia as the dominant GPU maker.
@jockm
People barely get portability. God knows I’ve written about portability layers enough times for people to get it. There isn’t a single framework from SDL to wxWindows which gets it. The portability layer is a very thin layer simply dealing with versioning and quirks between compilers, OSes, SDK versions, and bit length (32 vs 64 bit). One #include and you can work with multiple compilers and OSes. You also need an OpenGL portability file to cope with extensions and static and dynamic loading, and a third file to provide function wrappers. All of this is a really thin layer, barely above the level of the compiler, where you maximise portability.
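To give a flavour of how thin that first file is: it’s mostly compiler/OS/bit-length detection plus tiny wrappers for the handful of calls that genuinely differ. Something along these lines, though this is a simplified illustration rather than my actual code, and the dyn_open/dyn_sym names are made up:

    // platform.h -- the whole "portability layer" is a handful of
    // detection macros plus wrappers for the few calls that differ.
    #pragma once

    #if defined(_WIN32)
      #define OS_WINDOWS 1
    #elif defined(__HAIKU__)
      #define OS_HAIKU 1
    #elif defined(__linux__)
      #define OS_LINUX 1
    #endif

    #if defined(_MSC_VER)
      #define COMPILER_MSVC 1
    #elif defined(__GNUC__)
      #define COMPILER_GCC 1
    #endif

    #if defined(_WIN64) || defined(__x86_64__) || defined(__aarch64__)
      #define PTR_64 1                     // 64 bit target
    #else
      #define PTR_32 1                     // 32 bit target
    #endif

    // Example wrapper: dynamic library loading differs per OS.
    #if OS_WINDOWS
      #include <windows.h>
      typedef HMODULE dyn_lib;
      inline dyn_lib dyn_open(const char* name) { return LoadLibraryA(name); }
      inline void* dyn_sym(dyn_lib lib, const char* sym) { return (void*)GetProcAddress(lib, sym); }
    #else
      #include <dlfcn.h>
      typedef void* dyn_lib;
      inline dyn_lib dyn_open(const char* name) { return dlopen(name, RTLD_NOW); }
      inline void* dyn_sym(dyn_lib lib, const char* sym) { return dlsym(lib, sym); }
    #endif

The OpenGL file is the same idea: one place that resolves extension entry points and papers over static versus dynamic loading.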
I based my framework on Borland’s VCL and pretty much avoided the standard library where I could. (Back then you’d probably use Boost, as the STL was generally flaky.) Handling memory safe allocation, threads, exception handling, and garbage collection tuned for performance was pretty easy. A few years later, memory allocation and garbage collection and so on became more of a thing at the compiler and general toolkit level, but back then, if you wanted high performance, real time code that wouldn’t blow up in your face, you had to do it yourself. Personally I think you still do, but I don’t know the state of compilers and toolkits today.
Given the performance of most CPUs over the past 2-3 generations, the majority of everyday applications for most users are GPU limited, not CPU limited.
Since I no longer code I’ll confess I haven’t thought too deeply about how to implement parallelism, but it definitely can provide benefit in making the best use of the CPU, whether locally or remotely. With games, a fair number of tasks can be broken down to operate in parallel, and this can be done gracefully, but ideally you need to think about it before you begin. I don’t know about your typical business class application as I don’t spend any time thinking about it, but I don’t see why a similar approach cannot be taken.
As for GPUs, they’re not really parallel in common use on the developer side of the API. You typically have a single pipeline you bang stuff through, and stuff has to be done in a certain order to prevent pipeline stalls. From that point on it’s a question of the GPU breaking everything down and parcelling it out, so the more GPU streams you have relative to the number of pixels and operations per pixel, the faster it goes. Back when I was doing high performance real time graphics you could only work with one thread attached to a graphics surface, as things tended not to work if you used more than one. Could you use more than one thread if it was supported? Probably. It’s something you need to look into more deeply, because you need to balance keeping the GPU pipeline full versus the CPU versus how expensive the thread switch is. There’s no reason why the code abstraction around this cannot be a common component.
It’s really old now, but the curious may like to flip through “Michael Abrash’s Graphics Programming Black Book, Special Edition.” The book is available as a free download. I’ve never read it myself as I didn’t need to, but the basics still make sense today.
https://www.gamasutra.com/view/news/91373/Abrashs_Graphics_Programming_Black_Book_Available_As_A_Free_Download.php
http://floppsie.comp.glam.ac.uk/download/pdf/abrash-black-book.pdf
With respect to portability, for a glimpse into Microsoft’s “embrace, extend, extinguish” monoculture attitude, you may wish to compare the independent id Software’s attitude and approach to Quake then versus the Microsoft-owned attitude and approach to Quake today.
I don’t personally agree with the editorial line of DF retro but that’s another topic!
DF Retro: Quake – The Game, The Technology, The Ports, The Legacy
https://www.youtube.com/watch?v=0KxRXZhQuY8
Quake – Official Trailer (2021)
https://www.youtube.com/watch?v=vi-bdUd9J3E
Borland’s VCL was written in Object Pascal. That used to drive C++ programmers insane. They could not handle that the library restricted them to single inheritance and such like.
I feel like your world view on portability might be a little dated. The issue I see most these days is “I wrote this code in niche language and so you need to port an entire compiler suite to use it on Platform X”. Most of the C and C++ I deal with is very portable, and we have code that compiles for microcontrollers, Windows, Linux and various flavours of Android via NDK. It’s not very hard to make stuff cross platform, especially when a build server fails tests if you break stuff.
henderson101,
The C/C++ languages themselves are highly portable, but it’s really the libraries and APIs we use that are the weakest links in the chain. For example, I’ve written a C app that grabs frames from a webcam on Linux, but it certainly won’t work on Windows.
There are portable frameworks like OpenCV that can do this, but they can carry unwanted bloat, and dependencies add up. In my case I was targeting an ARM SBC without much RAM or upstream library support. I figured it would be easier to write a Linux specific framegrabber than to port OpenCV, most of which would end up being bloat anyways.
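To show how little code the Linux-only path actually needs, the heart of a V4L2 frame grabber is roughly the following. This is only a sketch (in C++-flavoured code, though mine is plain C): the device path and resolution are example values, and the error checks and mmap streaming loop are omitted.

    // Minimal V4L2 frame grabber sketch (Linux only). Real code checks
    // every ioctl result and negotiates the format the driver hands back.
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main() {
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) return 1;

        v4l2_capability cap{};
        ioctl(fd, VIDIOC_QUERYCAP, &cap);        // confirm it is a capture device

        v4l2_format fmt{};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width       = 640;
        fmt.fmt.pix.height      = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        fmt.fmt.pix.field       = V4L2_FIELD_ANY;
        ioctl(fd, VIDIOC_S_FMT, &fmt);           // driver may adjust these values

        // From here: VIDIOC_REQBUFS, mmap the buffers, then a
        // VIDIOC_QBUF / VIDIOC_DQBUF loop to pull frames.
        close(fd);
        return 0;
    }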
Regarding Android, in the past I have tried to unify Android and traditional Linux programming, but the reality is these are very different targets despite supporting the same languages & kernel API. So while language support is crucial, we also need to look at the library/framework ecosystem for easy portability.
It depends. In the old days, supporting Windows and Linux would mean two backend implementations. You could mask everything behind a common API, but it still meant more work to support both. These days many of our frameworks have already done this work for us, so you can use one API and it will work on a number of platforms. What you say is true, but only if your C/C++ code uses a multiplatform framework.
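A trivial example of what I mean: recursively walking a directory tree used to need a win32 backend and a POSIX backend behind your common API, whereas today the standard library has already done the masking for you. Just as an illustration (C++17):

    // One portable implementation instead of two platform backends,
    // courtesy of C++17's std::filesystem.
    #include <filesystem>
    #include <iostream>

    int main() {
        namespace fs = std::filesystem;
        for (const auto& entry : fs::recursive_directory_iterator("."))
            std::cout << entry.path().string() << '\n';
        return 0;
    }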
HollyB,
I get the need for portability obviously, but can I ask why you don’t think SDL is good in terms of portability?
I don’t use SDL much because the library is inadequate for my GUI needs, it’s more for game development where authors typically write their own in-game interface. But strictly in terms of portability SDL gets you there, no?
I’ve looked at more toolkits than I can remember, but most of my work is backend development. Unless you count HTML, haha. I think Unity is a popular framework for multiplatform games today, at least commercial ones.
Well, I think it depends; most game engines still rely on the CPU to advance the game state & physics, which can sometimes be a bigger bottleneck than GPU rendering. It really depends on the game. This is the case with Kerbal Space Program, for example. Other games have trivial physics and all the load is on the GPU.
I personally would write all computationally intensive operations, including physics, to use GPGPUs instead of the CPU, since I believe they are better suited to scaling operations up to run in parallel across thousands of items. But this brings up an interesting dilemma, because you have to address the fact that CUDA, the most popular GPGPU platform, is proprietary and not compatible with competing GPUs from Intel, Apple, AMD, etc. Nvidia’s proprietary drivers can be difficult to use on Linux. OpenCL is clearly the most portable choice, but Nvidia has not put as much effort into optimizing it and Apple has even deprecated it. Many platforms don’t include it out of the box, making it unreliable and difficult to install.
Conversely, if you write generic CPU code & OpenGL, it will work just about everywhere as long as you use a portable framework, which is why I think most developers have tended to go this route.
CUDA is multithread safe now. Off the top of my head I don’t know about OpenCL or OpenGL.
I owned it and read it
I did my share of DOS & VGA programming and had a lot of fun doing it. It would not be very useful today though, at least not without a lot of updates since people don’t program graphics that way anymore. I suppose the code should still work in an emulator.
> C++ Builder
Not really. The C++ Builder code sat on top of the VCL and was therefore, by definition, very much single inheritance. The BeOS code owes more to Apple’s MacApp framework, which also inspired the VCL.
A lot of developers from big corporations with large code bases tried to port their code in a single thread. But we indies just wrote multithreaded BeOS apps.
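For anyone who never used the BeAPI: it pushed you toward threads by default. From memory (so details may be off, and the signature string is just an example), a minimal native app looks something like this, and every window you Show() gets its own message-loop thread on top of the application’s:

    // Minimal BeOS/Haiku app sketch. The BApplication object runs the app's
    // main message loop; each BWindow is its own looper, so calling Show()
    // starts a dedicated message-loop thread for that window.
    #include <Application.h>
    #include <Window.h>

    class HelloWindow : public BWindow {
    public:
        HelloWindow()
            : BWindow(BRect(100, 100, 400, 300), "Hello", B_TITLED_WINDOW, 0) {}

        bool QuitRequested() override {
            be_app->PostMessage(B_QUIT_REQUESTED);  // closing the window quits the app
            return true;
        }
    };

    class HelloApp : public BApplication {
    public:
        HelloApp() : BApplication("application/x-vnd.example-hello") {}
        void ReadyToRun() override { (new HelloWindow())->Show(); }
    };

    int main() {
        HelloApp app;
        app.Run();   // blocks in the application's own message loop
        return 0;
    }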
There was also the opposite approach. There was an office suite native to BeOS, Gobe Productive. If I remember correctly, they made a Windows version by porting the necessary BeOS APIs to Windows.
I believe this happened more than once. I think the version of the Eddie editor for Linux also ported BeAPI.
Congratulations to the amazing and dedicated Haiku developers!
BeOS was a breath of fresh air when it was new. Haiku has completely superseded it as far as I am concerned. It is a shockingly well-rounded operating system that is continuing to keep the original dream and priorities alive while evolving it into a contemporary platform.
It brings me a smile every time I fire it up. What more can one ask of a niche operating system?
It’s beautiful isn’t it?