Lilith is “a POSIX-like x86-64 kernel and userspace written in Crystal”, which only raises the question of what, exactly, Crystal is. It’s a programming language whose syntax is “heavily inspired by Ruby’s, so it feels natural to read and easy to write, and has the added benefit of a lower learning curve for experienced Ruby devs”.
Neat: another ground-up graphical operating system written in a little-known language. It even shares a name with Wirth’s 1977-ish effort: https://en.wikipedia.org/wiki/Lilith_(computer) (written from scratch in Modula-2, for custom hardware made of 2900-series bit-slice chips).
Wirth’s Lilith only had 256k (bytes) of RAM. I wonder how much the new Crystal Lilith requires?
That’s about as much as the original Mac, which at 128k was the same RAM size as the original Lilith and was also partly written in a Wirth-ian language: Pascal. Also derived from the Xerox Alto, FWIW.
Please remember that the original 128K Macintosh was almost useless: even the presentation demo, mostly a slideshow, had to run on a 512K version. The original 128K Mac’s monochrome 512×342-pixel frame buffer already weighed 21,888 bytes in RAM. Yes, you can run a full graphical operating system in 64K (Geos) or 128K (Geos 2, SymbOS), but on 8-bit computers it’s less of a problem due to ISA size and limited data lengths. Even the initial 128K version of the Atari ST (GEM desktop) was abandoned before launch in favor of a base version with 512K. Who said 640K is enough for everybody? The more RAM, the more possibilities, even with a slow CPU.
As for Crystal, it looks like a rather decent PL, much like Nim or Julia. Give it a try…
Kochise,
Yeah, it was always an extremely limiting factor. Some old languages, including Pascal, offered “overlays” to help work around memory limitations.
https://en.wikipedia.org/wiki/Overlay_(programming)
A side benefit is that limited resources really discouraged bloat! Today’s software development culture has stopped caring about bloat, which drives me nuts because it often means software needs more expensive CPUs and more RAM, runs slower, and consumes more battery than it needs to. But as a dev myself, I see why it’s happening: most software companies just don’t care, and they reward the quickest and cheapest developers – this encourages rushing at the expense of debugging & optimization.
This language is garbage collected, and they have stated in their FAQ that removing the GC will “never, ever” happen, because the whole language would need to change. (FAQ https://github.com/crystal-lang/crystal/wiki/FAQ#language-x-has-feature-y-why-dont-you-have-such-feature)
Many would consider this a major drawback of the language. Despite what some people say about what modern GCs can do, one can always hand-code a similar or better solution. There will always be GC pauses, even if small, and the smaller the pauses and the longer the time between them, the longer it takes to reclaim memory.
Invincible Cow,
This has come up on osnews before and opinions are strong on the matter of garbage collection. Most garbage collectors do cause jitter, but did you know that it can still be faster overall? Having different opinions is good for diversity, however I’ve found that a lot of people in computer science keep making assumptions about GC performance that sometimes end up contradicting the evidence. I admit that I was in this group as well, making assumptions until I ended up benchmarking comparable programs under different types of memory management. What I learned forced me to reconsider my bias against GC and become more open minded when it came to thinking about GC performance.
Please read more carefully. I know that a particular GC program can have more throughput than a particular handwritten program. But every handwritten program that is slower than another program, can be rewritten to match that other program’s performance (even if that other program uses a GC), because you can just hardcode the same behaviour as the GC.
I have seen one such benchmark, and it could be trivially rewritten in that way (literally one mouse drag and one keystroke). However, many people were misled.
Also, almost no one cares about faster overall. They care about fluid animations, realtime audio without skips, and fast response when you click on something.
Low latency should be the requirement, and throughput only one of the tools to deliver it.
Invincible Cow,
Your original post did not make clear that you were suggesting re-implementing garbage collection. Sure you can do that, and you would get the performance, but not necessarily the safety of a GC language.
While you can implement your own GC in a language that doesn’t have one, there are problems doing this with old languages like C that lack reflection, which is needed to map out the heap. Technically you can create your reflection data by hand, but it becomes impractical and error prone compared to a language that supports it natively. The language’s native libraries will not support GC directly without writing your own framework. While two memory management paradigms can coexist, that’s difficult for human developers to keep track of without language aids. Your own GC is not likely to become as mature as those already developed by more resourceful teams.
https://blog.golang.org/ismmkeynote
So although I don’t disagree that technically we can implement a GC in C, it’s inherently messier than using the right tools for the job. Maybe you could justify a subset of full blown garbage collection in niche cases, but in general I suspect you are better off with a real GC language if you know you want to use GC.
I have no idea what you are talking about; you need to link a source.
“almost no one cares about faster overall.” is not really true though. There are many cases where average throughput is more important than latency. A raytracer that maximizes throughput can be more desirable than one that has the most consistent latency. On batch processing jobs, overall throughput is often more important than task latency. For hosting, throughput is often more important as long as your GC has a ceiling on the worst-case latency. I do agree there are times strict latency requirements are more important in realtime applications, but I disagree that we can/should rope everything into the same bucket. There isn’t one best way for all jobs.
You can also always handcode a better implementation for an algorithm in assembly than any language. It’s a similar waste of time in most cases.
Crystal is yet another of those bastardizations of a language, where it tries to pair a dynamic-language syntax with a language that is anything but dynamic. It’s sorta the worst of both worlds.
Then try Pony.