TrueOS, formerly PC-BSD, has a desktop environment called Lumina. It’s getting a big overhaul for Lumina 2.0, and this short interview gives some more details about what’s coming.
With Lumina Desktop 2.0 we will finally achieve our long-term goal of turning Lumina into a complete, end-to-end management system for the graphical session and removing all the current runtime dependencies from Lumina 1.x (Fluxbox, xscreensaver, compton/xcompmgr). The functionality from those utilities is now provided by Lumina Desktop itself.
[…]
The entire graphical interface has been written in QML in order to fully-utilize hardware-based GPU acceleration with OpenGL while the backend logic and management systems are still written entirely in C++. This results in blazing fast performance on the backend systems (myriad multi-threaded C++ objects) as well as a smooth and responsive graphical interface with all the bells and whistles (drag and drop, compositing, shading, etc).
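For readers unfamiliar with how a QML front end sits on top of a C++ backend, here is a minimal, purely illustrative Qt 5 sketch (not Lumina's actual code; class and property names are made up): the work lives in a C++ object, which is exposed to a GPU-accelerated QML scene as a context property.

    // Minimal sketch only – not Lumina code. Assumes Qt 5 with automoc (qmake/CMake).
    #include <QGuiApplication>
    #include <QQmlApplicationEngine>
    #include <QQmlContext>
    #include <QObject>
    #include <QString>

    // Backend logic stays in C++ ...
    class SessionBackend : public QObject {
        Q_OBJECT
    public:
        Q_INVOKABLE QString status() const { return QStringLiteral("session running"); }
    };

    int main(int argc, char *argv[]) {
        QGuiApplication app(argc, argv);
        SessionBackend backend;
        QQmlApplicationEngine engine;
        // ... and is exposed to the QML scene graph, which renders via OpenGL.
        engine.rootContext()->setContextProperty("backend", &backend);
        engine.loadData(R"(
            import QtQuick 2.9
            import QtQuick.Window 2.2
            Window {
                visible: true; width: 320; height: 120
                Text { anchors.centerIn: parent; text: backend.status() }
            }
        )");
        return app.exec();
    }

    #include "main.moc" // required because SessionBackend is declared in this .cpp file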
It is just as slow and bloated as Windows 10. The App Cafe defeats the primary strength of BSD – the ports.
Recommended Requirements
64-bit processor
4 GB of RAM
20 – 30 GB of free hard drive space on a primary partition for a graphical desktop installation.
Network card
Sound card
3D-accelerated video card
https://www.trueos.org/handbook/introducing.html
All we really need is a polished desktop BSD with a very simple installation process.
Egh. PC-BSD never played nicely with ports. My PC-BSD installs usually ended up being incredibly broken because of adding ports.
Don’t get me wrong. Ports with a build server are why I really like FreeBSD on servers, but the binary packages have gotten much better in the last decade.
I’m going to go out on a limb and say most of those requirements are due to running ZFS as the default FS. 20-30GB of disk space is right in line with Fedora Workstation install needs, by the way. It can be squeezed into a smaller space, but long term it’s going to need about 40GB.
3D acceleration is just the bar these days for GUIs.
OpenBSD is nice. Clean, tight, minimalist.
TrueOS, as the name hints, is yet another case of a nice idea – a beginner-friendly, desktop-oriented FreeBSD – becoming a Silicon Valley mess. Once the money entered the scene, it’s goodbye good ideas and hello crap wrapped in the most ridiculous brand name in tech history.
You can see the same line of thinking in Canonical with their “Unity on Mir”, which has now, a few dozen million dollars down the drain later, become “Gnome on Wayland”.
The Mir and Unity situation is not quite as dire. Mir will end up a Wayland compositor (like many of us predicted back then). Unity 8 lives on in community projects and might even deliver a feature complete desktop.
In current versions of TrueOS everything is implemented using packages on the back end (even the base system is packaged); the old PBIs are dead. When you install something with AppCafe it installs the relevant packages (there is a CLI for everything as well). The primary reason for the fairly steep system requirements is root-on-ZFS.
I’ve been out of the loop on PC-BSD for a long time, I guess. Last time I used it, it came with KDE and was an install- and user-friendly alternative to DesktopBSD while trying to overhaul the file system.
Need to give this version a shot.
As for the requirements, it’s almost certainly due to 64-bit system operation: you don’t want to be memory-starved while running the OS and apps. You could probably get by on less memory, but the system will start thrashing when it fills up the memory while multitasking.
I would not feel comfortable with the idea that any script or tool crash will crash your entire desktop. I see it is labeled as a security “feature”, but to me it looks very user-unfriendly.
Yes, that was my reaction too. Running the entire desktop in a single (multi-threaded) process does have some advantages for performance and memory efficiency, but it sounds like a nightmare for reliability… any bug anywhere in that code has the ability to crash the entire user session. There’s a reason why the big desktops favour multi-process models, where potential problems can be isolated…
Plus, the entire thing is multi-threaded C++… and while I concede that it’s possible to write good multi-threaded C++ code, it’s damned hard to get right. There’s a reason the Mozilla guys looked at the idea of better exploiting multi-threading in their C++ codebase, and decided that the first step was to design a new language to do it in…
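To make the “damned hard to get right” point concrete, here is a tiny self-contained illustration (nothing to do with Lumina’s code) of how easily shared state goes wrong in multi-threaded C++, and what the fix looks like:

    // Illustrative only. Requires C++14 (digit separators, std::thread).
    #include <atomic>
    #include <iostream>
    #include <thread>

    int main() {
        long plain = 0;             // racy: concurrent ++ is undefined behaviour
        std::atomic<long> safe{0};  // well-defined under concurrent access

        auto work = [&] {
            for (int i = 0; i < 1'000'000; ++i) {
                ++plain;            // data race: result is unpredictable
                ++safe;             // atomic increment: always correct
            }
        };

        std::thread a(work), b(work);
        a.join();
        b.join();

        // "safe" prints 2000000; "plain" usually prints less, and the program
        // technically has undefined behaviour because of the race on it.
        std::cout << "plain: " << plain << "  safe: " << safe << '\n';
    }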
Would you rather have your desktop environment crash and restart or have a component crash and have things break in unexpected and unpredictable ways with little indication as to why?
Crash big and crash now.
Linux’s BUG_ON vs WARN_ON is the same discussion.
Crash a little. At least that gives me time to save and clean up. If you really think crashing big is preferable, clearly you’ve not been employed in any serious capacity.
The general idea is to avoid having to crash at all, which is beyond the scope of this discussion. If you want to debate what is more likely to crash to begin with – FreeBSD or, say, Ubuntu – I know what I’ll bet on.
But if something is broken you want it to fail. If something crashes little you’ll never get it fixed. If you want to build reliable systems you need to kill them when they break so you can examine the corpse.
See Bryan Cantrill’s talk ‘Zebras all the way down’ at Uptime 2017 on YouTube.
For the record I’m currently employed and have to deal with software that partially fails all the time. Clearly you aren’t a developer or an admin that actually has to fix crap when it breaks.
I am an end user working as a freelancer with photography, graphics, DTP and web stuff on a Linux desktop. Stability-wise it is a little better than Win 3.x, but not by much.
This is known as “fail fast”, and it’s a strong methodology even outside native programming.
There is a difference between being error-tolerant (not crashing when you read a corrupt file) and accepting errors (allowing unexpected results from internal operations). The risk of not immediately halting when an unexpected internal error occurs is that data may be corrupted, security may be compromised, and the defect may become much harder to detect. This is as opposed to intentionally crashing (or halting on an invariant violation), which signals the first moment an issue was detected and assumes that, once an invariant no longer holds, no further action is safe.
As an example, if you detect memory corruption in a file manager, you should NOT try to save any partially copied files, metadata, etc. You should just crash, or halt without any further action. The worst-case scenario would be that there was an out-of-bounds array write of a user’s password (or private keys), and it is now sitting in the metadata or copied-file buffer as plain text, and saving the data out would place it on the filesystem.
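As a rough illustration of that fail-fast stance (purely hypothetical code, not from any real file manager), a sanity check that aborts instead of persisting possibly-corrupted state might look like this:

    // Hedged sketch of "crash on broken invariant" – not production code.
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    struct CopyBuffer {
        const unsigned char *data;
        std::size_t length;
        std::size_t capacity;   // invariant: length <= capacity
    };

    void save_buffer(const CopyBuffer &buf, std::FILE *out) {
        if (buf.length > buf.capacity) {
            // Invariant broken: memory corruption is a real possibility, so the
            // safest action is to stop before anything gets persisted to disk.
            std::fprintf(stderr, "invariant violated: refusing to write\n");
            std::abort();
        }
        std::fwrite(buf.data, 1, buf.length, out);
    }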
Yes, but if you can use interprocess isolation to keep things separate, then you can limit what has to fail to maintain safety and integrity.
That’s good in theory. You should dump Linux and use Minix or HURD.
The Linux desktop is strewn with single points of failure. We have to solve that problem before a multiprocess model is going to be of genuine benefit.
That has very little to do with the DE.
I’d argue it does, because if your kernel panics thanks to a driver for your network card, your DE is going to go down too. I’m not arguing that modularity as a general concept is bad. I’m just arguing you can’t look at any piece of the system in isolation.
So far people have made a lot of claims that Lumina 2 will essentially be less reliable than some other thing XYZ because of a single factor, when in reality, looking at the system in its entirety, it could be more reliable than whatever you are using – and nobody has any data whatsoever.
A *properly designed* single-process DE could be just as reliable as a stack of components, especially if the total quantity of code running is significantly smaller (which I can virtually guarantee).
I think it’s a bit ridiculous to argue that a DE needs to be broken into individual processes communicating via sockets and D-Bus in the name of reliability, at the cost of added complexity (cough, reliability, cough), but not also agree that the kernel should be broken down the same way. If it’s good for the goose…
I’m not saying the kernel shouldn’t be broken apart for reliability.
I’m just saying that, given my practical day-to-day needs, I’ll avoid growing the processes which hold my desktop session open and I’ll take the kernel+driver combo which both runs for months on end without issue and comfortably runs my GOG.com games during my leisure breaks.
It’s bad enough that I have to send wary glances at systemd for not understanding the concept of designing PID 1 in a microkernel-like fashion for stability and security.
Except that Lumina is agnostic of the kernel’s implementation, so the kernel isn’t really relevant to a discussion of Lumina’s implementation.
Your “general idea is to avoid having to crash at all” is also good in theory…
It’s funny how nobody said that when I was discussing how one can develop software that’s easier to debug and maintain.
I was just responding to your talking point…
Considering a desktop is something where you do work, I really don’t want to lose data. A big crash usually brings data loss.
Like I said before, the goal is not to crash at all. A big crash is likely to cost you the data you’re working on. A little crash could cost you that data too, and could additionally persist unnoticed, losing data without you even realizing it. This is why Unix (including Linux) has panic. It could try to march on, but that’s dangerous.
Wishful thinking much? Reality check: name one really useful application that is not overly complex and that does not crash eventually. MS Word, MS Excel, IDEs, AutoCAD (come on, Autodesk, make it at least runnable on Linux, even if only under Wine!), Matlab, just to name a few. To any user, it is way preferable to be able to save their work and restart the offending application than to point fingers at particular application developers and scream “Fix your mess or I will not use this piece of garbage anymore!”.
It is a fact that applications which have no option but to grow to accommodate new features as they evolve will see new bugs introduced even as some old ones are fixed.
Unless your needs are really trivial, you’d better have a chance to save that f*cking work you have been doing at closing hour, when every minute counts.
And I’m not even talking about where the fateful error crawled out from: the OS, the hardware, a driver?
Really, reality should trump wishes, always.
If you really care about your data you’d think you’d want to be running a desktop with a filesystem designed around protecting data. You’d think you’d want an operating system that has “the principle of least astonishment” as a key part of development.
All this talk about what kind of crashes we want is a bit academic considering we don’t actually know how reliable Lumina 2.0 is or will be.
A general design principle of complex systems is to make failures deterministic and reliable so a) you know the system has failed and it is no longer in a consistent state and b) you stand a chance at actually being able to fix the root problems.
We’d all love things to be truly fault tolerant (not just fault ignorant) but the reality is that basically nothing on the Linux desktop is. In fact the only piece of software quite a few people on Linux actually use that is actually fault tolerant to a large degree is ZFS.
So while yes in some circumstances an end user is going to be happier they can save their Firefox tabs before having to reboot a broken system, the fact of the matter is that this inevitably leads to sloppier software and can lead to unexpected damaged data. Broken software is essentially data corruption after all.
All desktops have the goal to not crash at all, but all of them crash sometime.
In a desktop workflow it is preferable to lose *some* of your data than to lose *all* of it.
Well yes, the goal is to never crash. But if the choices are a) one bug takes out the entire desktop, losing all unsaved work, vs b) one bug causes some small component of the desktop to crash and restart without affecting anything else – guess which one I prefer?
Almost all the comments so far are about PC-BSD/TrueOS rather than Lumina, so I want to toss in some of my experience with the desktop. One thing I like about Lumina is it’s very portable – minimal dependencies allow it to run on almost any UNIX-like platform, including odd systems like GNU/kFreeBSD.
I ran Lumina for a little over a year, with various window managers (mostly the default Fluxbox and KWin). It’s light and it doesn’t have many features, but it is pleasantly flexible. Almost everything can be moved, resized or rethemed.
My only serious concern, riding the desktop from about version 0.8 through to 1.3, was that each update introduced a lot of changes. It was still a growing project, and each new release relied on new packages or changed the look or location of items. It meant rapid progress, but also some re-learning if you were staying on the cutting edge.
I’m curious to see how the new, unified stack works out. One complaint I had with Lumina in the past was I didn’t like the way the underlying Fluxbox WM does some things, and swapping in an alternative window manager would break short-cut keys or other little features. Having the desktop be the window manager will (I’m hoping) fix all the little problems which came up with communication between the desktop and the window manager.
I thought ZFS still required ECC memory?
Requires is a strong word. It works without it, but you lose some of the advantages of the checksumming. The general consensus from the ZFS developers is that ZFS without ECC RAM is not any worse than any other FS without ECC RAM.
The XFS developers claim that XFS beats EXT and ZFS out of the water if neither has ECC.
In what respect?
Except according to a fairly well-regarded blog post by Louwrentius (and I admit it’s a dated article):
Comparing ZFS to ext4 with fsck is not even close to apples for apples. ZFS has more comprehensive error correction than fsck built in, constantly working. It doesn’t just handle superblocks and metadata either; it also checks and heals the data itself, which fsck does not do.
Fact of the matter is, with ZFS you’re massively more likely to notice data errors due to memory corruption immediately, whereas with ext4 and XFS you’ll find out some day when you try to use the data. ext4 will gladly read back corrupt data and hand it to you; it’s oblivious. ZFS is going to tell you; it checks every block.
The idea that you’re going to trash your zpool completely because of memory errors is so unlikely as to be irrelevant when comparing to ext4. Ext4 is going to hose your data way before ZFS does due to bad ram.
I would say it would be like saying you shouldn’t wear a seatbelt (ZFS) because there’s a 0.02% chance it’ll make it hard to escape a burning car. Okay, but 99.98% of the time it’s going to save your life.
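For anyone wondering what “it checks every block” means in practice, here is a purely conceptual sketch (nothing like ZFS’s real fletcher/SHA checksums or on-disk format; names are made up) of storing a checksum alongside each block and verifying it on read instead of silently handing back whatever is on disk:

    // Conceptual illustration only – not ZFS code.
    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    struct Block {
        std::vector<std::uint8_t> payload;
        std::uint64_t checksum;   // written together with the payload
    };

    // Simple stand-in for a real block checksum.
    std::uint64_t fletcher64ish(const std::vector<std::uint8_t> &data) {
        std::uint64_t a = 0, b = 0;
        for (std::uint8_t byte : data) { a += byte; b += a; }
        return (b << 32) | (a & 0xffffffffULL);
    }

    Block write_block(const std::vector<std::uint8_t> &payload) {
        return Block{payload, fletcher64ish(payload)};
    }

    std::vector<std::uint8_t> read_block(const Block &blk) {
        // Verify on every read: corruption is reported, not passed along.
        if (fletcher64ish(blk.payload) != blk.checksum)
            throw std::runtime_error("checksum mismatch: corruption detected");
        return blk.payload;   // an ext4-style read would just return the bytes
    }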
ZFS has never required ECC memory. It’s just a nice bonus to have if you want to use internal checksums. Using ZFS without ECC is just like using every other file system without ECC.
I leave my desktop logged in for weeks on end. This would be anathema to me, given that every process on my desktop has crashed, frozen, deadlocked, or livelocked (same outward symptoms as a deadlock, but with a CPU core pegged at 100%) on me at some point or other and I’m very glad that, 99% of the time, the crash is in neither /usr/bin/X nor whichever session manager I happened to be using at the time. (ksmserver, lxsession, etc.)
Heck, it’s for that reason that I’ll be very cautious about moving to Wayland. I want to wait for KWin (SSD or DWD forever) and my apps to incorporate some variation on Enlightenment’s compositor crash recovery protocol and I want to verify the claim in one of Martin Flöser’s previous blog posts that KWin is now notable for the work put into it to improve code quality and broaden automated testing.
When (not if) something like KWin or Openbox or Plasma or LXpanel or PCManFM or any other desktop component that isn’t the X server or the session manager dies, the session manager restarts it and I continue working.
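For illustration, the restart behaviour described here can be sketched with a trivial Qt supervisor (hypothetical code, not how ksmserver or lxsession is actually written): launch a desktop component and relaunch it whenever it exits, so a crash in one component never takes the session down.

    // Hedged sketch only. Assumes Qt 5.
    #include <QCoreApplication>
    #include <QObject>
    #include <QProcess>
    #include <QString>
    #include <QStringList>
    #include <QtGlobal>

    static void superviseComponent(const QString &program) {
        auto *proc = new QProcess;   // intentionally lives for the whole session
        QObject::connect(proc,
                         QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
                         [proc, program](int code, QProcess::ExitStatus) {
                             qWarning("%s exited (code %d), restarting",
                                      qPrintable(program), code);
                             proc->start(program, QStringList());   // restart it
                         });
        proc->start(program, QStringList());
    }

    int main(int argc, char *argv[]) {
        QCoreApplication app(argc, argv);
        superviseComponent("lxpanel");   // example component names only
        superviseComponent("pcmanfm");
        return app.exec();
    }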
No, we don’t want ZFS and FreeBSD. I don’t like the fact that Lumina 2.0 will actually crash if it tries reading another process’s memory. What I really want is systemd and Btrfs, which don’t work properly to begin with. Throws the champagne out with the cork.