A graphical client for plain-text protocols written in Rust with GTK. It currently supports the Gemini, Gopher and Finger protocols.
Just cool.
This project is so pretentious, it uses a GTK theme that makes it look like it’s from 20 years ago, despite being GTK+3 based.
GTK3 looks like 80’s design. It can’t even handle graphics, only CSS.
That’s just the NsCDE default theme he was using. Castor uses your desktop’s Gtk theme like any other application.
You say that as if it’s perfectly normal to use NsCDE.
Nothing cool about the Gemini protocol. Just braindead and useless.
That was my reaction reading the protocol, too. It’s pretty much just HTTP 0.9 with an explicitly defined markup that is Markdown-ish with fewer features. And then, despite all the simplicity they gain from this lobotomized protocol and format, they mandate TLS.
I mean, what are they trying to address here? If it’s that HTTP is too complex, you could do a lot better than simply dialing back the version to 0.9. If it’s that HTML/CSS/JS is too complex, just invent a new file format, give it a mime-type, and define a standard for how its rendered.
It’s a strangely arbitrary, and frankly uninspiring, set of design choices, given all we’ve learned about the hidden complexity in web standards and protocols in the past 30 years.
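To make the “HTTP 0.9 with mandatory TLS” comparison concrete, here is a rough sketch of what a Gemini exchange looks like on the wire: the client opens a TLS connection to port 1965, sends a single absolute URL terminated by CRLF, and the server replies with a one-line header (`<STATUS> <META>`) followed by the body. The helper names below are hypothetical, and the TLS socket handling is omitted; this is just an illustration of the framing, not a client implementation.

```python
def build_request(url: str) -> bytes:
    """A Gemini request is just the URL followed by CRLF --
    there are no headers, methods, or versions."""
    return (url + "\r\n").encode("utf-8")

def parse_response_header(header: bytes) -> tuple:
    """Split the server's '<STATUS> <META>\\r\\n' line into
    (status, meta). For a 2x success status, META is the MIME
    type of the body (typically text/gemini)."""
    line = header.decode("utf-8").rstrip("\r\n")
    status, _, meta = line.partition(" ")
    return int(status), meta

# Example framing for a successful page fetch:
req = build_request("gemini://example.org/")
status, meta = parse_response_header(b"20 text/gemini\r\n")
```

Compare that to HTTP 0.9’s `GET /path` line: the main additions are the status/meta header and the TLS requirement.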
They’re specifically trying to go for tracking-resistant, hence mandating TLS so MITM-based tracking doesn’t work, I believe.
But only if the client is not sending the domain in clear text (SNI: server name indication). Too bad they forgot to mention that in the spec …
eSNI will hopefully resolve that problem.
anevilyak,
It requires infrastructure changes that are not so widely supported yet, so I don’t think it is very useful for the time being. eSNI could slowly happen, but it actually shifts the problem over to DNS, which is another weak point for anonymity.
The “solution” being promoted for private DNS is to shift DNS lookups to centralized DNS providers like google & cloudflare over encrypted TLS/HTTPS. However this compromises the benefits of decentralized DNS and opens up the potential for a majority of users to be tracked via their DNS requests to these centralized providers, all of which are in US jurisdictions today as far as I can tell. Should we really trust these private corporations to never monetize the data passing through their servers? Historically that’s a bad bet. And even if you want to have faith in them not to track us, do you really trust the US and other governments not to exploit their legal power to siphon up our metadata exactly in the same way they already do with telephone calls?
I say centralized services are inherently much worse for privacy. I am not against encrypted DNS, but I am against the way it’s currently being deployed. IMHO this centralization is something that we will ultimately regret, centralized DNS is going to hurt privacy down the line.
None of this is really directed at you, anevilyak
The people behind the Gemini project are Gopher enthusiasts who are trying to remedy a few of the shortcomings of Gopher. It is meant to appeal to people who already use Gopher or are interested in exploring the technology of yesteryear. In that sense, it’s not all that different from AROS, Haiku, or any other recreation of the past.
The design has nothing to do with HTTP, HTML, CSS, or JS. It would be a mistake to try to understand Gemini in that context.
Man I haven’t fingered anybody since the last time!
For my oldies, I’ve given up on accessing the Web directly; SSL is just not going to happen on anything prior to DOS. On DOS I use the Links 2 browser, which does a good job. So this niche is not really for the oldies, as it requires SSL.
On anything older, I just use a good term program with Xmodem support. Then I use either a GuruModem or a Raspberry Pi Zero with an RS-232 HAT, and the user is good to go from there. If the oldie supports VT-100, then Links 2 at the Linux console does a great job at accessing just about any site. For older term programs (e.g., on CP/M), I “export TERM=adm3a” and then use Lynx, which respects the TERM setting. Links 2 doesn’t in that scenario, and you wind up with a lot of terminal escape codes everywhere.
Using this combination, I’ve been able to surf the modern web even on an old Epson PX-8 running CP/M with only an 8-line screen – “stty rows 8”. I use Modem7 on the CP/M machines, including the PX-8, as it has good Xmodem support.
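The Linux side of that serial-link setup boils down to a couple of environment tweaks before launching the browser. A rough sketch, assuming Lynx is installed and the remote terminal is ADM-3A-compatible with an 8-line display:

```shell
# Advertise an ADM-3A-compatible terminal so curses apps
# don't emit escape codes the old term program can't handle
export TERM=adm3a

# Match the Epson PX-8's 8-line display
stty rows 8

# Lynx respects the TERM setting and degrades gracefully
lynx https://example.org/
```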
fretinator,
One time I went on a trip and brought an older laptop with me that hadn’t been used or updated in a long time. When I went to use it, the WiFi where I was staying was gated by an HTTPS authentication form; however, they had blocked older SSL protocols (around the time of the SSL POODLE vulnerabilities, IIRC), and all of the browsers I tried were rejected by the access point’s authentication methods. It was an awful catch-22 scenario: the browser was too old, yet I couldn’t update it because authenticating to reach the internet and download a new browser required that I already have a new browser.
You don’t really think of SSL as being a barrier for computers with modern operating systems, but it is in fact a moving target. That sucked.