The search functionality of FrogFind is essentially a custom wrapper around DuckDuckGo search, converting the results to extremely basic HTML that old browsers can read. When clicking through to pages from search results, those pages are processed through a PHP port of Mozilla’s Readability, the library that powers Firefox’s reader mode. I then strip the results down further to the most basic HTML possible.
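FrogFind itself is written in PHP and its actual code isn't shown here, but as a rough illustration of that final strip-down step, here is a minimal sketch in Python using only the standard library. The tag whitelist and every name below are my own assumptions, not FrogFind's implementation:

```python
from html.parser import HTMLParser

# Tags that even very old browsers (Mosaic-era) can render.
BASIC_TAGS = {"p", "br", "a", "h1", "h2", "h3", "ul", "ol", "li", "b", "i"}
# Tags whose *content* should be dropped entirely, not just unwrapped.
DROP_CONTENT = {"script", "style"}

class BasicHTMLStripper(HTMLParser):
    """Drop every tag outside the whitelist, keeping its text content;
    links keep only their href attribute."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in DROP_CONTENT:
            self.skip += 1
        elif tag in BASIC_TAGS:
            if tag == "a":
                self.out.append('<a href="%s">' % dict(attrs).get("href", ""))
            else:
                self.out.append("<%s>" % tag)

    def handle_endtag(self, tag):
        if tag in DROP_CONTENT:
            self.skip -= 1
        elif tag in BASIC_TAGS and tag != "br":
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

def strip_to_basic_html(html):
    parser = BasicHTMLStripper()
    parser.feed(html)
    return "".join(parser.out)
```

For example, `strip_to_basic_html('<div class="hero"><p>Hi <b>there</b>!</p></div>')` unwraps the `<div>` and returns `<p>Hi <b>there</b>!</p>`. A real pipeline would also need to handle character encodings, malformed markup, and rewriting image tags.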
FrogFind is a clever and incredibly useful search engine if you like to play around with old, outdated hardware with terrible browsers. It makes much of the web accessible, fast, and usable on my old Palm devices, for instance, but anything that at least has a browser should work just fine.
There are quite a few old and unmaintained platforms out there that can no longer access the current web, and tools like FrogFind address this problem in a very usable way. It’s the creation of YouTuber ActionRetro, whose excellent channel is full of awesome vintage Mac (and other platform) content.
This is interesting, but it seems to have problems with basic images.
Huh. Funny how coincidences come up sometimes.
I’m currently poking at a similar “bake Readability into a server to on-demand reformat content” idea too.
Mine is to have something similar to miniserve but instead heuristically turning a folder full of arbitrary HTML, PDF, EPUB, DOC, DOCX, ODT, RTF, TXT, etc. files into an eReader app with at least a minimal library view.
(The intent being so I can just fire it up on my LAN, tap a bookmark or scan a QR code, and use any mobile device with a browser, including the hand-me-down iPhone I have for website testing, as a way to read eBooks without having to worry about inconsistencies in format, app UI, or how to bookmark my recent document history and scroll position within a given file as I move from device to device.)
Well… the scroll position part will require JavaScript and WebSockets, but the rest should be retro-compatible once I turn my attention away from things like putting together a test corpus to scrape together suitably reliable heuristics for converting things like PDF-extracted text into well-formatted HTML.
Interesting, I’ve thought of setting up squid on my own home network and sharing the config around to retro sites for using old computers to browse. One of the issues with most sites is that they force HTTPS (both a good and bad thing), and decrypting SSL even on a 68060 @ 50 MHz is slow…
But what about proxying HTTPS websites via HTTP? Vintage computers can’t access most modern HTTPS websites.
@birdie that is what I was suggesting. AmiSSL is very up to date for the Amiga, but even with that it is slow outside of a PPC one, and even then it isn’t terribly speedy (420 MHz one here). Hell, even on my 1.2 GHz G3 MacBook, it is fairly slow. Encryption takes a lot of CPU; we just don’t notice it these days because modern CPUs have instructions built in to help.
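The downgrading proxy idea discussed above — the proxy machine does the TLS handshake and hands the vintage browser plain HTTP — can be sketched in a few lines of stdlib Python. This is a toy illustration, not squid configuration: the `proxy.local:8080` address is hypothetical, and a real proxy would need URL-encoding, POST support, cookies, and more:

```python
import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical address of the proxy on the LAN.
PROXY_PREFIX = "http://proxy.local:8080/fetch?url="

def rewrite_links(html, proxy_prefix=PROXY_PREFIX):
    """Point every absolute https:// link back through the plain-HTTP proxy,
    so the vintage browser never negotiates TLS itself.
    (Real code would URL-encode the target, not paste it in raw.)"""
    return re.sub(r'href="(https://[^"]+)"',
                  lambda m: 'href="%s%s"' % (proxy_prefix, m.group(1)),
                  html)

class DowngradeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expects request paths like /fetch?url=https://example.com/
        target = self.path.split("url=", 1)[-1]
        with urllib.request.urlopen(target) as resp:  # proxy does the TLS work
            body = rewrite_links(resp.read().decode("utf-8", "replace"))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode("latin-1", "replace"))

if __name__ == "__main__":
    HTTPServer(("", 8080), DowngradeHandler).serve_forever()
```

The link rewriting is the key trick: without it, the first click on any absolute `https://` link would send the old browser straight back to a TLS endpoint it cannot handle.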
Ech, I get it, old computers can’t do modern TLS or JS/HTML. But a big advantage of DDG is that it uses TLS and is a trusted, no-log search engine. FrogFind obviously isn’t as well known or trusted, plus the TLS cert it has for https://frogfind.com is wrong. Might as well do the same HTML/TLS scrubbing for google.com. Using DuckDuckGo gives no advantage for privacy here.
Ah, the web I remember.
Quick. Simple. No fancy bells and whistles. No garbage nonsense. No Flash or action!
This could become my permanent search for today’s machines!
Now turn this into a browser!
Call it when the web was good, or back when the web was useful, or something like that!
I’d pay for that!
I’m assuming you mean pre-GeoCities/Angelfire