A few times a year, a claim will make the rounds that the largest PDF you can make is a square covering about the middle section of Germany – 381 km × 381 km. Turns out, this is only the maximum size Acrobat Reader can display, and not the limit of the format itself at all. So, how big can you go? Very big:
If you’re curious, that width is approximately the distance between the Earth and the Moon. I’d have to get my ruler to check, but I’m pretty sure that’s larger than Germany.
I could keep going. And I did. Eventually I ended up with a PDF that Preview claimed is larger than the entire universe – approximately 37 trillion light years square. Admittedly it’s mostly empty space, but so is the universe. If you’d like to play with that PDF, you can get it here.
Please don’t try to print it.
Alex Chan
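For anyone wondering where the 381 km figure comes from: Acrobat caps page dimensions at 14,400 user units (200 inches at the default 72 units per inch) and caps the UserUnit scale factor at 75,000. Those are Acrobat implementation limits, not part of the format. A quick sketch of the arithmetic:

    #include <cstdio>

    int main() {
        const double max_units    = 14400.0;  // Acrobat's max page edge, in user units
        const double max_userunit = 75000.0;  // Acrobat's max UserUnit (1 unit = n/72 inch)
        double inches = max_units / 72.0 * max_userunit;  // 15,000,000 inches
        double km     = inches * 0.0254 / 1000.0;         // ~381 km
        printf("max Acrobat page edge: %.0f inches = %.0f km\n", inches, km);
    }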
Don’t worry, I’m out of magenta anyway.
It was pretty wild to open up the PDF’s properties and see the size of the universe specified in millimetres.
Once you start using 64-bit integer and double types, you can represent some huge ranges, whether or not it makes sense for the software to do so. I’m guessing that PDFs use 64-bit integers or doubles.
https://learn.microsoft.com/en-us/cpp/cpp/data-type-ranges?view=msvc-170
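To put numbers on it, here's a quick look at the ranges involved (the values match the table in that link; double's 53-bit mantissa is the part that matters for coordinates):

    #include <cstdio>
    #include <cstdint>
    #include <limits>

    int main() {
        printf("int64_t max: %lld\n", (long long)std::numeric_limits<int64_t>::max());
        printf("double max:  %g\n", std::numeric_limits<double>::max());
        printf("double mantissa bits: %d\n", std::numeric_limits<double>::digits);  // 53
    }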
Large/high-precision variables may seem wasteful, but they actually make things a lot easier mathematically. With low-precision floating-point types you lose precision quickly, and that can cause a lot of bugs/artifacts. Turns out many popular games have rendering and simulation artifacts because their engines use 32-bit floats.
Minecraft has rendering bugs…
https://www.youtube.com/watch?v=TY-19gc6E8k
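The failure mode is easy to reproduce in a few lines (a minimal demo, nothing Minecraft-specific): far from the origin, a 32-bit float can no longer represent a small position update at all.

    #include <cstdio>

    int main() {
        float near_origin = 1.0f;
        float far_out     = 16000000.0f;        // near the edge of float's exact-integer range
        printf("%f\n", near_origin + 0.001f);   // 1.001000 -- fine
        printf("%f\n", far_out + 0.001f);       // 16000000.000000 -- the step is lost
    }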
Higher precision is a simple fix, but game engines usually stick to less precise types for performance reasons. Unfortunately, floating-point types have bad mathematical properties for game maps: too much precision gets wasted on objects close to the origin, and there’s too little the farther out you get. A lot of application developers use floats just because they’re an easy way to represent arbitrary fractions, but IEEE 754 32-bit floats only have 23 bits for the fraction, which isn’t very much.
https://en.wikipedia.org/wiki/IEEE_754-1985
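You can see the uneven distribution directly by printing the gap between adjacent 32-bit floats (one ULP) at a few magnitudes:

    #include <cstdio>
    #include <cmath>

    int main() {
        for (float x : {1.0f, 1000.0f, 1e6f, 1e8f}) {
            float gap = std::nextafterf(x, 1e30f) - x;  // distance to the next float up
            printf("spacing at %g: %g\n", (double)x, (double)gap);
        }
        // prints ~1.19e-07, ~6.1e-05, 0.0625, 8 -- the step size grows with magnitude
    }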
Using fixed-point math gives you hard range boundaries, but at least the precision is distributed uniformly across the entire map. 64-bit integers and doubles give us a lot more precision. To get an idea of relative performance on a GPU, I benchmarked these types with multiply-accumulate operations on a 3080 Ti.
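In case it helps, here's what the flat distribution looks like in practice. A hypothetical fixed-point coordinate with 24 fractional bits in an int64 (my own choice of split, not from any particular engine) resolves the same ~6e-8 step everywhere on the map:

    #include <cstdio>
    #include <cstdint>

    const int FRAC_BITS = 24;  // hypothetical Q39.24-style split

    int64_t to_fixed(double v)   { return (int64_t)(v * (1LL << FRAC_BITS)); }
    double  to_double(int64_t v) { return (double)v / (1LL << FRAC_BITS); }

    int main() {
        int64_t pos = to_fixed(16000000.0);  // well past where 32-bit floats broke down
        pos += to_fixed(0.001);              // the small step still registers
        printf("%.6f\n", to_double(pos));    // 16000000.001000
    }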
I don’t believe that Nvidia’s consumer-oriented GPUs implement high-precision types in silicon; they have to be emulated from lower-precision types with significant performance loss. It’s interesting that 32-bit ints are faster than 8/16-bit ones. They’re probably using the same 32-bit ALU with an additional shifting/masking step.
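For reference, the general shape of such a benchmark (a sketch, not my exact harness; the real thing would time many launches with cudaEvent timers and a grid large enough to saturate the GPU):

    #include <cstdio>

    // One multiply-accumulate per iteration; templated over the type under test.
    template <typename T>
    __global__ void mac_bench(T* out, T a, T b, int iters) {
        T acc = (T)threadIdx.x;
        for (int i = 0; i < iters; ++i)
            acc = acc * a + b;
        out[blockIdx.x * blockDim.x + threadIdx.x] = acc;  // store so the loop isn't optimized away
    }

    int main() {
        float* d_out;
        cudaMalloc((void**)&d_out, 1024 * 256 * sizeof(float));
        mac_bench<float><<<1024, 256>>>(d_out, 1.0001f, 0.5f, 1 << 16);
        cudaDeviceSynchronize();
        cudaFree(d_out);
    }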