You may recall the recent OSAlert article about Linux Fund collecting donations to supply developers with OGD1 boards. (OGD1 is what you might call an “open source graphics card,” with all designs, documentation, and source code available under Free Software licenses. Technically, however, OGD1 is an FPGA-based prototyping platform with memory and video encoders on it. See the Wikipedia article.) Since then, the FSF has gotten involved and is asking for volunteers to help with the OGP wiki. The OGP had shown OGD1 driving a graphics display back in 2007 at OSCON, and now it has announced technical success with the rather difficult challenge of emulating legacy VGA text mode. They even put up a video on YouTube of a display, driven by OGD1, showing a PC booting into Gentoo.
There is something I really do not understand. If they built a graphics chip with an FPGA, why do they need to emulate VGA? Doesn't the chip produce VGA by itself?
No, the chip on the OGD1 is a general-purpose CPU. You can program it to do anything you wish, even run a freaking web server. It just happens to also be capable of graphical output, but it has to be programmed to do that first. This would be easy if they could just implement their own calls for setting the video mode and such, and write their own driver for it. But VGA emulation is a completely different beast; you are emulating a whole different card.
No, it doesn't have a general-purpose CPU. It has an FPGA, which can be used to make a CPU, or in this case a GPU.
If you check the materials list, you will see it actually has two FPGAs:
http://spreadsheets.google.com/pub?key=ps980ejSIf-3DeoBkURvDZQ
A Xilinx and a Lattice.
Hmm… perhaps I didn't make myself quite clear here. With the FPGA they essentially construct a chip that works like a VGA chip. The thing I did not quite understand was the use of the word “emulate,” because once the FPGA has been programmed it does not “emulate” a VGA chip. It IS a VGA chip.
I guess you could use the word “emulate” to describe programming the FPGA to look like a VGA chip, but that doesn't mean it emulates one? Ah, whatever…
On a side note: I think it is weird to get voted down for asking a question.
It is not a VGA chip in the sense that it doesn't use the same internal workings as the real (old) VGA chips. But that is probably the case for most other graphics cards today, I suppose. It is, however, VGA compatible.
VGA is really just a specification, just like VESA VBE or “DirectX 10 certified” for a graphics card.
As the original architect of the way VGA is done on this board, perhaps I can offer an explanation.
There is perhaps a more straightforward way of implementing VGA than the way we did it. The direct route would require two components. One piece is the host interface, which interprets I/O and memory accesses from PCI and manipulates graphics memory appropriately. The other piece is a specialized video controller that can translate text (encoded in two bytes as an ASCII value and color indices) in real time into pixels as they're scanned out to the monitor. This is actually how others still do it.
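For readers who want to see what that per-scanline text translation amounts to, here is a minimal sketch in C. The font and palette tables are zero-filled placeholders (a real card has a glyph ROM and the 16 standard VGA colors), and all the names are mine for illustration, not from the OGD1 sources:

```c
#include <stdint.h>

/* Placeholder glyph ROM and palette; real hardware would hold actual
 * 8x16 font bitmaps and the 16 standard VGA colors. */
static const uint8_t  font8x16[256][16] = {{0}};
static const uint32_t palette[16]       = {0};

/* Expand one scanline of one text cell into 8 pixels. `cell` is the
 * two-byte pair from text memory: low byte = ASCII code, high byte =
 * attribute (bits 0-3 foreground, bits 4-6 background, bit 7 blink). */
static void render_cell_row(uint16_t cell, int row, uint32_t *out)
{
    uint8_t  ch   = (uint8_t)(cell & 0xFF);
    uint8_t  attr = (uint8_t)(cell >> 8);
    uint32_t fg   = palette[attr & 0x0F];
    uint32_t bg   = palette[(attr >> 4) & 0x07];
    uint8_t  bits = font8x16[ch][row];   /* one row of the glyph bitmap */

    for (int x = 0; x < 8; x++)
        out[x] = (bits & (0x80u >> x)) ? fg : bg;
}
```

A dedicated text-mode controller does essentially this in hardware, for every cell, on every scanline, as the pixels leave for the monitor.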
To us, VGA is legacy. It should be low priority and have minimal impact on our design. We didn't want to hack up our video controller in nasty ways (or include alternate logic) for such a purpose, and we didn't want to dedicate a lot of logic to it. Doing it the usual way was going to be too invasive and wasteful. Also, we eventually want to do PCI bus mastering, which requires some high-level control logic, typically implemented in a simple microcontroller.
So we thought: if we're going to have a microcontroller anyhow, why not give it a dual purpose? When in VGA mode, the uC we designed (which we call HQ) intercepts and services all PCI traffic to OGD1. Microcode we wrote interprets the accesses and stores text appropriately in graphics memory. Then, to avoid hacking up the video controller, we actually have HQ perform a translation from the text buffer to a pixel buffer over and over in the background. Its input is VGA text; its output is pixels suitable for our video controller.
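To make the data flow concrete, here is a hedged sketch of one pass of that background translation, reusing the render_cell_row helper from the sketch above. The 80x25 geometry matches VGA text mode, but the buffer layout and names are my assumptions; the real thing is microcode running on HQ, not C:

```c
#include <stdint.h>

#define TEXT_COLS 80
#define TEXT_ROWS 25
#define CELL_W    8
#define CELL_H    16

/* Text cells as stored by the PCI-intercept microcode (layout assumed). */
volatile uint16_t text_buf[TEXT_ROWS * TEXT_COLS];

/* Linear pixel buffer that the ordinary video controller scans out. */
uint32_t pixel_buf[TEXT_ROWS * CELL_H][TEXT_COLS * CELL_W];

/* Per-cell scanline expansion, as in the previous sketch. */
void render_cell_row(uint16_t cell, int row, uint32_t *out);

/* One full pass over the text screen; HQ repeats this forever in the
 * background, so the pixel buffer continuously tracks the text buffer. */
void translate_text_to_pixels(void)
{
    for (int cy = 0; cy < TEXT_ROWS; cy++)
        for (int row = 0; row < CELL_H; row++)
            for (int cx = 0; cx < TEXT_COLS; cx++)
                render_cell_row(text_buf[cy * TEXT_COLS + cx], row,
                                &pixel_buf[cy * CELL_H + row][cx * CELL_W]);
}
```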
Aside from the logic reduction, this has other advantages. The screen resolution as seen by the host is decoupled from the physical display resolution, so while VGA thinks it's 640×400, the monitor could be at 2560×1600, without the need for a scaler. It's easily programmable, and we have complete control over how the text is processed into pixels; for instance, we could have HQ do some scaling (see the sketch below) or use a higher-resolution font different from what the host thinks we're using.
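Since the pixels come from microcode rather than fixed-function logic, scaling is just a policy decision. As a sketch (illustrative names, not OGP microcode), picking the largest integer scale that fits the text screen onto the physical mode could look like this:

```c
/* Largest integer scale factor that fits a text_w x text_h screen
 * onto a disp_w x disp_h display. */
static int text_scale(int text_w, int text_h, int disp_w, int disp_h)
{
    int sx = disp_w / text_w;     /* e.g. 2560 / 640 = 4 */
    int sy = disp_h / text_h;     /* e.g. 1600 / 400 = 4 */
    return (sx < sy) ? sx : sy;   /* 640x400 fills 2560x1600 exactly at 4x */
}
```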
We call it emulation because, in a way, our VGA is implemented entirely in software, albeit microcode that's loaded into our own microcontroller.
Thank you for a very good explanation. Keep up the good work!
I forgot to mention this:
http://www.linuxfund.org/projects/ogd1/
Linux Fund is raising funds to buy 10 OGD1 boards (at cost) for developers.
That is good enough for NetHack.