Recently, Intel bought Altera, one of the largest producers of FPGAs. Intel paid a whopping $16.7 billion, making it their largest acquisition ever. In other news, Microsoft is using FPGAs in its data centers, and Amazon is offering them on their cloud services. Previously, these FPGAs were mainly used in electronics engineering, but not so much in software engineering. Are FPGAs about to take off and become serious alternatives to CPUs and GPUs?
FPGAs are used extensively by e.g. the Amiga community to recreate older chipsets.
FPGAs will remain a toy until the complete SDKs, or at least the complete terminal/CLI compiler toolchains, become open source. CPUs and GPUs can both be programmed by and with Free Software, which makes them very easy to operate at scale. FPGA vendors are incredibly squirrely about releasing the kind of information that would be necessary to provide that sort of openness.
The fact that FPGAs are widely used belies your statement.
Like, very widely used.
Then why can a complete Amiga chipset be recreated in such a chip? They are not toys.
brostenen,
No, they are not toys, but I do think tidux is right about the lack of openness. I like the capabilities that FPGAs offer a whole lot. One of these days I’d like to be able to endorse a specific product, but so far I think the development stacks are too proprietary, which holds back both development potential and adoption.
Correct, but for now they have good enough sales even without opening specifications of their products.
Ha ha. No.
FPGAs are actually at least as widely used as CPUs. Maybe not within the FOSS community, but the FOSS community is not everyone. Even then, there are still a decent number of uses of them in the FOSS community. See for example the Parallella (the coprocessor is entirely synthesized in an FPGA).
Would it be better if it were more open? Definitely.
Does that keep them from being actually usable for real world applications? Absolutely not.
Parallella uses the FPGA to interface to the coprocessor; the actual coprocessor ASIC is a completely separate chip.
Notice it sitting right next to the Xilinx Zynq chip (which is the ARM + FPGA you are referring to).
https://www.parallella.org/
http://www.orbit-lab.org/raw-attachment/wiki/Hardware/bDomains/cSan…
Oh, you’re right. It’s been far too long since I actually looked at one of those.
I’ve sometimes wondered if FPGAs could become a standard computer component. Just imagine if an FPGA were part of your PC and software were written to take advantage of it (so ‘software’ would no longer be just software, but actually a combination of software + hardware, where the FPGA would be reprogrammed on the fly to solve custom tasks, with the CPU as orchestrator or fallback). There would be huge benefits for things such as Photoshop filters, not to mention the ability to replicate any hardware you like inside your PC, like any sound card or music synthesizer. Emulation would be taken to the next level, too.
It’s not likely to happen, though. FPGAs would be very hard or impossible to share between programs, programming them is very hard, all the existing tools are incredibly bad (and incredibly expensive) and, at the end of the day, we already have highly customizable hardware which specializes in heavy parallel tasks, in the form of GPUs.
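The comment above imagines the CPU as an orchestrator that offloads work to a reconfigurable FPGA and falls back to software when no suitable bitstream is loaded. As a purely hypothetical sketch (there is no standard API like this today; every name here is invented, and the “fabric” is mocked in plain Python), the control flow might look like:

```python
def box_blur_cpu(data):
    """Pure-CPU reference: 1-D box blur with clamped edges."""
    out = []
    for i in range(len(data)):
        window = data[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

class FpgaSlot:
    """Stands in for one reconfigurable region. load() would normally
    flash a bitstream; here it just records which accelerator is active."""
    def __init__(self):
        self.loaded = None

    def load(self, bitstream_name):
        self.loaded = bitstream_name

    def run(self, data):
        if self.loaded == "blur_filter":
            # Mock: a real fabric would compute this in hardware.
            return box_blur_cpu(data)
        raise RuntimeError("no suitable bitstream loaded")

def blur(data, slot):
    """Orchestrator: try the FPGA path first, fall back to the CPU."""
    try:
        return slot.run(data)
    except RuntimeError:
        return box_blur_cpu(data)

slot = FpgaSlot()
print(blur([1.0, 2.0, 3.0, 4.0], slot))   # falls back to the CPU path
slot.load("blur_filter")
print(blur([1.0, 2.0, 3.0, 4.0], slot))   # now "runs on the FPGA"
```

The point of the sketch is the dispatch pattern, not the math: the caller never needs to know whether the accelerated path was available.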
Darak,
I actually think it is likely to happen sooner or later because this marriage of these two technologies makes a lot of sense; and conventional CPU architectures have plateaued due to diminishing returns. High performance FPGAs are still pricey, but with large-scale production it should become viable for intel/amd/arm to incorporate FPGAs as integrated features. FPGAs offer new innovation that the industry is currently lacking. I believe this has the potential to be genuinely revolutionary and not just some marketing gimmick that CPU fabs have been dishing out these past several generations. That’s not to say the corporations won’t drop the ball somehow, but I remain optimistic.
I’m sure sooner or later someone will have figured a way to reprogram the FPGA remotely…
There are already PCI express board around which contain big monster FPGAs like the US+ from Xilinx.
Only problem: FPGA programming is not that easy. But sooner or later there will be drag’n’drop programming …
Well not the only problem, lack of competition at the top end and low volume manufacture of large FPGAs means high costs.
People that use Xilinx FPGAs or Altera FPGAs end up optimizing their design for one or the other and so don’t switch from one to the other very often. Even more the case for those that use the embedded peripherals.
It’s not like writing software, where you can just swap out to a different brand of x86 or ARM or what have you and it will mostly still work.
A truly competitive FPGA would expose only standardized SERDES or PCIe I/O, in a standard footprint, plus a chipset for standard I/O functions. A company doing something like that would be doing what IBM did with the PC.
…
Is that something similar to this?
http://amigaonthelake.com/blog/the-xena-research-project
..they didn’t emulate the unreleased 8 chan chip.
Don’t worry though, I had multichannel MIDI
https://www.youtube.com/watch?v=-Wkkd9UTLxQ
Am I wrong, or does compilation for an FPGA take hours, making it completely unfit for runtime usage by programs?
And it seems a single-core PC is a better fit for compiling for such chips.
For now it seems to me Photoshop would need to be sold with its own hardware, with the FPGA already programmed.
nickb,
I don’t know if FPGAs would work well for runtime compilation, that’s an interesting question. Nevertheless, does it really matter? As a developer, it takes a while to compile software like linux/samba/ffmpeg/etc, but as a user that’s largely irrelevant since we just run the precompiled binaries. Similarly, an FPGA can be loaded with a precompiled bitstream and be up and running far quicker than the time it took to compile that bitstream.
We’re only in the early days of FPGA tech. As these become more common we’ll undoubtedly see novel approaches to use the hardware in unanticipated ways much like how the early demoscene did with early graphics technology. There’s tons of evolution paths that become possible with FPGAs. Compilers themselves will be able to take advantage of the FPGAs in order to significantly reduce compile time.
Normal CPUs achieve multitasking through task switching/time sharing, but by their nature, FPGAs can run many computations concurrently. We’ll need new hardware and software platforms to make it happen, but I think it will come. FPGAs may need some kind of logic separation, much like how the “MMU” allows an operating system to run multiple independent tasks.
I’m excited about this because it opens up new kinds of software design. IMHO innovation in the industry has gotten too stale and FPGAs are exactly what we need to shake it up! I just hope it happens in an open and accessible way rather than under the lock and key of powerful corporations.
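To make the time-sharing vs. spatial-concurrency contrast concrete: a CPU multiplexes one execution unit among tasks, while an FPGA could be carved into independent regions that all advance on the same clock. A toy model (all names and the region count are invented; nothing here reflects a real vendor API):

```python
class FpgaFabric:
    """Pretend fabric with a fixed number of reconfigurable regions."""
    def __init__(self, regions):
        self.regions = [None] * regions   # None = region is free

    def place(self, task):
        """Give the task its own region, loosely analogous to an MMU
        giving a process its own address space; fails when full."""
        for i, slot in enumerate(self.regions):
            if slot is None:
                self.regions[i] = task
                return i
        raise RuntimeError("fabric full: no free region")

    def tick(self):
        """One 'clock cycle': every occupied region advances in parallel
        (here sequentially, since this is just a software mock)."""
        return [task() for task in self.regions if task is not None]

fabric = FpgaFabric(regions=3)
fabric.place(lambda: "crypto step")
fabric.place(lambda: "filter step")
print(fabric.tick())  # both tasks progress in the same "cycle"
```

The `place()` failure case is exactly the sharing problem raised earlier in the thread: unlike CPU time slices, fabric area is a finite spatial resource.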
I don’t think you addressed the gist of nickb’s comment? That (apparently?) programming an FPGA for a different workload takes a fair bit of time – so it’s not like running precompiled binaries, it’s more like compiling them every time we want to run the program?
Not only is the Amiga chipset actually reimplemented on an FPGA, but so is an advanced core for the 68060 called the “68080”, adding MMX-style instructions etc., roughly equivalent to a 200 MHz Pentium.
http://www.apollo-core.com/index.htm?page=features
That’s actually kinda sad. Once Amiga led the pack…
The new Amiga X5000 comes with a Xena multi-core co-processor, which is maybe a more feasible half-step toward useful reconfigurable FPGA-type capability.
https://www.generationamiga.com/2017/02/27/what-is-xena-or-xmos-xs1-…
Meanwhile, mainstream platforms have enjoyed fast GPGPU for about a decade already, not some expensive niche half-steps… (oh yeah, the Amiga was affordable, too). And it’s old news as well: I remember reading about XMOS a good several years ago.