While NVIDIA’s usual presentation efforts for the year were dashed by the current coronavirus outbreak, the company’s march towards developing and releasing newer products has continued unabated. To that end, at today’s now digital GPU Technology Conference 2020 keynote, the company and its CEO Jensen Huang are taking to the virtual stage to announce NVIDIA’s next-generation GPU architecture, Ampere, and the first products that will be using it.
Don’t let the term GPU here fool you: this is for the extreme high-end, and the first product with this new GPU architecture will set you back a cool $199,000. Any consumer-oriented GPU with this new architecture is at the very least a year away.
“GPU” means “Generic Processing Unit”; it is no longer about graphics. On some podcast I heard an engineer describe NVIDIA as a “computing & AI hardware company”.
You have been listening to enterprise marketing for too long ;). I am sure they want you to believe that, but of course it just means Graphics Processing Unit. Please explain why they always come with an HDMI/DP port otherwise. Also, please explain this article by NVIDIA: https://nvidia.custhelp.com/app/answers/detail/a_id/4637/kw/graphics%20processing%20unit
Of course these cards (not so much GeForce, but Quadro) are extremely good at parallel processing, and we cannot ignore the CUDA platform (a minimal example of that kind of workload follows below). But don’t hijack the term GPU. If it were really generic, you should be able to boot a PC and run the OS on it without needing a CPU.
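To make “the CUDA platform” concrete, here is a minimal sketch of the kind of general-purpose, non-graphics work a GPU runs under CUDA: a SAXPY kernel launched across roughly a million threads. The array size, launch geometry, and use of unified memory are arbitrary illustrative choices, not details of any product discussed above.

#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per array element: y[i] = a * x[i] + y[i]. No graphics involved.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                      // ~1 million elements (arbitrary size)
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // thousands of threads in flight
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Compiled with nvcc and run on a CUDA-capable card that supports unified memory, this performs a million independent multiply-adds in parallel, which is the sort of throughput workload the “parallel processing” point above refers to.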
There are GPU cards that don’t have any kind of video output on them. They are used in compute-heavy applications.
avgalen,
You aren’t wrong about the definition; however, they have sold variants without video ports for compute applications. IMHO the most correct term to use at this point is GPGPU.
There are GPUs that don’t have those connectors. For example, there’s at least one NVIDIA GPU model with no ports at all that was explicitly made for crypto mining. There’s also a GPU out there with only Ethernet connectors, made to run billboards.
Yes, but it’s still a GPU and can still be used as a GPU. You can output from the GPU directly to the motherboard’s HDMI/DP port even if the GPU doesn’t have a physical output.
Using a CPU to perform graphics calculations doesn’t make it a GPU. Likewise, using a GPU to perform machine learning inference doesn’t magically turn it into an AI processor or a “General Processing Unit”.
Any processor could act as a GPU and render graphics into a framebuffer in exactly the same way (see the sketch below). In fact, before dedicated GPUs were common, some high-end graphics boards actually used (often several) general-purpose processors; the SGI RealityEngine was one such example, using Intel i860 RISC chips.
The difference is that a GPU is generally a special-purpose chip that needs to be a slave to something else and cannot bootstrap the system on its own.
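To make the framebuffer point above concrete, here is a minimal, purely illustrative sketch of “rendering” on a general-purpose CPU: it writes pixel values into a framebuffer in ordinary memory and dumps the result as a PPM image. The image size and the gradient pattern are arbitrary; a dedicated GPU essentially does this same per-pixel work in massively parallel, special-purpose hardware.

#include <stdio.h>
#include <stdint.h>

#define W 256
#define H 256

static uint8_t framebuffer[H][W][3];            // plain RGB pixels in ordinary RAM

int main(void) {
    // "Render" a colour gradient by writing one pixel at a time on the CPU.
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            framebuffer[y][x][0] = (uint8_t)x;  // red ramps left to right
            framebuffer[y][x][1] = (uint8_t)y;  // green ramps top to bottom
            framebuffer[y][x][2] = 64;          // constant blue
        }
    }

    // Dump the framebuffer as a binary PPM so the result can be viewed.
    FILE *f = fopen("out.ppm", "wb");
    if (!f) return 1;
    fprintf(f, "P6\n%d %d\n255\n", W, H);
    fwrite(framebuffer, 1, sizeof framebuffer, f);
    fclose(f);
    return 0;
}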
bert64,
I agree with this. There are a lot of ways to tackle the problem, maybe not as efficiently, but still, we used to build render farms out of general-purpose processors.
Well, it’s not so clear-cut when you look at the SBC market, where the CPU and GPU are on the same chip. Even with NVIDIA GPUs, I believe many (if not all) of their latest chips already include a real CPU capable of running independently from the host.
https://riscv.org/wp-content/uploads/2017/05/Tue1345pm-NVIDIA-Sijstermans.pdf
The slides suggest NVIDIA’s onboard CPU is intended for DRM purposes (not very inspiring or creative on their part); however, as far as technical capabilities go, I wouldn’t be surprised if it could support a specialized operating system to create a self-hosted platform. A big problem for third-party inventors is that NVIDIA’s platform is proprietary and their license terms are restrictive. If it were open, it would be an amazing SBC platform! Hell, I’d even be interested in throwing my hat into the ring to make my own OS for it!
There are a lot of contexts beyond just DRM where significantly increased security is helpful.
Just to name a couple: virtualized GPU and GPGPU compute workloads are already a not-insignificant chunk of NVIDIA’s business, and they also want their chips in all sorts of self-driving and driver-assist vehicles, where there is a lot of attack surface. Each sensor in the car is a potential entry point, at the very least.
Drumhellar,
NVIDIA’s own example highlights a host-based attack, and there isn’t much a GPU can do to stem host-based attacks like this one…
https://www.cnbc.com/2016/09/20/chinese-company-hacks-tesla-car-remotely.html
It makes sense to extend OS memory isolation features from the host CPU to the GPU (such that multiple processes using the GPU have strong isolation from one another), but IMHO the host remains the weakest point. If the host is compromised, I doubt there’s much NVIDIA would be able to do about it.
FWIW
There are lots of compute-centric products that both NVIDIA and AMD sell that do not have any display output whatsoever.
Actually, some GPU products run a small OS on board and have their own ARM controller.
I think the proper naming at this point is GPGPU for compute-centric applications.
Please no backronyms. Use the term GPGPU if you want to emphasize the “general-purpose” computing abilities of modern GPUs.
zdzichu, you have mixed up GPU and GPGPU.
GPU absolutely does not mean “Generic Processing Unit”.
The rumor mill has pegged consumer GPUs for September 2020, so just a quarter away.