This blog post is the third article in an educational series designed to give our readers an opportunity to learn more about how computers function without getting bogged down in highly technical details. Last month, we discussed RAM. Today, we will be exploring the graphics processing unit, or GPU.
A visual interface is one of the most distinguishing features of modern computing. Whether you are playing a video game, browsing the web, navigating your operating system, or reading this article, you are looking at some sort of image on a screen. These graphics define our interactions with modern electronic devices, from phones to smartwatches to full desktop PCs, and to generate these images, computers rely on dedicated hardware called the graphics processing unit (GPU).
At first, the concept of a GPU might confuse a few of you who have read our previous articles on how computers work. After all, why is a specialized processor required when computers already have a central processing unit (CPU) that handles complex computations? On the surface, the GPU may appear to be a redundant piece of hardware, but a look under the hood reveals an entirely different story. While the CPU is designed as a general-purpose computing tool that can handle all sorts of different tasks quickly, a GPU is built from the ground up as a dedicated graphics processing tool, and this difference in purpose is reflected in the GPU’s microarchitecture. Without getting too detailed, rendering graphics requires millions of similar calculations, which are completed most quickly when the hardware is designed to handle many of these tasks at the same time (in parallel). This is a fundamentally different design from a CPU, where a fully parallel layout would not allow the flexibility that its varied tasks demand.
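To make the idea of parallelism a little more concrete, here is a small illustrative sketch (in Python, and not real GPU code). It brightens a tiny made-up grayscale "image": because each pixel's new value depends only on that pixel, every one of these calculations could in principle run at the same time, which is exactly the kind of work a GPU's many simple cores are built for. The function name and numbers are our own invention, purely for illustration.

```python
# Illustrative sketch (not real GPU code): adjusting an image's brightness
# applies the same simple calculation to every pixel independently.
# Because no pixel's result depends on any other pixel's, a GPU can run
# thousands of these calculations simultaneously instead of one by one.

def brighten_pixel(value, amount=40):
    # Each pixel is handled on its own; brightness is capped at 255.
    return min(value + amount, 255)

# A tiny 2x3 grayscale "image" (each number is one pixel's brightness).
image = [
    [10, 120, 250],
    [60, 200, 30],
]

# A CPU-style approach loops over pixels one at a time; conceptually, a GPU
# assigns one lightweight core to each pixel and computes them all at once.
brightened = [[brighten_pixel(p) for p in row] for row in image]
print(brightened)  # [[50, 160, 255], [100, 240, 70]]
```

A real image has millions of pixels rather than six, which is why this "same simple math, everywhere at once" design pays off so dramatically for graphics.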
There are two main types of GPUs used in the consumer PC space: discrete and integrated. A discrete GPU (commonly called a graphics card) is a separate printed circuit board that connects to the motherboard over the PCI Express (PCIe) interface. It contains the GPU chip, video ports (VGA, DisplayPort, DVI, HDMI, etc.), video random access memory (VRAM), a power connector (sometimes omitted on low-power cards that can draw enough electricity from the PCIe slot), and a cooling solution (typically a heatsink and fan). In many respects, a graphics card can be thought of as a “mini-computer” of its own, designed to render graphics. It operates much like a CPU-and-RAM computing system: textures are loaded into VRAM, and 3D or 2D rendering occurs on the processor itself. Graphics cards are generally among the most expensive parts of consumer computers, and some higher-end models sport fancy lighting effects, cool paint jobs, and other distinguishing features. Unlike a discrete GPU, an integrated GPU is built on the same die as the CPU (hence its “integration” into the chip). Because the GPU and CPU sit so close together, they can communicate very quickly, avoiding the latency that an interface like PCIe introduces. Unfortunately, because of space and power limitations, integrated GPUs have fewer resources available, so they are generally less powerful. Examples include AMD’s APU series and Intel’s integrated HD Graphics.
The two major players in graphics cards are Nvidia and Advanced Micro Devices (AMD). Each offers powerful GPU designs, the latest codenamed Pascal and Polaris, respectively. These architectures are sold in several variants: Nvidia’s most powerful consumer offering is branded the Titan X, while its most budget-oriented card is the GTX 1050. AMD currently produces the highly popular RX 480 and RX 470 graphics cards, targeted at the “sweet spot” of price and performance.
On the subject of performance, how exactly is it measured for GPUs? VRAM, like regular RAM, is typically measured in gigabytes (GB). Generally, more is better, though the amount you need depends on your workload: if you game at resolutions like 1080p or render only small scenes, 4 GB is considered comfortable, whereas highly complex scenes or 4K gaming make 8 GB more suitable. Professional graphics cards offer up to 24 GB of VRAM (as featured on Nvidia’s Quadro series GPUs for scientific applications). As with CPUs, core count and frequency are considerations yet again. GPUs have a high number of cores because each one is a simpler processor optimized for parallel throughput, but because architectural changes between generations are so significant, you should not directly compare core count or even core frequency across GPU generations. Power consumption is another important metric for anyone who wants to make sure their power supply can deliver enough wattage for a new graphics card. There are further metrics, including memory bandwidth and speed, but our honest opinion is that if you are considering a purchase, you should read and watch professional reviews to gauge a card’s real-world performance in the applications you care about.
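The power-supply check mentioned above is simple arithmetic, and a quick back-of-the-envelope sketch can show the idea. All of the numbers below are hypothetical examples, not recommendations; a card's actual power draw should come from its specifications or from professional reviews.

```python
# Rough, illustrative headroom check with made-up numbers: will this power
# supply comfortably cover a new graphics card plus the rest of the system?
psu_wattage = 550      # the power supply's rated output, in watts
gpu_draw = 150         # the card's listed power draw (hypothetical)
rest_of_system = 250   # rough estimate for CPU, drives, fans, etc. (hypothetical)

headroom = psu_wattage - (gpu_draw + rest_of_system)
print(f"Estimated headroom: {headroom} W")  # Estimated headroom: 150 W
```

If the headroom came out near zero (or negative), that would be a sign to pick a lower-power card or a beefier power supply; leaving a generous margin is the safe choice.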
To summarize, modern GPUs enable our highly visual interactions with computers. Designed for efficient graphics processing, they allow computers to handle complex scenes in our increasingly 3D computing experience. Offered in both integrated and discrete varieties, GPUs vary greatly in processing ability and speed. Choosing between AMD and Nvidia (and the many “add-in-board partners,” such as ASUS, EVGA, and MSI, who sell variations of the reference designs these two companies create) depends on how graphics-intensive your games and applications are and what sort of budget you have available.
If you are interested in upgrading your GPU, or if you suspect that it may be failing to function properly, feel free to bring your computer to one of Geek ABC’s drop-off locations, give us a call, or email us at email@example.com! Our technicians are more than happy to address whatever issues your computer might be facing. Thanks for reading, and be sure to check out our other blog posts! We hope to see you back for next month’s installment of “How Does My Computer Work?”