GPU Computing and its Applications

Sidra Shaikh
6 min read · Jun 23, 2022

GPU computing is the use of a graphics processing unit (GPU) to carry out highly parallel, independent calculations that were once handled by the central processing unit (CPU).

GPU Computing: History

Historically, GPUs were used to accelerate memory-intensive calculations for computer graphics, like image rendering and video decoding. These problems lend themselves to parallelization. Thanks to their numerous cores and superior memory bandwidth, GPUs became an essential part of graphical rendering.

While GPU-driven parallel computing was essential to graphical rendering, it also turned out to work remarkably well for certain scientific computing jobs. Consequently, GPU computing began to evolve more rapidly around 2006, becoming suitable for a wide array of general-purpose computing tasks.

GPU instruction sets were improved, and more instructions could be executed within a single clock cycle, enabling a steady growth in GPU computing performance. Today, as Moore's law has slowed (some even say it is over), GPU computing is keeping up its pace.

Nvidia Investor Day 2017 Presentation

CPU vs. GPU: How Are They Different?

While a CPU is latency-oriented and can handle complex linear tasks at speed, a GPU is throughput-oriented, which allows for massive parallelization.

Architecturally, a CPU consists of a few cores with plenty of cache memory that can handle a few software threads at a time using sequential serial processing. In contrast, a GPU is composed of thousands of smaller cores that can handle many threads concurrently.

Although a CPU can handle a great variety of tasks, it would not be as fast as a GPU at the ones that parallelize well. A GPU breaks a complex problem down into thousands of separate tasks and works through them concurrently.

GPU Computing Strengths & Weaknesses

A GPU is a specialized co-processor that excels at some tasks and is not so good at others. It works in tandem with a CPU to increase the throughput of data and the number of concurrent calculations in an application.

So where exactly does GPU computing excel?

Arithmetic Intensity

GPUs cope extraordinarily well with high arithmetic intensity. An algorithm is a good candidate for GPU acceleration if its ratio of math to memory operations is at least 10:1. If this is the case, your algorithm can benefit from the GPU's Basic Linear Algebra Subprograms (BLAS) and numerous arithmetic logic units (ALUs).
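To make the 10:1 rule of thumb concrete, here is a minimal sketch that estimates the arithmetic intensity of a dense matrix multiplication, a classic GPU-friendly workload. The operation and byte counts are textbook idealizations (each matrix touched exactly once), not measurements on real hardware:

```python
# Illustrative sketch: arithmetic intensity of an n x n matrix
# multiplication C = A @ B in double precision (8 bytes per value).

def matmul_arithmetic_intensity(n: int, bytes_per_element: int = 8) -> float:
    """Floating-point operations per byte moved, assuming A, B and C
    are each read or written exactly once (an idealized lower bound)."""
    flops = 2 * n ** 3                            # n^3 multiplies + n^3 adds
    bytes_moved = 3 * n ** 2 * bytes_per_element  # read A, read B, write C
    return flops / bytes_moved

# Intensity grows linearly with n, so large matrix multiplies
# comfortably clear the math-to-memory rule of thumb.
for n in (64, 1024):
    print(n, matmul_arithmetic_intensity(n))
```

The ratio works out to n/12 for double precision, which is why large matrix multiplies are compute-bound while small ones stay memory-bound.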

High Degree of Parallelism

Parallel computing is a type of computation in which many independent calculations are carried out simultaneously. Large problems can often be divided into smaller pieces that are then solved concurrently. GPU computing is designed to work like that. For example, if you can vectorize your data and adapt the algorithm to operate on a whole set of values at once, you can easily reap the benefits of GPU computing.
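The restructuring described above can be sketched with a toy example. The function names and data are illustrative; the point is that once every output element is independent of the others, a GPU can assign one thread per element:

```python
# Illustrative sketch: rewriting a scalar loop as one whole-array
# operation. In pure Python both still iterate, but the vectorized
# form makes the element-wise independence explicit, which is what a
# GPU (or a library like NumPy/CuPy) exploits.

def saxpy_scalar(a, xs, ys):
    """Elementwise a*x + y computed one value at a time."""
    out = []
    for x, y in zip(xs, ys):
        out.append(a * x + y)
    return out

def saxpy_vectorized(a, xs, ys):
    """The same computation expressed over the whole vector at once;
    every element is independent, so all could run in parallel."""
    return [a * x + y for x, y in zip(xs, ys)]

print(saxpy_vectorized(2.0, [1, 2], [3, 4]))  # same result either way
```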


Sufficient GPU Memory

Ideally, your data batch should fit into the local memory of your GPU so that it can be processed seamlessly. Although there are workarounds, such as using multiple GPUs concurrently or streaming your data from system memory, limited PCIe bandwidth may become a major performance bottleneck in such scenarios.
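A quick back-of-the-envelope check can tell you whether a batch will fit before you hit an out-of-memory error. The card size, sample shape, and safety margin below are illustrative assumptions, not properties of any particular GPU:

```python
# Illustrative sketch: estimating whether a data batch fits in GPU
# memory. The usable_fraction accounts for memory the framework,
# kernels and activations reserve; 0.8 is an assumed margin.

def batch_fits_in_gpu(n_samples: int, floats_per_sample: int,
                      gpu_memory_gib: float, bytes_per_float: int = 4,
                      usable_fraction: float = 0.8) -> bool:
    """True if the raw batch fits into the assumed usable portion
    of GPU memory."""
    batch_bytes = n_samples * floats_per_sample * bytes_per_float
    usable_bytes = gpu_memory_gib * 2 ** 30 * usable_fraction
    return batch_bytes <= usable_bytes

# e.g. 1024 images of 3*224*224 float32 values on a 16 GiB card
print(batch_fits_in_gpu(1024, 3 * 224 * 224, 16.0))
```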

Enough Storage Bandwidth

In GPU computing you typically work with large amounts of data, so storage bandwidth is essential. Nowadays the bottleneck for GPU-based scientific computing is often not floating-point operations per second (FLOPS) but I/O operations per second (IOPS). As a rule of thumb, it is always a good idea to assess your system's global bottleneck. If you discover that your GPU acceleration gains would be outweighed by storage throughput limitations, optimize your storage solution first.
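Assessing the global bottleneck can be as simple as comparing estimated I/O time against estimated compute time. The throughput figures below are made-up illustrations; the idea is to plug in your own measurements:

```python
# Illustrative sketch: which stage dominates wall-clock time,
# reading the data from storage or crunching it on the GPU?

def bottleneck(dataset_gib: float, storage_gib_per_s: float,
               work_tflop: float, gpu_tflops: float) -> str:
    """Return 'storage' or 'gpu' depending on which estimated
    stage takes longer (a crude model with no overlap assumed)."""
    io_seconds = dataset_gib / storage_gib_per_s
    compute_seconds = work_tflop / gpu_tflops
    return "storage" if io_seconds > compute_seconds else "gpu"

# A 500 GiB dataset on a 0.5 GiB/s disk vs. 100 TFLOP of work on a
# 10 TFLOPS GPU: ~1000 s of I/O against ~10 s of compute.
print(bottleneck(500, 0.5, 100, 10))
```

If the answer is "storage", a faster GPU will mostly sit idle waiting for data.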

GPU Computing Applications

GPU computing is being used in numerous real-world applications. Many prominent technologies and engineering fields that we take for granted today would not have advanced so rapidly if not for GPU computing.

Deep Learning

Deep learning is a subset of machine learning whose implementation is based on artificial neural networks. It mimics the brain, with layers of neurons working in parallel. Since the data is represented as sets of vectors, deep learning is well suited for GPU computing. You can easily see up to 4x performance gains when training a convolutional neural network on a dedicated server with a GPU accelerator. As a cherry on top, every major deep learning framework, such as TensorFlow and PyTorch, lets you use GPU computing with minimal code modification.
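A minimal sketch shows why neural networks map so naturally onto GPUs: one dense layer is essentially a matrix-vector product plus a nonlinearity, and every output neuron is computed independently. The weights and input below are illustrative toy values:

```python
# Illustrative sketch: a dense neural-network layer as row-wise
# independent arithmetic. On a GPU, each row (neuron) could be
# assigned to its own thread; here we just loop.

def dense_layer(weights, bias, x):
    """out[i] = relu(sum_j weights[i][j] * x[j] + bias[i]).
    Each output i depends only on its own row, so all rows are
    independent and can be computed in parallel."""
    out = []
    for row, b in zip(weights, bias):
        pre = sum(w * v for w, v in zip(row, x)) + b
        out.append(max(0.0, pre))  # ReLU activation
    return out

W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -1.0]
print(dense_layer(W, b, [2.0, 1.0]))  # two neurons, one row each
```

Training stacks millions of such independent multiply-adds, which is exactly the arithmetic-dense, parallel workload GPUs are built for.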

Drug Design

The successful discovery of new drugs is difficult in every respect, as we all became aware during the Covid-19 pandemic. Eroom's law states that the cost of discovering a new drug roughly doubles every nine years. Modern GPU computing aims to shift the trajectory of Eroom's law. Nvidia has built Cambridge-1, the most powerful supercomputer in the United Kingdom, dedicated to AI research in healthcare and drug design.

Weather Forecasting

Weather forecasting has greatly benefited from the exponential growth of raw computing power in recent decades, but this free ride is almost over. Today, weather forecasting is being pushed forward by fine-grained parallelism built on extensive GPU computing. This approach alone can deliver weather forecasting models up to 20 times faster.

Seismic Imaging

Seismic imaging is used to give the oil and gas industry an understanding of Earth's subsurface structure and to detect oil reservoirs. The algorithms used in seismic data processing are evolving rapidly, so there is a huge demand for additional computing power. For instance, the Reverse Time Migration method can be sped up by a factor of up to 14 with GPU computing.

Automotive Design

Flow field computations for transient and turbulent flow problems are highly compute-intensive and time-consuming. Traditional techniques often compromise on the underlying physics and are not very efficient. A new paradigm for computing fluid flows relies on GPU computing, which can achieve large speed-ups over a single CPU, even up to a factor of 100.

Astrophysics

GPUs have dramatically changed the landscape of high-performance computing in astronomy. Take an N-body simulation, for example, which numerically approximates the evolution of a system of bodies in which each body continuously interacts with every other body. You can accelerate the all-pairs N-body algorithm by up to 25 times by using GPU computing instead of a highly tuned serial CPU implementation.
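The all-pairs pattern can be sketched in a few lines. Each body's acceleration sums contributions from every other body, an O(n²) computation in which every body's result is independent, which is why one-thread-per-body GPU implementations work so well. Units, masses, and the softening term below are illustrative:

```python
# Illustrative sketch of the all-pairs N-body acceleration step in 2D.
# The outer loop over i is embarrassingly parallel: on a GPU, one
# thread would compute the acceleration of one body.

def accelerations(positions, masses, g=1.0, softening=1e-3):
    """Gravitational acceleration on each body from all others."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):                        # independent per body
        xi, yi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx, dy = positions[j][0] - xi, positions[j][1] - yi
            r2 = dx * dx + dy * dy + softening ** 2  # softened distance
            f = g * masses[j] / (r2 ** 1.5)
            acc[i][0] += f * dx
            acc[i][1] += f * dy
    return acc

acc = accelerations([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [1.0, 1.0, 1.0])
```

With equal masses, pairwise forces cancel in aggregate, so the summed acceleration of the whole system stays near zero, a handy sanity check on any parallel implementation.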

Options Pricing

The purpose of option pricing theory is to provide traders with an option's fair value, which can then be incorporated into their trading strategies. Some form of Monte Carlo algorithm is often used in such simulations. GPU computing can help you achieve 27 times better performance per dollar compared to a CPU-only approach.
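A minimal Monte Carlo pricer makes the GPU fit obvious: every simulated path is independent, so a GPU can run one path per thread. The sketch below prices a European call under geometric Brownian motion; all parameters are illustrative:

```python
# Illustrative sketch: Monte Carlo pricing of a European call option.
# Each path draws one normal sample, simulates the terminal price,
# and records the discounted payoff; paths never interact, which is
# what makes this workload embarrassingly parallel.
import math
import random

def mc_european_call(s0, strike, rate, vol, maturity, n_paths, seed=0):
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):                 # independent paths:
        z = rng.gauss(0.0, 1.0)              # on a GPU, one thread each
        s_t = s0 * math.exp(drift + diffusion * z)
        payoff_sum += max(s_t - strike, 0.0)
    return math.exp(-rate * maturity) * payoff_sum / n_paths

price = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, 100_000)
print(price)
```

With these parameters the estimate lands near the Black-Scholes value of roughly 10.45, converging as the number of paths grows.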

Modern GPU card

GPU Computing in the Cloud

Although GPU computing was once mostly associated with graphical rendering, it has grown into the main driving force of high-performance computing in many scientific and engineering fields.

Most GPU computing work is now done in the cloud or on in-house GPU computing clusters.

Cloud providers have democratized GPU computing, making it accessible to small and medium businesses worldwide. If Huang's law holds, GPU performance will more than double every two years, and innovation will keep sprouting.
