Demand for GPU compute has surged alongside the dramatic growth of compute-intensive applications in AI/ML, blockchain, gaming, and beyond. Fortunately, companies looking to harness GPU compute can rent GPUs from cloud providers like Amazon instead of investing in expensive hardware upfront. Amazon offers several EC2 instance options with GPUs, grouped into the G and P families, which we will compare with a focus on use case and price.
GPU Recap
GPUs (graphics processing units) are designed to handle parallel processing tasks. While CPUs are ideal for general-purpose and management tasks, GPUs excel at compute-intensive workloads such as machine learning (ML), data analytics, graphics rendering, and scientific simulations.
GPUs are extremely popular because of the need for high-performance compute and the growing complexity of the tasks just listed; as a result, there have been supply and demand imbalances. There are more options to choose from now, though keep in mind that availability may still be limited and region-specific.
Amazon EC2 GPU Instances
Within the accelerated computing instances, two families consist of GPU-based instances: the G and P families. The P family was introduced for general-purpose GPU compute tasks and has since become widely adopted for ML workloads, with AI companies like Anthropic and Cohere using P family instances. The G family is optimized for graphics-intensive applications and has also expanded its use cases to cover the ever-popular ML workloads.
Amazon EC2 P Family Instances
EC2 Instance | Use Cases | GPU | CPU | Network Bandwidth (Gbps) | vCPUs |
---|---|---|---|---|---|
P5 | Training and deploying demanding generative AI applications (question answering, code generation, video and image generation, speech recognition); HPC applications at scale (pharmaceutical discovery, seismic analysis, weather forecasting, financial modeling) | 8 NVIDIA H100 Tensor Core GPUs | 3rd Gen AMD EPYC processors (AMD EPYC 7R13) | Up to 3,200 | 192 |
P4 | ML training and deploying, HPC, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, drug discovery | 8 NVIDIA A100 Tensor Core GPUs | 2nd Generation Intel Xeon Scalable processors (Cascade Lake P-8275CL) | 400 | 96 |
P3 | Machine/deep learning training and deploying, HPC, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, drug discovery | Up to 8 NVIDIA V100 Tensor Core GPUs | High frequency Intel Xeon E5-2686 v4 (Broadwell) or high frequency 2.5 GHz (base) Intel Xeon Platinum 8175 (Skylake) | Up to 100 | Up to 96 |
P2 | General-purpose GPU compute: ML, high performance databases, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering | Up to 16 NVIDIA K80 GPUs | High frequency Intel Xeon E5-2686 v4 (Broadwell) | Up to 25 | Up to 64 |
Amazon EC2 G Family Instances
EC2 Instance | ML Use Cases | Graphics-Intensive Use Cases | GPU | CPU | Network Bandwidth (Gbps) | vCPUs |
---|---|---|---|---|---|---|
G6 | Training and deploying ML models for natural language processing: language translation, video and image analysis, speech recognition, personalization | Creating and rendering real-time, cinematic-quality graphics, game streaming | Up to 8 NVIDIA L4 Tensor Core GPUs | 3rd generation AMD EPYC processors (AMD EPYC 7R13) | Up to 100 | Up to 192 |
G5g | ML inference and deploying deep learning applications | Android game streaming, graphics rendering, autonomous vehicle simulations | Up to 2 NVIDIA T4G Tensor Core GPUs | AWS Graviton2 Processor | Up to 25 | Up to 64 |
G5 | Training and inference deep learning models for simple to moderately complex ML use cases: natural language processing, computer vision, recommender systems | Remote workstations, video rendering, cloud gaming to produce high fidelity graphics in real time | Up to 8 NVIDIA A10G Tensor Core GPUs | 2nd generation AMD EPYC processors (AMD EPYC 7R32) | Up to 100 | Up to 192 |
G4dn | ML inference and small-scale/entry-level ML training jobs: adding metadata to an image, object detection, recommender systems, automated speech recognition, language translation | Remote graphics workstations, video transcoding, photo-realistic design, game streaming in the cloud | Up to 8 NVIDIA T4 Tensor Core GPUs | 2nd Generation Intel Xeon Scalable Processors (Cascade Lake P-8259CL) | Up to 100 | Up to 96 |
G4ad | N/A | Remote graphics workstations, video transcoding, photo-realistic design, game streaming in the cloud | AMD Radeon Pro V520 GPUs | 2nd Generation AMD EPYC Processors (AMD EPYC 7R32) | Up to 25 | Up to 64 |
G3 | N/A | 3D visualizations, graphics-intensive remote workstation, 3D rendering, application streaming, video encoding | NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory | High frequency Intel Xeon Scalable Processors (Broadwell E5-2686 v4) | Up to 25 | Up to 64 |
Choosing an EC2 GPU Instance for your ML Workload
For graphics workloads, the choice between the P and G families is usually simple—pick an instance in the G family. For ML workloads, more factors come into play, chiefly use case, performance, instance size, and price. Other factors, such as availability and hardware compatibility, will further narrow down the options.
ML Use Case (Training vs Inference vs Deploying) and Performance
The first thing to consider is the use case, whether it’s training models, performing inference, or deploying pre-trained models. Certain instances are designed to handle these requirements better than others.
P family instances are generally much more powerful than comparable G family instances, making them an excellent choice for demanding ML tasks such as large-scale model training or high-performance computing (HPC) workloads. Another rule of thumb is that later generations of an instance type tend to outperform earlier ones. So, if your use case requires the highest levels of performance, consider P5 or P4 instances.
However, in many cases, such as for deploying pre-trained models or performing inference, you just don’t need that level of compute. In those scenarios, the G5 or G4dn instances can be a more suitable and cost-effective choice.
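To make the reasoning above concrete, here is a minimal sketch of a decision helper. The instance names mirror the tables earlier in the article, but the mapping itself is a simplification for illustration, not official AWS guidance.

```python
# Illustrative helper mapping a workload description to an EC2 GPU
# instance family suggestion. The thresholds and defaults are assumptions
# for this sketch, not AWS recommendations.

def suggest_instance(workload: str, scale: str = "small") -> str:
    """Return a rough EC2 GPU instance suggestion.

    workload: "training", "inference", or "graphics"
    scale: "small" or "large" (relative model/dataset size)
    """
    if workload == "graphics":
        return "G6"    # graphics-optimized family
    if workload == "training" and scale == "large":
        return "P5"    # highest-performance training
    if workload == "training":
        return "G5"    # small to moderately complex training
    return "G4dn"      # inference / deploying pre-trained models

print(suggest_instance("training", "large"))  # P5
print(suggest_instance("inference"))          # G4dn
```

In practice you would also weigh region availability and pricing before settling on a family, but a small decision table like this is a useful starting point.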
Instance Size
The size of the instance, in terms of CPU and memory capacity, is another important consideration since it significantly impacts performance and cost-effectiveness. The G family offers a wider range of instance sizes, allowing you to choose the appropriate CPU and memory capacity based on your workload requirements. In contrast, the P family has fewer options; for example, the P5 and P4 series each only have one instance size available.
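The size spread is easy to see by comparing published vCPU counts. The values below are a subset taken from AWS documentation at the time of writing; treat them as illustrative, since offerings change.

```python
# vCPU counts for a few G5 sizes versus the single P5 size
# (values from AWS docs at time of writing; offerings may change).

g5_sizes = {"g5.xlarge": 4, "g5.4xlarge": 16, "g5.12xlarge": 48, "g5.48xlarge": 192}
p5_sizes = {"p5.48xlarge": 192}

print(f"G5 vCPU range: {min(g5_sizes.values())}-{max(g5_sizes.values())}")
print(f"P5 sizes available: {len(p5_sizes)}")
```

The takeaway: with G5 you can start at 4 vCPUs and scale up as needed, whereas P5 commits you to the full 192-vCPU instance.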
EC2 GPU Pricing
G family instances tend to be much more cost-effective than their P family counterparts, potentially resulting in significant cost savings for organizations that don’t require the highest levels of GPU performance.
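A quick back-of-the-envelope calculation shows how large the gap can be. The hourly rates below are approximate us-east-1 on-demand prices at the time of writing; always check the current AWS pricing page, as prices vary by region and change over time.

```python
# Rough monthly cost comparison for 24/7 usage, using approximate
# us-east-1 on-demand rates (illustrative only; verify current pricing).

hourly_usd = {
    "p4d.24xlarge": 32.77,  # 8x A100
    "g5.xlarge": 1.006,     # 1x A10G
    "g4dn.xlarge": 0.526,   # 1x T4
}

HOURS_PER_MONTH = 730
for name, price in hourly_usd.items():
    print(f"{name}: ${price * HOURS_PER_MONTH:,.2f}/month")
```

Running a single-GPU G family instance around the clock costs a few hundred dollars a month, while an always-on P4 instance runs well over twenty thousand—so reserving P family capacity only for the workloads that truly need it pays off quickly.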
G4dn
The g4dn instance receives a lot of attention, and rightfully so. It is the lowest-cost EC2 GPU instance and performs well for ML inference and small-scale training.
Conclusion
Selecting an EC2 GPU instance, though less of a commitment than purchasing and setting up your own hardware, is still a significant investment with many factors to consider. The two accelerated computing families with GPU instances, the P family and the G family, both offer several options. While the P family includes instances better suited to demanding tasks like large-scale model training, G family instances strike a balance between performance and cost-effectiveness that makes them a good choice for many workloads.