The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. As the engine of the NVIDIA data center platform, A100 provides up to 20x higher performance than the prior NVIDIA Volta generation. A100 can efficiently scale up or be partitioned into as many as seven isolated GPU instances, with Multi-Instance GPU (MIG) providing a unified platform that enables elastic data centers to dynamically adjust to shifting workload demands.
A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale, while allowing IT to optimize the utilization of every available A100 GPU.
NVIDIA Ampere-Based Architecture
Third-Generation Tensor Cores
TF32 for AI: 20x Higher Performance, Zero Code Change
Double-Precision Tensor Cores: The Biggest Milestone Since FP64 for HPC
Multi-Instance GPU (MIG)
HBM2e
Structural Sparsity: 2X Higher Performance for AI
Next Generation NVLink
Every Deep Learning Framework, 700+ GPU-Accelerated Applications
Virtualization Capabilities
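TF32, listed above, is what makes the "zero code change" claim possible: it keeps FP32's sign bit and 8-bit exponent (so the dynamic range is unchanged) but stores only 10 explicit mantissa bits. A minimal Python sketch of that precision reduction, simulating it by truncating a float32's 13 low-order mantissa bits (an assumption for simplicity; the Tensor Cores themselves round rather than truncate):

```python
import struct

def tf32_round(x: float) -> float:
    """Simulate TF32 precision: keep the sign and 8-bit exponent of an
    IEEE 754 float32, but reduce its 23-bit mantissa to TF32's 10 bits."""
    # Reinterpret the value as its 32-bit single-precision bit pattern.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Zero the 13 low-order mantissa bits (truncation for illustration).
    bits &= ~((1 << 13) - 1) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

Values whose mantissa already fits in 10 bits pass through exactly (e.g. `tf32_round(1.0) == 1.0`), while others lose only low-order precision, which is why FP32 code can run on TF32 Tensor Cores without modification.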
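Structural Sparsity refers to A100's fine-grained 2:4 pattern: in every group of four consecutive weights, two are zero, letting the sparse Tensor Cores skip those multiplies for up to 2X math throughput. A minimal magnitude-pruning sketch of how a dense weight vector can be brought into that pattern (the function name is illustrative, not NVIDIA's API; real workflows typically prune and then fine-tune to recover accuracy):

```python
def prune_2_4(weights):
    """Enforce a 2:4 structured-sparsity pattern: in each group of four
    consecutive weights, zero the two with the smallest magnitude."""
    out = list(weights)
    for i in range(0, len(out) - len(out) % 4, 4):
        group = out[i:i + 4]
        # Indices of the two smallest-magnitude entries in this group.
        drop = sorted(range(4), key=lambda j: abs(group[j]))[:2]
        for j in drop:
            out[i + j] = 0.0
    return out
```

For example, `prune_2_4([1.0, -3.0, 0.5, 2.0])` keeps the two largest-magnitude weights and returns `[0.0, -3.0, 0.0, 2.0]` — exactly two zeros per group of four, the layout the hardware exploits.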
Warranty
3-Year Limited Warranty
Dedicated Field Application Engineers for NVIDIA professional products
Contact pnypro@pny.eu for additional information.