
900-21001-0020-100 NVIDIA A100 80GB PCIE - Graphics Card - New Retail
The NVIDIA® A100 80GB PCIe card delivers unprecedented acceleration to power the world’s
highest-performing elastic data centers for AI, data analytics, and high-performance
computing (HPC) applications. NVIDIA A100 Tensor Core technology supports a broad range of math precisions, providing a single accelerator for every compute workload. The NVIDIA A100 80GB PCIe supports double-precision (FP64), single-precision (FP32), half-precision (FP16), and integer (INT8) compute tasks.
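As an illustration of what these precision names mean, the sketch below computes each floating-point format's total width from its standard sign/exponent/mantissa layout. The bit layouts are standard IEEE 754-style definitions, not taken from this listing; the workload pairings in the comments are typical examples only.

```python
# Bit layouts of the floating-point formats the A100 supports.
# INT8 is an 8-bit integer format with no exponent/mantissa split,
# so it is omitted from this table.
FORMATS = {
    #  name    (sign, exponent, mantissa)   typical use
    "FP64": (1, 11, 52),   # HPC, double-precision simulation
    "FP32": (1, 8, 23),    # general-purpose compute and training
    "FP16": (1, 5, 10),    # mixed-precision deep learning
}

def total_bits(layout):
    """Sum the sign, exponent, and mantissa fields."""
    return sum(layout)

for name, layout in FORMATS.items():
    print(f"{name}: {total_bits(layout)} bits")
```

Wider formats trade throughput for range and accuracy, which is why a single accelerator covering all of them can serve both HPC and AI workloads.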
The NVIDIA A100 80GB card is a dual-slot, 10.5-inch PCI Express Gen4 card based on the
NVIDIA Ampere GA100 graphics processing unit (GPU). It uses a passive heat sink for cooling,
so it relies on system airflow to keep the card within its thermal limits. The
NVIDIA A100 80GB PCIe operates unconstrained up to its maximum thermal design power
(TDP) of 300 W, accelerating applications that require the fastest computational speed
and highest data throughput. The latest-generation A100 80GB PCIe doubles GPU memory over the 40GB model and delivers the world’s highest PCIe-card memory bandwidth of up to 1.94 terabytes per second
(TB/s), speeding time to solution for the largest models and most massive data sets.
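The quoted 1.94 TB/s figure can be sanity-checked from public A100 specifications (not stated in this listing): a 5120-bit HBM2e memory interface running at roughly 3.0 Gb/s effective per pin. A quick back-of-the-envelope calculation:

```python
# Rough check of the quoted 1.94 TB/s memory bandwidth.
# Assumptions (public A100 specs, not from this listing):
BUS_WIDTH_BITS = 5120          # five active HBM2e stacks x 1024 bits each
PIN_RATE_GBPS = 3.024          # approximate effective data rate per pin

# bytes transferred per second: (bus width in bytes) x (rate per pin)
bandwidth_gb_s = BUS_WIDTH_BITS / 8 * PIN_RATE_GBPS
print(f"~{bandwidth_gb_s / 1000:.2f} TB/s")   # ~1.94 TB/s
```

The result lands at roughly 1,935 GB/s, matching the marketed ~1.94 TB/s.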
The NVIDIA A100 80GB PCIe card features Multi-Instance GPU (MIG) capability, which lets the
card be partitioned into as many as seven isolated GPU instances, providing a unified platform that
enables elastic data centers to dynamically adjust to shifting workload demands. Partitioned
with MIG, a single A100 can handle acceleration needs of different sizes, from the smallest
job to the biggest multi-node workload. This versatility means IT managers can maximize the
utility of every GPU in their data center.
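As a rough sketch of how MIG partitioning is typically driven, the commands below use NVIDIA's `nvidia-smi` tool. This is illustrative only: it assumes GPU index 0, root privileges, an idle A100, and a recent driver; exact profile names and IDs vary by driver version and card, so consult the MIG documentation before running anything.

```shell
# Enable MIG mode on GPU 0 (takes effect once the GPU is idle or reset).
nvidia-smi -i 0 -mig 1

# Create seven 1g.10gb GPU instances (the smallest profile on the 80GB
# card) and a compute instance inside each (-C). Profile availability
# depends on the driver; "nvidia-smi mig -lgip" lists valid profiles.
nvidia-smi mig -i 0 -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# List the resulting GPU instances.
nvidia-smi mig -lgi
```

Each instance then appears to CUDA workloads as an isolated GPU with its own memory and compute slice.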
NVIDIA A100 80GB PCIe cards use three NVIDIA® NVLink® bridges, which allow multiple A100
80GB PCIe cards to be connected to deliver 600 GB/s of bandwidth, roughly 10x that of PCIe Gen4, maximizing application throughput for larger workloads.
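The "10x PCIe Gen4" comparison can be checked arithmetically. Assuming the public PCIe Gen4 parameters (16 GT/s per lane, 128b/130b encoding, x16 link, counted bidirectionally, none of which appear in this listing):

```python
# Rough check of the "10x PCIe Gen4" claim against the 600 GB/s
# NVLink figure quoted in the listing.
LANES = 16
GT_PER_S = 16                   # PCIe Gen4 transfer rate per lane
ENCODING = 128 / 130            # 128b/130b line-code efficiency

per_direction = LANES * GT_PER_S * ENCODING / 8   # GB/s one way
pcie_total = per_direction * 2                    # bidirectional total
nvlink_total = 600                                # GB/s, from the listing

print(f"PCIe Gen4 x16: ~{pcie_total:.0f} GB/s")
print(f"NVLink / PCIe ratio: ~{nvlink_total / pcie_total:.1f}x")
```

The ratio works out to roughly 9.5x, which marketing rounds to 10x.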