AnandTech's Journal
Tuesday, November 9th, 2021
4:30a: Samsung Announces First LPDDR5X at 8.5Gbps
Following the publication of the LPDDR5X memory standard earlier this summer, Samsung has become the first vendor to announce modules based on the new technology.
The LPDDR5X standard will start out at speeds of 8533Mbps, a 33% increase over current-generation LPDDR5 products, which run at 6400Mbps.
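As a quick sanity check on that generational uplift, the figures quoted above can be compared directly:

```python
lpddr5_rate = 6400   # Mb/s per pin, current-generation LPDDR5
lpddr5x_rate = 8533  # Mb/s per pin, Samsung's announced LPDDR5X

uplift_pct = (lpddr5x_rate / lpddr5_rate - 1) * 100
print(f"LPDDR5X uplift over LPDDR5: {uplift_pct:.0f}%")  # ~33%
```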
4:30a: NVIDIA Announces Jetson AGX Orin: Modules and Dev Kits Coming In Q1’22
Today, as part of NVIDIA’s fall GTC event, the company announced that its Jetson embedded system kits will be getting a refresh with NVIDIA’s forthcoming Orin SoC. Due early next year, Orin is slated to become NVIDIA’s flagship SoC for automotive and edge computing applications. And, as has become customary for NVIDIA, the company is also making Orin available to non-automotive customers through its Jetson embedded computing program, which offers the SoC in a self-contained modular package.
Always a bit of a side project for NVIDIA, the Jetson single-board computers have nonetheless become an important tool for the company, serving both as an entry point for bootstrapping developers into the NVIDIA ecosystem and as an embedded computing product in its own right. Jetson boards are sold as complete single-board systems with an SoC, memory, storage, and the necessary I/O in pin form, allowing them to serve as commercial off-the-shelf (COTS) systems for use in finished products. Jetson modules are also used as the basis of NVIDIA’s Jetson developer kits, which add a breakout board, power supply, and the other bits needed to fully interact with Jetson modules.
NVIDIA Jetson Module Specifications

| | AGX Orin | AGX Xavier | Jetson Nano |
|---|---|---|---|
| CPU | 12x Cortex-A78AE @ 2.0GHz | 8x Carmel @ 2.26GHz | 4x Cortex-A57 @ 1.43GHz |
| GPU | Ampere, 2048 Cores @ 1000MHz | Volta, 512 Cores @ 1377MHz | Maxwell, 128 Cores @ 920MHz |
| Accelerators | 2x NVDLA v2.0 | 2x NVDLA | N/A |
| Memory | 32GB LPDDR5, 256-bit bus (204 GB/sec) | 16GB LPDDR4X, 256-bit bus (137 GB/sec) | 4GB LPDDR4, 64-bit bus (25.6 GB/sec) |
| Storage | 64GB eMMC 5.1 | 32GB eMMC | 16GB eMMC |
| AI Perf. (INT8) | 200 TOPS | 32 TOPS | N/A |
| Dimensions | 100mm x 87mm | 100mm x 87mm | 45mm x 70mm |
| TDP | 15W-50W | 30W | 10W |
| Price | ? | $999 | $129 |
With NVIDIA’s Orin SoC set to arrive early in 2022, NVIDIA is using this opportunity to announce the next generation of Jetson AGX products. Joining the Jetson AGX Xavier will be the aptly named Jetson AGX Orin, which integrates the Orin SoC.
Orin features 12 Arm Cortex-A78AE “Hercules” CPU cores and an integrated Ampere architecture GPU with 2048 CUDA cores, adding up to 17 billion transistors. Given Orin's mobile-first design, NVIDIA is being fairly conservative with the clockspeeds here: the CPU cores on Jetson AGX Orin top out at 2GHz, while the GPU tops out at 1GHz. Otherwise, the SoC also contains a pair of NVIDIA’s latest-generation dedicated Deep Learning Accelerators (DLAs), as well as a vision accelerator to speed up those tasks and process them more efficiently.

Rounding out the Jetson AGX Orin package, the Orin SoC is paired with 32GB of LPDDR5 RAM on a 256-bit memory bus, allowing for 204GB/second of memory bandwidth. Meanwhile, storage is provided by a 64GB eMMC 5.1 device, twice the capacity of the previous-generation Jetson AGX Xavier.
All told, NVIDIA is promising 200 TOPS of INT8 machine learning performance, which would be a 6x improvement over Jetson AGX Xavier. Presumably that figure is for the module’s full 50W TDP, with performance proportionally lower as you move toward the module’s minimum 15W TDP.
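The 6x figure checks out against the spec table, and the proportional-scaling assumption can be sketched as a rough estimate. Note that NVIDIA has not published per-TDP performance numbers, so the lower-power estimates below are purely illustrative:

```python
orin_tops, xavier_tops = 200, 32  # INT8 TOPS from the spec table
print(f"Generational uplift: {orin_tops / xavier_tops:.2f}x")  # 6.25x, i.e. "6x"

# Hypothetical linear scaling from the 50W figure down to the 15W floor;
# real silicon is unlikely to scale perfectly linearly with power.
for tdp in (50, 30, 15):
    est = orin_tops * tdp / 50
    print(f"{tdp}W -> ~{est:.0f} TOPS (estimate)")
```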

Meanwhile, for this generation NVIDIA will be maintaining pin and form-factor compatibility with Jetson AGX Xavier. So Jetson AGX Orin modules will be the same 100mm x 87mm in size, and use the same edge connector, making Orin modules drop-in compatible with Xavier.
Jetson AGX Orin modules and dev kits are slated to become available in Q1 of 2022. NVIDIA has not announced any pricing information at this time.
7:30a: NVIDIA Launches A2 Accelerator: Entry-Level Ampere For Edge Inference
Alongside a slew of software-related announcements this morning from NVIDIA as part of their fall GTC, the company has also quietly announced a new server GPU product for the accelerator market: the NVIDIA A2. The new low-end member of the Ampere-based A-series accelerator family is designed for entry-level inference tasks, and thanks to its relatively small size and low power consumption, it is also being aimed at edge computing scenarios.
Along with serving as the low-end entry point into NVIDIA’s GPU accelerator product stack, the A2 appears intended to largely replace the last remaining member of NVIDIA’s previous-generation lineup, the T4. Though a somewhat higher-end card, the T4 was designed for many of the same inference workloads and came in the same HHHL single-slot form factor. So the release of the A2 completes the Ampere-fication of NVIDIA’s accelerator lineup, giving NVIDIA’s server customers a fresh entry-level card.
NVIDIA ML Accelerator Specification Comparison

| | A100 | A30 | A2 |
|---|---|---|---|
| FP32 CUDA Cores | 6912 | 3584 | 1280 |
| Tensor Cores | 432 | 224 | 40 |
| Boost Clock | 1.41GHz | 1.44GHz | 1.77GHz |
| Memory Clock | 3.2Gbps HBM2e | 2.4Gbps HBM2 | 12.5Gbps GDDR6 |
| Memory Bus Width | 5120-bit | 3072-bit | 128-bit |
| Memory Bandwidth | 2.0TB/sec | 933GB/sec | 200GB/sec |
| VRAM | 80GB | 24GB | 16GB |
| Single Precision | 19.5 TFLOPS | 10.3 TFLOPS | 4.5 TFLOPS |
| Double Precision | 9.7 TFLOPS | 5.2 TFLOPS | 0.14 TFLOPS |
| INT8 Tensor | 624 TOPS | 330 TOPS | 36 TOPS |
| FP16 Tensor | 312 TFLOPS | 165 TFLOPS | 18 TFLOPS |
| TF32 Tensor | 156 TFLOPS | 82 TFLOPS | 9 TFLOPS |
| Interconnect | NVLink 3, 12 Links | PCIe 4.0 x16 + NVLink 3 (4 Links) | PCIe 4.0 x8 |
| GPU | GA100 | GA100 | GA107 |
| Transistor Count | 54.2B | 54.2B | ? |
| TDP | 400W | 165W | 40W-60W |
| Manufacturing Process | TSMC 7N | TSMC 7N | Samsung 8nm |
| Form Factor | SXM4 | SXM4 | HHHL-SS PCIe |
| Architecture | Ampere | Ampere | Ampere |
Going by NVIDIA’s official specifications, the A2 appears to be using a heavily cut-down version of their low-end GA107 GPU. With only 1280 CUDA cores (and 40 tensor cores), the A2 is only using about half of GA107’s capacity. But this is consistent with the size- and power-optimized goal of the card: A2 only draws 60W out of the box, and can be configured to drop down even further, to 40W.
In contrast to the cut-down compute cores, NVIDIA is keeping GA107’s full memory bus for the A2 card. The 128-bit memory bus is paired with 16GB of GDDR6, which is clocked at a slightly unusual 12.5Gbps. This works out to a flat 200GB/second of memory bandwidth, so it would seem someone really wanted a nice, round number there.

Otherwise, as previously mentioned, this is a PCIe card in a half height, half-length, single-slot (HHHL-SS) form factor. And like all of NVIDIA’s server cards, A2 is passively cooled, relying on airflow from the host chassis. Speaking of the host, GA107 only offers 8 PCIe lanes, so the card gets a PCIe 4.0 x8 connection back to its host CPU.
Wrapping things up, according to NVIDIA the A2 is available immediately. NVIDIA does not provide public pricing for its server cards, but the new accelerator should be available through NVIDIA’s regular OEM partners.
9:00a: The Intel Z690 Motherboard Overview (DDR5): Over 50 New Models
To support the launch of Intel's latest 12th generation 'Alder Lake' processors, Intel has also pulled the trigger on its latest Z690 motherboard chipset. Using the new LGA1700 socket, some of the most significant advancements with Alder Lake and Z690 include PCIe 5.0 support from the processor, as well as a PCIe 4.0 x8 link from the processor to the chipset. In this article, we take a closer look at over 50 different DDR5-enabled motherboards designed not only to use the processing power of Alder Lake, but to offer users a myriad of high-class and premium features.