NVIDIA Jetson Orin Nano System-on-Modules Advance Edge AI & Robotics Performance
September 20, 2022
SANTA CLARA, CA – Today, NVIDIA announced a new line of Jetson Orin Nano system-on-modules (SoMs), an expansion of the previous Jetson Nano series that is up to 80x faster, delivering up to 40 trillion operations per second (TOPS) of AI performance in a compact package.
Available in January starting at $199, the Orin Nano will come in two versions: an 8GB module delivering up to 40 TOPS with power configurable from 7W to 15W, and a 4GB module delivering up to 20 TOPS with 5W to 10W power options.
The new Orin Nano SoMs feature an NVIDIA Ampere architecture GPU with up to eight streaming multiprocessors, comprising up to 1,024 CUDA cores and up to 32 Tensor Cores for AI processing. This GPU architecture enables complex AI workloads such as retail analytics and industrial quality control. Because the modules are pin-compatible with other Jetson modules, such as the Orin NX, and fully emulated by the Jetson AGX Orin developer kit, customers can design a single system that supports several Jetson modules.
Additional processing is handled by the 6-core Arm Cortex-A78AE CPU, alongside dedicated engines for video decode, image signal processing (ISP), video image compositing, audio processing, and video input. The Orin Nano can also run multiple AI pipelines simultaneously, supported by high-speed I/O: seven lanes of PCIe Gen3, three 10-Gbps USB 3.2 Gen2 ports, eight lanes of MIPI CSI-2 camera ports, and diverse sensor I/O options.
The Orin Nano is further supported by the NVIDIA JetPack™ SDK, powered by the NVIDIA CUDA-X™ accelerated computing stack for AI applications, along with the latest NVIDIA Isaac software for ROS.
For more information about the Orin Nano, visit: nvidianews.nvidia.com