

Nvidia Jetson Orin
Amid the flood of news coming out of Nvidia’s GPU Technology Conference (GTC) today was a pair of announcements aimed at accelerating the development of AI at the edge and enabling autonomous mobile robots, or AMRs.
First, let’s cover Nvidia’s supercomputer for edge AI, dubbed Jetson. The company today launched Jetson AGX Orin, its most powerful GPU-powered device designed for AI inferencing at the edge and for powering AI in embedded devices.
Armed with an Ampere-architecture Nvidia GPU, up to 12 Arm Cortex CPU cores, and up to 32 GB of RAM, Jetson AGX Orin delivers 275 trillion operations per second (TOPS) on INT8 workloads, more than an 8x boost over the previous top-end device, the Jetson AGX Xavier, Nvidia said.
Jetson AGX Orin is pin- and software-compatible with the Xavier model. That means the roughly 6,000 customers that have rolled out products built on the AI processor, including John Deere, Medtronic, and Cisco, can essentially drop the new device into the solutions they have been developing over the past three or four years, said Deepu Talla, Nvidia’s vice president of embedded and edge computing.
The developer kit for Jetson AGX Orin will be available this week at a starting price of $1,999, enabling users to begin developing solutions for the new offering. Delivery of production-level Jetson AGX Orin devices will begin in the fourth quarter, with units priced from $399.
Recent developments at Nvidia will accelerate the creation of AI applications, Talla said.
“Until a year or two ago, very few companies could build these AI products, because creating an AI model has actually been very difficult,” he said. “We’ve heard it takes months if not a year-plus in some cases, and then it’s…a continuous iterative process. You’re not done ever with the AI model.”
However, Nvidia has been able to reduce that time considerably by doing three things, Talla said.
The first one is including pre-trained models for both computer vision and conversational AI. The second is the ability to generate synthetic data on its new Omniverse platform. Lastly, transfer learning gives Nvidia customers the ability to take those pre-trained models and customize them to a customer’s exact specifications by training with “both physical real data and synthetic data,” he said.
“We are seeing [a] tremendous amount of adoption because [we] just make it so easy to create AI bots,” Talla said.
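The transfer-learning workflow Talla describes can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration using PyTorch and torchvision rather than Nvidia’s own tooling; the dataset paths, class count, and training schedule are placeholders.

```python
# Minimal transfer-learning sketch (hypothetical; not Nvidia's toolchain).
# A pre-trained vision backbone is customized on a mix of real and synthetic images.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 5                      # placeholder: classes in the customer's task
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# Combine physical (real) data with synthetic data, e.g. rendered in a simulator.
real = datasets.ImageFolder("data/real", transform=tfm)            # placeholder path
synthetic = datasets.ImageFolder("data/synthetic", transform=tfm)  # placeholder path
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)

# Start from a pre-trained model and replace only the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False          # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new, trainable head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(3):               # a few epochs are often enough when fine-tuning
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

Because only the small classification head is trained, customization like this takes a fraction of the time and data that training a model from scratch would require, which is the point Talla is making about shortening the months-long model-building cycle.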

Nvidia is developing simulation tools to help developers create AMRs that can navigate complex real-world environments (Image courtesy Nvidia)
Nvidia also announced the release of Isaac Nova Orin, a reference platform for developing AMRs trained with the company’s AI tech.
The platform combines two of the new Jetson AGX Orin modules discussed above, giving it 550 TOPS of compute capacity, along with additional hardware, software, and simulation capabilities that enable developers to create AMRs tailored to specific locations. Isaac Nova Orin will also be outfitted with a slew of sensors, including regular cameras, radar, lidar, and ultrasonic sensors, to detect physical objects in the real world.
Nvidia will also ship new software and simulation capabilities to accelerate AMR deployments. A key element there is another offering called Isaac Sim on Omniverse, which will enable developers to leverage virtual 3D building blocks that simulate complex warehouse environments. Developers can then train and validate a virtual version of the AMR as it navigates that environment.
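As a rough illustration of that train-then-validate loop, the sketch below runs a navigation policy against a toy 2D grid “warehouse” and reports whether the virtual robot reaches its goal without hitting obstacles. It is a deliberately simplified stand-in for what Isaac Sim does on detailed 3D scenes; the map, policy, and step limit are all invented for the example.

```python
# Toy stand-in for validating an AMR navigation policy in simulation
# (illustrative only; Isaac Sim operates on full 3D warehouse scenes, not grids).
WAREHOUSE = [
    "S..#....",
    ".#.#.##.",
    ".#......",
    "...##.#G",
]  # S = start, G = goal, # = shelving/obstacle

def validate(policy, grid, max_steps=100):
    rows = [list(r) for r in grid]
    pos = next((r, c) for r, row in enumerate(rows)
               for c, ch in enumerate(row) if ch == "S")
    for _ in range(max_steps):
        dr, dc = policy(pos, rows)                 # policy picks a move each step
        r, c = pos[0] + dr, pos[1] + dc
        if not (0 <= r < len(rows) and 0 <= c < len(rows[0])):
            return False                           # drove off the map
        if rows[r][c] == "#":
            return False                           # collision with an obstacle
        if rows[r][c] == "G":
            return True                            # reached the goal safely
        pos = (r, c)
    return False                                   # ran out of time

# A trivial placeholder policy: head right when clear, otherwise head down.
def naive_policy(pos, rows):
    r, c = pos
    if c + 1 < len(rows[0]) and rows[r][c + 1] != "#":
        return (0, 1)
    return (1, 0)

print("goal reached:", validate(naive_policy, WAREHOUSE))
```

In a real deployment the policy would be a trained model and the environment a high-fidelity digital twin of the target warehouse, but the validation idea is the same: exercise the virtual robot against the simulated space before the physical one ever moves.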
The opportunity for AMRs is substantial across multiple industries, including warehousing, logistics, manufacturing, healthcare, retail, and hospitality. Nvidia says research from ABI Research forecasts the market for AMRs to grow from under $8 billion in 2021 to more than $46 billion by 2030.
“The old method of designing the AMR compute and sensor stack from the ground up is too costly in time and effort,” says Nvidia Senior Product Marketing Manager Gerard Andrews in an Nvidia blog post published today. “Tapping into an existing platform allows manufacturers to focus on building the right software stack for the right robot application.”
Related Items:
Models Trained to Keep the Trains Running
Nvidia’s Enterprise AI Software Now GA
Nvidia Inference Engine Keeps BERT Latency Within a Millisecond