NVIDIA has announced major expansions to two frameworks on its NVIDIA Jetson platform for edge AI and robotics. The updates include the new Jetson Generative AI Lab, which gives developers access to the latest open-source generative AI models, and the general availability of the NVIDIA Isaac ROS 2.0 robotics framework.
Currently, more than 10,000 companies are building robots on the NVIDIA Jetson platform, and they can now use generative AI, APIs, and microservices to accelerate industrial digitization. NVIDIA hopes the new lab will accelerate AI application development and deployment at the edge.
AI is increasingly being used to address the complicated scenarios robots encounter, pushing developers to build AI applications for the edge. Reprogramming robots and AI systems on the fly to keep up with changing environments, manufacturing lines, and customers' automation needs is time-consuming and requires expert skill.
With generative AI, robots can recognize objects they were never specifically trained on, and a natural language interface can simplify the development and management of AI at the edge.
The Jetson Generative AI Lab gives developers access to optimized tools and tutorials for deploying open-source LLMs, diffusion models for interactively generating images, vision language models (VLMs), and vision transformers (ViTs). These tools combine vision AI with natural language processing to give robots a comprehensive understanding of the scene.
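As an illustration of the kind of workflow these tutorials target, the sketch below runs a small open-source LLM locally with the Hugging Face transformers library. The model name and prompt are placeholders, and this is a generic local-inference example rather than the Jetson Generative AI Lab's own tooling.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; substitute any small open chat model suited to your device.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory on edge hardware
    device_map="auto",          # place weights on the available GPU/CPU automatically
)

prompt = "Describe what the robot should do if a person steps into its path."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```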
Developers will also have access to the NVIDIA TAO Toolkit, which can help them create efficient and accurate AI models for edge applications. TAO features a low-code interface that lets users fine-tune and optimize vision AI models, including ViTs and vision foundation models.
With TAO, developers can also customize and fine-tune foundation models like NVIDIA NV-DINOv2 or public models like OpenCLIP to create highly accurate vision AI models with very little data. TAO also now includes VisualChangeNet, a new transformer-based model for defect inspection.
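As a rough sketch of how a public model like OpenCLIP can be adapted with very little data, the example below freezes the OpenCLIP image encoder and trains only a small linear head (a standard linear-probe setup). The model variant, class count, and training step are assumptions for illustration; this is not the TAO Toolkit workflow itself.

```python
import torch
import torch.nn as nn
import open_clip

# Load a pretrained OpenCLIP model and its image preprocessing transform.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()
for p in model.parameters():
    p.requires_grad = False  # freeze the backbone; only the head below is trained

# Small linear head on top of the frozen image embeddings (a "linear probe").
num_classes = 3  # assumption: e.g. {ok, scratch, dent} for a defect-inspection task
head = nn.Linear(model.visual.output_dim, num_classes)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of preprocessed images and integer labels."""
    with torch.no_grad():
        features = model.encode_image(images)  # frozen CLIP image embeddings
    logits = head(features.float())
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the linear head is trained, a setup like this can reach usable accuracy with a handful of labeled images per class, which is the general idea behind adapting large vision foundation models with limited data.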
NVIDIA Metropolis updates
NVIDIA Metropolis aims to make it easier and more cost-effective for enterprises to adopt world-class, vision-AI-enabled solutions that address critical operational efficiency and safety problems. The platform brings together a collection of application programming interfaces (APIs) and microservices that let developers quickly build complex vision-based applications.
Metropolis will include an expanded set of APIs and microservices by the end of the year that are aimed at helping developers quickly build and deploy scalable vision AI applications.
NVIDIA Isaac Robotics platform updates
Hundreds of customers use the NVIDIA Isaac platform to develop high-performance robotics systems across diverse domains, including agriculture, warehouse automation, last-mile delivery, and service robots.
Built on the Robot Operating System (ROS), Isaac ROS brings perception to automation, giving eyes and ears to the things that move. With the general availability of the latest Isaac ROS 2.0 release, developers can now create and bring high-performance robotic systems to market with Jetson.
Enhancements in the latest release include native ROS 2 Humble support, NITROS ROS bridge, CUDA NITROS, Stereolabs ZED camera integration, Nova Carter support, and ESS 3.0 performance.
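For readers new to ROS 2 Humble, the minimal rclpy node below subscribes to a camera topic and logs the size of each incoming frame. The topic name is an assumption, and this is a generic ROS 2 example rather than an Isaac ROS package.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class PerceptionListener(Node):
    def __init__(self):
        super().__init__('perception_listener')
        # Topic name is an assumption; Isaac ROS graphs publish on package-specific topics.
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)

    def on_image(self, msg: Image):
        # Log the resolution of each received frame.
        self.get_logger().info(f'Received frame {msg.width}x{msg.height}')

def main():
    rclpy.init()
    node = PerceptionListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```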
“ROS continues to grow and evolve to provide open-source software for the whole robotics community,” Geoff Biggs, CTO of the Open Source Robotics Foundation, said. “NVIDIA’s new prebuilt ROS 2 packages, launched with this release, will accelerate that growth by making ROS 2 readily available to the vast NVIDIA Jetson developer community.”