The Robot Report


Helm.ai launches VidGen-1 generative video model for autonomous vehicles, robots

By Eugene Demaitre | July 15, 2024

VidGen-1 generated a realistic video of a Tokyo street scene. Source: Helm.ai

Training machine learning models for self-driving vehicles and mobile robots is often labor-intensive because humans must annotate a vast number of images and supervise and validate the resulting behaviors. Helm.ai said its approach to artificial intelligence is different. The Redwood City, Calif.-based company last month launched VidGen-1, a generative AI model that it said produces realistic video sequences of driving scenes.

“Combining our Deep Teaching technology, which we’ve been developing for years, with additional in-house innovation on generative DNN [deep neural network] architectures results in a highly effective and scalable method for producing realistic AI-generated videos,” stated Vladislav Voroninski, co-founder and CEO of Helm.ai.

“Generative AI helps with scalability and tasks for which there isn’t one objective answer,” he told The Robot Report. “It’s non-deterministic, looking at a distribution of possibilities, which is important for resolving corner cases where a conventional supervised-learning approach wouldn’t work. The ability to annotate data doesn’t come into play with VidGen-1.”


Helm.ai bets on unsupervised learning

Founded in 2016, Helm.ai is developing AI for advanced driver-assist systems (ADAS), Level 4 autonomous vehicles, and autonomous mobile robots (AMRs). The company previously announced GenSim-1 for AI-generated and labeled images of vehicles, pedestrians, and road environments for both predictive tasks and simulation.

“We bet on unsupervised learning with the world’s first foundation model for segmentation,” Voroninski said. “We’re now building a model for high-end assistive driving, and that framework should work regardless of whether the product requires Level 2 or Level 4 autonomy. It’s the same workflow.”

Helm.ai said VidGen-1 allows it to cost-effectively train its model on thousands of hours of driving footage. This in turn allows simulations to mimic human driving behaviors across scenarios, geographies, weather conditions, and complex traffic dynamics, it said.

“It’s a more efficient way of training large-scale models,” said Voroninski. “VidGen-1 is able to produce highly realistic video without spending an exorbitant amount of money on compute.”

How can generative AI models be rated? “There are fidelity metrics that can tell how well a model approximates a target distribution,” Voroninski replied. “We have a large collection of videos and data from the real world and have a model producing data from the same distribution for validation.”
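Voroninski's point about fidelity metrics can be illustrated with a toy example. The sketch below is not Helm.ai's validation pipeline; it computes the closed-form Fréchet distance between one-dimensional Gaussians fitted to "real" and "generated" feature samples, a simplified cousin of the distribution-matching metrics commonly used to score generative models. All names and numbers here are illustrative assumptions.

```python
import random
import statistics

def frechet_gaussian_1d(xs, ys):
    """Squared Frechet distance between 1-D Gaussians fitted to two samples.

    For univariate Gaussians the closed form is
    (mu1 - mu2)**2 + (sigma1 - sigma2)**2.
    A lower value means the generated distribution sits closer
    to the real-world target distribution.
    """
    mu1, mu2 = statistics.fmean(xs), statistics.fmean(ys)
    s1, s2 = statistics.pstdev(xs), statistics.pstdev(ys)
    return (mu1 - mu2) ** 2 + (s1 - s2) ** 2

rng = random.Random(0)
real = [rng.gauss(0.0, 1.0) for _ in range(10_000)]   # features from real video
close = [rng.gauss(0.05, 1.0) for _ in range(10_000)]  # well-matched generator
far = [rng.gauss(1.0, 2.0) for _ in range(10_000)]     # poorly matched generator

# The well-matched generator scores a much smaller distance than the poor one.
score_close = frechet_gaussian_1d(real, close)
score_far = frechet_gaussian_1d(real, far)
```

In practice such metrics are computed over learned feature embeddings of whole videos rather than raw scalars, but the principle is the same: compare summary statistics of the generated distribution against those of real-world data.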

He compared VidGen-1 to large language models (LLMs).

“Predicting the next frame in a video is similar to predicting the next word in a sentence but much more high-dimensional,” added Voroninski. “Generating realistic video sequences of a driving scene represents the most advanced form of prediction for autonomous driving, as it entails accurately modeling the appearance of the real world and includes both intent prediction and path planning as implicit sub-tasks at the highest level of the stack. This capability is crucial for autonomous driving because, fundamentally, driving is about predicting what will happen next.”
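The LLM analogy can be made concrete with a minimal autoregressive rollout. This is not VidGen-1's architecture; the "model" below is just a linear extrapolator over short feature vectors standing in for frames, and every function name is a hypothetical placeholder. What it shows is the loop structure Voroninski describes: each predicted frame is fed back as context for the next, exactly as an LLM feeds back each predicted token.

```python
def predict_next(prev, curr):
    # Toy stand-in for a learned video model: linearly extrapolates the
    # next "frame" (a short feature vector) from the last two frames.
    # A real model would be a deep network conditioned on many frames.
    return [2 * c - p for p, c in zip(prev, curr)]

def rollout(seed_frames, n_future):
    """Autoregressive generation: append each predicted frame to the
    context and predict again, mirroring next-token prediction in an LLM
    but over (much higher-dimensional) video frames."""
    frames = list(seed_frames)
    for _ in range(n_future):
        frames.append(predict_next(frames[-2], frames[-1]))
    return frames

seed = [[0.0, 0.0], [0.1, 0.2]]   # two observed "frames"
video = rollout(seed, 3)          # generate three future frames
```

The dimensionality gap is the hard part: a token is one symbol from a fixed vocabulary, while a frame is millions of correlated pixel values, which is why Voroninski calls video prediction "much more high-dimensional."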

VidGen-1 could apply to other domains

“Tesla may be doing a lot internally on the AI side, but many other automotive OEMs are just ramping up,” said Voroninski. “Our customers for VidGen-1 are these OEMs, and this technology could help them be more competitive in the software they develop to sell in consumer cars, trucks, and other autonomous vehicles.”

Helm.ai said its generative AI techniques offer high accuracy and scalability with a low computational profile. Because VidGen-1 supports rapid generation of assets in simulation with realistic behaviors, it can help close the simulation-to-reality or “sim2real” gap, asserted Helm.ai.

Voroninski added that Helm.ai’s model applies beyond generating video for simulation, extending to lower levels of the technology stack. It could be used in AMRs, autonomous mining vehicles, and drones, he said.

“Generative AI and generative simulation will be a huge market,” said Voroninski. “Helm.ai is well-positioned to help automakers reduce development time and cost while meeting production requirements.”

About The Author

Eugene Demaitre

Eugene Demaitre is editorial director of the robotics group at WTWH Media. He was senior editor of The Robot Report from 2019 to 2020 and editorial director of Robotics 24/7 from 2020 to 2023. Prior to working at WTWH Media, Demaitre was an editor at BNA (now part of Bloomberg), Computerworld, TechTarget, and Robotics Business Review.

Demaitre has participated in robotics webcasts, podcasts, and conferences worldwide. He has a master's from the George Washington University and lives in the Boston area.


Copyright © 2025 WTWH Media LLC. All Rights Reserved. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of WTWH Media