The Robot Report


Closing the latency gap: Why physical AI requires edge-first architectures

By Madhu Gaganam | May 3, 2026

Figure: A cobot showing where the direct controller link and edge processor sit. Latency can pose safety risks in collaborative assembly cells. Source: Cogniedge.ai

Cloud-based vision systems have improved industrial analytics and predictive maintenance, but they fall short when real-time safety and throughput matter most on the shop floor. In high-mix collaborative assembly cells, even modest network latency can turn a promising human-robot collaboration (HRC) setup into a stop-and-go bottleneck.

The industry’s shift toward more collaborative robots demands more than safety cages or slower speeds. It requires architectures that let cobots dynamically adapt to human movement and fatigue while maintaining cycle time and safety.

The key is moving AI inference to the edge and establishing a direct, low-latency bridge from the edge processor straight to the robot controller, bypassing the legacy PLC (programmable logic controller) for dynamic kinematic adjustments.

The physics of latency in speed and separation monitoring

Figure: The difference between computing on the cloud and at the edge. Source: Cogniedge.ai

ISO/TS 15066 defines speed and separation monitoring (SSM) as a core safety method for collaborative robots. The standard requires the robot to maintain a protective separation distance from the operator and reduce speed or stop if that distance is breached.

Consider a typical high-fidelity depth camera feeding skeletal tracking data to a remote server. Round-trip latency, including image transmission, inference, and command return, commonly ranges from 100 to 200 milliseconds. 

At a moderate arm speed of 2 m/s, the robot travels 200 to 400 mm (7.8 to 15.7 in.) during that delay. In a compact collaborative cell, a 300 mm (11.8 in.) blind spot is the difference between safe operation and potential injury. 

To compensate, engineers widen safety zones and program conservative speeds or frequent protective stops. The result is reduced throughput that defeats the purpose of collaborative automation.

True real-time SSM in dynamic environments requires deterministic end-to-end latency below 30 ms—something that’s only possible when processing occurs millimeters from the sensor and the decision path connects directly to the motion controller.
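
To put those numbers in one place, here is a minimal sketch in Python that computes how far an arm travels during the perception-to-command delay. The 2 m/s arm speed, the 100 to 200 ms cloud round trip, and the 30 ms edge target are the figures cited above; everything else is illustrative.

```python
# How far does the robot travel before a new command can take effect?
# Speeds and latencies match the scenario described in the article.

def travel_during_latency(arm_speed_m_s: float, latency_s: float) -> float:
    """Distance in millimeters covered while a decision is still in flight."""
    return arm_speed_m_s * latency_s * 1000.0

ARM_SPEED = 2.0  # m/s, a moderate collaborative-arm speed

for label, latency_ms in [("cloud, best case", 100),
                          ("cloud, worst case", 200),
                          ("edge target", 30)]:
    blind_spot_mm = travel_during_latency(ARM_SPEED, latency_ms / 1000.0)
    print(f"{label:>17}: {latency_ms:3d} ms -> {blind_spot_mm:5.0f} mm of uncontrolled travel")
```

At the 30 ms target, uncontrolled travel drops to roughly 60 mm, which is what makes tight protective separation distances workable in a compact cell.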

Why legacy PLCs create an unacceptable bottleneck

Most brownfield cells still rely on traditional PLCs for safety logic. These devices were engineered for deterministic, discrete I/O and scan cycles typically ranging from 10 to 50 ms. They excel at reading a light curtain or an e-stop, but they struggle with the high-bandwidth, multidimensional data streams coming from modern vision systems, such as skeletal tracking, micro-movement analysis, and operator state estimation.

Routing edge AI inferences through the PLC adds another full scan cycle plus fieldbus overhead. The cumulative delay destroys the determinism needed for proactive SSM.
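
A rough budget makes the point concrete. The sketch below reuses the 10 to 50 ms scan-cycle range and the 30 ms target from the previous section; the edge-inference and fieldbus figures are illustrative assumptions, not measurements.

```python
# Rough latency budget (milliseconds) for two decision paths.
# The PLC scan range and the 30 ms target come from the article;
# the inference and fieldbus numbers are assumptions for this sketch.

EDGE_INFERENCE_MS = 15      # assumed on-device perception + inference
FIELDBUS_HOP_MS = 4         # assumed one-way fieldbus transfer
PLC_SCAN_MS = (10, 50)      # typical scan-cycle range cited above
TARGET_MS = 30              # deterministic end-to-end budget for proactive SSM

direct_path = EDGE_INFERENCE_MS + FIELDBUS_HOP_MS
plc_best = direct_path + PLC_SCAN_MS[0] + FIELDBUS_HOP_MS
plc_worst = direct_path + PLC_SCAN_MS[1] + FIELDBUS_HOP_MS

print(f"edge -> controller direct : {direct_path} ms (target {TARGET_MS} ms)")
print(f"edge -> PLC -> controller : {plc_best} to {plc_worst} ms")
```

Even with optimistic assumptions, the extra scan plus fieldbus hop pushes the path past the 30 ms budget, which is exactly the bottleneck a direct bridge removes.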

In practice, many integrators find themselves forced to run the robot at reduced speeds or accept frequent interruptions even when the AI knows the situation is safe.

Figure: Traditional architecture versus edge architecture. Source: Cogniedge.ai

Building the direct edge-to-controller bridge

The solution is a localized real-time safety processor that sits at the workcell and communicates directly with the robot controller, bypassing the PLC for non-safety-critical but time-sensitive adjustments.

This layer ingests multi-modal sensor data (depth cameras, IMUs, force-torque sensors) at the edge, runs low-latency AI inference, and injects updated commands into the robot’s motion planner via high-speed industrial protocols. Common implementation paths include:

  • EtherCAT or PROFINET IRT for sub-millisecond deterministic cycles when the controller supports fieldbus extension.
  • Real-time UDP or native robot APIs (URScript for Universal Robots, RAPID for ABB, KAREL for FANUC) for direct socket communication to the motion controller.

The safety-rated PLC continues to handle certified emergency stops and SIL/PL-rated functions. The edge processor acts as a parallel, high-speed channel that continuously updates trajectory, speed, and force setpoints without waiting for the next PLC scan. This “safety coprocessor” architecture maintains full compliance while enabling proactive behavior.
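
As an illustration of the native-API option above, the sketch below streams a short URScript command from an edge process to a Universal Robots controller over TCP. The controller address, the use of the UR script interface on port 30002, and the speed values are assumptions for the sketch, not a drop-in integration.

```python
# Minimal sketch of the "direct socket" pattern: an edge process pushes a
# short URScript snippet straight to the robot controller, bypassing the
# PLC for a time-sensitive (but non-safety-rated) speed adjustment.

import socket

ROBOT_IP = "192.168.0.10"   # hypothetical controller address
SCRIPT_PORT = 30002         # UR script interface in a typical setup

def send_urscript(program: str) -> None:
    """Stream a URScript program to the controller, which executes it on arrival."""
    with socket.create_connection((ROBOT_IP, SCRIPT_PORT), timeout=1.0) as sock:
        sock.sendall(program.encode("utf-8"))

# Example: slow the tool speed when the edge model flags elevated risk.
send_urscript("speedl([0.05, 0, 0, 0, 0, 0], a=0.5, t=0.5)\n")
```

In practice, a channel like this is typically wrapped in a watchdog so that a dropped connection returns the robot to its conservative, safety-rated behavior rather than leaving the last setpoint in force.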

Adjusting kinematics on the fly in high-mix cells

With the latency gap closed and a direct command path established, the cobot can move from reactive stopping to continuous, adaptive collaboration.

In a high-mix assembly station, an operator’s movements may become slower or more erratic toward the end of a shift, which can be an early indicator of fatigue. The edge processor detects these micro-deviations in real time through skeletal tracking and velocity profiling.

Instead of triggering a protective stop, the system issues immediate kinematic adjustments:

  • Reduce maximum acceleration from 5 m/s² to 2 m/s².
  • Widen the approach angle by 15° to give the operator more space.
  • Lower torque limits on approach axes to reduce collision energy.

This approach keeps the cell in continuous motion. The robot adapts its behavior to the human’s immediate state rather than defaulting to a hard stop, preserving both safety and productivity. A simplified flow chart illustration of the decision loop looks like this:

Figure: The deterministic decision-making cycle. Source: Cogniedge.ai
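
In code, the same decision loop can be sketched as follows. The deviation metric, thresholds, baseline values, and helper names are hypothetical; the limit figures reuse the adjustments listed above.

```python
# Sketch of the fatigue-aware decision loop: score operator movement against
# a shift baseline and scale kinematic limits instead of issuing a protective
# stop. Thresholds and helper names are hypothetical.

from dataclasses import dataclass

@dataclass
class KinematicLimits:
    max_accel_m_s2: float             # maximum acceleration
    approach_angle_offset_deg: float  # extra clearance on the approach
    torque_scale: float               # fraction of nominal torque on approach axes

NOMINAL = KinematicLimits(5.0, 0.0, 1.0)
CAUTIOUS = KinematicLimits(2.0, 15.0, 0.6)

def fatigue_score(current: list[float], baseline: list[float]) -> float:
    """Crude deviation metric: mean absolute slowdown versus the shift baseline."""
    return sum(abs(b - c) for c, b in zip(current, baseline)) / max(len(baseline), 1)

def select_limits(score: float, threshold: float = 0.15) -> KinematicLimits:
    """Pick adaptive limits instead of defaulting to a hard stop."""
    return CAUTIOUS if score > threshold else NOMINAL

# One pass of the loop with illustrative wrist velocities (m/s) from skeletal tracking:
baseline = [0.42, 0.38, 0.40]
current = [0.25, 0.22, 0.27]
print(select_limits(fatigue_score(current, baseline)))
```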

Hardware requirements for edge-first cobot safety

Factory floors have limited space and power. Edge processors for this use case must operate below 1 W while delivering real-time inference on temporal data streams. Neuromorphic chips and Spiking Neural Networks (SNNs) are particularly well suited because they process change detection and time-series data with extreme efficiency and low latency.
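
To illustrate why event-driven processing fits this workload, the sketch below converts a dense sensor frame into sparse change events, the kind of input a spiking or neuromorphic pipeline consumes. The data, threshold, and frame size are illustrative.

```python
# Conceptual sketch of event-based (delta) encoding: only values that change
# beyond a threshold emit an event, so a quiet scene costs almost nothing.
# Data and threshold are illustrative, not tied to any neuromorphic SDK.

import numpy as np

def to_events(prev_frame: np.ndarray, frame: np.ndarray, threshold: float = 0.05):
    """Return sparse (index, sign) events where the signal changed meaningfully."""
    delta = frame - prev_frame
    idx = np.flatnonzero(np.abs(delta) > threshold)
    return list(zip(idx.tolist(), np.sign(delta.flat[idx]).tolist()))

rng = np.random.default_rng(0)
prev = rng.random(1000)
curr = prev.copy()
curr[::100] += 0.2   # a handful of meaningful changes in an otherwise static scene

events = to_events(prev, curr)
print(f"{len(events)} events instead of {curr.size} values to process")
```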

These compact, fanless modules mount directly in or near the work cell, connect via standard industrial Ethernet, and integrate with existing robot controllers without requiring new cabinets or major rewiring.


Practical benefits for systems integrators

By implementing direct edge-to-controller architectures, the industry can finally deliver on the ultimate promise of high-mix collaborative cells: fluid interaction that maintains takt time without sacrificing safety. This shift unlocks immediate value across the entire manufacturing ecosystem.

For systems integrators, it offers a scalable approach that works in brownfield environments, leverages standard protocols across robot brands, and preserves existing investments in safety-rated PLCs. For manufacturers, it protects the bottom line by eliminating the frequent micro-stops that traditionally destroy cycle times. Most importantly, for the operators working on the line, it creates a safer, fatigue-aware environment where the robot acts as a true, responsive partner rather than a rigid machine.

As collaborative automation grows more complex, closing the latency loop at the controller level will be the defining factor that separates successful, high-throughput deployments from those limited by legacy bottlenecks.

About the author

Madhu Gaganam is the founder and CEO of Cogniedge.ai and an engineering technologist with more than 30 years of industrial automation experience at companies including Rockwell Automation, Gartner, NXP, and Dell. A recognized industry authority, he is a Top 10 Robotics Thought Leader on Thinkers360, co-chair of the Digital Twin Consortium, and an active IEEE RAS member.
