Hierarchical Reinforcement Learning helping Army advance drone swarms

By The Robot Report Staff | August 10, 2020

Army researchers developed a reinforcement learning approach that allows swarms of unmanned aerial and ground vehicles to optimally accomplish various missions while minimizing performance uncertainty.

Swarming is a method of operations where multiple autonomous systems act as a cohesive unit by actively coordinating their actions.

Army researchers said future multi-domain battles will require swarms of dynamically coupled, coordinated heterogeneous mobile platforms to overmatch enemy capabilities and threats targeting U.S. forces.

The Army is looking to swarming technology to execute time-consuming or dangerous tasks, said Dr. Jemin George of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.

“Finding optimal guidance policies for these swarming vehicles in real-time is a key requirement for enhancing warfighters’ tactical situational awareness, allowing the U.S. Army to dominate in a contested environment,” George said.

Reinforcement learning provides a way to optimally control uncertain agents to achieve multi-objective goals when a precise model for the agent is unavailable. However, existing reinforcement learning schemes can only be applied in a centralized manner, which requires pooling the state information of the entire swarm at a central learner. This drastically increases the computational complexity and communication requirements, resulting in unreasonable learning time, George said.

To solve this issue, George launched a research effort with Prof. Aranya Chakrabortty of North Carolina State University and Prof. He Bai of Oklahoma State University to tackle the large-scale, multi-agent reinforcement learning problem. The Army funded the effort through the Director’s Research Award for External Collaborative Initiative, a laboratory program that stimulates and supports new and innovative research with external partners.

A small unmanned Clearpath Husky robot, which was used by ARL researchers to develop a new technique to quickly teach robots novel traversal behaviors with minimal human oversight. Photo Credit: U.S. Army

The main goal of this effort is to develop a theoretical foundation for data-driven optimal control for large-scale swarm networks, where control actions will be taken based on low-dimensional measurement data instead of dynamic models.

The current approach is called Hierarchical Reinforcement Learning, or HRL, and it decomposes the global control objective into multiple hierarchies: several small group-level microscopic controls and a broad swarm-level macroscopic control.

“Each hierarchy has its own learning loop with respective local and global reward functions,” George said. “We were able to significantly reduce the learning time by running these learning loops in parallel.”

According to George, online reinforcement learning control of a swarm boils down to solving a large-scale algebraic matrix Riccati equation using system, or swarm, input-output data.
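
For readers unfamiliar with that equation, the sketch below shows the model-based version of the problem being solved. All matrices here are toy placeholders; the ARL approach learns the solution from swarm input-output data without ever knowing the dynamics (A, B).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n_agents = 4                    # hypothetical small swarm
n = 2 * n_agents                # stacked position/velocity states
A = np.zeros((n, n))            # toy double-integrator swarm dynamics
A[: n // 2, n // 2:] = np.eye(n // 2)
B = np.vstack([np.zeros((n // 2, n // 2)), np.eye(n // 2)])
Q = np.eye(n)                   # state cost (mission-objective weighting)
R = np.eye(n // 2)              # control-effort cost

# Continuous-time algebraic Riccati equation:
#   A.T @ P + P @ A - P @ B @ inv(R) @ B.T @ P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P  # optimal feedback gain, u = -K @ x
print(K.shape)                  # solving this directly scales as O(n^3)
```

The cubic scaling in the stacked swarm state is exactly why a centralized solve becomes impractical as the swarm grows.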

The researchers’ initial approach to solving this large-scale matrix Riccati equation was to divide the swarm into multiple smaller groups and run group-level local reinforcement learning in parallel while executing global reinforcement learning on a lower-dimensional compressed state from each group.
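
A minimal structural sketch of that initial scheme might look like the following, where the fixed partition, the local update, and the mean-state compression are all hypothetical stand-ins for details the article does not give.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def local_rl_update(states, inputs):
    """Stand-in for one group-level, data-driven learning step."""
    # A real implementation would run a policy/value update here;
    # we only return a feedback gain of the right shape.
    return np.zeros((inputs.shape[1], states.shape[1]))

def compress(states):
    """Collapse a group's states into one low-dimensional vector."""
    return states.mean(axis=0)

swarm_states = np.random.randn(12, 4)      # 12 agents, 4-dim state each
swarm_inputs = np.random.randn(12, 2)      # 2-dim control input each
groups = np.array_split(np.arange(12), 3)  # hypothetical fixed partition

# Group-level learning loops run in parallel...
with ThreadPoolExecutor() as pool:
    local_gains = list(pool.map(
        lambda idx: local_rl_update(swarm_states[idx], swarm_inputs[idx]),
        groups))

# ...while global learning sees only the compressed per-group states.
macro_state = np.stack([compress(swarm_states[idx]) for idx in groups])
print(macro_state.shape)                   # (3, 4) instead of (12, 4)
```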

Army researchers envision a hierarchical control for ground vehicle and air vehicle coordination. | U.S. Army graphic

Their current Hierarchical Reinforcement Learning scheme uses a decoupling mechanism that lets the team hierarchically approximate a solution to the large-scale matrix equation: first solve the local reinforcement learning problem, then synthesize the global control from the local controllers by solving a least squares problem, instead of running global reinforcement learning on the aggregated state. This further reduces the learning time.
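
The article does not spell out the least-squares formulation, but a generic version of the idea can be sketched as follows: fit a macroscopic gain that best reproduces the aggregated local control actions on the compressed states, with all data here being random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200                                      # recorded swarm snapshots
macro_states = rng.standard_normal((T, 4))   # compressed swarm-level states
local_actions = rng.standard_normal((T, 2))  # aggregated group-level actions

# Least squares: choose K_g minimizing ||macro_states @ K_g.T - local_actions||
K_g_T, *_ = np.linalg.lstsq(macro_states, local_actions, rcond=None)
K_g = K_g_T.T                                # macroscopic gain, shape (2, 4)

# The synthesized global command for a new compressed state x_g:
u_g = -K_g @ macro_states[-1]
print(K_g.shape, u_g.shape)
```

A least-squares fit like this is cheap compared to running a second reinforcement learning loop on the aggregated state, which is consistent with the reported reduction in learning time.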

Experiments have shown that compared to a centralized approach, HRL was able to reduce the learning time by 80% while limiting the optimality loss to 5%.

“Our current HRL efforts will allow us to develop control policies for swarms of unmanned aerial and ground vehicles so that they can optimally accomplish different mission sets even though the individual dynamics for the swarming agents are unknown,” George said.

George said he is confident this research will have an impact on the future battlefield, and that it was made possible by the innovative collaboration behind it.

“The core purpose of the ARL science and technology community is to create and exploit scientific knowledge for transformational overmatch,” George said. “By engaging external research through ECI and other cooperative mechanisms, we hope to conduct disruptive foundational research that will lead to Army modernization while serving as Army’s primary collaborative link to the world-wide scientific community.”

The team is currently working to further improve their HRL control scheme by considering optimal grouping of agents in the swarm to minimize computation and communication complexity while limiting the optimality gap.
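
To see why the grouping itself is an optimization problem, consider a toy cost model (our illustration, not ARL's): splitting the swarm into more groups shrinks each cubic-cost local solve but grows the swarm-level coordination burden.

```python
# Toy cost model, purely illustrative: local solve cost scales cubically
# with group size, while coordination cost grows with the number of groups.
n = 96                                    # hypothetical swarm size
for g in (2, 4, 8, 16, 32):               # candidate numbers of groups
    local = (n // g) ** 3                 # per-group solve, runs in parallel
    coord = g ** 3                        # swarm-level synthesis over g groups
    print(f"{g:2d} groups: local {local:6d} + coordination {coord:6d}")
```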

They are also investigating the use of deep recurrent neural networks to learn and predict the best grouping patterns, as well as the application of these techniques to the optimal coordination of autonomous air and ground vehicles in Multi-Domain Operations in dense urban terrain.

Editor’s Note: This article was republished from the U.S. Army CCDC Army Research Laboratory.
