Development of a Distributed and Scalable Testbed for UAVs using Reinforcement Learning
Abstract
The aim of this project is to develop a testbed for designing and training multi-agent reinforcement learning (RL) algorithms for cooperative, self-organizing unmanned aerial vehicles (UAVs). The main purpose of developing a scalable and distributed testbed based on multi-agent RL algorithms is to enable UAVs to make decisions from real-time data and perform tasks autonomously. In this project, a novel testbed is developed that allows different multi-agent RL algorithms to be integrated with a flight simulator. The testbed supports UAVs that learn to fly and coordinate in the simulated environment to accomplish a target-tracking objective, and it employs novel techniques that enable faster learning and higher performance than conventional multi-agent RL methods. FlightGear is the flight simulator used in this project, and the testbed can be used to train control models for a wide variety of use cases. As a proof of concept, a UAV target-tracking problem is formulated: the tracking aircraft follows the path of the target aircraft, and both tracking and target aircraft are controlled by different multi-agent RL models and fly in a common flight simulator. The testbed can also scale up the number of tracking aircraft and can be distributed across several systems.
Springer Science and Business Media LLC
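The abstract does not give implementation details of the paper's method or its FlightGear integration. Purely as an illustration of the target-tracking formulation it describes (a learning agent rewarded for staying close to a moving target), here is a minimal, hypothetical sketch using tabular Q-learning on a toy toroidal grid; the state is the tracker-to-target offset, the target flies a fixed eastward path, and all names and parameters are assumptions of this sketch, not the paper's.

```python
# Toy stand-in for the target-tracking task: a single tracker learns, via
# tabular Q-learning, to follow a target that moves east on a toroidal grid.
# This is an illustrative sketch only, NOT the paper's multi-agent method.
import random

SIZE = 8                                         # toroidal grid dimension
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]     # move east, west, north, south

def wrap(d):
    """Shortest signed difference on the torus, in [-SIZE//2, SIZE//2)."""
    return (d + SIZE // 2) % SIZE - SIZE // 2

def dist(ax, ay, tx, ty):
    """Toroidal Manhattan distance between tracker and target."""
    return abs(wrap(tx - ax)) + abs(wrap(ty - ay))

def step(ax, ay, tx, ty, action):
    """Apply the tracker's action, then advance the target one cell east."""
    dx, dy = ACTIONS[action]
    ax, ay = (ax + dx) % SIZE, (ay + dy) % SIZE
    tx = (tx + 1) % SIZE
    return ax, ay, tx, ty

def state(ax, ay, tx, ty):
    """Relative offset from tracker to target (what the tracker observes)."""
    return (wrap(tx - ax), wrap(ty - ay))

def train(episodes=1000, horizon=40, alpha=0.3, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}
    q = lambda s: Q.setdefault(s, [0.0] * len(ACTIONS))
    for _ in range(episodes):
        ax, ay, tx, ty = (rng.randrange(SIZE) for _ in range(4))
        for _ in range(horizon):
            s = state(ax, ay, tx, ty)
            if rng.random() < eps:                # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q(s)[i])
            ax, ay, tx, ty = step(ax, ay, tx, ty, a)
            s2 = state(ax, ay, tx, ty)
            r = -dist(ax, ay, tx, ty)             # reward: stay close to target
            q(s)[a] += alpha * (r + gamma * max(q(s2)) - q(s)[a])
    return Q

def evaluate(Q, steps=60):
    """Run the greedy policy from a fixed start; return per-step distances."""
    ax, ay, tx, ty = 0, 0, 4, 4
    dists = []
    for _ in range(steps):
        s = state(ax, ay, tx, ty)
        a = max(range(len(ACTIONS)),
                key=lambda i: Q.get(s, [0.0] * len(ACTIONS))[i])
        ax, ay, tx, ty = step(ax, ay, tx, ty, a)
        dists.append(dist(ax, ay, tx, ty))
    return dists
```

In the paper's setting, the toy `step` function would be replaced by the FlightGear simulation and the single tabular learner by the multi-agent RL models controlling the tracking and target aircraft; the reward shaping and state encoding here are placeholders chosen only to make the tracking objective concrete.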

