View-Invariant Spatiotemporal Attentive Motion Planning and Control Network for Autonomous Vehicles
Autonomous driving vehicles (ADVs) are intelligent machines that perceive their environment and make driving decisions. Most existing autonomous driving systems are built as hand-engineered perception-planning-control pipelines. However, designing generalized handcrafted rules for autonomous driving in urban environments is complex. An alternative approach is imitation learning (IL) from human driving demonstrations. However, most previous studies on IL for autonomous driving face several critical challenges: (1) poor generalization to unseen environments due to distribution shift, such as changes in driving view and weather conditions; (2) lack of interpretability; and (3) training on only a single driving task. To address these challenges, we propose a view-invariant spatiotemporal attentive planning and control network for autonomous vehicles. The proposed method first extracts spatiotemporal representations from front- and top-view driving image sequences through an attentive Siamese 3D ResNet. A maximum mean discrepancy (MMD) loss is then employed to minimize the spatiotemporal discrepancy between these driving views and produce a view-invariant spatiotemporal representation, reducing the domain shift caused by view changes. Finally, multitask learning (MTL) is employed to jointly train the trajectory planning and high-level control tasks from the learned representations and previous motions. Extensive experimental evaluations on a large autonomous driving dataset with varied weather and lighting conditions verify that the proposed method produces feasible motion plans and control commands for autonomous vehicles.
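The view-alignment step described above rests on the MMD criterion. As a rough sketch (not the authors' implementation, and with the Gaussian-kernel bandwidth `sigma` chosen arbitrarily here), the squared MMD between two batches of pooled spatiotemporal feature vectors, one per driving view, can be estimated as:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Pairwise Gaussian kernel values between rows of a and rows of b."""
    # Squared Euclidean distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between feature batches x and y."""
    kxx = gaussian_kernel(x, x, sigma).mean()  # within-batch similarity, view 1
    kyy = gaussian_kernel(y, y, sigma).mean()  # within-batch similarity, view 2
    kxy = gaussian_kernel(x, y, sigma).mean()  # cross-view similarity
    return kxx + kyy - 2.0 * kxy
```

When the two views yield identical feature distributions the estimate approaches zero, so minimizing this quantity as a loss pushes the front- and top-view representations toward a shared, view-invariant embedding.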