
Pathfinding in stochastic environments: learning vs planning

Among the main challenges associated with navigating a mobile robot in complex environments are partial observability and stochasticity. This work proposes a stochastic formulation of the pathfinding problem, assuming that obstacles of arbitrary shapes may appear and disappear at random moments of time. Moreover, we consider the case when the environment is only partially observable for an agent. We study and evaluate two orthogonal approaches to tackling the problem of reaching the goal under such conditions: planning and learning. Within planning, an agent constantly re-plans and updates the path based on the history of observations using a search-based planner. Within learning, an agent asynchronously learns to optimize a policy function using recurrent neural networks (we propose an original efficient, scalable approach). We carry out an extensive empirical evaluation of both approaches, which shows that the learning-based approach scales better with the increasing number of unpredictably appearing/disappearing obstacles. At the same time, the planning-based one is preferable when the environment is close to deterministic (i.e., external disturbances are rare). Code is available at https://github.com/Tviskaron/pathfinding-in-stochastic-envs.
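The planning approach described above can be sketched as a minimal re-planning loop: after each step, the agent re-runs a search-based planner (here, A* on a 4-connected grid) from its current position, while cells toggle at random between free and blocked. Function names, the grid representation, and the toggle model below are illustrative assumptions, not the paper's actual code.

```python
import heapq
import random

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 means blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None  # goal currently unreachable

def replan_navigate(grid, start, goal, toggle_prob=0.05, max_steps=200, rng=None):
    """Re-planning loop: take one step along the current A* path, let the
    environment randomly toggle a cell (a crude stand-in for stochastically
    appearing/disappearing obstacles), then re-plan from the new position.
    Returns the number of steps to the goal, or None on failure."""
    rng = rng or random.Random(0)
    pos, steps = start, 0
    while pos != goal and steps < max_steps:
        path = astar(grid, pos, goal)
        if path is None or len(path) < 2:
            return None  # temporarily blocked; a real agent would wait and retry
        pos = path[1]
        steps += 1
        # stochastic environment: with some probability, flip one random cell
        if rng.random() < toggle_prob:
            r, c = rng.randrange(len(grid)), rng.randrange(len(grid[0]))
            if (r, c) not in (pos, goal):
                grid[r][c] ^= 1
    return steps if pos == goal else None
```

With `toggle_prob=0` the environment is deterministic and the loop reduces to following a single shortest path, which matches the abstract's observation that planning is preferable when external disturbances are rare.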

Related Results

Initial Experience with Pediatrics Online Learning for Nonclinical Medical Students During the COVID-19 Pandemic
Abstract: Background: To minimize the risk of infection during the COVID-19 pandemic, the learning mode of universities in China has been adjusted, and the online learning o...
Pathfinding visualizer
Visualizations of algorithms contribute to improving computer science education. The process of teaching and learning of algorithms is sometimes complex and hard to understand pro...
Stochastic Modeling Of Space Dependent Reservoir-Rock Properties
Abstract: Numerical modeling of space-dependent and variant reservoir-rock properties, such as porosity, permeability, etc., is routinely used in the oil industry....
Stochastic Models for Ontogenetic Growth
Based on allometric theory and scaling laws, numerous mathematical models have been proposed to study ontogenetic growth patterns of animals. Although deterministic models have pro...
Systematics of Literature Reviews: Learning Model of Discovery Learning in Science Learning
The development of the 21st century has affected the world of education. Students today must be guided to learn more creatively and actively. This study aims... Furthermore, ...
IDENTIFYING BARRIERS IN E-LEARNING, A MEDICAL STUDENT'S PERSPECTIVE
Objective: To recognize the barriers in different modes of e-learning, from the medical student's perspective, during the period of the COVID-19 pandemic. Study Desi...
Evaluating Evolutionary and Gradient-Based Algorithms for Optimal Pathfinding
Abstract: Pathfinding in complex topographies poses a challenge, with applications extending from urban planning to autonomous navigation. While numerous algorithms offer potential so...
ANALYSIS AND EVALUATION OF E-LEARNING MEDIA IN ISLAMIC RELIGIOUS EDUCATION INSTRUCTION
The Internet can be used as a way to transfer knowledge from teacher to student. E-learning is one learning medium that uses the internet. The purpose of this research is to describe...
