Session: 07-10-02: Mobile Robots and Unmanned Ground Vehicles
Paper Number: 112606
A Comparison of Motion Planning Methods for Autonomous Ground Vehicle Exploration and Search
Navigation of an Autonomous Ground Vehicle (AGV) is a major challenge in the exploration of uncharted terrain, search and rescue (SAR) missions, and military assignments. An AGV is subject to numerous uncertainties such as unexpected obstacles, uneven terrain, and hazardous regions. Motion planning algorithms are responsible for the safe autonomous navigation of an AGV through its surrounding environment. In exploration operations such as SAR missions or military assignments, however, the AGV must additionally navigate its environment in an energy-efficient, quick, and collision-free manner. Widely used motion planning algorithms can be categorized as graph-search, sampling-based, optimization-based, and reaction-based methods. In this paper, we compare the performance of the A* and D* Lite graph-search, RRT and RRT* sampling-based, MPC optimization-based, and APF reaction-based motion planning algorithms for exploration and search operations using realistic environments in AutoVRL, a high-fidelity simulator built using open-source tools. Graph-search algorithms operate on a graph of nodes connected by unidirectional or bidirectional edges, each with an associated path cost. A* is one of the most widely used graph-search algorithms; it computes the lowest-cost path from an initial node to a goal node by evaluating, for each node, the sum of the accumulated positive path cost from the start and a heuristic estimate of the lowest remaining cost to travel from that node to the goal. D* Lite, an extension of A*, is well suited to motion planning in dynamic environments. The algorithm maintains a priority queue of nodes that is expanded and updated when the graph changes; it also keeps the set of nodes that have already been explored and re-explores those whose costs have been updated, rather than replanning from scratch.
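The A* evaluation described above can be sketched in a few lines. This is a minimal illustrative example on a 4-connected 2D occupancy grid with unit edge costs and a Manhattan heuristic; the grid representation and cost choices are assumptions for illustration, not the paper's actual AGV environment:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid.
    grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples.
    Returns the lowest-cost path as a list of cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    # Admissible heuristic: Manhattan distance to the goal
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    open_heap = [(h(start), 0, start)]           # entries are (f = g + h, g, node)
    g_cost = {start: 0}
    parent = {start: None}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                         # reconstruct path via parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1                       # unit cost per grid move
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    parent[nb] = node
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None                                  # goal unreachable
```

Because the heuristic never overestimates the true remaining cost, the first time the goal is popped from the priority queue the returned path is cost-optimal.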
Unlike graph-search methods, sampling-based algorithms sample the workspace at random nodes to generate sub-optimal trajectories. The most common sampling-based algorithms studied in this paper are the Rapidly-exploring Random Tree (RRT) and RRT*. In RRT, a random point is sampled from the workspace, and the nearest node in the tree is extended toward it by at most a specified maximum distance. This process grows the tree into the unexplored regions of the workspace until a node is placed within a specified tolerance of the goal; however, the resulting path to the target position is not always optimal. RRT* does not simply connect each newly placed node to its nearest node in the tree; instead, it examines local nodes within a pre-defined neighborhood and rewires the tree to reduce the total path length from the starting position to the target. Unlike such sampling-based techniques, optimization algorithms such as Model Predictive Control (MPC) focus on minimizing the total cost of reaching a state based on the AGV's motion model: subject to a set of constraints that define the limitations of the AGV, they compute an optimal trajectory over the state and control inputs. In reaction-based methods such as the Artificial Potential Field (APF), the AGV is represented as a point object in a force field; the attractive force of the target position and the repulsive forces of obstacles define the potential energy of the AGV, which is used to plan its motion through the environment. This research compares the computation time, length, and efficiency of the trajectories generated by these algorithms, in realistic outdoor and urban environments, for quick and collision-free exploration and search.
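The RRT growth loop described above can likewise be sketched compactly. This is an illustrative sample in a 10x10 planar workspace with circular obstacles and goal-biased sampling; the workspace bounds, obstacle model, goal bias, and the fact that only the new node (not the connecting segment) is collision-checked are simplifying assumptions, not the paper's simulation setup:

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Basic RRT in a 10x10 2D workspace.
    obstacles: list of (cx, cy, radius) circles.
    Returns a list of waypoints from start to near the goal, or None on failure."""
    rng = random.Random(seed)
    free = lambda p: all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Goal-biased sampling: occasionally sample the goal itself
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        near_i = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[near_i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer from the nearest tree node toward the sample by at most `step`
        t = min(step / d, 1.0)
        new = (near[0] + t * (sample[0] - near[0]), near[1] + t * (sample[1] - near[1]))
        if not free(new):                        # reject nodes inside obstacles
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = near_i
        if math.dist(new, goal) < goal_tol:      # within tolerance of the goal
            path, i = [], len(nodes) - 1
            while i is not None:                 # walk parents back to the root
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

RRT* would differ at the connection step: instead of always attaching the new node to its nearest neighbor, it would search a local neighborhood for the lowest-cost parent and rewire nearby nodes through the new one when that shortens their paths.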
Presenting Author: Apoorva Khairnar Virginia Tech
Presenting Author Biography: Apoorva Khairnar completed her B.Tech. in Mechanical Engineering with Honors in Thermal Engineering at the College of Engineering, Pune (COEP), India. She is now pursuing a Ph.D. in Mechanical Engineering at Virginia Tech under the mentorship of Dr. Azim Eskandarian at the Autonomous Systems and Intelligent Machines Lab. Her research interests include Deep Reinforcement Learning and Model Predictive Control for all-terrain Autonomous Ground Vehicles.
Authors:
Apoorva Khairnar Virginia Tech
Shathushan Sivashangaran Virginia Tech
Azim Eskandarian Virginia Tech
Paper Type
Technical Paper Publication