Risk-based Motion Planning and Control for Robotic Systems
Abstract
A robot autonomy stack usually consists of several modules that enable the robot to perceive its environment and decide how to interact with it to achieve a desired task. At the heart of this stack are the motion planning and control modules. The motion planning module is generally responsible for decision making and for generating a plan for the robot to follow, such as determining how an autonomous car should drive around pedestrians and other vehicles. The control module computes a finer sequence of control actions that can be issued to the actuators to operate the robot. One issue that plagues robot motion planning and control is the effect of uncertainty on the system, which arises in several forms: process and measurement noise, unmodeled disturbances such as aerodynamic effects, and mismatch introduced by simplified dynamics models. Addressing these uncertainties is non-trivial and often requires a trade-off between how accurately the uncertainty is accounted for and the tractability of the resulting problems.

This dissertation develops risk-based solutions for several robot motion planning and control problems. The contributions are categorized into four main parts. The first part addresses control design with complex spatio-temporal requirements under uncertainty. An optimization-based control algorithm is designed to guarantee the satisfaction of the requirements when the robot dynamics are affected by process noise. The second part addresses sampling-based motion planning under uncertainty. Risk-aware variants of RRT*, a widely used sampling-based motion planning algorithm, are developed to account for process and measurement noise affecting the robotic system. The third part addresses a limitation of learning-based planning approaches, with an application to multi-agent motion planning.
A reinforcement learning (RL) framework is used to learn the policies, and an optimization-based module, called a safety filter, is proposed to enforce collision avoidance as hard constraints, something learning algorithms alone cannot guarantee. The safety filter is designed to handle process, state, and measurement noise. Finally, the fourth part addresses data-driven planning in dynamic and uncertain environments. Here, the robot is assumed to have access to future predictions of the obstacles in the environment, such as where they may be in the next few seconds. A safety filter is then developed that uses these sampled predictions to plan a safe trajectory for the robot.

In several parts, the distribution of the uncertainty is assumed to be unknown, as is generally the case in practice, and is handled using distributionally robust optimization (DRO) to develop solutions that guarantee safety or successful task completion despite the lack of knowledge of the underlying distribution. Throughout, examples are provided to emphasize and clarify core concepts, and simulations and physical experiments demonstrate the efficacy of the developed solutions.
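To give a flavor of the safety-filter idea in its simplest form, the sketch below minimally modifies a nominal control (e.g., one issued by a learned policy) so that it satisfies a single linearized safety constraint `a @ u >= b`, solved in closed form as a projection. The constraint data `a` and `b`, and the dynamics-free single-constraint setting, are illustrative assumptions for this sketch, not the dissertation's actual formulation, which handles collision avoidance under process, state, and measurement noise.

```python
import numpy as np

def safety_filter(u_nom, a, b):
    """Project a nominal control onto the half-space {u : a @ u >= b}.

    Solves  min_u ||u - u_nom||^2  s.t.  a @ u >= b  in closed form.
    Here a @ u >= b stands in for a linearized collision-avoidance
    condition; a and b are hypothetical placeholders for this sketch.
    """
    slack = a @ u_nom - b
    if slack >= 0.0:
        # Nominal action already satisfies the constraint: pass it through.
        return u_nom
    # Otherwise apply the minimal correction along the constraint normal.
    return u_nom + a * (-slack) / (a @ a)

# Example: the policy commands u_nom; the filter nudges it just enough
# to satisfy the (hypothetical) safety constraint.
u_nom = np.array([1.0, 0.0])
a = np.array([0.0, 1.0])   # illustrative constraint normal
b = 0.5                    # illustrative safety margin
u_safe = safety_filter(u_nom, a, b)
```

The key design point is that the filter is minimally invasive: it leaves safe actions untouched and otherwise returns the closest safe action, so the learned behavior is preserved as much as the hard constraint allows.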