Distributed Formation Control of Autonomous Vehicles via Vision-Based Motion Estimation








Unmanned autonomous vehicles are starting to play a major role in tasks such as search and rescue, environment monitoring, security surveillance, transportation, and inspection. Two critical challenges arise in these operations. First, navigation based on the global positioning system (GPS) alone is not sufficient: fully autonomous operation in cities and other dense indoor or outdoor environments requires ground and aerial vehicles to drive or fly between tall buildings or at low altitudes, where GPS signals are often shadowed or absent. Second, when multiple vehicles are involved in a mission, the complexity of the overall system grows with the number of vehicles. The goal of this dissertation is to address and solve these sensing and control challenges. In particular, we present a novel vision-based control strategy for a swarm of vehicles to autonomously achieve a desired geometric shape (i.e., a formation). A “formation” of vehicles is the fundamental building block upon which more sophisticated tasks are constructed. We start by showing how the mathematical machinery of graph theory and networked dynamical systems can be used to assign distributed navigation policies to individual vehicles such that the desired formation emerges from their collective behavior. In this setting, vehicles perform tasks collaboratively and exchange information with each other, preventing the system complexity from growing with the number of vehicles. We then present a novel camera pose estimation algorithm that recovers the rotation and translation (i.e., pose) changes of a moving camera from the captured images. Our algorithm, called QuEst, is based on the quaternion representation of rotation and achieves as much as a 50% reduction in estimation error compared to state-of-the-art algorithms.
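To make the graph-theoretic formation idea concrete, the following is a minimal sketch of displacement-based formation control, assuming single-integrator vehicle dynamics and a synchronous update; the function name, gain, and example graph are illustrative choices, not the dissertation's implementation. Each vehicle steers using only the relative positions of its graph neighbors, and the desired shape emerges from the collective behavior.

```python
# Illustrative sketch (assumed names/gains, not the dissertation's code):
# each vehicle is a planar single integrator that moves toward the
# consensus of its neighbors' relative positions, offset by the desired
# inter-vehicle displacements of the target formation.

def formation_step(positions, offsets, neighbors, gain=0.1):
    """One synchronous update of every vehicle's position.

    positions : list of (x, y) current vehicle positions
    offsets   : list of (x, y) desired positions in the formation frame
    neighbors : adjacency list; neighbors[i] = vehicles visible to vehicle i
    """
    new_positions = []
    for i, (xi, yi) in enumerate(positions):
        ux = uy = 0.0
        for j in neighbors[i]:
            xj, yj = positions[j]
            # Drive each relative position toward its desired displacement.
            ux += (xj - xi) - (offsets[j][0] - offsets[i][0])
            uy += (yj - yi) - (offsets[j][1] - offsets[i][1])
        new_positions.append((xi + gain * ux, yi + gain * uy))
    return new_positions

# Three vehicles converging to a triangle over a complete graph.
offsets = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
neighbors = [[1, 2], [0, 2], [0, 1]]
positions = [(0.3, 0.2), (-0.4, 0.9), (1.1, -0.5)]
for _ in range(200):
    positions = formation_step(positions, offsets, neighbors)
```

Note that only relative positions enter the update, so the formation is achieved up to a common translation, which is exactly why no global (GPS) frame is needed.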
In applications such as visual simultaneous localization and mapping (SLAM), the pose estimated from images is used to map an unknown environment and to localize the vision sensor within the generated map. Owing to its higher estimation accuracy, QuEst has the potential to improve the accuracy and computational efficiency of these applications. Lastly, we merge the proposed pose estimation algorithm with the formation control strategy to derive a vision-based formation control pipeline. In this pipeline, the images captured by the vehicles’ onboard cameras are fed to QuEst to localize neighboring vehicles and provide the feedback required for formation control. Hence, vehicles achieve the desired formation using only their local perception of the environment, effectively eliminating the need for GPS measurements. Throughout this work, we present several examples to clarify the concepts, along with simulations and experiments that validate the theoretical results.
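To fix ideas on the quaternion parameterization that QuEst builds on, the snippet below applies the standard identity mapping a unit quaternion (w, x, y, z) to a rotation matrix. It is shown purely as background on the representation; it is not QuEst itself, and the function name is an assumption for illustration.

```python
import math

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix.

    Standard quaternion-to-matrix identity, included only to illustrate
    the rotation parameterization used by quaternion-based estimators.
    """
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# A rotation of 90 degrees about the z-axis.
s = math.sin(math.pi / 4)
R = quat_to_rotmat((math.cos(math.pi / 4), 0.0, 0.0, s))
# R maps the x-axis (1, 0, 0) to the y-axis (0, 1, 0).
```

One appeal of the quaternion form is that it encodes rotation with four parameters and a single unit-norm constraint, avoiding the singularities of Euler angles.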



Autonomous vehicles, Formation control (Machine theory), Robot vision, Robot camera