Optimization of Pose, Texture and Geometry in 3D Reconstruction with Consumer Depth Cameras
3D reconstruction is one of the most popular topics in computer graphics and vision. A typical 3D reconstruction process recovers a 3D model, or a similar geometric representation, from different sources of data, including color images and depth data captured by depth cameras. Online and offline RGB-D (RGB and depth) reconstruction techniques have developed rapidly over the past decade with the prevalence of consumer depth cameras. However, current 3D reconstruction methods still lack robustness in the tracking process and pay little attention to the quality of the final reconstructed 3D models. This dissertation focuses on improving the robustness of camera tracking in the online RGB-D reconstruction process, as well as optimizing the camera pose, face texture, and geometry quality of 3D models in offline RGB-D reconstruction with consumer depth cameras.

One problem in online 3D reconstruction is that existing camera pose estimation approaches often suffer from fast-scanned data and generate inaccurate relative transformations between consecutive frames. To improve the tracking robustness of online 3D reconstruction, we propose a novel feature-based camera pose optimization algorithm for real-time 3D reconstruction systems. We demonstrate that our method improves on current methods by utilizing matched features across all frames, and that it is robust when reconstructing RGB-D data with large shifts between adjacent frames.

Another problem in RGB-D reconstruction is that the geometry of reconstructed 3D models is usually overly dense yet coarse, and the texture quality of mesh faces is often low. To address this problem, we introduce a new plane-based RGB-D reconstruction approach with plane, geometry, and texture optimization to improve the geometry and texture quality of the reconstructed models.
Compared to existing planar reconstruction methods, which cover only large planar regions in the scene, our method reconstructs the entire original scene without losing geometric detail, producing low-polygon, lightweight meshes with clear face textures and sharp features. We demonstrate the effectiveness of our approach by applying it to different RGB-D benchmarks and comparing it with other state-of-the-art reconstruction methods.
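To make the camera-tracking problem concrete: at its core, relative pose estimation from matched features amounts to finding the rigid transform that best aligns two sets of corresponding 3D points. The sketch below is not the dissertation's algorithm; it is a minimal closed-form baseline (the standard Kabsch/SVD solution) that the kind of feature-based optimization described above builds upon. The function name and interface are illustrative.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) aligning matched 3D points
    src -> dst, via the standard Kabsch/SVD solution.

    src, dst: (N, 3) arrays of corresponding points (e.g. back-projected
    matched features from two consecutive RGB-D frames)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # 3x3 cross-covariance of the centered correspondences.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

A frame-to-frame tracker that relies only on such pairwise estimates accumulates drift, which is why the approach above instead optimizes over matched features across all frames.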
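Similarly, the plane-based reconstruction stage presupposes fitting planes to clusters of depth points. The fragment below is only a minimal illustration of that primitive, a total-least-squares plane fit via SVD, not the optimization pipeline itself; the function name is hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane n . x = d to 3D points in the total-least-squares
    sense. Returns a unit normal n and scalar offset d."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the direction of least variance: the right
    # singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(pts - centroid)
    n = Vt[-1]
    d = n @ centroid
    return n, d
```

In a full plane-based pipeline, fits like this would feed into joint plane, geometry, and texture optimization, typically with robust estimation (e.g. RANSAC) to handle the noise of consumer depth cameras.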