Monocular vision based localization and mapping

K-REx Repository


dc.contributor.author Jama, Michal
dc.date.accessioned 2011-05-03T16:45:28Z
dc.date.available 2011-05-03T16:45:28Z
dc.date.issued 2011-05-03
dc.identifier.uri http://hdl.handle.net/2097/8561
dc.description.abstract In this dissertation, two applications related to vision-based localization and mapping are considered: (1) improving location estimates from a satellite navigation system by using on-board camera images, and (2) deriving position information from a video stream and using it to aid the autopilot of an unmanned aerial vehicle (UAV). In the first part of this dissertation, a method is presented for analyzing bundle adjustment (BA), a minimization process used in stereo-imagery-based 3D terrain reconstruction to refine estimates of camera poses (positions and orientations). Imagery obtained with pushbroom cameras is of particular interest. This work proposes a method to identify cases in which BA does not work as intended, i.e., cases in which the pose estimates returned by BA are no more accurate than the estimates provided by a satellite navigation system, due to the existence of degrees of freedom (DOF) in BA. Use of inaccurate pose estimates causes warping and scaling effects in the reconstructed terrain and prevents the terrain from being used in scientific analysis. The main contributions of this part of the work are: 1) formulation of a method for detecting DOF in BA; and 2) identification of DOF in two camera geometries commonly used to obtain stereo imagery. This part also presents results demonstrating that avoiding the DOF can yield significant accuracy gains in aerial imagery. The second part of this dissertation proposes a vision-based system for UAV navigation: a monocular-vision-based simultaneous localization and mapping (SLAM) system that measures the position and orientation of the camera and builds a map of the environment using the video stream from a single camera. This differs from common SLAM solutions, which use depth-measuring sensors such as LIDAR, stereoscopic cameras, or depth cameras. The SLAM solution was built by significantly modifying and extending a recent open-source SLAM solution that is fundamentally different from the traditional approach to solving the SLAM problem. The modifications provide the position measurements necessary for the navigation solution on a UAV while simultaneously building the map, all while maintaining control of the UAV. The main contributions of this part are: 1) extension of the map-building algorithm so that it can be used realistically while controlling a UAV and simultaneously building the map; 2) improved performance of the SLAM algorithm at lower camera frame rates; and 3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV using monocular vision for navigation is feasible and can be effective in Global Positioning System (GPS)-denied environments. en_US
dc.language.iso en_US en_US
dc.publisher Kansas State University en
dc.subject monocular vision en_US
dc.subject bundle adjustment en_US
dc.subject degrees of freedom en_US
dc.subject SLAM en_US
dc.title Monocular vision based localization and mapping en_US
dc.type Dissertation en_US
dc.description.degree Doctor of Philosophy en_US
dc.description.level Doctoral en_US
dc.description.department Department of Electrical and Computer Engineering en_US
dc.description.advisor Balasubramaniam Natarajan en_US
dc.description.advisor Dale E. Schinstock en_US
dc.subject.umi Engineering (0537) en_US
dc.date.published 2011 en_US
dc.date.graduationmonth May en_US
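
The abstract's first part turns on two technical ideas: bundle adjustment as a least-squares minimization, and degrees of freedom (gauge freedoms) that leave its cost unchanged. The following is a minimal sketch of the standard BA formulation, assuming the usual reprojection-error objective; the notation (P_j, X_i, pi, J) is generic and not taken from the dissertation, and the dissertation's actual DOF-detection method may differ.

% Standard bundle adjustment objective (generic formulation):
% cameras j have poses P_j, landmarks i have 3D positions X_i,
% and x_{ij} is the observed image projection of X_i in camera j.
\[
  \min_{\{P_j\},\,\{X_i\}} \; \sum_{i,j} \bigl\| x_{ij} - \pi(P_j, X_i) \bigr\|^{2}
\]
% A degree of freedom exists when a continuous change of the parameters
% leaves this cost unchanged. Near a solution it appears as a
% rank-deficient Jacobian J of the stacked residuals, i.e. a nontrivial
% null space of the Gauss-Newton normal matrix:
\[
  J^{\top} J \, v = 0 \quad \text{for some } v \neq 0 .
\]
% Eigenvectors of J^T J with (near-)zero eigenvalues indicate directions
% in pose/structure space that BA cannot constrain; pose estimates moved
% along such directions can drift from the satellite-navigation solution
% without reducing the reprojection error.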

