State estimation for vision-based simultaneous localization and mapping of unmanned vehicles
A vision-based simultaneous localization and mapping (SLAM) algorithm is developed to assist autonomous navigation. The proposed algorithm is particularly suited to situations where a priori information about the environment is unavailable, such as landing on an unknown planetary surface. A vision sensor, an IMU, and a laser altimeter constitute the onboard sensor suite. For the vision sensor, a collinearity model, rather than the standard pinhole camera model, is employed for state estimation. A nonlinear batch estimator and an extended Kalman filter are formulated to evaluate the performance of the algorithm, and validating simulation results are presented.
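As a rough illustration of the extended Kalman filter recursion mentioned in the abstract, the sketch below shows one generic predict/update cycle. The function names, the generic motion model `f`, and the measurement model `h` are placeholders for illustration only; they do not reflect the paper's actual collinearity-based measurement equations or sensor suite.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One generic EKF predict/update cycle (illustrative sketch).

    x, P : prior state estimate and covariance
    u, z : control input and measurement
    f, h : nonlinear motion and measurement models
    F, H : their Jacobians, evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state and covariance through the motion model.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q

    # Update: linearize the measurement model about the predicted state.
    H_k = H(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H_k @ P_pred @ H_k.T + R            # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

For example, with a trivial one-dimensional constant-position model and a direct measurement of the state, one step moves the estimate toward the measurement and shrinks the covariance, as expected of a Kalman update.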