Abstract: To address the inadequate environmental representation of maps built from a single sensor, which cannot provide a complete environment map for autonomous navigation of mobile robots, a more complete and accurate occupancy grid map was constructed by complementarily fusing the environmental information obtained from a LiDAR and a depth camera. First, the traditional ORB-SLAM2 algorithm was extended with dense point cloud map, octree map, and occupancy grid map construction. Second, to verify its performance, the enhanced ORB-SLAM2 algorithm was tested on the fr1_desk1 dataset and in real scenes; the results showed that its absolute position error was reduced by 52.2% and the length of the successfully tracked camera trajectory increased by 14.7%, yielding more accurate localization. Then, the enhanced ORB-SLAM2 algorithm was run on a D435i depth camera and the Gmapping SLAM algorithm on the LiDAR, and the resulting local maps were complementarily fused into a global occupancy grid map according to the rules of Bayesian estimation. Finally, an experimental platform was built for validation, and the fused map was compared with the maps built by the depth camera and the LiDAR individually. The experimental results showed that the fusion algorithm recognized surrounding obstacles more reliably, captured more complete environmental information, and produced a clearer and more precise map, meeting the needs of mobile robot navigation and path planning.
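The abstract does not spell out the Bayesian fusion rule; a minimal sketch of the standard cell-wise log-odds update that such complementary grid fusion typically uses is given below. The function names (prob_to_logodds, fuse_grids, to_trinary), the convention that 0.5 marks a cell unobserved by a sensor, and the 0.65/0.35 occupancy thresholds are illustrative assumptions, not taken from the paper.

    import numpy as np

    def prob_to_logodds(p):
        # Log-odds representation makes the Bayesian update additive.
        return np.log(p / (1.0 - p))

    def logodds_to_prob(l):
        # Inverse transform (logistic sigmoid).
        return np.exp(l) / (1.0 + np.exp(l))

    def fuse_grids(p_lidar, p_camera, p_prior=0.5):
        # Fuse two aligned occupancy grids cell by cell, assuming the
        # sensors are conditionally independent given the true map.
        # A probability of 0.5 means "unobserved by this sensor", so
        # such cells contribute nothing to the fused estimate.
        l_fused = (prob_to_logodds(p_lidar) + prob_to_logodds(p_camera)
                   - prob_to_logodds(p_prior))
        return logodds_to_prob(l_fused)

    def to_trinary(p_fused, occ_thresh=0.65, free_thresh=0.35):
        # Map fused probabilities to ROS-style occupancy grid values:
        # 100 = occupied, 0 = free, -1 = unknown.
        grid = np.full(p_fused.shape, -1, dtype=np.int8)
        grid[p_fused > occ_thresh] = 100
        grid[p_fused < free_thresh] = 0
        return grid

    # Toy example: the LiDAR observes an obstacle the camera misses
    # (left column) and vice versa (middle column), so the fused map
    # is more complete than either sensor's map alone.
    lidar  = np.array([[0.9, 0.5, 0.1],
                       [0.9, 0.5, 0.1],
                       [0.5, 0.5, 0.5]])
    camera = np.array([[0.5, 0.8, 0.1],
                       [0.5, 0.8, 0.1],
                       [0.5, 0.5, 0.5]])
    print(to_trinary(fuse_grids(lidar, camera)))

With a uniform prior (p_prior = 0.5) the prior term vanishes and the fused log-odds is simply the sum of the two sensors' log-odds, which is why a cell seen as occupied by only one sensor still ends up occupied in the global map.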