
SLAM Algorithm Based on Fusion of LiDAR and Depth Camera
Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund project:

Major Science and Technology Project of Anhui Province (201903a05020029)

    Abstract:

    To address the problems that a single sensor cannot sufficiently characterize the environment during map construction and thus cannot provide a complete environment map for the autonomous navigation of mobile robots, this paper complementarily fuses the environmental information acquired by a LiDAR and a depth camera to construct a more complete and accurate occupancy grid map. First, the traditional ORB-SLAM2 algorithm was enhanced to support dense point-cloud map, octree map, and occupancy grid map construction. Second, to verify the performance of the enhanced ORB-SLAM2 algorithm, it was tested on the fr1_desk1 dataset and in real scenes; the results showed that its absolute pose error was reduced by 52.2% and the tracked camera trajectory lengthened by 14.7%, yielding more accurate localization. Then, the D435i depth camera ran the enhanced ORB-SLAM2 algorithm while the LiDAR ran the Gmapping SLAM algorithm, and the two maps were complementarily fused into a global occupancy grid map according to a Bayesian estimation rule. Finally, an experimental platform was built for validation, and the mapping results were compared with those obtained from the depth camera and the LiDAR used individually. The experiments showed that the proposed fusion algorithm recognizes surrounding obstacles more reliably, acquires more complete environmental information, and builds clearer and more accurate maps, meeting the needs of mobile robot navigation and path planning.
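The abstract states that the two sensors' occupancy grids are fused complementarily according to a Bayesian estimation rule, but does not spell out the update itself. The sketch below shows one standard form of such a rule, Bayesian fusion in log-odds space, assuming each cell holds an occupancy probability in [0, 1], unobserved cells sit at the prior 0.5, and the two sensors' measurements are conditionally independent given the cell state. The function names (`log_odds`, `fuse_cell`, `fuse_grids`) are illustrative, not taken from the paper.

```python
import math

def log_odds(p, eps=1e-6):
    """Convert an occupancy probability to log-odds, clamped away from 0 and 1."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def fuse_cell(p_lidar, p_cam, p_prior=0.5):
    """Bayesian fusion of one cell: sum the sensors' log-odds, subtract the prior's.

    A cell left at the prior (0.5) by one sensor contributes zero log-odds,
    so each sensor fills in regions the other could not observe -- the
    "complementary" behavior the abstract describes.
    """
    l = log_odds(p_lidar) + log_odds(p_cam) - log_odds(p_prior)
    return 1.0 / (1.0 + math.exp(-l))

def fuse_grids(grid_lidar, grid_cam):
    """Fuse two equally sized occupancy grids cell by cell."""
    return [[fuse_cell(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(grid_lidar, grid_cam)]
```

Under these assumptions, a cell the camera never observed (probability 0.5) keeps the LiDAR's belief unchanged, while a cell both sensors mark as occupied ends up more confident than either sensor alone.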

Cite this article:

LIU Qingyun, YANG Huayang, LIU Tao, WU Tianyue, LU Chao. SLAM Algorithm Based on Fusion of LiDAR and Depth Camera[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(11): 29-38.

復(fù)制
分享
文章指標(biāo)
  • 點(diǎn)擊次數(shù):
  • 下載次數(shù):
  • HTML閱讀次數(shù):
  • 引用次數(shù):
History
  • Received: 2023-07-01
  • Published online: 2023-11-10