
Visual Perception Method for Cotton-picking Robots Based on Fusion of Multi-view 3D Point Clouds

Fund Project: Basic Science (Natural Science) Research Project of Jiangsu Higher Education Institutions (23KJA460008) and the Jiangsu Postgraduate Research and Practice Innovation Program (SJCX23_1180)




    Abstract:

    Traditional cotton-picking robots face visual perception limitations due to their reliance on a single viewpoint and two-dimensional imagery. To address this, a multi-view 3D point cloud registration method was introduced to enhance the robots' real-time 3D visual perception. Four fixed-pose RealSense D435 depth cameras were used to capture point cloud data of cotton from multiple viewpoints. To ensure the quality of fusion registration, each camera underwent imaging distortion calibration and depth error correction before operation. The relative pose between each camera's RGB imaging module and its AprilTag label was calibrated with the AprilTags algorithm; combined with the known transformation between the RGB and stereo imaging module coordinate systems of each depth camera, the point cloud coordinate transformations between cameras were derived, enabling accurate fusion and alignment. Experimental results showed that the method achieved an average global registration distance error of 0.93 cm and an average registration time of 0.025 s, demonstrating high accuracy and efficiency compared with commonly used methods. To meet the real-time requirements of cotton-picking robots, the point cloud acquisition, background filtering, and fusion registration steps were also profiled and optimized; the overall algorithm runs at 29.85 f/s, satisfying the real-time demands of the robot's perception system.
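The extrinsic chaining described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a single shared AprilTag visible to both cameras, with each tag pose expressed in the camera's RGB frame and each depth-to-RGB extrinsic taken from the device calibration; the function names are hypothetical.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def register_cloud(points_depth_j, T_rgb_depth_j, T_camj_tag, T_cami_tag, T_rgb_depth_i):
    """Map a point cloud from camera j's depth (stereo) frame into camera i's depth frame.

    T_rgb_depth_*: extrinsic taking the depth module frame to the RGB module frame
                   of each D435 (available from the device calibration).
    T_cam*_tag:    pose of the shared AprilTag expressed in each camera's RGB frame
                   (as estimated by an AprilTag detector).
    """
    # Chain of frames: depth_j -> rgb_j -> tag -> rgb_i -> depth_i
    T = (np.linalg.inv(T_rgb_depth_i)
         @ T_cami_tag
         @ np.linalg.inv(T_camj_tag)
         @ T_rgb_depth_j)
    # Apply the composed transform to homogeneous point coordinates.
    pts_h = np.hstack([points_depth_j, np.ones((len(points_depth_j), 1))])
    return (T @ pts_h.T).T[:, :3]
```

Because all four cameras are fixed, the composed transform for each camera pair can be computed once offline and reused per frame, which is what makes the millisecond-scale per-frame registration time plausible.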

Cite this article:

LIU Kun, WANG Xiao, ZHU Yifan. Visual Perception Method for Cotton-picking Robots Based on Fusion of Multi-view 3D Point Clouds[J]. Transactions of the Chinese Society for Agricultural Machinery, 2024, 55(4): 74-81.

History
  • Received: 2023-08-14
  • Published online: 2024-04-10