
Research on Limb Motion Command Recognition Technology of Lifting Robot

Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund projects:

National Natural Science Foundation of China (51575219) and Fujian Province Marine Economy Innovation and Development Regional Demonstration Project (2014FJPT03)



Abstract:

Given the limited monitoring distance of the Kinect camera for limb recognition, this paper proposes using a network camera with a large zoom range and constructing a CNN-BP fusion network for limb motion recognition, trained and evaluated on nine groups of robot lifting commands. First, the coordinates of 18 skeleton nodes are extracted with OpenPose to generate an RGB skeleton map and a skeleton vector. Then, transfer learning with the InceptionV3 network is applied to the RGB skeleton map to extract deep abstract image features, and the training set is expanded with rotation, translation, scaling, and affine data augmentation to prevent overfitting. Next, a BP neural network extracts shallow features such as points, lines, and planes from the skeleton vector. Finally, the outputs of the InceptionV3 network and the BP neural network are fused, and a Softmax solver produces the limb recognition result. The recognition result is fed into the robot-assisted lifting control system, where a double-verification control method completes the robot-assisted lifting operation. Experimental results show that the method ensures both accuracy and timeliness, achieving a real-time recognition accuracy above 0.99 and greatly improving long-distance human-robot interaction.

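The fusion stage can likewise be sketched in plain NumPy (feature dimensions and weights are placeholders, not trained values): the deep feature vector from the InceptionV3 branch and the shallow BP-network features are concatenated and scored by a softmax layer over the nine lifting commands.

```python
import numpy as np

# Hypothetical fusion head: concatenate deep CNN features with shallow
# BP-network features, then apply a linear layer + softmax over the
# nine lifting commands. Sizes and weights here are illustrative only.
rng = np.random.default_rng(0)
deep = rng.standard_normal(2048)     # stand-in for InceptionV3 pooled output
shallow = rng.standard_normal(64)    # stand-in for BP-network features

fused = np.concatenate([deep, shallow])           # fused feature vector
W = rng.standard_normal((9, fused.size)) * 0.01   # untrained weights
b = np.zeros(9)

logits = W @ fused + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax: probabilities over 9 commands
command = int(np.argmax(probs))       # index of the predicted lifting command
```

In the paper's double-verification scheme, the predicted command index would then be checked by the lifting control system before the robot acts on it.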

Cite this article

NI Tao, ZOU Shaoyuan, LIU Haiqiang, HUANG Lingtao, CHEN Ning, ZHANG Hongyan. Research on Limb Motion Command Recognition Technology of Lifting Robot[J]. Transactions of the Chinese Society for Agricultural Machinery, 2019, 50(6): 405-411, 426.

History
  • Received: 2018-11-16
  • Published online: 2019-06-10