
Fast Recognition Method for Multiple Apple Targets in Dense Scenes Based on CenterNet

Funding: Science and Technology Major Project of Shaanxi Province (2020zdzx03-04-01) and National Key Research and Development Program of China (2016YFD0700503)




    Abstract:

    To improve the recognition efficiency and environmental adaptability of apple-picking robots, so that they can quickly and accurately recognize multiple apple targets in dense scenes, a fast recognition method for multiple apple targets in dense scenes was proposed. Following the idea that "the point is the target", the method identifies an apple by predicting its center point together with its width and height. The CenterNet network was improved: a lightweight Tiny Hourglass-24 backbone network was designed and the residual modules were optimized, which raised the recognition speed. Experimental results showed that on the non-dense (close-range) test set the method achieved an average precision (AP) of 98.90% and an F1 score of 96.39%; on the dense (long-range) test set it achieved an AP of 93.63% and an F1 score of 92.91%, with an average recognition time of 0.069 s per image. Compared with the YOLO v3 and CornerNet-Lite networks on the dense-scene test set, the proposed method improved AP by 4.13 and 29.03 percentage points, respectively, and its average per-image recognition time was 0.04 s shorter than that of YOLO v3 and 0.646 s shorter than that of CornerNet-Lite. Because the method uses neither anchor boxes nor non-maximum suppression post-processing, it can provide technical support for apple-picking robots to quickly and accurately identify multiple apple targets in dense scenes.
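The anchor-free decoding step the abstract describes — predicting a center-point heatmap plus per-location width/height, with no anchor boxes and no non-maximum suppression — can be sketched roughly as follows. This is a simplified NumPy illustration, not the authors' implementation: the 3×3 max-filter peak extraction stands in for the max-pooling trick CenterNet uses instead of NMS, and all function names and thresholds here are illustrative assumptions.

```python
import numpy as np

def decode_center_points(heatmap, wh, score_thresh=0.5):
    """Turn CenterNet-style outputs into boxes (illustrative sketch).

    heatmap: (H, W) array of center-point confidences in [0, 1]
    wh:      (H, W, 2) array of predicted (width, height) at each location
    Returns a list of (x1, y1, x2, y2, score) tuples.
    """
    H, W = heatmap.shape

    # 3x3 max filter: a cell is a peak iff it equals its own local maximum.
    # This replaces NMS post-processing in center-point detectors.
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    local_max = np.full_like(heatmap, -np.inf)
    for dy in range(3):
        for dx in range(3):
            local_max = np.maximum(local_max, padded[dy:dy + H, dx:dx + W])
    peaks = (heatmap == local_max) & (heatmap >= score_thresh)

    # Each surviving peak directly yields one box: center plus width/height.
    boxes = []
    for y, x in zip(*np.nonzero(peaks)):
        w, h = wh[y, x]
        boxes.append((x - w / 2, y - h / 2,
                      x + w / 2, y + h / 2, float(heatmap[y, x])))
    return boxes
```

For example, a toy 8×8 heatmap with two confident peaks decodes into exactly two boxes, one per predicted apple center, without any overlap-suppression pass.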

Cite this article:

YANG Fuzeng, LEI Xiaoyan, LIU Zhijie, FAN Pan, YAN Bin. Fast Recognition Method for Multiple Apple Targets in Dense Scenes Based on CenterNet[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(2): 265-273.

History
  • Received: 2021-01-27
  • Published online: 2021-02-21