
Pig Counting Algorithm Based on Improved YOLO v5n
Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund project: National Key Research and Development Program of China (2021YFD2000802)

    Abstract:

    Pig counting is an important part of large-scale farming, providing a basis for precision feeding and asset management. Manual counting is not only time-consuming and inefficient but also error-prone. Deep learning-based intelligent pig counting algorithms already exist, but their accuracy is low in complex scenes with occlusion, overlap, and varying illumination. To improve counting accuracy in such scenes, a pig counting algorithm based on an improved YOLO v5n was proposed. Starting from improving pig target detection performance, a multi-scene pig dataset was constructed. Next, the SE-Net channel attention module was introduced into the backbone network to guide the model to attend more closely to the channel features of pig targets under occlusion. At the same time, a detection layer was added for multi-scale feature fusion, making the model easier to train and better at predicting pigs of different scales and improving detection performance in occluded scenes. Finally, the bounding-box loss function and the non-maximum suppression processing were improved so that the model recognized occluded targets more reliably. Experimental results showed that, compared with the original YOLO v5n algorithm, the improved algorithm reduced the mean absolute error (MAE) by 0.509, the root mean square error (RMSE) by 0.708, and the missed detection rate by 3.02 percentage points, while the average precision (AP) was improved by 1.62 percentage points to 99.39%, giving better accuracy and robustness in complex occlusion and overlap scenes. The algorithm's MAE was 0.173, which was 0.257, 1.497, and 1.567 lower than the pig counting algorithms CClusnet, CCNN, and PCN, respectively. In terms of time performance, the average recognition time for a single image was only 0.056 s, meeting the real-time requirements of actual pig farm production.

    Abstract:

    Pig counting is an important part of large-scale farming, providing the basis for precise pig feeding and asset management. Manual counting is time-consuming, inefficient, and error-prone. In recent years, as deep learning has come to far outperform traditional machine learning, deep learning-based methods have achieved state-of-the-art performance in tasks such as image classification, segmentation, and object detection. Intelligent pig counting algorithms based on deep learning already exist, but their counting accuracy is low in complex scenes with occlusion and varying illumination. To increase the accuracy of pig counting in complex scenarios, a pig counting algorithm was proposed based on an improved YOLO v5n. Starting from improving the performance of pig target detection, a multi-scene pig dataset was constructed. In object detection, each target is surrounded by background, and the environment around the target carries rich contextual information. In a deep convolutional neural network, however, although the convolutional layers capture image features over a global receptive field to describe the image, they essentially model only the spatial information of the image, not the relationships between channels. By introducing the SE-Net channel attention module into the backbone network, the model was guided to place greater emphasis on the channel features of pig target information under occlusion conditions, so that it could better locate the features to be detected and enhance network performance. At the same time, a real image of a dense pig farm scene may contain pig targets at various scales.
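The SE-Net module described above follows the squeeze-and-excitation pattern: global average pooling over each channel, a small two-layer bottleneck, and sigmoid gating that reweights the channels. A minimal NumPy sketch of that mechanism, with weight shapes and reduction ratio chosen for illustration rather than taken from the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, b1, w2, b2):
    """Squeeze-and-Excitation over a (C, H, W) feature map.

    Squeeze:    global average pooling -> per-channel descriptor (C,)
    Excitation: FC -> ReLU -> FC -> sigmoid -> channel weights in (0, 1)
    Scale:      reweight each channel of the input feature map.
    """
    squeezed = feature_map.mean(axis=(1, 2))          # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed + b1)      # ReLU bottleneck, (C/r,)
    weights = sigmoid(w2 @ hidden + b2)               # (C,), one gate per channel
    return feature_map * weights[:, None, None]

# Illustrative setup: 8 channels, reduction ratio r = 2
C, H, W, r = 8, 4, 4, 2
rng = np.random.default_rng(0)
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)); b1 = np.zeros(C // r)
w2 = rng.standard_normal((C, C // r)); b2 = np.zeros(C)
y = se_block(x, w1, b1, w2, b2)
```

Because the gates lie in (0, 1), the block can only attenuate channels; training pushes the gates toward 1 for informative channels (e.g. those carrying occluded-pig cues) and toward 0 for the rest.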
In order to deal with the complex, densely occluded scenes of an actual production pig farm and obtain richer, more comprehensive feature information, a detection layer was added to the original three detection layers of different scales for multi-scale feature detection, so as to better learn the multi-level features of occluded targets and improve the model's detection performance in complex occlusion scenes. Finally, the bounding-box loss function and the non-maximum suppression processing were improved so that the model recognized occluded targets more reliably. According to the experimental results, in contrast with the original YOLO v5n algorithm, the improved algorithm reduced the mean absolute error (MAE) by 0.509, the root mean square error (RMSE) by 0.708, and the missed detection rate by 3.02 percentage points, while the average precision (AP) was improved by 1.62 percentage points to 99.39%. The improved algorithm showed high accuracy and good robustness in complex occlusion and overlap scenarios. Compared with the pig counting algorithms CClusnet, CCNN, and PCN, the MAE of this algorithm was 0.173, which was 0.257, 1.497, and 1.567 lower, respectively. In terms of time performance, recognizing a single image took only 0.056 s on average, satisfying the real-time requirements of actual pig farm production.
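The abstract does not name the exact bounding-box loss or NMS variant used. For overlapping targets such as crowded pigs, a distance-aware suppression rule like DIoU-NMS (which subtracts a normalized center-distance penalty from the IoU before thresholding) is one common choice; the sketch below illustrates that idea under this assumption, not the authors' exact implementation:

```python
import numpy as np

def diou(box, boxes):
    """DIoU between one box and an array of boxes, format (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area_a + area_b - inter)
    # squared distance between box centers
    cx_a, cy_a = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    cx_b, cy_b = (boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2
    d2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    # squared diagonal of the smallest enclosing box
    ex1 = np.minimum(box[0], boxes[:, 0]); ey1 = np.minimum(box[1], boxes[:, 1])
    ex2 = np.maximum(box[2], boxes[:, 2]); ey2 = np.maximum(box[3], boxes[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - d2 / c2

def diou_nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring boxes; suppress a box only when its
    distance-penalized IoU with a kept box exceeds the threshold."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[diou(boxes[i], boxes[rest]) <= thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = diou_nms(boxes, scores, thresh=0.5)
```

Because the center-distance penalty lowers the effective overlap for boxes whose centers are far apart, adjacent but distinct animals are less likely to be merged into a single detection than under plain IoU-based NMS.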

Cite this article

YANG Qiumei, CHEN Miaobin, HUANG Yigui, XIAO Deqin, LIU Youfu, ZHOU Jiaxin. Pig Counting Algorithm Based on Improved YOLO v5n[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(1): 251-262.

復(fù)制
分享
文章指標(biāo)
  • 點(diǎn)擊次數(shù):
  • 下載次數(shù):
  • HTML閱讀次數(shù):
  • 引用次數(shù):
History
  • Received: 2022-09-30
  • Published online: 2023-01-10