Abstract: From harvest to sale, apples are susceptible to mechanical damage, which degrades their quality and can lead to rot. Detecting and removing damaged fruit in a timely manner is crucial to prevent further deterioration. However, early-stage mechanical damage often manifests as only subtle color changes, making it difficult to detect. To address this issue, a method for detecting implicit apple damage was presented based on structured-illumination reflectance imaging (SIRI) and convolutional neural networks (CNNs). An SIRI system was built to acquire modulated structured-light images of the measured apples, and a three-phase demodulation method was used to extract the alternating-current (AC) component, which enhances the image contrast of implicit apple damage. A dataset of implicit apple damage was then constructed from the AC-component images. Several CNN-based semantic segmentation networks, including FCN, UNet, HRNet, PSPNet, DeepLabv3+, LRASPP, and SegNet, were employed to train damage detection models. Multiple groups of experimental results demonstrated that these models can effectively detect implicit apple damage under different conditions. Among them, the HRNet model achieved the highest accuracy, with precision (P), recall (R), F1 score, and mean intersection over union (MIoU) of 97.96%, 97.52%, 97.74%, and 97.58%, respectively; however, its detection speed was only 60 frames per second. The PSPNet model was faster, reaching 217 frames per second, but had slightly lower accuracy, with P, R, F1 score, and MIoU of 97.10%, 94.57%, 95.82%, and 95.90%, respectively.
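The three-phase demodulation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard three-phase scheme with phase offsets of 0, 2π/3, and 4π/3 and the usual √2/3 amplitude formula; the image sizes and pattern parameters are made up for the demonstration.

```python
import numpy as np

def demodulate_ac(i1, i2, i3):
    """Recover the AC (modulation amplitude) component from three
    phase-shifted SIRI images (assumed phase offsets 0, 2pi/3, 4pi/3)."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )

def demodulate_dc(i1, i2, i3):
    """Recover the DC component (equivalent to uniform illumination)."""
    return (i1 + i2 + i3) / 3.0

# Synthetic example: a horizontal sinusoidal fringe pattern on a flat scene.
h, w = 64, 64
x = np.linspace(0.0, 4.0 * np.pi, w)
dc = np.full((h, w), 0.6)   # baseline (DC) reflectance, hypothetical value
ac = np.full((h, w), 0.3)   # modulation amplitude, hypothetical value
imgs = [dc + ac * np.cos(x + k * 2.0 * np.pi / 3.0) for k in range(3)]

ac_est = demodulate_ac(*imgs)  # recovers the AC amplitude pixel-wise
dc_est = demodulate_dc(*imgs)  # recovers the DC baseline pixel-wise
```

Because shallow, subsurface defects mainly perturb the high-spatial-frequency response, the recovered AC image is where their contrast is enhanced relative to a plain (DC-like) photograph.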