Abstract: To address the challenges of limited feature point matching, susceptibility to tracking loss, and sparse point clouds under dark lighting conditions in orchards, ORB-SLAM2 was improved with an adaptive-threshold algorithm for dense binocular 3D orchard map construction. First, an adaptive-threshold FAST corner extraction method was introduced into the tracking thread: the threshold was solved from the average pixel value of images captured under different lighting conditions, and ORB features were then extracted from the left and right images, which effectively increased the number of feature point matches across lighting conditions. Next, local map tracking was performed based on the camera pose estimated from the feature points, and local map construction was accomplished through bundle adjustment (BA) optimization of the key-frame map points derived from the tracking thread. A dense mapping module was then added to the original algorithm: using ZED stereo depth fusion, image pairs were acquired through feature matching of the left and right key frames, depth information was obtained by solving the image pairs, the camera pose was refined via depth optimization, and local point clouds were constructed and stitched together according to the camera pose. Finally, global BA optimization was applied to refine the resulting point cloud map, yielding a three-dimensional dense map of the orchard. The improved ORB-SLAM2 algorithm showed better convergence of the absolute trajectory error when evaluated on KITTI dataset sequences: the standard deviation of the trajectory error decreased by 60.5% and 62.6% on sequences 00 and 07, respectively, with improvements of varying degrees on the other sequences, indicating notably higher positioning accuracy than the original algorithm.
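The adaptive-threshold step can be sketched as follows. The abstract states only that the FAST threshold is solved from the average pixel value of the image; the particular linear mapping, its `scale` coefficient, and the clamping bounds below are illustrative assumptions, not the paper's actual formula.

```python
import numpy as np

def adaptive_fast_threshold(gray, scale=0.1, lo=5, hi=40):
    """Derive a per-image FAST corner threshold from mean intensity.

    Assumption: a linear map from the mean pixel value, clamped to
    [lo, hi]. Darker images yield a lower threshold, so more corners
    (and hence more feature matches) survive in dim orchard scenes.
    """
    mean_val = float(np.mean(gray))
    return max(lo, min(hi, int(round(scale * mean_val))))

# A dim image produces a lower threshold than a bright one.
dark = np.full((8, 8), 60, dtype=np.uint8)    # mean 60 -> threshold 6
bright = np.full((8, 8), 200, dtype=np.uint8) # mean 200 -> threshold 20
t_dark, t_bright = adaptive_fast_threshold(dark), adaptive_fast_threshold(bright)
```

In an OpenCV-based pipeline, the resulting value would be passed to the ORB extractor, e.g. `cv2.ORB_create(nfeatures=1000, fastThreshold=t_dark)`, before detecting and describing key points on the left and right images.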
In orchard tests, the proposed algorithm adapted well to diverse lighting conditions compared with the original algorithm, achieving average increases of 5.32%, 4.53%, 8.93% and 12.91% in feature point matches under strong light, normal light, dark light and rainy conditions, respectively. The yaw angle also exhibited better convergence, yielding higher positioning accuracy. Moreover, the proposed algorithm extracted 2.86% fewer key frames and reduced the average tracking time by 39.3% compared with the original approach, while achieving a favorable dense mapping result that accurately reflected both the robot pose and the real environment of the orchard. This method therefore satisfies the requirements for constructing a 3D dense point cloud map of an orchard and provides essential support for navigation path planning for orchard robots.