The relative position of an orchard robot with respect to the rows of fruit trees is a key parameter for achieving autonomous navigation, but current methods for estimating these inter-row position parameters achieve only limited accuracy. To address this problem, this paper proposes a machine vision-based method for detecting the relative position between an orchard robot and the fruit tree rows. First, fruit tree trunks are detected with an improved YOLOv4 model; second, the camera coordinates of each trunk are computed using the triangulation principle of a binocular camera, and the ground-projection coordinates of the trunks are obtained through coordinate transformation; finally, the projected trunk coordinates on the two sides are paired to obtain midpoints, the navigation path is fitted to these midpoints by the least-squares method, and the robot's position parameters are calculated from the fitted path. Experimental results show that the average accuracy and average recall of the improved YOLOv4 model for trunk detection are 5.92% and 7.91% higher, respectively, than those of the original YOLOv4 model. The average errors of the heading-angle and lateral-deviation estimates obtained with the proposed method are 0.57° and 0.02 m, respectively. The method can accurately calculate heading-angle and lateral-deviation values at different positions between rows and provides a reference for the autonomous visual navigation of orchard robots.
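As a rough illustration of the pipeline described in the abstract, the Python sketch below triangulates trunk points from a rectified stereo pair, projects them onto the ground plane, and fits the navigation path through the midpoints of paired left/right trunks. The function names, camera parameters, and the assumption of a level, forward-facing camera are hypothetical and are not taken from the paper.

```python
import numpy as np

def trunk_camera_xyz(u, v, disparity, fx, fy, cx, cy, baseline):
    """Triangulate one trunk point from a rectified stereo pair (sketch)."""
    Z = fx * baseline / disparity          # depth along the optical axis
    X = (u - cx) * Z / fx                  # lateral offset in the camera frame
    Y = (v - cy) * Z / fy                  # vertical offset in the camera frame
    return np.array([X, Y, Z])

def ground_projection(p_cam):
    """Project a camera-frame point onto the ground plane (x: lateral, z: forward).
    Assumes a level, forward-facing camera; a real system would apply the
    full extrinsic rotation and translation here."""
    return np.array([p_cam[0], p_cam[2]])

def row_midline(left_pts, right_pts):
    """Fit the navigation path through midpoints of paired left/right trunk
    projections and derive the robot's position parameters (sketch).
    Assumes equal numbers of paired points on the two sides."""
    mids = (np.asarray(left_pts, dtype=float) + np.asarray(right_pts, dtype=float)) / 2.0
    # least-squares line x = a*z + b in the ground frame (z = forward)
    a, b = np.polyfit(mids[:, 1], mids[:, 0], 1)
    heading_deg = np.degrees(np.arctan(a))       # angle between robot heading and row line
    lateral_dev = abs(b) / np.sqrt(a**2 + 1.0)   # perpendicular distance from robot origin
    return heading_deg, lateral_dev
```

Given paired lists of left-row and right-row trunk ground projections, `row_midline` returns the heading angle (degrees) and the lateral deviation (metres, as the perpendicular distance from the robot origin to the fitted line).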
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10650010 | PMC |
| http://dx.doi.org/10.3390/s23218807 | DOI Listing |
Front Plant Sci
August 2025
College of Mathematics and Computer Science, Yan'an University, Yan'an, Shaanxi, China.
To address the challenge of real-time kiwifruit detection in trellised orchards, this paper proposes YOLOv10-Kiwi, a lightweight detection model optimized for resource-constrained devices. First, a more compact network is developed by adjusting the scaling factors of the YOLOv10n architecture. Second, to further reduce model complexity, a novel C2fDualHet module is proposed by integrating two consecutive Heterogeneous Kernel Convolution (HetConv) layers as a replacement for the traditional Bottleneck structure.
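As a loose illustration only: the snippet below mixes 3x3 and 1x1 kernels over a channel split inside one convolutional block, a simplified stand-in for the heterogeneous-kernel idea behind HetConv. It is not the actual C2fDualHet module or the exact HetConv filter layout from the paper, and all names and ratios are assumptions.

```python
import torch
import torch.nn as nn

class MixedKernelConv(nn.Module):
    """Simplified heterogeneous-kernel block: a fraction of the input channels
    is processed with 3x3 kernels, the remainder with cheaper 1x1 kernels.
    Illustrative only; not the HetConv/C2fDualHet design from the paper."""
    def __init__(self, in_ch, out_ch, part=4):
        super().__init__()
        self.split = in_ch // part                                   # channels given 3x3 kernels
        self.conv3 = nn.Conv2d(self.split, out_ch // 2, 3, padding=1)
        self.conv1 = nn.Conv2d(in_ch - self.split, out_ch - out_ch // 2, 1)
        self.act = nn.SiLU()

    def forward(self, x):
        a, b = x[:, :self.split], x[:, self.split:]                  # split along channels
        return self.act(torch.cat([self.conv3(a), self.conv1(b)], dim=1))
```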
Front Plant Sci
September 2025
College of Big Data, Yunnan Agricultural University, Kunming, China.
Introduction: Accurate identification of cherry maturity and precise detection of harvestable cherry contours are essential for the development of cherry-picking robots. However, occlusion, lighting variation, and blurriness in natural orchard environments present significant challenges for real-time semantic segmentation.
Methods: To address these issues, we propose a machine vision approach based on the PIDNet real-time semantic segmentation framework.
Front Plant Sci
August 2025
School of Future Technology, Fujian Agriculture and Forestry University, Fuzhou, China.
With the development of smart agriculture, fruit-picking robots have attracted widespread attention as one of the key technologies for improving agricultural productivity. Visual perception plays a crucial role in these robots, encompassing precise fruit identification, localization, and grasping operations. This paper reviews research progress in visual perception technology for fruit-picking robots, focusing on key topics such as the camera types used in picking robots, object detection techniques, picking-point recognition and localization, active vision, and visual servoing.
Sensors (Basel)
August 2025
College of Engineering, Zhejiang Normal University, Jinhua 321004, China.
Accurate real-time detection of hawthorn by vision systems is a fundamental prerequisite for automated harvesting. This study addresses challenges in hawthorn orchards, including target overlap, leaf occlusion, and environmental variation, which lead to compromised detection accuracy, high computational resource demands, and poor real-time performance in existing methods. To overcome these limitations, we propose YOLO-DCL (group shuffling convolution and coordinate attention integrated with a lightweight head based on YOLOv8n), a novel lightweight hawthorn detection model.
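For context on the "group shuffling convolution" component, the channel-shuffle operation popularized by ShuffleNet is sketched below; whether YOLO-DCL uses exactly this formulation is an assumption, and the coordinate-attention and lightweight-head parts are omitted.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so that subsequent grouped convolutions
    can exchange information (ShuffleNet-style shuffle); illustrative sketch."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(1, 2).contiguous()         # swap group and per-group dimensions
    return x.view(n, c, h, w)                  # flatten back: channels now interleaved
```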
Sensors (Basel)
August 2025
Department of UAV Engineering, Shijiazhuang Campus, Army Engineering University, Shijiazhuang 050003, China.
Apple-detection performance in orchards degrades markedly under low-light conditions, where intensified noise and non-uniform exposure blur edge cues critical for precise localisation. We propose Knowledge Distillation with Geometry-Consistent Feature Alignment (KDFA), a compact end-to-end framework that couples image enhancement and detection through the following two complementary components: (i) Cross-Domain Mutual-Information-Bound Knowledge Distillation, which maximises an InfoNCE lower bound between daylight-teacher and low-light-student region embeddings; (ii) Geometry-Consistent Feature Alignment, which imposes Laplacian smoothness and bipartite graph correspondences across multiscale feature lattices. Trained on 1200 pixel-aligned bright/low-light image pairs, KDFA achieves 51.
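The InfoNCE-style bound mentioned in component (i) can be written compactly. The sketch below assumes paired, pooled region embeddings from the daylight teacher and the low-light student, with tensor shapes and the temperature value chosen arbitrarily rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def infonce_distillation(student_emb: torch.Tensor,
                         teacher_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style distillation loss between paired region embeddings:
    each student row is pulled toward its matching teacher row (positive)
    and pushed from all other rows (negatives). Sketch only; not the exact
    KDFA objective."""
    s = F.normalize(student_emb, dim=1)             # (N, D) low-light student regions
    t = F.normalize(teacher_emb, dim=1)             # (N, D) matching daylight teacher regions
    logits = s @ t.T / temperature                  # (N, N) cosine-similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)         # minimising this maximises the InfoNCE bound
```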