Towards Over-Canopy Autonomous Navigation: Crop-Agnostic LiDAR-Based Crop-Row Detection in Arable Fields

ICRA 2025 (Accepted)


Abstract

Autonomous navigation is crucial for many robotics applications in agriculture. However, many existing methods depend on RTK-GPS devices, which are susceptible to loss of radio signal or intermittent reception of corrections over the internet. Consequently, research has increasingly focused on using RGB cameras for crop-row detection, though challenges persist once plants are fully grown. This paper introduces a LiDAR-based navigation system that achieves crop-agnostic, over-canopy autonomous navigation in row-crop fields, even when the canopy fully blocks the inter-row spacing. Our algorithm detects crop rows across diverse scenarios, encompassing various crop types, growth stages, the presence of weeds, curved rows, and discontinuities. Without relying on a global localization method (e.g., GPS-based), our navigation system can navigate autonomously in these challenging scenarios, detect the end of the crop rows, and move to the next crop row on its own, providing a crop-agnostic approach to navigating an entire field. The proposed navigation system has been tested in various simulated and real agricultural fields, achieving an average cross-track error of 3.55 cm without human intervention. The system has been deployed on a customized UGV, which can be reconfigured depending on the field conditions.


Navigation system’s workflow. (i) The crop-row detection algorithm uses LiDAR data and filtered odometry values [x, y, ψ] as inputs, predicting crop rows in the form [x1, y1, x2, y2] within the robot’s frame. (ii) The crop-row following algorithm applies nonlinear MPC to control the robot to follow the center line of the predicted rows, sending linear velocity v and angular velocity ω commands. (iii) The crop-row switching algorithm uses a PID controller to navigate the robot to the next lane once no more crop rows are detected.

1. Crop Row Detection


Our crop-row detection algorithm comprises three key components. First, we estimate the ground plane using the LiDAR’s tilt angle θ, raise it to intersect the point-cloud centroid, and filter out points below this plane. This isolates the returns corresponding only to the tops of the plants. Second, using this filtered LiDAR data, we apply the K-means clustering algorithm to segment crop rows autonomously; the resulting centroids of these segments represent the centers of the crop rows. We use the robot’s filtered odometry, [x, y, ψ], to accumulate detected centroids in the robot frame within a short time window, which provides accurate local positioning and avoids drift. Finally, we run the RANSAC line-fitting algorithm on this crop-row centroid map, extracting 2D line locations for the first row on the left and the first row on the right in the robot frame.
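Below is a minimal Python sketch of this pipeline, assuming the point cloud is already expressed in the robot frame as a NumPy array. The function names, the fixed cluster count n_rows, and the exact plane construction from the tilt angle θ are illustrative assumptions, not the paper’s implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import RANSACRegressor

def filter_canopy_tops(points, theta):
    """Keep only points above a plane, tilted by theta, through the cloud centroid."""
    centroid = points.mean(axis=0)
    # Plane normal tilted by theta about the sensor's y-axis (assumed convention).
    normal = np.array([np.sin(theta), 0.0, np.cos(theta)])
    signed_dist = (points - centroid) @ normal
    return points[signed_dist > 0.0]

def detect_row_centroids(tops, n_rows=4):
    """Cluster canopy-top points in the x-y plane; one cluster per visible row."""
    km = KMeans(n_clusters=n_rows, n_init=10).fit(tops[:, :2])
    return km.cluster_centers_  # (n_rows, 2) row centers in the robot frame

def fit_row_line(row_centroids):
    """RANSAC line fit y = m*x + b over centroids accumulated for one row."""
    ransac = RANSACRegressor().fit(row_centroids[:, [0]], row_centroids[:, 1])
    m = ransac.estimator_.coef_[0]
    b = ransac.estimator_.intercept_
    x1, x2 = row_centroids[:, 0].min(), row_centroids[:, 0].max()
    return np.array([x1, m * x1 + b, x2, m * x2 + b])  # [x1, y1, x2, y2]

In practice the accumulated centroids would be grouped per row before line fitting, so that one line is extracted for the nearest row on each side of the robot.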

2. Crop Row Following


After crop-row detection, we generate waypoints along the center line of the two predicted crop rows. We apply a nonlinear Model Predictive Control (MPC) algorithm in the robot's local frame to track the generated waypoints. ACADO is used to solve the resulting quadratic programming problem, enabling real-time operation.
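As a small illustration, the sketch below generates centerline waypoints from two detected rows given as endpoint pairs [x1, y1, x2, y2] in the robot frame, as produced by the detection step. The waypoint spacing step is an assumed parameter; the MPC itself (solved with ACADO in our system) is not reproduced here.

import numpy as np

def centerline_waypoints(left_row, right_row, step=0.25):
    """Average the two row lines and sample evenly spaced waypoints along the result."""
    left = np.asarray(left_row, dtype=float).reshape(2, 2)    # [[x1, y1], [x2, y2]]
    right = np.asarray(right_row, dtype=float).reshape(2, 2)
    start = (left[0] + right[0]) / 2.0
    end = (left[1] + right[1]) / 2.0
    n = max(int(np.linalg.norm(end - start) / step), 1)
    ts = np.linspace(0.0, 1.0, n + 1)[:, None]
    return start + ts * (end - start)  # (n+1, 2) waypoints for the MPC tracker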

3. Crop Row Switching


Whole-field coverage strategy. We implemented a PID-controller method that relies mainly on filtered odometry data. Since crops are typically planted in parallel rows for efficient management, this approach first rotates the robot by 90 degrees, drives it forward by the inter-row spacing, and then performs another 90-degree turn to enter the next lane. The LiDAR-based navigation system is then restarted and navigates the robot through the new lane. This process is implemented as a finite state machine, which switches states automatically during execution.
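The sketch below shows one way such a finite state machine could be structured. The state names and the boolean completion flags are hypothetical simplifications; the PID loops that execute each maneuver are omitted.

from enum import Enum, auto

class State(Enum):
    FOLLOW_ROW = auto()   # LiDAR-based row following
    TURN_OUT = auto()     # first 90-degree turn at the row end
    DRIVE_OVER = auto()   # advance by one inter-row spacing
    TURN_IN = auto()      # second 90-degree turn into the next lane

def next_state(state, rows_detected, turn_done, distance_done):
    """Advance the switching FSM based on simple completion flags."""
    if state is State.FOLLOW_ROW and not rows_detected:
        return State.TURN_OUT
    if state is State.TURN_OUT and turn_done:
        return State.DRIVE_OVER
    if state is State.DRIVE_OVER and distance_done:
        return State.TURN_IN
    if state is State.TURN_IN and turn_done:
        return State.FOLLOW_ROW  # resume LiDAR-based navigation in the new lane
    return state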

4. Experiments

We conducted experiments in both Gazebo-simulated environments and real fields with different crops (corn and soybean) and growth stages (young and grown) to test the performance of our crop-row detection and crop-row following algorithms.


Fields overview and the collected drone maps. The experiments are conducted in soybean (top left), corn (top middle), and curved corn (top right) fields. We collected drone maps of the soybean (bottom left), corn (bottom middle), and curved corn (bottom right) fields with RTK-GPS inputs as ground truth for later evaluations. We overlay the detected centroids on the drone maps to provide qualitative results, demonstrating that the detected centroids align with the crop rows shown in the maps.

Crop-row detection and crop-row following results. The crop-row detection algorithm achieved an average detection accuracy of 3.35 cm. The navigation system achieved an average autonomous driving accuracy of 3.55 cm and outperformed the baseline method in all test fields. In the two figures on the right, the x-axis represents the forward driving direction (around 200 m). The y-axis of the top figure shows the robot's lateral deviation from the center line in meters, while the y-axis of the bottom figure shows the angular error between the robot and the crop rows in degrees. These figures demonstrate the navigation system’s ability to recover from an initial lateral error of 0.3 m and an angular error of 8 degrees: both errors decrease and then fluctuate around zero as the robot moves forward.

Trajectory (red) of the robot while navigating across the whole field (green) in Gazebo-simulated young soybean (top left), young corn (bottom left), grown soybean (top right), and grown corn (bottom right) fields (30 m × 12 m with 16 rows). These results demonstrate the ability of our crop-row switching maneuver to cover whole fields across different crop types and growth stages.

Conclusion

In this paper, we present a novel LiDAR-based crop-row detection approach that integrates Model Predictive Control (MPC) and a lane-switching algorithm to form an autonomous navigation system for agricultural robots in row-crop fields. This system enables independent robot navigation for diverse agricultural tasks, contributing to precision farming. Our crop-row detection method uses 3D LiDAR data to extract height information and accurately detects crop rows in challenging scenarios such as canopy occlusion. The full navigation system incorporates the crop-row detection, following, and switching algorithms, enabling automated tracking of detected crop rows and full field coverage. The system is evaluated in both real fields and Gazebo-simulated fields with a 1:1-scale Amiga robot model. The crop-row detection algorithm achieves an average detection accuracy of 3.35 cm, while the crop-row following algorithm achieves an average driving accuracy of 3.55 cm. Future work will focus on improving the robustness of the crop-row perception algorithm by integrating camera data, especially to handle gaps between plants during the germination stage.

BibTeX

@misc{liu2024overcanopyautonomousnavigationcropagnostic,
      title={Towards Over-Canopy Autonomous Navigation: Crop-Agnostic LiDAR-Based Crop-Row Detection in Arable Fields},
      author={Ruiji Liu and Francisco Yandun and George Kantor},
      year={2024},
      eprint={2403.17774},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2403.17774},
}