Seeding is much the same as plowing. The added challenge is to follow the rows tightly and plant the seeds at their center. Navigation here is deep learning based, which allows the robot to identify the rows under a wide range of lighting conditions.
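The row-following behavior described above can be sketched as a simple control loop: a vision model estimates how far the robot sits from the row center, and a steering law converts that estimate into a command. This is a minimal illustration, not the actual controller; the gains, sign convention and the `predicted_offset` input (which stands in for the CNN's output) are assumptions.

```python
def steering_command(lateral_offset_m, heading_error_rad,
                     k_offset=1.2, k_heading=0.8):
    """Proportional steering law for row following.

    lateral_offset_m:  offset from the row center predicted by the vision
                       model (positive = robot is right of center).
    heading_error_rad: angle between the robot's heading and the row axis.
    Returns a steering angle in radians; steering left is negative here,
    so a rightward offset produces a leftward correction.
    """
    return -(k_offset * lateral_offset_m + k_heading * heading_error_rad)


# Example: robot drifted 20 cm right of the row center, heading aligned.
command = steering_command(0.2, 0.0)   # negative -> steer back left
```

The two gains would in practice be tuned on the field; the point is only that the deep learning model reduces the rich camera image to a compact offset estimate that a classical controller can consume.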
Robots using Machine Vision for weed handling
Weed handling is performed by a self-driving robotic platform. The challenge is to maneuver between the crops, to detect and classify the weeds, and to handle them, either by spraying them or by pulling them out. This robot, usually smaller than the plowing and seeding robots, combines autonomous driving capabilities with camera-guided, multi-joint arms. Automatic navigation of these robots uses a deep-learning-trained CNN (Convolutional Neural Network); this classifier steers the robot through the gaps between the planted rows. A second CNN classifier uses the camera feed to recognize plants and distinguish weeds from crops. An additional camera, mounted on the robot's multi-joint arm, guides the arm's tool for more precise weed handling. The machine vision algorithms first run calibration and registration procedures to ensure that all components work in the same coordinate system, so that data handover between them is simple.
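The registration step mentioned above can be illustrated with the simplest possible case: a planar rigid transform that maps a point detected in the arm camera's frame into the shared field frame. The rotation angle and translation here are placeholders for values that a real calibration procedure would estimate.

```python
import math

def register_point(point_cam, theta_rad, t_xy):
    """Map a 2-D point from the arm camera's frame into the shared frame
    using a planar rigid transform: rotate by theta, then translate by t.
    In practice theta and t come from a prior calibration/registration
    procedure; here they are given directly for illustration."""
    x, y = point_cam
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    return (c * x - s * y + t_xy[0],
            s * x + c * y + t_xy[1])


# A weed detected 1 m ahead of an arm camera rotated 90 degrees
# relative to the shared frame maps to 1 m to the side in that frame.
shared = register_point((1.0, 0.0), math.pi / 2, (0.0, 0.0))
```

Once every camera's detections are expressed in this common frame, handing a weed location from the navigation classifier to the arm controller is a matter of passing coordinates, exactly the "simple data handover" the calibration is meant to enable.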
Produce growth monitoring can be performed by a ground-based robot or by a flying robotic UAV. The ground-based robot is the better match for fruits and vegetables, while autonomously navigating UAVs are best suited for cotton, soybeans and similar crops. The ground-based robot uses the same autonomous driving and navigation capabilities described above and is equipped with scanning cameras that provide a video stream of the crops. The machine vision algorithms, in turn, use a deep learning classifier to recognize and measure the fruits and vegetables. Such CNN algorithms are among the best technologies available today for maintaining classification performance in the presence of intensity, color and geometric variations. An autonomously navigating UAV can also avoid obstacles by classifying objects that could block its flight path. Its pod carries multispectral cameras, which are used to compute the vegetation index NDVI (Normalized Difference Vegetation Index).
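NDVI itself has a standard definition: for each pixel, it contrasts the near-infrared (NIR) and red reflectance captured by the multispectral camera. A minimal per-pixel implementation looks like this (the small epsilon guarding against division by zero is an implementation choice, not part of the definition):

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for one pixel:
    NDVI = (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in NIR and absorbs red,
    so dense vegetation yields values approaching +1."""
    return (nir - red) / (nir + red + eps)

def ndvi_map(nir_band, red_band):
    """Apply NDVI pixel-wise to two equally shaped 2-D bands
    (given here as plain lists of rows)."""
    return [[ndvi(n, r) for n, r in zip(nir_row, red_row)]
            for nir_row, red_row in zip(nir_band, red_band)]


# A pixel with strong NIR reflectance and low red reflectance
# (typical of healthy leaves) scores high.
value = ndvi(0.5, 0.1)   # close to 0.667
```

On a real UAV payload the bands would arrive as image arrays, but the per-pixel arithmetic is exactly this ratio.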
Machine Vision algorithms
Fruit and vegetable picking is a complex process requiring a set of machine vision algorithms that control a high-DOF (degrees of freedom) robotic structure. This robot may be human-like, moving through the field while manipulating its arm(s) toward the location of the fruits. As noted, the machine vision algorithms must be fully "synchronized": for example, the navigation and localization algorithms should update the other machine vision algorithms so that they can adjust their reference system in accordance with the robot's movements. The deep-learning-based autonomous driving function is in charge of moving between the rows. The robot's arm actually uses two cameras: one scans the tree to detect and classify fruits, while the other guides the arm's tool to the best picking location and orientation. Both activities, navigating and detecting the fruits, benefit from the machine vision system's CNN capabilities. The fruit classification function achieves high sorting accuracy, even when fruits are partly occluded or seen under harsh lighting conditions.
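The reference-system adjustment mentioned above can be sketched as the inverse of the registration step: when localization reports a new robot pose, a fruit previously detected in field coordinates must be re-expressed in the robot's current frame before the arm can reach for it. The planar pose representation below is a simplification of the real 3-D problem.

```python
import math

def update_reference(point_field_xy, robot_pose):
    """Re-express a detection given in the field frame in the robot's
    current frame, given the robot's planar pose (x, y, heading).
    This is the inverse rigid transform of the robot's pose."""
    px, py = point_field_xy
    rx, ry, heading = robot_pose
    dx, dy = px - rx, py - ry
    c, s = math.cos(-heading), math.sin(-heading)
    return (c * dx - s * dy, s * dx + c * dy)


# A fruit at field position (2, 0) seen from a robot standing at (1, 0)
# facing along the x-axis lies 1 m straight ahead of the robot.
local = update_reference((2.0, 0.0), (1.0, 0.0, 0.0))
```

Every time the localization module publishes a pose update, re-running this transform keeps the arm's target coordinates consistent with where the robot actually stands.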
Sorting and grading robots using Deep Learning
Sorting and grading robots work as camera-equipped arms mounted over conveyors. Using their cameras, they obtain a fast and accurate classification of the fruits. Their deep learning algorithms can identify defects from any angle and under large color and geometric variations (provided that proper training was previously performed). The algorithms first perform object detection to locate the fruits and then classify them. RSIP Vision has extensive experience in precision agriculture and world-class proficiency in deep learning algorithms and CNNs. Consult our experts to see how to best apply robotics technology to your project.
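The detect-then-classify structure described above can be captured in a few lines. The `detect_fn` and `classify_fn` callables below stand in for the two pre-trained CNN stages; their names and signatures are illustrative, not a specific library API.

```python
def grade_frame(frame, detect_fn, classify_fn):
    """Two-stage grading pipeline: detection first locates the fruits
    in the conveyor frame (as bounding boxes), then each cropped region
    is passed to the classifier for a grade.

    frame:       2-D image given here as a list of pixel rows.
    detect_fn:   frame -> list of (x0, y0, x1, y1) boxes (stand-in model).
    classify_fn: cropped region -> grade label (stand-in model).
    """
    graded = []
    for (x0, y0, x1, y1) in detect_fn(frame):
        crop = [row[x0:x1] for row in frame[y0:y1]]  # cut out the fruit
        graded.append(((x0, y0, x1, y1), classify_fn(crop)))
    return graded


# Usage with trivial stub models: one detection covering the top-left
# 2x2 region, graded with a fixed label.
frame = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
results = grade_frame(frame,
                      lambda f: [(0, 0, 2, 2)],
                      lambda crop: "grade_A")
```

Separating detection from classification lets each CNN be trained and retrained independently, which is why the paragraph above describes them as two distinct stages.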