Title: A spatial AI-based agricultural robotic platform for collision avoidance and disease detection in crops
Author: Gunturu, Sujith
Type: Thesis
Date issued: 2021
Date accessioned: 2021-11-11
Date available: 2021-11-11
URI: https://hdl.handle.net/2097/41747
Language: en-US
Subject: Artificial intelligence
Rights: © the author. This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
Rights URI: http://rightsstatements.org/vocab/InC/1.0/

Abstract:
To derive more consistent measurements over the course of a wheat growing season, this thesis conceives and designs an autonomous robotic platform that performs collision avoidance and disease detection in crops using spatial artificial intelligence (AI), and demonstrates the platform on a wheat crop. The main constraint agronomists face in breeding trials is that the wheat must not be run over while driving, which limits the robot's freedom to navigate the field. To overcome this hurdle, we trained a spatial deep learning model that helps the robot navigate freely in the field while avoiding collisions with wheat. To train this model, we used publicly available databases of prelabeled wheat images along with images of wheat that we collected in the field. We used YOLO (You Only Look Once) as our deep learning model to detect wheat; Faster R-CNN (Faster Region-based Convolutional Neural Network) with a ResNet-50-FPN (Residual Neural Network-50 Feature Pyramid Network) backbone was also used as a baseline against which to compare YOLO's accuracy. This approach allowed 1-3 frames per second (fps) of vision for wheat detection in the field. With the robot driving at 2-5 miles per hour, a frame rate of 1-3 fps would only allow the robot to sense its surroundings about once every foot of travel. To increase the frame rate for real-time response to the field environment, the same images were used to train a MobileNet single shot detector (SSD), and a new camera, the Luxonis DepthAI camera, was used for inference in the field. Together, the newly trained model and camera achieve a frame rate of 18-23 fps, fast enough for the robot to sense its surroundings once every 2-3 inches of driving. Following the discussion of sensing, the autonomous navigation of the robot is addressed. The new camera allows the robot to determine the distance to detected objects through a stereo camera embedded alongside the main AI camera. Knowing these distances, an algorithm can make the robot maneuver more precisely within its surroundings in the field. When the MobileNet SSD detects an object in the field, it sends a binary thresholded distance signal to the robot's motion controller, which uses this information to make decisions such as continuing, steering, or stopping. To broaden the range of potential robot applications, a classification model is also installed on the robot to locate two of the most common diseases in wheat, namely stem rust and wheat rust, making it more practical for agronomists to find these diseases in a large field.
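
As a sanity check on the sensing intervals quoted in the abstract, the distance traveled per frame is simply speed divided by frame rate. The following is a minimal Python sketch; the speeds and frame rates come from the abstract, while the function name is illustrative and not from the thesis code:

    # Distance the robot travels between consecutive detection frames.
    def travel_per_frame_inches(speed_mph: float, fps: float) -> float:
        inches_per_second = speed_mph * 5280 * 12 / 3600  # mph -> in/s
        return inches_per_second / fps

    # Original YOLO / Faster R-CNN setup: roughly one frame per foot.
    print(travel_per_frame_inches(2, 3))   # ~11.7 inches per frame
    # MobileNet SSD on the Luxonis DepthAI camera: every 2-3 inches.
    print(travel_per_frame_inches(2, 18))  # ~2.0 inches per frame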
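
The abstract pairs a MobileNet SSD with the stereo depth stream of the Luxonis DepthAI camera. A hedged sketch of such a pipeline using the depthai v2 Python API is shown below; the blob path, confidence threshold, and stream name are placeholder assumptions, since the abstract does not give the thesis's actual configuration:

    import depthai as dai

    pipeline = dai.Pipeline()

    # Color camera feeds the detector; MobileNet SSD expects 300x300 input.
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)

    # Stereo pair provides the depth map used for spatial detections.
    left = pipeline.create(dai.node.MonoCamera)
    right = pipeline.create(dai.node.MonoCamera)
    left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
    stereo = pipeline.create(dai.node.StereoDepth)
    left.out.link(stereo.left)
    right.out.link(stereo.right)

    # Spatial detection network fuses SSD boxes with stereo depth.
    nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
    nn.setBlobPath("wheat_mobilenet_ssd.blob")  # placeholder model file
    nn.setConfidenceThreshold(0.5)              # assumed threshold
    cam.preview.link(nn.input)
    stereo.depth.link(nn.inputDepth)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("detections")
    nn.out.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("detections")
        while True:
            for det in q.get().detections:
                # spatialCoordinates.z is the object's distance in mm.
                print(det.label, det.spatialCoordinates.z)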
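
The "binary thresholded distance signal" sent to the motion controller can likewise be illustrated with a short sketch. The threshold value, class label, and controller interface below are assumptions for illustration only, not the thesis implementation:

    from dataclasses import dataclass

    STOP_DISTANCE_M = 0.5  # assumed stop threshold, not from the thesis

    @dataclass
    class Detection:
        label: str      # e.g. "wheat"
        depth_m: float  # stereo-derived distance to the object, meters

    def motion_signal(detections: list[Detection]) -> str:
        """Return 'stop' if any detected object is inside the threshold,
        otherwise 'continue'; steering decisions would extend this."""
        for det in detections:
            if det.depth_m <= STOP_DISTANCE_M:
                return "stop"
        return "continue"

    # Example: wheat detected 0.4 m ahead -> the controller stops the robot.
    print(motion_signal([Detection("wheat", 0.4)]))  # stop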