Classification of land cover using semantic segmentation
dc.contributor.author | Nahitiya, Dishan Anupama
dc.date.accessioned | 2021-04-15T16:48:01Z
dc.date.available | 2021-04-15T16:48:01Z
dc.date.graduationmonth | May
dc.date.issued | 2021-05-01
dc.description.abstract | In agricultural fields, knowledge of the proportion of the soil surface covered by live vegetation and crop residue is key to assessing the risk of soil erosion by wind and water. Live vegetation and residue cover act as an effective barrier that reduces raindrop impact on the soil surface, impact that can otherwise break soil aggregates and wash away soil particles and dissolved nutrients. Traditional methods for quantifying live vegetation and residue cover include line transects and sets of reference images, two methods that have proven accurate but highly time-consuming and repetitive. This research aims to train a Deep Convolutional Neural Network (DCNN) to automate the classification of bare soil, crop residue, and live vegetation in downward-facing images of agricultural fields. A SegNet model, a deep convolutional encoder-decoder architecture for robust pixel-wise semantic segmentation, was trained with a batch size of 4 images and a learning rate of 0.01. The training set consisted of 3300 images and the test set of 645 images, all collected from agricultural fields and experimental plots across Kansas State University Experiment Stations. Images were first auto-labeled, and the labels were then manually revised using the MATLAB Image Labeler application. The SegNet model reached an accuracy of 90% on the training set and 84% on the test set. Despite the intricate patterns, shapes, and colors of soil, plant, and stubble elements, the trained SegNet shows promising results for automating the classification of land cover from images. The trained SegNet was also deployed in a web-based application to help farmers, field agronomists, and scientists process images for better assessment of soil erosion risk and for quantifying the impact of soil and water conservation practices.
dc.description.advisor | Daniel Andresen
dc.description.degree | Master of Science
dc.description.department | Department of Computer Science
dc.description.level | Masters
dc.identifier.uri | https://hdl.handle.net/2097/41373
dc.language.iso | en
dc.publisher | Kansas State University
dc.rights | © the author. This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/
dc.subject | Soil cover
dc.subject | Semantic segmentation
dc.subject | Deep convolutional neural network
dc.subject | 2D image classification
dc.subject | Soil conservation
dc.title | Classification of land cover using semantic segmentation
dc.type | Report |
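The abstract reports pixel-wise accuracy (90% training, 84% test) for a three-class segmentation of bare soil, crop residue, and live vegetation. The thesis work itself used MATLAB; purely as an illustration of how such figures are computed from predicted and reference label masks, here is a minimal NumPy sketch. The class indices, function names, and example masks are hypothetical, not taken from the report.

```python
import numpy as np

# Hypothetical class indices for the three land-cover classes in the abstract.
BARE_SOIL, RESIDUE, VEGETATION = 0, 1, 2

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the reference mask."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float((pred == truth).mean())

def class_cover_fractions(mask, n_classes=3):
    """Proportion of the image covered by each class, in index order
    (bare soil, residue, vegetation) -- the quantity used to assess
    erosion risk from live vegetation and residue cover."""
    counts = np.bincount(np.asarray(mask).ravel(), minlength=n_classes)
    return counts / counts.sum()

# Toy example: a 4x4 reference mask and a prediction with one wrong pixel.
truth = np.array([[0, 0, 1, 1],
                  [0, 2, 1, 1],
                  [2, 2, 2, 1],
                  [0, 0, 2, 1]])
pred = truth.copy()
pred[0, 0] = VEGETATION  # one misclassified pixel out of 16

print(pixel_accuracy(pred, truth))       # 15/16 correct -> 0.9375
print(class_cover_fractions(truth))      # [soil, residue, vegetation] fractions
```

On real data the same functions would be applied per image over the 645-image test set and averaged; the cover fractions are what a web-based tool would report back to farmers and agronomists.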