Please use this identifier to cite or link to this item: https://etd.cput.ac.za/handle/20.500.11838/3884
DC Field: Value [Language]
dc.contributor.advisor: Daramola, Justine Olawande [en_US]
dc.contributor.advisor: Kavu, Tatenda [en_US]
dc.contributor.author: Mkhatshwa, Junior [en_US]
dc.date.accessioned: 2024-01-15T10:52:09Z
dc.date.available: 2024-01-15T10:52:09Z
dc.date.issued: 2023
dc.identifier.uri: https://etd.cput.ac.za/handle/20.500.11838/3884
dc.description: Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2023 [en_US]
dc.description.abstract: Essential mineral nutrients play a crucial role in the growth and survival of plants. A lack of nutrients in plants threatens global food security and affects farmers whose livelihoods depend on producing healthy crops. Traditionally, the identification of nutrient deficiencies in a crop is done manually by experienced farmers. Deep learning (DL) has shown promise in image classification; however, the limited understanding of the accuracy and explainability of specific DL models for identifying plant nutrient deficiencies hinders informed decisions about their suitability for practical implementation. This study aimed to assess the performance and explainability of these models to facilitate better decision-making in agriculture. To achieve this, the study formulated four objectives: 1) identify the features that are essential for determining plant nutrient deficiencies; 2) determine the requirements of explainable DL for nutrient deficiency identification; 3) explore how explainable DL could be applied to a plant image dataset to identify plant nutrient deficiencies; and 4) determine the performance and explainability of selected DL algorithms when used for plant nutrient deficiency identification.

The study used a deductive approach, a quantitative research methodology, and an experimental research design to investigate the performance and interpretability of three DL models on two plant datasets: rice and banana. The three models were a Convolutional Neural Network (CNN) and two pre-trained models, Inception-V3 and the Visual Geometry Group network (VGG-16). For the explainability of the models, the study used two explainable AI (XAI) techniques: SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM).

The study found that the choice of DL model has a significant impact on the performance of nutrient deficiency identification across plant datasets. On the banana dataset, Inception-V3 achieved a very good F1-score of 92%, VGG-16 followed with a good F1-score of 81%, and the CNN, while not as strong as the other models, achieved an acceptable F1-score of 68%. Based on these findings, Inception-V3 is effective in detecting nutrient deficiency in banana plants. Regarding explainability assessed with SHAP, the CNN and VGG-16 models were found to rely on a limited set of prominent features, whereas Inception-V3 appeared to rely on a broader range of features, with many features making significant contributions to the final prediction. When Grad-CAM was used to assess explainability on the banana and rice datasets, the heatmap of the CNN model highlighted the contours of the plant leaf, while the other two models (Inception-V3 and VGG-16) focused on the leaf itself. VGG-16's localisation of the affected regions proved more reliable owing to the quality of its heatmaps. The results of this study show that Inception-V3 is the most accurate model, but it may not be the most explainable owing to its complex architecture; VGG-16, with its simpler architecture, tends to offer better explanations. Balancing accuracy and explainability when selecting a model for a particular task is therefore essential.

The study contributes to the literature by incorporating explainable deep learning in the context of plant nutrient deficiency identification. Moreover, unlike prior research that primarily evaluated accuracy without considering explainability, this study addressed that gap by comparing the explainability of the models using the Grad-CAM and SHAP techniques, shedding light on how these models arrive at their predictions. The research successfully addressed its objectives, providing valuable insights into both the theoretical and practical aspects of this domain. The study's holistic approach and findings pave the way for the integration of XAI techniques in agriculture, adding value to the field and opening avenues for future research and innovation. (Illustrative code sketches of the models and XAI techniques named here follow the record below.) [en_US]
dc.language.iso: en [en_US]
dc.publisher: Cape Peninsula University of Technology [en_US]
dc.subject: Artificial intelligence -- Agricultural applications [en_US]
dc.subject: Deep learning (Machine learning) [en_US]
dc.subject: Plants -- Nutrition -- Information technology [en_US]
dc.subject: Agricultural innovations [en_US]
dc.subject: Agriculture -- Effect of technological innovations on [en_US]
dc.subject: SHAP [en_US]
dc.subject: Grad-CAM [en_US]
dc.subject: Agricultural informatics [en_US]
dc.title: Comparative analysis of explainable deep learning models for identification of plant nutrient deficiencies [en_US]
dc.type: Thesis [en_US]
dc.identifier.doi: https://doi.org/10.25381/cput.24590862.v1
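The abstract names a custom CNN plus the pre-trained Inception-V3 and VGG-16 networks. Below is a minimal transfer-learning sketch in Keras/TensorFlow of the Inception-V3 variant; the class count, classification head, and training call are illustrative assumptions, not details taken from the thesis.

```python
# Minimal transfer-learning sketch (Keras/TensorFlow) for a nutrient-
# deficiency classifier built on pre-trained Inception-V3. NUM_CLASSES and
# the head layers are illustrative assumptions, not values from the thesis.
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical: e.g. three deficiency classes + healthy

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze ImageNet features for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```

Swapping tf.keras.applications.VGG16 for InceptionV3 (with a 224 x 224 input) would give the VGG-16 variant; the custom CNN would replace the pre-trained base with convolutional layers trained from scratch.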
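Grad-CAM, one of the two XAI techniques named in the abstract, weighs the final convolutional feature maps by the gradient of the predicted class score, yielding the heatmaps the study compares across models. A minimal sketch, assuming a stock Keras Inception-V3 whose last convolutional block is named "mixed10":

```python
# Grad-CAM sketch: gradients of the top class score w.r.t. the last conv
# feature map give per-channel weights; their weighted sum is the heatmap.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights="imagenet")

def grad_cam(image, conv_layer="mixed10"):
    """image: preprocessed float array of shape (299, 299, 3)."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        top_class = tf.argmax(preds[0])
        score = preds[:, top_class]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # channel importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted feature maps
    cam = tf.nn.relu(cam)                                # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalise to [0, 1]
```

The resulting low-resolution map is upsampled and overlaid on the input image; the quality of this localisation is what the abstract compares when it finds VGG-16's heatmaps more reliable than the CNN's contour-focused ones.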
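SHAP, the other XAI technique named in the abstract, attributes a contribution to each input pixel, which is how the study gauges whether a model leans on a few prominent features or many. A minimal sketch using the shap library's GradientExplainer; the random arrays stand in for preprocessed rice or banana image batches and are purely illustrative.

```python
# SHAP sketch with GradientExplainer (an expected-gradients approximation of
# Shapley values for differentiable models). The random arrays below are
# placeholders for the thesis's preprocessed plant-image batches.
import numpy as np
import shap
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights="imagenet")

background = np.random.rand(10, 299, 299, 3).astype(np.float32)   # reference set
test_images = np.random.rand(2, 299, 299, 3).astype(np.float32)   # images to explain

explainer = shap.GradientExplainer(model, background)
# Explain only the top-ranked class per image to keep the sketch tractable.
shap_values, class_idx = explainer.shap_values(test_images, ranked_outputs=1)
shap.image_plot(shap_values, test_images)  # red/blue per-pixel contributions
```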
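The abstract reports model quality as F1-scores (92%, 81%, 68%). A minimal scikit-learn sketch of that metric; the dummy labels and the "weighted" averaging mode are assumptions, as the thesis may use macro or per-class averaging.

```python
# F1-score sketch (scikit-learn). Dummy labels stand in for the test-set
# ground truth and model predictions used in the study.
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]  # hypothetical deficiency-class labels
y_pred = [0, 1, 2, 1, 1, 0]  # hypothetical model predictions
print(f"Weighted F1-score: {f1_score(y_true, y_pred, average='weighted'):.2%}")
```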
Appears in Collections: Information Technology - Master's Degree

Files in This Item:
File: Mkhatshwa_Oter_Junior_214011097.pdf (2.7 MB, Adobe PDF)
