Please use this identifier to cite or link to this item:
https://etd.cput.ac.za/handle/20.500.11838/3884
Title: Comparative analysis of explainable deep learning models for identification of plant nutrient deficiencies
Authors: Mkhatshwa, Junior
Keywords: Artificial intelligence -- Agricultural applications; Deep learning (Machine learning); Plants -- Nutrition -- Information technology; Agricultural innovations; Agriculture -- Effect of technological innovations on; SHAP; Grad-CAM; Agricultural informatics
Issue Date: 2023
Publisher: Cape Peninsula University of Technology
Abstract:
Essential mineral nutrients play a crucial role in the growth and survival of plants. Nutrient deficiencies in plants threaten global food security and affect farmers who depend solely on producing healthy crops. Traditionally, the identification of nutrient deficiencies in a crop is done manually by experienced farmers. Deep learning (DL) has shown promise in image classification; however, the lack of understanding regarding the accuracy and explainability of specific DL models for identifying plant nutrient deficiencies hinders informed decisions about the suitability of these algorithms for practical implementation. This study aimed to assess the performance and explainability of these models to facilitate better decision-making in agriculture. To achieve this, the study formulated four objectives: 1) identify the features that are essential to determine plant nutrient deficiencies; 2) determine the requirements of explainable DL for nutrient deficiency identification; 3) explore how explainable DL could be applied to a plant image dataset to identify plant nutrient deficiencies; and 4) determine the performance and explainability of selected DL algorithms when used for plant nutrient deficiency identification.

The study used a deductive approach to achieve these objectives, employing a quantitative research methodology and an experimental research design to investigate the performance and interpretability of three DL models on two plant datasets: rice and banana. The three DL models were a Convolutional Neural Network (CNN) and two pre-trained models, Inception-V3 and Visual Geometry Group (VGG-16). For the explainability of the models, the study used two explainable artificial intelligence (XAI) techniques: SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM).

The study found that the choice of DL model has a significant impact on the performance of nutrient deficiency identification across plant datasets. Inception-V3 achieved a very good F1-score of 92% on the banana dataset; VGG-16 followed with a good F1-score of 81%; and the CNN, while not as strong as the other models, achieved an acceptable F1-score of 68%. Based on these findings, Inception-V3 is effective in detecting nutrient deficiency in banana plants. Regarding explainability using SHAP, the CNN and VGG-16 models were found to rely on a limited set of prominent features, whereas Inception-V3 appears to rely on a broader range of features, with many features making significant contributions to the final prediction. When Grad-CAM was used to assess explainability on the banana and rice datasets, the Grad-CAM heatmap of the CNN model highlighted the contours of the plant leaf, while the other two models (Inception-V3 and VGG-16) focused on the leaf itself. VGG-16's localisation of the affected regions proved more reliable owing to the quality of its heatmap. (Illustrative sketches of the fine-tuning, SHAP, and Grad-CAM workflows follow below.)
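To make the transfer-learning setup the abstract refers to concrete, here is a minimal sketch, not the thesis code, of fine-tuning a pre-trained VGG-16 in Keras for leaf-image classification. The image size, the class count, and the `train_ds`/`val_ds` dataset names are illustrative assumptions:

```python
# Minimal transfer-learning sketch (illustrative, not the thesis code):
# fine-tuning VGG-16 on a leaf-image dataset.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4  # hypothetical: e.g. healthy plus three deficiency classes

# Load ImageNet weights and freeze the convolutional base.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Attach a small classification head on top of the frozen base.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="categorical_crossentropy",  # assumes one-hot encoded labels
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```

The same pattern applies to Inception-V3 by swapping the base model and its expected input size.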
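The SHAP findings above concern which pixels contribute to each prediction. A minimal sketch of producing such attributions for a Keras image classifier, assuming a SHAP release with TensorFlow support and pre-loaded `train_images`/`test_images` arrays (both names are assumptions, not from the thesis):

```python
# Minimal SHAP sketch (illustrative): pixel-level attributions for a
# trained Keras classifier via GradientExplainer.
import numpy as np
import shap

# A small random batch of training images serves as the background
# distribution against which attributions are computed.
background = train_images[np.random.choice(len(train_images), 50, replace=False)]

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images[:4])

# Red pixels raise the predicted class score; blue pixels lower it.
shap.image_plot(shap_values, test_images[:4])
```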
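Likewise, a Grad-CAM heatmap of the kind compared in the abstract can be computed with the standard Keras recipe below. This is a sketch, not the thesis implementation; it assumes a flat Keras model whose last convolutional layer is reachable by name (`block5_conv3` is the VGG-16 default):

```python
# Minimal Grad-CAM sketch (illustrative): class-activation heatmap over
# the last convolutional layer of a Keras model.
import tensorflow as tf

def grad_cam(model, img_array, conv_layer_name="block5_conv3"):
    # Model mapping the input image to the conv feature maps and predictions.
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_array)
        class_channel = preds[:, tf.argmax(preds[0])]  # top predicted class

    # Gradient of the class score w.r.t. the feature maps, averaged
    # spatially to give one importance weight per channel.
    grads = tape.gradient(class_channel, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weighted sum of feature maps, ReLU, then normalise to [0, 1].
    heatmap = tf.squeeze(conv_out[0] @ weights[..., tf.newaxis])
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()
```

Overlaying the returned heatmap on the input image shows which leaf regions drive the predicted deficiency class, which is the basis for the leaf-contour versus leaf-body comparison reported above.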
The results of this study show that Inception-V3 is the most accurate model, but it may not be the most explainable due to its complex architecture. VGG-16, on the other hand, has a simpler architecture that tends to offer a better explanation. Balancing accuracy and explainability when selecting a model for a particular task is therefore essential. The study contributes to the literature by incorporating explainable deep learning in the context of plant nutrient deficiency identification. Moreover, unlike prior research that primarily evaluated accuracy without considering explainability, this study addressed that gap by comparing the explainability of Grad-CAM and SHAP techniques, shedding light on how these models arrive at their predictions. The research successfully addressed its objectives, providing valuable insights into both the theoretical and practical aspects of this domain. The study's holistic approach and valuable findings pave the way for the integration of XAI techniques in agriculture, adding value to the field and opening avenues for future research and innovation.

Description: Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2023
URI: https://etd.cput.ac.za/handle/20.500.11838/3884
DOI: https://doi.org/10.25381/cput.24590862.v1
Appears in Collections: Information Technology - Master's Degree
Files in This Item:
| File | Description | Size | Format |
| --- | --- | --- | --- |
| Mkhatshwa_Oter_Junior_214011097.pdf | | 2.7 MB | Adobe PDF |
Page view(s): 214 (last week: 13; last month: 19), checked on Nov 19, 2024
Download(s): 177, checked on Nov 19, 2024
Items in Digital Knowledge are protected by copyright, with all rights reserved, unless otherwise indicated.