
Early Poplar (Populus) Leaf-Based Disease Detection through Computer Vision, YOLOv8, and Contrast Stretching Technique


1. Introduction

  • Data Collection: Gathering a diverse and comprehensive collection of images featuring poplar trees affected by various diseases. This involved extensive fieldwork and the compilation of images from various sources to create a robust dataset for analysis.
  • Expert Classification: The images were expertly classified and labeled, ensuring the dataset’s accuracy and reliability for model training.
  • Generating New Poplar Disease-Based Dataset: Using the collected images to create a new dataset specifically focused on poplar diseases. This dataset includes labeled images where each image is annotated with information about the specific diseases affecting the poplar trees depicted.
  • Powerful Model Training (YOLO): Utilizing state-of-the-art deep learning techniques, such as the YOLO (You Only Look Once) model, to train a powerful machine learning model. YOLO is known for its speed and accuracy in object detection tasks, making it well suited for identifying and localizing diseases in images of poplar tree leaves.
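With the Ultralytics YOLOv8 tooling, the dataset step of the pipeline above typically reduces to a small YAML file describing the image splits and classes. The paths and layout below are illustrative assumptions, not the authors' actual configuration; only the diseased/healthy class split comes from the paper:

```yaml
# data.yaml -- hypothetical layout for the poplar leaf dataset
path: datasets/poplar_leaves   # dataset root (assumed)
train: images/train
val: images/val
test: images/test

# two classes, matching the paper's diseased/healthy split
names:
  0: diseased
  1: healthy
```

Training would then be launched with the standard Ultralytics CLI, e.g. `yolo detect train data=data.yaml model=yolov8n.pt`, with epochs and augmentation set to match the experiment.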

2. Related Works

2.1. Computer Vision and Image Processing Approaches for Poplar (Populus) Disease Detection
2.2. Deep Learning Approaches for Poplar Disease Detection
3. Materials and Methods
3.1. Data Collection and Preprocessing
3.2. Proposed Method and Model Architecture
3.3. The Model Structure of YOLOv8 Network
3.4. Techniques for Enhancing Image Quality
3.5. Contrast Stretching Technique
4. Experimental Results
4.1. Model Evaluation
4.2. Model Training Results
5. Limitations and Future Work
6. Conclusions
Author Contributions, Institutional Review Board Statement, Informed Consent Statement, Data Availability Statement, Acknowledgments, Conflicts of Interest
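The contrast stretching technique named in the outline above (Section 3.5) linearly rescales pixel intensities so that a chosen intensity range spans the full 0–255 output range. A minimal NumPy sketch, with the percentile cutoffs being assumptions rather than the paper's parameters:

```python
import numpy as np

def contrast_stretch(img: np.ndarray, low_pct: float = 2.0, high_pct: float = 98.0) -> np.ndarray:
    """Linearly rescale intensities so the [low_pct, high_pct] percentile
    range of the input spans the full 0-255 output range."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:  # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: a dim image occupying only [50, 100] is stretched to cover [0, 255]
dim = np.linspace(50, 100, 256).reshape(16, 16)
stretched = contrast_stretch(dim)
```

Percentile-based cutoffs (rather than the raw min/max) make the stretch robust to a few outlier pixels, which is a common choice for leaf images with specular highlights.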


Dataset     Training Images   Validation Images   Testing Images   Total Images
Diseased    1592              200                 200              1995
Healthy     290               36                  36               362

Dataset     Training Images   Validation Images   Testing Images   Total Images
Diseased    4057              506                 506              5069
Healthy     870               108                 108              1086

Experimental Environment    Details
Programming language        Python 3.9.12
Operating system            Ubuntu 22.04.4 LTS
Deep learning framework     PyTorch 2.2.1 + cu118
GPU                         NVIDIA GeForce RTX 4060 Ti 16 GB (AD106)

Model             Epochs    mAP    Precision   Recall   Testing Accuracy
YOLOv7            10,000    65.5   67.3        64.9     83%
YOLOv8            10,000    73.2   75.5        72.8     87%
Proposed Method   10,000    86.6   85.7        92       95%
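The precision and recall reported above are the standard detection metrics, derived from true positives (correct detections), false positives (false alarms), and false negatives (missed objects). A minimal sketch of the computation; the example counts are illustrative, not taken from the paper:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts: 857 correct detections, 143 false alarms, 80 missed leaves
p, r = precision_recall(857, 143, 80)
```

mAP extends this by averaging precision over recall levels and detection classes, which is why it is reported alongside the two raw metrics.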

Share and Cite

Bolikulov, F.; Abdusalomov, A.; Nasimov, R.; Akhmedov, F.; Cho, Y.-I. Early Poplar (Populus) Leaf-Based Disease Detection through Computer Vision, YOLOv8, and Contrast Stretching Technique. Sensors 2024, 24, 5200. https://doi.org/10.3390/s24165200



Plant Disease Detection Using CNN Through Segmentation and Balancing Techniques

  • Conference paper
  • First Online: 28 July 2022

  • Maulik Verma
  • Anshu S. Anand (ORCID: orcid.org/0000-0002-7271-4702)
  • Anjil Srivastava (ORCID: orcid.org/0000-0001-9871-5781)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 427)


In a country like India, where approximately 60% of the population is employed by the agricultural sector, producing a good yield without compromising crop quality is an important challenge. Plant diseases reduce not only the agricultural yield but also the quality of the produce. Thus, tackling plant diseases at an early stage becomes crucial. This is difficult to do manually, as tracking large farmlands is not easy. In this paper, we present an approach for detecting plant diseases. One of the datasets used in this work is the most popular publicly available dataset, PlantVillage. Since the dataset is imbalanced, a generative adversarial network (GAN) has been used to generate synthetic minority-class images to balance it. Most existing works have used only the PlantVillage dataset for training and testing, which makes the resulting models unfit for plant images captured under uncontrolled conditions. Thus, PlantDoc, a real-world dataset, has also been used. Transfer learning has been used in our proposed system to train the CNN models that detect diseases. Segmentation and object detection methods have been employed to identify individual plant leaves in images containing multiple leaves and complex backgrounds, enabling the model to work on real-world images. Each prominently visible leaf was segmented, and disease and severity predictions were made on the masked image. Class balancing improved model performance by up to 35% on the real-world dataset.



Author information

Authors and Affiliations

IIIT Allahabad, Prayagraj, India

Maulik Verma & Anshu S. Anand

National Agri-Food Biotechnology Institute, Mohali, India

Anjil Srivastava


Corresponding author

Correspondence to Anshu S. Anand.

Editor information

Editors and Affiliations

Department of Computer Science and Engineering, National Institute of Technology, Warangal, India

Rashmi Ranjan Rout

Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India

Soumya Kanti Ghosh

Department of Computer Science and Engineering, Indian Institute of Technology (ISM) Dhanbad, Dhanbad, India

Prasanta K. Jana

School of IT and Engineering (SITE), Vellore Institute of Technology, Vellore, India

Asis Kumar Tripathy

Department of Computer Science and Information Technology, Institute of Technical Education and Research (ITER), Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, India

Jyoti Prakash Sahoo

Department of Computer Science and Information Engineering (CSIE), Providence University, Taichung, Taiwan

Kuan-Ching Li


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Verma, M., Anand, A.S., Srivastava, A. (2022). Plant Disease Detection Using CNN Through Segmentation and Balancing Techniques. In: Rout, R.R., Ghosh, S.K., Jana, P.K., Tripathy, A.K., Sahoo, J.P., Li, KC. (eds) Advances in Distributed Computing and Machine Learning. Lecture Notes in Networks and Systems, vol 427. Springer, Singapore. https://doi.org/10.1007/978-981-19-1018-0_30


DOI: https://doi.org/10.1007/978-981-19-1018-0_30

Published: 28 July 2022

Publisher Name: Springer, Singapore

Print ISBN: 978-981-19-1017-3

Online ISBN: 978-981-19-1018-0

eBook Packages: Intelligent Technologies and Robotics


Early Detection and Classification of Tomato Leaf Disease Using High-Performance Deep Neural Network

Naresh K. Trivedi

1 Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India

Vinay Gautam

2 School of Computing, DIT University, Dehradun 248009, India

Abhineet Anand

Hani Moaiteq Aljahdali

3 Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 37848, Saudi Arabia

Santos Gracia Villar

4 Higher Polytechnic School/Industrial Organization Engineering, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain

5 Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico

Divya Anand

6 Department of Computer Science and Engineering, Lovely Professional University, Phagwara 144411, India

Nitin Goyal

Seifedine Kadry

7 Faculty of Applied Computing and Technology, Noroff University College, 4608 Kristiansand, Norway

Associated Data

Not applicable.

Tomato is one of the most essential and widely consumed crops in the world. Tomato yield varies depending on how the crop is fertilized, and leaf disease is the primary factor affecting the quantity and quality of that yield. As a result, it is critical to diagnose and classify these disorders correctly. Different diseases influence tomato production, and identifying them early would reduce their effect on tomato plants and support good crop yields. Many innovative ways of identifying and classifying such diseases have been explored. The aim of this work is to support farmers in accurately identifying diseases at an early stage and informing them about these diseases. A Convolutional Neural Network (CNN) is used to detect and classify tomato diseases effectively. Google Colab is used to conduct the complete experiment with a dataset containing 3000 images of tomato leaves affected by nine different diseases, plus healthy leaves. The complete process is as follows: first, the input images are preprocessed and the target regions are segmented from the original images; second, the images are processed further with varying hyper-parameters of the CNN model; finally, the CNN extracts features such as color, texture, and edges from the images. The findings demonstrate that the proposed model's predictions are 98.49% accurate.

1. Introduction

Plants are an integral part of our lives because they produce food and shield us from dangerous radiation. Without plants, no life is imaginable; they sustain all terrestrial life and defend the ozone layer, which filters ultraviolet radiation. Tomato is a food-rich plant, a widely cultivated, consumable vegetable [1]. Worldwide, approximately 160 million tons of tomatoes are consumed annually [2]. The tomato, a significant contributor to reducing poverty, is seen as a source of income for farm households [3]. Tomatoes are one of the most nutrient-dense crops on the planet, and their cultivation and production have a significant impact on the agricultural economy. Not only is the tomato nutrient-dense, but it also possesses pharmacological properties that protect against conditions such as hypertension, hepatitis, and gingival bleeding [1]. Tomato demand is also increasing as a result of its widespread use. According to statistics, small farmers produce more than 80% of agricultural output [2], yet about 50% of their crops are lost to diseases and pests. Diseases and parasitic insects are the key factors impairing tomato growth, making research into field crop disease diagnosis necessary.

The manual identification of pests and pathogens is inefficient and expensive. Therefore, it is necessary to provide farmers with automated, AI-driven image-based solutions. Images are accepted as a reliable means of identifying disease in image-based computer vision applications thanks to the availability of appropriate software packages and tools. Such systems rely on image processing, an intelligent image identification technology that increases recognition efficiency, lowers costs, and improves recognition accuracy [3].

Although plants are necessary for existence, they face numerous obstacles. An early and accurate diagnosis helps decrease the risk of ecological damage. Without systematic disease identification, product quality and quantity suffer, which in turn harms a country's economy [1]. Agricultural production must expand by 70% by 2050 to meet global food demands, according to the United Nations Food and Agriculture Organization (FAO) [2]. Conversely, chemicals used to prevent diseases, such as fungicides and bactericides, negatively impact the agricultural ecosystem. We therefore need quick and effective disease classification and detection techniques that can help the agro-ecosystem. Advanced disease detection technology, such as image processing and neural networks, will allow the design of systems capable of early disease detection for tomato plants; stress alone can reduce plant production by 50% [1]. Finding disease starts with inspecting the plant and then deciding on a treatment based on prior experience [3]. This method lacks scientific consistency because farmers' backgrounds differ, making the process less reliable. Farmers may misclassify a disease, and an incorrect treatment will damage the plant. Similarly, field visits by domain specialists are expensive. There is a need for automated, image-based disease detection and classification methods that can take the role of the domain expert.

It is necessary to tackle the leaf disease issue with an appropriate solution [4,5]. Tomato disease control is a complex process that accounts for a substantial fraction of production cost throughout the season [6,7,8,9]. Vegetable diseases (bacterial spot, late blight, leaf spot, tomato mosaic, and yellow leaf curl) are prevalent. They seriously affect plant growth, which reduces product quality and quantity [10]. Past research indicates that 80–90% of plant diseases appear on leaves [11]. Tracking a farm and recognizing the different forms of disease on infected plants takes a long time, and farmers' assessment of the type of plant disease may be wrong, leading to insufficient and counterproductive defensive measures being applied to the plant. Early detection can reduce processing costs, reduce the environmental impact of chemical inputs, and minimize the risk of loss [12,13,14].

Many solutions have been proposed with the advent of technology, and in this paper similar solutions are used to recognize leaf diseases. The main objective is to make the lesion more apparent relative to other image regions. Problems such as (1) shifts in illumination and spectral reflectance, (2) poor input image contrast, and (3) variation in image size and shape have been encountered. Pre-processing operations include contrast adjustment, greyscale conversion, image resizing, and image cropping and filtering [15,16,17]. The next step is the division of an image into objects, which are used to determine regions of interest, i.e., infected regions in the image [18]. Unfortunately, the segmentation method has several problems:

  • Color segmentation fails when lighting conditions differ from those of the reference photographs.
  • Region-based segmentation is sensitive to the initial seed selection.
  • Texture-based methods take too long to process varied textures.
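The pre-processing operations named earlier in this section (greyscale conversion, resizing, cropping) can be sketched with plain NumPy. The luma weights and crop size below are common defaults, not values taken from the paper:

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Greyscale conversion via the ITU-R BT.601 luma weights:
    0.299 R + 0.587 G + 0.114 B."""
    return rgb[..., :3] @ np.array([0.299, 0.587, 0.114])

def center_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Crop a size x size window from the centre of the image."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

rgb = np.random.rand(64, 64, 3)   # stand-in for a loaded leaf image
gray = to_grayscale(rgb)          # 2-D intensity image
patch = center_crop(gray, 32)     # fixed-size crop for the classifier
```

In practice a resampling library (e.g. Pillow or OpenCV) would handle arbitrary resizing; the crop here only illustrates the fixed-input-size requirement of a CNN.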

The next step, classification, is to determine which class a sample belongs to. One or more input variables of the procedure are then surveyed; occasionally, the method is employed to identify a particular type of input. Improving classification accuracy is by far the greatest classification challenge. Finally, real data are used to create validation datasets that are dissimilar to the training set.
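Validating on data dissimilar to the training set, as described above, amounts to a held-out split with no overlap between the two index sets. A minimal NumPy sketch; the 80/20 ratio is an assumption, not a figure from the paper:

```python
import numpy as np

def holdout_split(n_samples: int, val_frac: float = 0.2, seed: int = 0):
    """Shuffle sample indices and split them into disjoint
    training and validation index sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_val = int(n_samples * val_frac)
    return idx[n_val:], idx[:n_val]   # (train indices, validation indices)

train_idx, val_idx = holdout_split(3000)   # matches the 3000-image dataset size
```

Shuffling before splitting matters because class-ordered datasets would otherwise put entire disease classes into one split.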

The rest of the paper is organized as follows: Section 2 reviews the extant literature. The materials and methods are described in Section 3. The results analysis and discussion are presented in Section 4. Finally, Section 5 concludes the paper.

2. Related Work

Various researchers have used cutting-edge technology such as machine learning and neural network architectures like Inception V3 net, VGG 16 net, and SqueezeNet to construct automated disease detection systems. These use highly accurate methods for identifying plant disease in tomato leaves. In addition, researchers have proposed many deep learning-based solutions in disease detection and classification, as discussed below in [ 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 ].

A pre-trained network model for detecting and classifying tomato disease has been proposed with 94–95% accuracy [ 19 , 20 ]. A tree classification model with segmentation has been used to detect and classify six different types of tomato leaf disease on a dataset of 300 images [ 21 ]. A technique has been proposed to detect and classify plant leaf disease with an accuracy of 93.75% [ 22 ]. Image processing technology and a classification algorithm have been used to detect and classify plant leaf disease with better quality [ 23 ]. Here, an 8-megapixel smartphone camera was used to collect sample data, which were divided into 50% healthy and 50% unhealthy categories. The image processing procedure includes three elements: contrast improvement, segmentation, and feature extraction. Classification was performed via a multi-layer feed-forward artificial neural network, and two types of network structures were compared; the results were better than those of the Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks. However, this approach only divides leaf images into healthy and unhealthy; it cannot detect the form of the disease. Other authors identified leaf diseases and achieved 87.2% classification accuracy through color space analysis, color histograms, and color coherence [ 24 ].

AlexNet and VGG 19 models have been used to diagnose diseases affecting tomato crops on a dataset of 13,262 images, achieving 97.49% precision [ 25 ]. A transfer learning and CNN model has been used to detect diseases infecting dairy crops, reaching 95% accuracy [ 26 ]. A neural network for determining and classifying tomato plant leaf conditions, using transfer learning as an AlexNet-based deep learning mechanism, achieved an accuracy of 95.75% [ 27 , 28 ]. ResNet-50 was designed to identify tomato leaf diseases from 1000 pictures each of lesion blight, late blight, and yellow curl leaf, i.e., a total of 3000 pictures. The network activation function was changed to Leaky ReLU for comparison, and the kernel size was updated to 11 × 11 for the first convolution layer. The model predicts the class of diseases with an accuracy of 98.30% and a precision of 98.0% after several repetitions [ 29 ]. A simplified eight-layered CNN model has been proposed to detect and classify tomato leaf disease [ 30 ]. Here, the author utilized the PlantVillage dataset [ 31 ], which contains datasets for different crops; the tomato leaf dataset was selected and used to perform deep learning, and the author used the disease classes and achieved a better accuracy rate.

A simple CNN model with eight hidden layers has been used to identify the conditions of a tomato plant. The proposed techniques show optimal results compared to other classical models [ 32 , 33 , 34 , 35 ]. The image processing technique uses deep learning methods to identify and classify tomato plant diseases [ 36 ]. Here, the author used the segmentation technique and CNN to implement a complete system. A variation in the CNN model has been adopted and applied to achieve better accuracy.

LeNet has been used to identify and classify tomato diseases with minimal CPU resource utilization. Furthermore, automatic feature extraction has been employed to improve classification accuracy [ 37 ]. The ResNet-50 model has been used to classify and identify tomato disease. The authors detected the diseases in multiple steps: firstly, by segregating the disease dataset; secondly, by adapting and adjusting the model based on the transfer learning approach; and lastly, by enhancing the quality of the model using data augmentation. Finally, the model was validated on the dataset. The model outperformed various legacy methods and achieved 97% accuracy [ 38 ]. Hyperspectral images have been used to identify rice leaf diseases, such as sheath blight (ShB), by evaluating the different spectral responses of leaf blade fractions [ 39 ]. A spectral library has been created using different disease samples [ 40 ]. An improved VGG16 has been used to identify apple leaf disease with an accuracy rate of 99.01% [ 41 ].

The author employed image processing, segmentation, and a CNN to classify leaf disease. This research attempts to identify and classify tomato diseases in field and greenhouse plants. The author used deep learning and a robot in real time to identify plant diseases from the sensor’s images; AlexNet and SqueezeNet are the deep learning architectures used to diagnose and categorize plant disease [ 42 ]. Other authors built convolutional neural network models using leaf pictures of healthy and sick plants. An open-source PlantVillage dataset with 87,848 images of 25 plants classified into 58 categories was used, and the model identified plant/disease pairs (or a healthy plant) with a 99.53% success rate. The authors suggest constructing a real-time plant disease diagnosis system based on the proposed model [ 43 ].

A review has also covered the CNN variants used for plant disease classification, briefing the deep learning principles used for leaf disease identification and classification. The authors focused mainly on the latest CNN models and evaluated their performance, summarizing variants such as VGG16, VGG19, and ResNet, and discussing the pros, cons, and future aspects of the different CNN variants [ 44 ].

Another work focused on investigating an optimal solution for plant leaf disease detection, proposing a segmentation-based CNN. Unlike other models trained on complete images, this model was trained on segmented images. It outperformed the alternatives, achieving 98.6% classification accuracy, and was trained and tested on independent data with ten disease classes [ 45 ].

A deep learning technique for the identification of disease in tomato leaves using enhanced CNNs is presented in the following work:

  • The dataset for tomato leaves is built using data augmentation and image annotation tools. It consists of laboratory photos and detailed images captured in actual field situations.
  • The recognition of tomato leaf diseases is performed using a Deep Convolutional Neural Network (DCNN) that incorporates Rainbow concatenation and the GoogLeNet Inception V3 structure.
  • In the proposed INAR-SSD model, the Inception V3 module and Rainbow concatenation detect five frequent tomato leaf diseases.

The testing results show that the INAR-SSD model achieves a detection rate of 23.13 frames per second and detection performance of 78.80% mAP on the Apple Leaf Disease Dataset (ALDD). Furthermore, the results indicate that the innovative INAR-SSD (SSD with Inception module and Rainbow concatenation) model produces more accurate and faster results for the early identification of tomato leaf diseases than other methods [ 46 ].

An EfficientNet convolutional neural network with 18,161 plain and segmented tomato leaf images has been used to classify tomato diseases. Two leaf segmentation models, U-net and Modified U-net, were evaluated. The models’ ability was examined for binary (healthy vs. diseased leaves) as well as 6-class and 10-class classification. The Modified U-net segmentation model correctly segmented 98.66% of leaf pictures. EfficientNet-B7 achieved 99.95% and 99.12% accuracy for binary and six-class classification, respectively, and EfficientNet-B4 classified images into ten classes with 99.89% accuracy [ 47 ].

Disease detection is crucial for crop output and has therefore led academics to focus on agricultural ailments. One study presents a deep convolutional neural network with an attention mechanism for analyzing tomato leaf diseases. The network structure includes attention extraction blocks and modules; as a result, it can detect a broad spectrum of diseases. The model achieves 99.24% accuracy in tests while keeping network complexity low and offering real-time adaptability [ 48 ].

Deep learning methods, especially Convolutional Neural Networks (CNNs), have revolutionized image processing. Over the last two years, numerous potential autonomous crop disease detection applications have emerged. These models can be used to develop an expert consultation app or a screening app, tools that may help enhance sustainable farming practices and food security. One review examined 19 studies that employed CNNs to identify plant diseases and assessed their overall utility [ 49 ].

In those studies, the authors relied on the PlantVillage dataset for their illustrations and did not evaluate the neural network topologies using typical performance metrics such as F1-score, recall, and precision; instead, they assessed only the model’s accuracy and inference time. This article proposes a new deep neural network model and evaluates it using a variety of evaluation metrics.

3. Materials and Methods

This section describes the dataset, models, and methodologies utilized to attain the outcomes.

3.1. Dataset

The sample contained ten unique disease classes: nine classes of infected tomato leaves and one healthy class. We used reference photos and disease names to identify our dataset classes, as shown in Figure 1 .

Figure 1. Sample leaf image with disease and pathogen for ( a ) Bacterial_Spot (Xanthomonas vesicatoria), ( b ) Early_Blight (fungus Alternaria solani), ( c ) Late_Blight (Phytophthora infestans), ( d ) Leaf_Mold (Cladosporium fulvum), ( e ) Septoria_Leaf_Spot (fungus Septoria lycopersici), ( f ) Spider_Mites (floridana), ( g ) Target_Spot (fungus Corynespora), ( h ) Tomato_Mosaic_Virus (Tobamovirus), ( i ) Tomato_Yellow_Leaf_Curl_Virus (genus Begomovirus), ( j ) Healthy_Leaf.

In the experiment, the complete dataset was divided in an 80:20 ratio into training data and testing/validation data.
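A minimal sketch of such a split (the 80:20 ratio comes from the text; the deterministic seeded shuffle and the file names are illustrative assumptions, not details from the paper):

```python
import random

def split_dataset(items, train_frac=0.8, seed=42):
    """Shuffle deterministically, then split into train and test/validation."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

# hypothetical file names standing in for the 3000-image dataset
images = [f"leaf_{i}.jpg" for i in range(3000)]
train, test = split_dataset(images)  # 2400 training, 600 test/validation
```

Seeding the shuffle keeps the partition reproducible across runs, which matters when comparing epochs and learning rates on the same split.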

3.2. Image Pre-Processing and Labelling

Before training the model, image pre-processing was used to adjust or enhance the raw images to be processed by the CNN classifier. Building a successful model requires analyzing both the design of the network and the format of the input data. We pre-processed our dataset so that the proposed model could extract the appropriate features from the images. The first step was to normalize the picture size by resizing to 256 × 256 pixels. The images were then converted to greyscale. This stage of pre-processing means that a considerable amount of training data is required for the explicit learning of the training data features. The next step was to group the tomato leaf pictures by type and mark all images with the correct acronym for the disease. The dataset thus comprised ten classes across the training and test collections.
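The resizing and greyscale steps above can be sketched with NumPy alone (a hedged illustration: nearest-neighbour resampling and the standard luminance weights are assumptions; the paper does not state which resizing method or library was used):

```python
import numpy as np

def preprocess(img, size=256):
    """Resize an HxWx3 RGB image to size x size (nearest neighbour),
    convert to greyscale, and normalize intensities to [0, 1]."""
    H, W, _ = img.shape
    rows = np.arange(size) * H // size          # nearest-neighbour row indices
    cols = np.arange(size) * W // size          # nearest-neighbour column indices
    resized = img[rows][:, cols]                # size x size x 3
    grey = resized @ np.array([0.299, 0.587, 0.114])  # luminance greyscale
    return grey / 255.0

img = np.random.randint(0, 256, (512, 384, 3)).astype(float)
out = preprocess(img)  # 256 x 256 array of values in [0, 1]
```

In practice a library resampler (e.g., bilinear) would give smoother results, but the shape and intensity normalization shown here are what the CNN input layer actually requires.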

3.3. Training Dataset

Preparing the dataset was the first stage in processing the existing data. During this step, the images were used as input to the Convolutional Neural Network, which eventually formed a model whose performance was assessed. The normalization steps on tomato leaf images are shown in Figure 2 .

Figure 2. Classifier model used.

3.4. Convolutional Neural Network

The CNN is a neural network technology widely employed today to process and train on image data. Convolution is applied in matrix form to filter the pictures. Each layer of the Convolutional Neural Network is utilized for data training, including an input layer, convolution layers, pooling layers, fully connected layers, a dropout layer, and a final classification layer. Each layer maps a series of calculations onto its input. The complete architecture is shown in Figure 3 , and a description of the model is in Table 1 .

Figure 3. CNN model architecture.

Table 1. Hyper-parameters of the deep neural network.

Parameter                   Description
No. of Convolution Layers   8
No. of Max Pooling Layers   8
Dropout Rate                0.5
Network Weight Assigned     Uniform
Activation Function         ReLU
Learning Rates              0.0001, 0.001, 0.01
Epochs                      50, 100, 150
Batch Size                  36, 64, 110
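The spatial dimensions implied by Table 1 can be checked with a short calculation. Assuming 'same' padding for each convolution and a 2 × 2 max pool after each of the eight convolution layers (an assumption; the paper does not state the padding scheme), a 256 × 256 input shrinks by half at every block:

```python
def feature_map_sizes(input_size=256, n_blocks=8, pool=2):
    """Spatial size after each conv ('same' padding) + 2x2 max-pool block."""
    sizes = [input_size]
    for _ in range(n_blocks):
        sizes.append(sizes[-1] // pool)  # 'same' conv keeps size; pool halves it
    return sizes

print(feature_map_sizes())  # [256, 128, 64, 32, 16, 8, 4, 2, 1]
```

Eight halvings take 256 down to a single spatial position, so the final feature map feeds directly into the fully connected classification layers.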

3.4.1. Convolutional Layer

A convolution layer maps input features to feature maps using the convolution procedure. Each feature map combines several input features. Convolution can be defined as a two-function operation and constitutes the basis of CNNs. Each filter is convolved with each part of the input, and a 2D feature map is generated. The complexity of the model governs the optimization of each layer's convolutional performance. The output of the i-th convolutional layer for input z is calculated as in Equation (1):

z_i = f(z_{i−1} × w_i)    (1)

where × is the convolution operation, f is the activation function, and w_i = [ Q_i^1, Q_i^2, …, Q_i^J ] is the set of convolution kernels Q of the layer, J being the number of kernels. Each kernel Q_i^j is a weight matrix of size K × K × L, where K is the window size and L is the number of input channels.
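The convolution-plus-activation step can be illustrated with a naive NumPy sketch (strictly a cross-correlation, as used in most CNN implementations; the 3 × 3 image, 2 × 2 kernel, and ReLU choice are illustrative, not the paper's actual weights):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D cross-correlation of a single-channel image."""
    H, W = image.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+k, j:j+k] * kernel)
    return out

img = np.array([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 9.]])
ker = np.array([[1., 0.],
                [0., -1.]])

out = conv2d_valid(img, ker)   # raw feature map, here [[-4, -4], [-4, -4]]
act = np.maximum(out, 0.0)     # f = ReLU, the activation used in Table 1
```

Sliding the kernel over every position produces a feature map one kernel-width smaller in each dimension, and the activation f is then applied element-wise, matching Equation (1).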

3.4.2. Pooling Layer

Without pooling, the number of parameters would grow exponentially; the pooling layer keeps the parameters manageable while improving precision, and as the network grows, the size of the feature maps is reduced. The pooling layer reduces the overall output of the convolution layer. It reduces the number of training parameters by letting the spatial properties of an area represent the whole region, and it propagates the maximum of all R activations in a pooling region to the subsequent layer. In the m-th max-pooled band, the J related filters are combined as

p_m^j = max_{r = 1, …, R} h_{(m−1)N + r}^j

where h^j denotes the convolution-band activations of the j-th filter and N ∈ {1, …, R} is the pooling shift, allowing overlap between pooling zones when N < R. Pooling reduces the output dimensionality from K convolution bands to M = (K − R)/N + 1 pooled bands, and the resulting layer is p = [ p_1, …, p_M ] ∈ R^{M·J}.

Finally, max pooling outputs the maximum value of each pooling region, in contrast with average pooling, which outputs the mean.

3.4.3. Fully Connected Layer

Each layer in the fully connected network is connected with its previous and subsequent layers. The first fully connected layer is connected to each node in the last frame of the pooling layer. The parameters of the fully connected layers take more time because of the complex computation; this is the critical drawback of the fully connected layer. Reducing the number of nodes and links overcomes this limitation, and the dropout technique accomplishes this by deleting nodes and connections.

3.4.4. Dropout

Dropout is an approach in which randomly selected neurons are ignored during training; they are “dropped out” spontaneously. This means that they are temporarily excluded from contributing to the activation of downstream neurons on the forward pass, and no weight updates are applied to them on the backward pass. Dropout thus avoids overfitting and speeds up learning. Overfitting occurs when the model achieves an excellent percentage on the training data but performs noticeably worse in the prediction process on unseen data. Dropout can be applied to neurons in both the hidden and visible layers of the network.
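The mechanism can be sketched in a few lines of NumPy (inverted dropout, the common variant that rescales surviving activations at training time; the fixed seed and p = 0.5 from Table 1 are used for illustration):

```python
import numpy as np

def dropout(a, p=0.5, rng=None):
    """Inverted dropout: zero each activation with probability p and
    scale the survivors by 1/(1-p) so the expected activation is unchanged."""
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed for a reproducible sketch
    mask = rng.random(a.shape) >= p     # True = neuron kept this pass
    return a * mask / (1.0 - p)

a = np.ones(8)
out = dropout(a, p=0.5)  # each entry is either 0.0 (dropped) or 2.0 (kept, rescaled)
```

Because the kept activations are scaled by 1/(1 − p) during training, no rescaling is needed at inference time, when all neurons are active.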

Performance Evaluation Metrics . The accuracy, precision, recall, and F1-score measures are used to evaluate the model’s performance. To avoid being misled by any single measure, we applied all of the above evaluation criteria, computed from the confusion matrix.

  • Accuracy . Accuracy ( Acc ) is a measure of the proportion of accurately classified predictions. It is calculated as follows:

Acc = (TP + TN) / (TP + TN + FP + FN)

Note that the abbreviations “ TP ”, “ TN ”, “ FP ”, and “ FN ” stand for “true positive”, “true negative”, “false positive”, and “false negative”, respectively.

  • Precision . Precision ( Pre ) is a metric that indicates the proportion of positive predictions that are true positives. It is calculated as Pre = TP / (TP + FP).
  • Recall . Recall ( Re ) is a metric that indicates the proportion of true positives that were successfully detected. It is calculated as Re = TP / (TP + FN).
  • F1-Score. The F1-Score is calculated as the harmonic mean of precision and recall: F1 = 2 × ( Pre × Re ) / ( Pre + Re ).
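The four metrics above can be computed directly from the confusion-matrix counts (a self-contained sketch for the binary case; the toy labels are illustrative):

```python
def metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (positive = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * pre * rec / (pre + rec)
    return acc, pre, rec, f1

acc, pre, rec, f1 = metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

For the multi-class setting used in this paper, the same counts are accumulated per class and the per-class metrics are then averaged.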

Proposed Algorithm: Steps involved for Disease Detection

  • Step 1 : Input the colour image I RGB of the leaf acquired from the PlantVillage dataset.
  • Step 2 : Given I RGB , generate the mask M veq using CNN-based segmentation.
  • Step 3 : Overlay I RGB with M veq to get M mask .
  • Step 4 : Divide the image M mask into smaller square regions (tiles) K tiles .
  • Step 5 : Classify the tiles K tiles from M mask into tomato disease classes.
  • Step 6 : Finally, K tiles is the leaf part used to detect disease.
  • Step 7 : Stop.

Disease detection starts with the input image I RGB from the multiclass dataset. From the input image I RGB , the mask M veq is segmented using the CNN. The masked image is divided into different regions K tiles . Afterward, the Region of Interest (RoI) is selected, and the same is used to detect leaf disease.
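Step 4, the tiling of the masked image, can be sketched with NumPy slicing (the 64 × 64 tile size is an illustrative assumption; the paper does not state the tile dimensions):

```python
import numpy as np

def split_into_tiles(image, tile=64):
    """Divide an HxW(xC) image into non-overlapping square tiles."""
    H, W = image.shape[:2]
    return [image[r:r + tile, c:c + tile]
            for r in range(0, H - tile + 1, tile)
            for c in range(0, W - tile + 1, tile)]

mask = np.zeros((256, 256, 3))       # stands in for the masked image M_mask
tiles = split_into_tiles(mask, 64)   # 4 x 4 grid = 16 tiles of 64 x 64 x 3
```

Each tile is then classified independently, so a single leaf image yields multiple classification decisions that localize the disease within the leaf.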

The proposed algorithm for disease detection is given below:

Algorithm: Disease Detection
Input: leaf image I RGB acquired from the PlantVillage dataset
Output: disease detection
1: Generate the mask M veq from I RGB using CNN-based segmentation;
2: Overlay I RGB with M veq to obtain M mask ;
3: Divide M mask into square tiles K tiles ;
4: Classify each tile in K tiles for tomato diseases;
5: If a tile in K tiles is diseased, identify the disease;
6: Stop.

4. Results Analysis and Discussion

The complete experiment was performed on Google Colab. The result of the proposed method is described with different test epochs and learning rates and explained in the next sub-section.

This research used epoch 50 and epoch 100 for comparison, while the learning rate was 0.0001. Figure 4 a shows the comparison between training loss and validation loss, and Figure 4 b shows the comparison between training accuracy and validation accuracy.

Figure 4. ( a ) Training loss vs. validation loss (learning rate 0.0001, epoch 50). ( b ) Training accuracy vs. validation accuracy (learning rate 0.0001, epoch 50).

Figure 5 a shows the comparison between training loss and validation loss, and Figure 5 b shows training accuracy and validation accuracy. Figure 5 b shows that an accuracy rate of 98.43% is achieved with training at 100 epochs and a learning rate of 0.0001. It is therefore reasonable to infer that more iterations result in higher accuracy with this technique; however, the training phase lengthens as the number of epochs increases.

Figure 5. ( a ) Training loss vs. validation loss (learning rate 0.0001, epoch 100). ( b ) Training accuracy vs. validation accuracy (learning rate 0.0001, epoch 100).

This assessment evaluates the role the learning rate plays in the process; the learning rate is one of the training variables used to calculate the weight-correction value in Equation (1). This test is based on epochs 50 and 100, with learning rates of 0.001 and 0.01 used for comparison. Figure 6 a shows the comparison between training loss and validation loss, and Figure 6 b shows training accuracy and validation accuracy.

Figure 6. ( a ) Training loss vs. validation loss (learning rate 0.001, epoch 50). ( b ) Training accuracy vs. validation accuracy (learning rate 0.001, epoch 50).

Figure 7 a shows the comparison between training loss and validation loss, and Figure 7 b shows training accuracy and validation accuracy. According to Figure 7 b, an accuracy rate of 98.42% is achieved at epoch 50 with a learning rate of 0.001. Furthermore, Figure 8 a shows the comparison between training loss and validation loss, and Figure 8 b shows training accuracy and validation accuracy; Figure 8 b shows that an accuracy of 98.52% is achieved. Figure 9 a shows the training and validation losses, and Figure 9 b shows training and validation accuracy, with an accuracy rate of 98.5% at 100 epochs and a 0.01 learning rate. Based on the assessment process used, a greater learning rate yields a more accurate percentage of the data.

Figure 7. ( a ) Training loss vs. validation loss (learning rate 0.001, epoch 100). ( b ) Training accuracy vs. validation accuracy (learning rate 0.001, epoch 100).

Figure 8. ( a ) Training loss vs. validation loss (learning rate 0.01, epoch 50). ( b ) Training accuracy vs. validation accuracy (learning rate 0.01, epoch 50).

Figure 9. ( a ) Training loss vs. validation loss (learning rate 0.01, epoch 100). ( b ) Training accuracy vs. validation accuracy (learning rate 0.01, epoch 100).

As shown in Table 2 , the accuracy rate depends on both the learning rate and the epoch: the larger the epoch value, the more precise the calculation. Table 2 shows that the experiment was performed by varying parameters, namely the epoch (two values) and the learning rate (three values). Different learning rates were used to determine the detection accuracy. The accuracy of the experiment for the different parameter variations is shown in Table 2 .

Table 2. Test results.

Dataset Amount  Image Size    Epoch  Learning Rate  Accuracy (%)
3000            256 × 256 px  50     0.0001         98.47
3000            256 × 256 px  50     0.001          98.42
3000            256 × 256 px  50     0.01           98.52
3000            256 × 256 px  100    0.0001         98.43
3000            256 × 256 px  100    0.001          98.58
3000            256 × 256 px  100    0.01           98.5

The precision, recall, and F1-Score of the model are shown in Figure 10 a–c. Performance is primarily measured by accuracy, but precision, recall, and F1-score also contribute to the assessment. These factors were computed for all the classes in the experiment performed and are shown below. They are calculated from the true positive, true negative, false positive, and false negative values for all the classes. High precision indicates that positive predictions are reliable, a high recall value indicates that most relevant positive samples are detected, and the F1-score represents the harmonic mean of precision and recall.

Figure 10. ( a ) Precision; ( b ) Recall; ( c ) F1-Score.

In Table 3 , the proposed approach’s performance is compared with that of three standard models. The results show that the proposed model outperforms the other classical models owing to its use of segmentation and an extra layer in the model.

Table 3. Comparison with other models.

S.No.  Model        Accuracy Rate  Space   Training Parameters  Non-Trainable
1      MobileNet    66.75          82,566  18,020,552           455,262
2      VGG16        79.52          85,245  21,000,254           532,654
3      InceptionV3  64.25          90,255  22,546,862           658,644
4      Proposed     98.49          22,565  1,422,542            0

5. Conclusions and Future Scope

This article discussed a deep neural network model for detecting and classifying tomato plant leaf diseases into predefined categories, considering morphological traits such as the color, texture, and leaf edges of the plant. It introduced standard deep learning models along with variants, and discussed biotic diseases caused by fungal and bacterial pathogens, specifically blight, blast, and browning of tomato leaves. The proposed model achieved a detection accuracy of 98.49%. On the same dataset, the proposed model was compared to VGG and ResNet variants; after analyzing the results, the proposed model outperformed the other models. The proposed approach for identifying tomato disease is a novel notion. In the future, we will expand the model to include abiotic disorders caused by nutrient deficiencies in the crop leaf. Our long-term objective is to expand our unique data collection and accumulate a vast amount of data on several plant diseases. To improve accuracy, we will apply subsequent technologies in the future.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The authors, therefore, gratefully acknowledge DSR’s technical and financial support.

Author Contributions

Conceptualization, N.K.T., V.G. and A.A.; methodology, H.M.A., S.G.V. and D.A.; validation, S.K. and N.G.; formal analysis, A.A. and N.G.; investigation, N.K.T. and V.G.; resources, A.A.; data curation, S.G.V. and H.M.A.; writing—original draft, N.K.T., V.G. and A.A.; writing—review and editing, S.K., H.M.A. and N.G.; supervision, S.G.V. and D.A.; project administration, H.M.A. and S.G.V. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D438-830-1442). The authors, therefore, gratefully acknowledge DSR’s technical and financial support.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A novel groundnut leaf dataset for detection and classification of groundnut leaf diseases

  • Sasmal, Buddhadev
  • Das, Arunita
  • Dhal, Krishna Gopal
  • Saheb, Sk. Belal
  • Khurma, Ruba Abu
  • Castillo, Pedro A.

Groundnut (Arachis hypogaea) is a widely cultivated legume crop that plays a vital role in global agriculture and food security. It is a major source of vegetable oil and protein for human consumption, as well as a cash crop for farmers in many regions. Despite the importance of this crop to household food security and income, diseases, particularly Leaf spot (early and late), Alternaria leaf spot, Rust, and Rosette, have had a significant impact on its production. Deep learning (DL) techniques, especially convolutional neural networks (CNNs), have demonstrated significant ability for early diagnosis of the plant leaf diseases. However, the availability of groundnut-specific datasets for training and evaluation of DL models is limited, hindering the development and benchmarking of groundnut-related deep learning applications. Therefore, this study provides a dataset of groundnut leaf images, both diseased and healthy, captured in real cultivation fields at Ramchandrapur, Purba Medinipur, West Bengal, using a smartphone camera. The dataset contains a total of 1720 original images, that can be utilized to train DL models to detect groundnut leaf diseases at an early stage. Additionally, we provide baseline results of applying state-of-the-art CNN architectures on the dataset for groundnut disease classification, demonstrating the potential of the dataset for advancing groundnut-related research using deep learning. The aim of creating this dataset is to facilitate in the creation of sophisticated methods that will aid farmers accurately identify diseases and enhance groundnut yields.

  • Deep learning;
  • Image classification;
  • Disease detection;
  • Precision griculture
  • DOI: 10.48175/ijarsct-19238
  • Corpus ID: 271538272

An Improve Method for Plant Leaf Disease Detection and Classification using Deep Learning

  • Jeetendra Mahor , Ashish Gupta
  • Published in International Journal of… 28 July 2024
  • Agricultural and Food Sciences, Computer Science

41 References

Detection of plant disease by leaf image using convolutional neural network, a robust deep learning approach for tomato plant leaf disease localization and classification, toled: tomato leaf disease detection using convolution neural network, deep learning precision farming: tomato leaf disease detection by transfer learning, monitoring tomato leaf disease through convolutional neural networks, detection of tomato leaf diseases for agro-based industries using novel pca deepnet, mobile application for tomato plant leaf disease detection using a dense convolutional network architecture, deep learning models for plant disease detection and diagnosis, tomato leaf disease detection using convolutional neural network with data augmentation, a convolution neural network based approach to detect the disease in corn crop, related papers.

Showing 1 through 3 of 0 Related Papers

REVIEW article

An advanced deep learning models-based plant disease detection: a review of recent research.

A correction has been applied to this article in:

Corrigendum: An advanced deep learning models-based plant disease detection: a review of recent research

  • Read correction

Muhammad Shoaib,&#x;

  • 1 Department of Computer Science, CECOS University of IT and Emerging Sciences, Peshawar, Pakistan
  • 2 Department of Computer Science and Information Technology, Sarhad University of Science and Information Technology, Peshawar, Pakistan
  • 3 College of Technological Innovation, Zayed University, Dubai, United Arab Emirates
  • 4 Faculty of Computer Science and Engineering, Galala University, Suez, Egypt
  • 5 Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha, Egypt
  • 6 Department of Molecular Stress Physiology, Center of Plant Systems Biology and Biotechnology, Plovdiv, Bulgaria
  • 7 Department of Electrical Engineering, College of Engineering, Jouf University, Jouf, Saudi Arabia
  • 8 Department of Plant Physiology and Molecular Biology, University of Plovdiv, Plovdiv, Bulgaria
  • 9 School of Computer Science and Information Engineering, Zhejiang Gongshang University, Hangzhou, China
  • 10 Department of Computer Science and Engineering, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, Republic of Korea

Plants play a crucial role in supplying food globally. Various environmental factors lead to plant diseases which results in significant production losses. However, manual detection of plant diseases is a time-consuming and error-prone process. It can be an unreliable method of identifying and preventing the spread of plant diseases. Adopting advanced technologies such as Machine Learning (ML) and Deep Learning (DL) can help to overcome these challenges by enabling early identification of plant diseases. In this paper, the recent advancements in the use of ML and DL techniques for the identification of plant diseases are explored. The research focuses on publications between 2015 and 2022, and the experiments discussed in this study demonstrate the effectiveness of using these techniques in improving the accuracy and efficiency of plant disease detection. This study also addresses the challenges and limitations associated with using ML and DL for plant disease identification, such as issues with data availability, imaging quality, and the differentiation between healthy and diseased plants. The research provides valuable insights for plant disease detection researchers, practitioners, and industry professionals by offering solutions to these challenges and limitations, providing a comprehensive understanding of the current state of research in this field, highlighting the benefits and limitations of these methods, and proposing potential solutions to overcome the challenges of their implementation.

1 Introduction

The use of ML and DL in plant disease detection has gained popularity and shown promising results in accurately identifying plant diseases from digital images. Traditional ML techniques, such as feature extraction and classification, have been widely used in this field. These methods extract features from images, such as color, texture, and shape, to train a classifier that can differentiate between healthy and diseased plants. They have been applied to the detection of diseases such as leaf blotch, powdery mildew, and rust, as well as disease-like symptoms from abiotic stresses such as drought and nutrient deficiency ( Mohanty et al., 2016 ; Anjna et al., 2020 ; Genaev et al., 2021 ), but they have limitations in accurately identifying subtle symptoms and early-stage disease. In addition, they struggle to process complex, high-resolution images.
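As a toy illustration of this handcrafted-feature pipeline, the sketch below extracts color statistics and one crude texture statistic from synthetic "leaf" images and trains a linear SVM to separate healthy from diseased samples. The image generator, lesion color, and patch size are invented for the example; real systems would use photographs and richer descriptors such as LBP or GLCM features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def extract_features(img):
    """Handcrafted descriptor: per-channel color means and standard
    deviations plus one crude texture statistic (mean gradient)."""
    color = np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])
    texture = np.array([np.abs(np.diff(img.mean(axis=2), axis=1)).mean()])
    return np.concatenate([color, texture])

def make_image(diseased):
    """Synthetic leaf: uniformly green, with a brown patch if diseased."""
    img = np.full((32, 32, 3), [0.2, 0.7, 0.2]) + rng.normal(0, 0.05, (32, 32, 3))
    if diseased:
        r, c = rng.integers(0, 24, size=2)
        img[r:r + 8, c:c + 8] = [0.5, 0.3, 0.1]
    return img

y = np.array([0] * 100 + [1] * 100)          # 0 = healthy, 1 = diseased
X = np.array([extract_features(make_image(label)) for label in y])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LinearSVC().fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```

Because the lesion shifts the color statistics strongly, even this seven-dimensional descriptor separates the two classes well; the limitation described above appears when symptoms are subtle and no longer dominate such global statistics.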

Recently, DL techniques such as convolutional neural networks (CNNs) and deep belief networks (DBNs) have been proposed for plant disease detection ( Liu et al., 2017 ; Karthik et al., 2020 ). These methods train a network to learn the underlying features of the images, enabling the identification of subtle disease symptoms that traditional image processing methods may not be able to detect ( Singh and Misra, 2017 ; Khan et al., 2021 ; Liu and Wang, 2021b ). DL models can handle complex and large images, making them suitable for high-resolution imagery ( Ullah et al., 2019 ). However, these methods require a large amount of labeled training data and may not generalize to unseen diseases. Furthermore, DL models are computationally expensive, which may be a limitation for some applications.

In recent years, several research studies have proposed different ML and DL approaches for plant disease detection. However, most studies have focused on a specific type of disease or a specific plant species. More research is therefore needed to develop generalizable and robust models that work across plant species and diseases, and more publicly available datasets are needed for training and evaluating models. Transfer learning and ensemble methods have emerged as popular trends in plant disease detection using ML and DL. Transfer learning involves fine-tuning pre-trained models on a specific dataset to enhance the performance of DL models. Ensemble methods, on the other hand, combine multiple models to improve overall performance and reduce dependence on a single model. Both approaches have been applied to increase the robustness and accuracy of plant disease detection models, and they can also help prevent overfitting, a common problem in DL where the model performs well on the training data but poorly on unseen data. Another essential technique is data augmentation, the process of artificially enlarging a dataset by applying random transformations to the images. This approach has been used to increase the diversity of the data and reduce the dependence on large amounts of labeled data.
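The data augmentation idea mentioned above can be sketched in a few lines of NumPy: each labeled image is expanded into several variants via random label-preserving transforms (flips, rotations, small noise jitter). The image here is a random array standing in for a real leaf photo; production pipelines would typically use a library augmentation layer instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(img, rng):
    """Apply a random combination of label-preserving transforms."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                               # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1]                                  # vertical flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))       # 90-degree rotation
    img = np.clip(img + rng.normal(0, 0.02, img.shape), 0.0, 1.0)  # jitter
    return img

leaf = rng.random((64, 64, 3))                   # stand-in for one labeled photo
augmented = [augment(leaf, rng) for _ in range(8)]   # 1 image -> 8 variants
print(len(augmented), augmented[0].shape)
```

Because every transform preserves the class label, the augmented set enlarges the effective training data without any additional annotation effort.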

In conclusion, the application of ML and DL techniques in plant disease detection is a rapidly evolving field with promising results. While these techniques have demonstrated their potential to accurately identify and classify plant diseases, limitations and challenges remain. Further research is required to develop generalizable models and to make more datasets publicly available for training and evaluation. This review highlights the current state of research in this field and provides a comprehensive understanding of the benefits and limitations of ML and DL techniques for plant disease detection. Its novelty lies in the breadth of coverage of research published from 2015 to 2022, exploring various ML and DL techniques while discussing their advantages, limitations, and potential solutions to implementation challenges. By offering valuable insights into the current state of research in this area, the article serves as a resource for plant disease detection researchers, practitioners, and industry professionals seeking a thorough understanding of the subject.

The main contributions of this research article are as follows.

● This paper provides an overview of current developments in plant disease detection using ML and DL techniques. By covering research published between 2015 and 2022, it offers a comprehensive understanding of the state-of-the-art techniques and methodologies used in this field.

● This review examines various ML and DL methods for detecting plant diseases, including image processing, feature extraction, CNNs, and DBNs, and sheds light on their benefits and drawbacks, such as data availability, imaging quality, and the differentiation between healthy and diseased plants. The review shows that ML and DL techniques significantly increase the precision and speed of plant disease detection.

● Various datasets related to plant disease detection have been studied in the literature, including PlantVillage, the rice leaf disease dataset, and datasets for insects affecting rice, corn, and soybeans.

● The paper discusses various performance evaluation criteria used to assess the accuracy of plant disease detection models, including intersection over union (IoU), the dice similarity coefficient (DSC), and precision-recall curves.
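The two segmentation metrics named in the last point are simple set-overlap ratios between a predicted lesion mask and the ground-truth mask, and can be computed directly on boolean arrays:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    """Dice similarity coefficient: 2*intersection / (|pred| + |target|)."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

# Toy example: a predicted lesion mask partially overlapping the ground truth.
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True   # 16 px
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True     # 16 px, 9 overlap
print(iou(pred, truth), dice(pred, truth))   # 9/23 ~ 0.391 and 18/32 = 0.5625
```

Dice is always at least as large as IoU on the same pair of masks, which is why the two numbers are not directly comparable across papers that report different metrics.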

The article has seven main sections. A brief overview of plant disease and pest detection and its significance is provided in Section 1. The challenges and issues in plant disease and pest detection are discussed in Section 2. The deep learning approaches for recognizing images and their applications in plant disease and pest detection are presented in Section 3. The comparison of commonly used datasets and the performance of deep learning methods on different datasets are presented in Section 4. The challenges in existing systems are identified in Section 5. The discussion of plant disease and pest identification is presented in Section 6. Finally, the conclusion of the research work and future research directions are discussed in Section 7.

2 Plant disease and pest detection: Challenges and issues

2.1 Identifying plant abnormalities and infestations

Artificial Intelligence (AI) technologies have recently been applied in plant pathology to identify plant abnormalities and infestations. These technologies have the potential to transform how plant maladies are identified, diagnosed, and managed. In this section, we explore the various AI technologies that have been proposed for identifying plant abnormalities and infestations, their advantages and limitations, and their impact on the field of plant pathology. One of the most widely used AI technologies in plant pathology is ML. ML algorithms, such as the C4.5 classifier, tree bagger, and linear support vector machines, have been applied to the classification of plant diseases from digital images. These algorithms can be trained to recognize specific patterns and symptoms of diseases, making them suitable for classifying diseases in their early phases. However, ML algorithms require a substantial quantity of annotated training data and may not be suitable for previously unseen diseases.
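The three classifier families named above have close scikit-learn analogues, which the sketch below compares on synthetic feature vectors standing in for annotated leaf descriptors. Note the approximations: scikit-learn's entropy-criterion decision tree is C4.5-like rather than an exact C4.5 implementation, and `BaggingClassifier` over decision trees plays the role of a tree bagger.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic feature vectors standing in for annotated leaf descriptors
# (e.g., color/texture statistics); labels mark healthy vs. diseased.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           random_state=0)

models = {
    "decision tree (C4.5-style)": DecisionTreeClassifier(criterion="entropy",
                                                         random_state=0),
    "tree bagger": BaggingClassifier(n_estimators=25, random_state=0),
    "linear SVM": LinearSVC(),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, score in scores.items():
    print(f"{name}: {score:.2f}")
```

Cross-validated accuracy is the right comparison here because all three models depend on the annotated data mentioned above; none of them can be evaluated on disease classes absent from the training set.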

DL technologies, such as CNNs and DBNs, have also been proposed for identifying plant abnormalities and infestations. These technologies have shown promising outcomes in the detection and identification of lesions from digital images ( Kaur and Sharma, 2021 ; Siddiqua et al., 2022 ; Wang, 2022 ). DL models can automatically learn features from the images and can identify subtle symptoms of diseases that traditional image processing methods may not be able to detect. However, DL models require a significant volume of labeled training data and intensive computational resources, which may be a limitation for some applications. Another AI technology that has been applied to plant pathology is computer vision (CV). CV algorithms, such as object detection and semantic segmentation, can be used to identify and localize specific regions of interest in images, such as plant leaves and disease symptoms ( Kurmi and Gangwar, 2022 ; Peng and Wang, 2022 ). These algorithms automatically transform the images into recognizable patterns or characteristics and can be integrated with ML or DL algorithms for disease detection and classification. However, CV algorithms need a large amount of labeled image data for model training and may not be suitable for previously unseen diseases. Figure 1 comprises four images, each depicting a different stage of plant disease detection: the input image, the disease identification result, the lesion detection result, and the segmentation of the plant lesion.


Figure 1 (A) Input raw image, (B) leaf classification, (C) lesion detection, and (D) lesion segmentation.

AI technologies have shown promising results in identifying plant abnormalities and infestations. ML-, DL-, and CV-based systems are used for the classification and lesion segmentation of plant diseases from digital images and could significantly change how plant illnesses are discovered, diagnosed, and managed ( Akbar et al., 2022 ). However, these technologies need a considerable amount of annotated training data and may not be suitable for previously unseen diseases. Further research is needed to develop generalizable models that can be applied to different plant species and diseases, and to make more datasets publicly available for training and evaluating the models. Table 1 provides comprehensive information about the tools and technologies used for plant disease detection. It includes details about the various feature extraction methods, based on handcrafted and learned features, as well as the methods appropriate for processing small and large plant image datasets.


Table 1 Comparison of different technologies for image processing.

2.2 Evaluation of conventional techniques for identifying plant diseases and pests

In recent years, ML- and DL-based approaches have been increasingly applied to agriculture and botanical studies. These approaches have shown great potential in improving crop yield, identifying plant lesions, and optimizing plant growth, and compared with traditional approaches they offer several advantages that could revolutionize both fields. Traditional approaches in agriculture and botanical studies rely mainly on manual inspection and expert knowledge; such methods are often time-consuming, physically demanding, and susceptible to human error. In contrast, ML- and DL-based approaches can automate these tasks, reducing the need for human intervention and enhancing the precision and efficiency of the process.

ML- and DL-based approaches have been used to analyze large amounts of data, including images, sensor data, and weather data, to identify patterns and make predictions. For example, ML algorithms such as the C4.5 classifier and tree bagger are being used to predict crop yields, identify plant lesions and pests, and optimize plant growth ( Yoosefzadeh-Najafabadi et al., 2021 ; Cedric et al., 2022 ; Domingues et al., 2022 ). DL models, such as CNNs and DBNs, have been applied to plant lesion identification based on image analysis and classification, providing better accuracy and robustness than traditional image processing methods ( Sladojevic et al., 2016 ; Alzubaidi et al., 2021 ; Dhaka et al., 2021 ). ML- and DL-based approaches thus offer several advantages over traditional methods in agriculture and botanical studies: they can automate tasks, increase accuracy and efficiency, and analyze huge quantities of data. However, these methods require large amounts of labeled data and may not be suitable for lesions that have not been seen before. Further research is needed to develop generalizable models that can be applied to different crop species and conditions, and to make more datasets publicly available for training predictive models and validating their performance.

3 Deep learning approaches for recognizing images

DL approaches have become a promising method for detecting plant lesions. These techniques, based on deep neural networks, have demonstrated success by achieving high accuracy in identifying various plant lesions from images ( Xu et al., 2021 ). By automatically learning features from the images, DL models can accurately identify and classify different disease symptoms, reducing the need for manual feature engineering ( Drenkow et al., 2021 ). Additionally, these models can handle large amounts of data, making them well suited for large-scale plant lesion detection ( Arcaini et al., 2020 ). In this review, we evaluate the current state of the art in using DL for plant lesion recognition, examining the architectures, techniques, and datasets used in this field. Our aim is to provide a thorough understanding of current research in this area and identify potential future directions for improving detection precision and making identification systems more efficient.

3.1 Deep learning theory

The term "Deep Learning" (DL) was popularized following a 2006 Science article ( Sarker, 2021 ) describing a procedure for transforming high-dimensional data into low-dimensional codes using "autoencoder" networks. These networks are made up of layers trained to reconstruct high-dimensional input vectors. The weights of the network can be fine-tuned using gradient descent, but this method is only effective if the initial weights are close to a satisfactory solution. The article presents an effective weight initialization that enables deep autoencoder models to learn low-dimensional codes that are more effective than principal component analysis for reducing the dimensionality of data.

DL is a variant of ML that employs multi-layered neural networks to learn and represent complex patterns in data. It is extensively employed in object recognition, object detection, speech analysis, and speech-to-text transcription. In natural language processing, DL-based models are used for tasks such as language translation, text summarization, and sentiment analysis. DL is also used in recommendation systems to predict user preferences based on previous actions or interactions. Computer vision is a subfield of artificial intelligence concerned with building computers that can process and understand visual content from the world ( Liu et al., 2017 ).

In traditional manual image classification and recognition methods, the underlying characteristics of an image are extracted through hand-crafted features. These methods, however, are limited in their ability to capture the deep and complex characteristics of an image, because manual feature extraction relies heavily on the expertise of the individual conducting the analysis and is prone to errors and inconsistencies. Additionally, traditional manual methods cannot extract subtle or hidden features that may be present in an image. In contrast, DL-based image classification and recognition methods use artificial neural networks to extract image features automatically. These methods have proven highly effective at extracting complex and deep features from images and have been used in numerous applications such as object recognition, facial recognition, and image segmentation. Among the primary benefits of DL-based methods is their capacity to learn features autonomously from input data, rather than relying on manual feature engineering. This allows a model to learn more abstract and subtle features present in the image, leading to improved performance and greater accuracy. DL-based methods can also handle high-dimensional and complex data, making them particularly well suited to large-scale image datasets. In summary, traditional manual methods have limitations in extracting the deep and complex characteristics of an image, while DL-based methods have demonstrated greater efficiency and effectiveness by automatically extracting image features, handling high-dimensional and complex data, and learning the abstract and subtle features that may be present in an image ( Tran et al., 2015 ).

The DBN ( Hasan et al., 2020 ) is a type of unsupervised DL model composed of multiple layers of Restricted Boltzmann Machines (RBMs). In plant lesion and pest infestation detection, DBNs have been used to classify affected regions in plant images across various diseases and pest types and to extract features from images of plant leaves. Studies have shown that DBNs can achieve accuracy rates in the range of 96-97.5% in classifying images of plant leaves affected by diseases and pests.
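The RBM-stacking idea can be sketched with scikit-learn's `BernoulliRBM`. Two stacked RBMs approximate a DBN's greedy, layer-wise unsupervised front end (scikit-learn does not perform the final joint fine-tuning of a true DBN), and a logistic-regression head supplies the supervised classifier. Small grayscale digit images stand in here for labeled leaf patches.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

# Pixels are scaled to [0, 1] because BernoulliRBM models
# binary/probabilistic visible units.
X, y = load_digits(return_X_y=True)
X = X / 16.0
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Greedy layer-wise stack: each RBM learns features of the layer below;
# the logistic-regression head performs the final classification.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.06, n_iter=15,
                          random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.06, n_iter=15,
                          random_state=0)),
    ("head", LogisticRegression(max_iter=1000)),
])
dbn.fit(Xtr, ytr)
print("test accuracy:", dbn.score(Xte, yte))
```

Fitting the Pipeline trains each RBM on the transformed output of the previous one, mirroring the unsupervised pre-training that made DBNs practical before end-to-end CNN training became routine.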

The Deep Boltzmann Machine (DBM) ( Salakhutdinov & Larochelle, 2010 ) is a generative stochastic model that can be used for unsupervised learning to detect plant lesions. In conventional plant lesion and pest detection, DBMs have been used to predict labels for images of regions affected by viruses and insect pests and to extract features from images of plant leaves. Studies have shown that DBMs can achieve accuracy rates in the range of 96-96.8% in classifying images of plant leaves affected by diseases and pests.

The Deep Denoising Autoencoder (DDA) ( Lee et al., 2021 ) is a variant of the autoencoder, a neural network architecture composed of an encoder module and a decoder. In traditional plant disease and pest infestation detection, the DDA has been used for two different purposes: removing noise from plant leaf data and serving as a prediction system to identify plant disease. Studies have shown that DDAs can achieve accuracy rates of around 98.3% in classifying images of plant leaves affected by diseases and pests.
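The denoising objective can be demonstrated with a deliberately minimal linear autoencoder in NumPy: the network receives corrupted inputs but is trained to reconstruct the clean signals. The data here are synthetic low-dimensional signals invented for the example; real DDAs are deep and nonlinear, but the training target (clean output from noisy input) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean" signals with low-dimensional structure (a stand-in
# for leaf measurements), plus a noise-corrupted copy used as input.
basis = rng.normal(size=(4, 16))
codes = rng.normal(size=(200, 4))
X_clean = codes @ basis
X_noisy = X_clean + rng.normal(0.0, 0.5, X_clean.shape)

# Linear denoising autoencoder: encoder W1 (16 -> 4), decoder W2 (4 -> 16),
# trained by gradient descent to map noisy inputs back to clean targets.
W1 = rng.normal(0.0, 0.1, (16, 4))
W2 = rng.normal(0.0, 0.1, (4, 16))
lr, n = 5e-3, X_clean.shape[0]

def mse():
    return float(np.mean((X_noisy @ W1 @ W2 - X_clean) ** 2))

before = mse()
for _ in range(1000):
    H = X_noisy @ W1                        # encode corrupted input
    E = (H @ W2 - X_clean) / n              # error vs. CLEAN target
    g2 = H.T @ E                            # gradient w.r.t. decoder
    g1 = X_noisy.T @ (E @ W2.T)             # gradient w.r.t. encoder
    W2 -= lr * g2
    W1 -= lr * g1
print(f"reconstruction MSE: {before:.3f} -> {mse():.3f}")
```

Because the loss compares reconstructions against the clean signals, the bottleneck is forced to keep the structured part of the input and discard the noise, which is exactly the property exploited when a DDA is used to clean leaf data before classification.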

The deep CNN ( Shoaib et al., 2022a ; Shoaib et al., 2022b ) is a type of feedforward model consisting of several hidden convolutional and pooling layers; among DL models, CNNs achieve the highest detection accuracy on imaging data. A CNN consists of two blocks: a feature-learning block and a classification block. The feature-learning block extracts various kinds of features using convolutional layers, while learning and classification are performed in the fully connected layers. For plant disease classification, CNNs have proved more accurate than all other kinds of ML and DL methods. Studies have shown that CNNs can achieve accuracy rates in the range of 99-99.2% in classifying images of plant leaves affected by diseases and pests.

3.2 Convolutional neural network

CNNs are a type of DL model ideally suited for image classification tasks such as leaf disease detection ( Zhang et al., 2019 ; Lin et al., 2020 ; Stančić et al., 2022 ). The CNN architecture comprises multiple layers, including convolutional, max-pooling, normalization, and fully connected layers. The first layer is the input layer; it is typically followed by convolutional layers, which extract features by applying various 2D filters to the image. The resulting feature maps are then spatially reduced by pooling (also known as down-sampling) layers, yielding a more compact representation of the image. The fully connected (FC) layers process the extracted features for learning and weight optimization and are responsible for the final classification, which can be used to recognize various plant diseases. The learning process begins with training: images and their labels are fed to the CNN, and after successful training the model is able to identify disease types.

The decision-making process in a CNN for leaf disease detection starts with the input of a leaf image. The image is passed through the convolutional layers, where features are extracted. The feature maps are then processed by pooling layers, which reduce the spatial dimensions, and the resulting feature vectors are transmitted through the FC layers, where a decision is made about the presence of a disease or pest. The model outputs the probabilities that the leaf is diseased or healthy. CNNs are well suited to leaf disease detection thanks to their architecture of convolutional, down-sampling, and learnable layers ( Agarwal et al., 2020 ). The learning process involves training the network on labeled images of healthy and disease-affected plants. Figure 2 presents a framework for classifying plants as normal or abnormal using leaf data. The framework employs several different Inception architectures, and the final decision is made through a bagging-based approach.
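The convolution -> pooling -> fully-connected pipeline just described can be traced end to end in plain NumPy. The filters and FC weights here are random (in a trained network they would be learned), and the input is a random array standing in for a grayscale leaf image; the point is only the shape transformations and the final softmax probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2D convolution of a grayscale image with a bank of filters."""
    kh, kw = kernels.shape[1:]
    out = np.empty((len(kernels), img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for k, kernel in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def maxpool(fmaps, size=2):
    """Non-overlapping max pooling: halves each spatial dimension."""
    c, h, w = fmaps.shape
    return (fmaps[:, :h - h % size, :w - w % size]
            .reshape(c, h // size, size, w // size, size).max(axis=(2, 4)))

leaf = rng.random((28, 28))                  # stand-in for a grayscale leaf image
kernels = rng.normal(size=(4, 3, 3))         # four (normally learned) 3x3 filters
features = np.maximum(conv2d(leaf, kernels), 0.0)   # convolution + ReLU
pooled = maxpool(features)                   # 4 x 26 x 26 -> 4 x 13 x 13
logits = pooled.reshape(-1) @ rng.normal(size=(4 * 13 * 13, 2))  # FC head
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax: P(healthy), P(diseased)
print(pooled.shape, probs)
```

Each stage matches a step in the paragraph above: the 28x28 input shrinks to 26x26 feature maps after valid 3x3 convolution, pooling halves the spatial size, and the FC head reduces everything to two class probabilities.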


Figure 2 A CNN framework for classifying plants into healthy and unhealthy ( Shoaib et al., 2022a ).

3.3 Deep learning using open-source platforms

TensorFlow is a powerful library for dataflow and differentiable programming ( Abadi, 2016 ; Dillon et al., 2017 ) that allows for efficient computation across a range of hardware, including CPUs, GPUs, and TPUs. Its ability to create dataflow graphs, which describe how data moves through a computation, makes it a popular choice for ML and DL applications. Keras, in contrast, is a high-level DL library that operates atop TensorFlow (and formerly other backends). It simplifies the creation of DL models by providing a user-friendly API and a number of pre-built layers and functions, such as convolutional and pooling layers, which can easily be added to a model. In recent versions of TensorFlow (2.4 and above), the two are tightly integrated: TensorFlow provides the low-level operations for building and training models, while Keras provides a higher-level API that makes doing so easier. Used together, TensorFlow and Keras allow such problems to be solved effectively and efficiently.

PyTorch is another open-source library with extensive capabilities for developing ML and DL applications ( Zhao et al., 2021 ; Masilamani and Valli, 2021 ). It is a powerful library for building and training DL models, known for its flexibility and ease of use, which make it a popular choice among researchers and practitioners. One of the key features of PyTorch is its dynamic computational graph. Unlike libraries such as TensorFlow, which historically used a static computational graph, PyTorch allows the graph to be modified on the fly, making it more suitable for research and experimentation. Additionally, PyTorch supports distributed training, allowing large models to be trained efficiently on multiple GPUs. PyTorch also provides a number of pre-built modules, such as convolutional and recurrent layers, which can easily be added to a model, making it easy to quickly prototype and experiment with different architectures. In addition, PyTorch has a large community that shares pre-trained models, datasets, and tutorials, which makes the development process even more efficient.

Caffe (Convolutional Architecture for Fast Feature Embedding) is an open-source DL framework developed by the Berkeley Vision and Learning Center (BVLC) and community contributors ( Jia et al., 2014 ). It is a popular choice for image and video classification tasks such as object detection and video summarization, and is also valued for its speed and efficiency in training large models. Caffe is implemented in C++ and has a Python interface, which allows for easy integration with other Python libraries such as NumPy and SciPy and thus a high level of flexibility in designing and experimenting with DL models. One of the key features of Caffe is its ability to perform efficient convolutional operations, which are essential for computer vision tasks. Additionally, Caffe supports a wide range of DL models, such as CNNs, RNNs, and Transformer networks, and provides a number of pre-built layers and functions, such as convolutional and pooling layers, which can easily be added to a model ( Komar et al., 2018 ).

The Montreal Institute for Learning Algorithms (MILA) at the University of Montreal created Theano, an open-source library that provides several Python packages for ML and DL ( Bahrampour et al., 2015 ). It is widely used for DL and other numerical computations, and it is known for its ability to optimize and speed up computations on CPUs and GPUs. One of the key features of Theano is symbolic differentiation, which allows for the efficient computation of gradients during the training of DL models ( Chung et al., 2017 ). Additionally, Theano can automatically optimize computations, which allows for the efficient training of large models, and it provides a number of pre-built functions, such as convolutional and recurrent layers, which can easily be added to a model. Theano is implemented in Python, allowing easy integration with other Python libraries such as NumPy and SciPy, and thus a high level of flexibility in designing and experimenting with DL models.

Table 2 in the research article provides a comparison of several popular Artificial Intelligence (AI) frameworks. The table compares the technology, developer, auxiliary devices required, functionality, programming language, and popular applications of each framework. This information is valuable for researchers and practitioners in the field of AI, as it provides an overview of the various options available and the strengths and limitations of each framework. The data presented in Table 2 can be used to guide the selection of an appropriate AI framework for a specific task or application.


Table 2 Comparison of popular artificial intelligence frameworks.

3.4 Deep learning based plant lesion and pests detection system

This section focuses on the application of DL methods for segmenting plant lesions and detecting pest infestation in botany and agriculture. With the increasing demand for food and the need for sustainable agricultural practices, the prompt identification and handling of plant illnesses and pests is crucial for ensuring crop yields and maintaining crop health. DL, with its ability to process large amounts of data and to learn from it, has proven to be a robust tool for detecting plant diseases and pest infestation. Here we present a comprehensive overview of the state-of-the-art DL methods developed for this purpose, including methods for image-based disease and pest detection as well as data-driven detection using sensor and other data. We also discuss the challenges and limitations of these methods and provide insights into future research directions. In particular, we cover recent advancements in DL for disease and pest detection, including CNNs, recurrent neural networks, and transfer learning techniques. These DL methods have proven effective in detecting plant diseases and pest infestation with high accuracy, which can support farmers and agricultural professionals in taking appropriate action to prevent crop losses.

3.4.1 Classification network

This section discusses the various Convolutional Neural Network (CNN) models that have been used to identify plant diseases and pest infestation. The first is AlexNet ( Antonellis et al., 2015 ), a CNN model developed in 2012. AlexNet won the ImageNet classification challenge by achieving the highest accuracy on the 1,000-class dataset. It is known for its high accuracy and speed, and it has been used for a variety of tasks, including plant disease detection. Another popular CNN model is VGG ( Soliman et al., 2019 ), developed in 2014 by the Visual Geometry Group at the University of Oxford. VGG is known for its high accuracy and is often used for image classification tasks; it has been employed to detect plant lesions by extracting hidden patterns from plant leaf data.

ResNet ( Szymak et al., 2020 ), developed by Microsoft Research Asia in 2015, is known for its ability to handle very deep networks. It has been used for plant disease detection by applying pre-trained ResNet models to plant images. GoogLeNet ( Wang et al., 2015 ), developed by Google in 2014, is known for its high accuracy and efficient use of computational resources, as is InceptionV3, developed by Google in 2015; both have been used for plant disease detection by fine-tuning pre-trained models on plant images. DenseNet ( Tahir et al., 2022 ), introduced by ( Huang et al., 2017 ), is known for its ability to handle very deep networks and its efficient use of computational resources, and it too has been used for plant disease detection via fine-tuning of pre-trained models. These CNN models differ in their architectures, sizes, shapes, and numbers of parameters. While AlexNet, VGG, GoogLeNet, InceptionV3, and DenseNet have been widely used for plant disease detection, ResNet is distinguished by its very deep architecture. All of these models have proven effective in detecting plant diseases and pests based on characteristics such as size, shape, and color, and they can be used to extract features from plant images that serve to train a classifier for different diseases and pests.

3.4.2 CNN as features descriptor

The article ( Sabrol, 2015 ) “Recent Research on Image Processing and Soft Computing Approaches for Identifying and Categorizing Plant Diseases using CNNs” discusses the use of CNNs for recognizing and classifying plant diseases. The authors review various studies that have used CNNs, which are a type of DL algorithm, to detect and diagnose plant diseases. They also discuss the challenges and limitations of using CNNs, such as the need for large amounts of data, the high computational requirements, and the potential for overfitting. The article concludes by highlighting the potential for further research in this area and the importance of developing accurate and reliable plant disease recognition and classification systems using CNNs.

This research article presents a CNN architecture for determining crop variety from image sequences obtained from advanced agro-observation stations ( Yalcin and Razavi, 2016 ). The authors address challenges related to lighting and image quality with preprocessing steps. They then employ the CNN architecture to extract features from the images, highlighting the importance of the construction and depth of the CNN in determining the recognition capability of the network. The accuracy of the presented model is compared with that of a support vector machine (SVM) classifier using feature extractors such as Local Binary Patterns (LBP) and the Gray-Level Co-occurrence Matrix (GLCM). The approach is tested on a dataset collected through a government-supported project in Turkey that includes over 1,200 agro-stations, and the experimental outcomes affirm the efficiency of the suggested technique.

A novel meta-architecture is proposed that utilizes a CNN designed to distinguish between healthy and diseased plants ( Fuentes et al., 2017b ). The authors employed multiple feature extractors within the CNN to analyze input images and divide them into their corresponding categories. Similarly, a CNN-based approach for the identification of eight classes of rice viruses is presented in ( Hasan et al., 2019 ). The authors performed feature extraction with the learned CNN model and fed the features, along with their corresponding labels, into a linear multiclass support vector machine (SVM) for training. The trained model achieved a validation accuracy of 97.5%.

3.4.3 CNN-based predictive systems

In the area of plant illness and pest identification, CNNs have been extensively utilized. One of the first applications of CNNs in this field was the identification of lesions in plant images using classification networks. The method involves training CNNs to recognize specific patterns or features in the input image that are associated with various diseases or pests. After training, the network can be used to classify new images as diseased or healthy. The classification of raw images is a straightforward process that uses the entire image as input to the CNN. However, this approach may be limited by irrelevant information or noise in the image, which can negatively impact the performance of the network. To address this problem, investigators have proposed a region of interest (ROI) based approach, in which the model is trained to classify specific regions of the image that contain the lesion, rather than the entire image. Multi-category classification is another area of research in this field, which involves training CNNs to recognize multiple types of diseases or pests in the same image. This approach can be more challenging than binary classification, as it requires CNNs to learn more complex and diverse patterns in the input images.

The first broad application of CNNs for plant pest and disease detection was the identification of lesions using classification networks. Current research issues include the classification of raw images, classification following recognition of regions of interest (ROI), and multi-category classification. Using neural models such as CNNs for direct classification can be a highly effective strategy in plant pest identification, since a CNN is a DL model ideally suited to image classification problems: it automatically learns image features.

To train a network constructed from scratch, a labeled collection of photos of diseased and healthy plants is required. The database must cover a variety of pests and diseases, plant growth stages, and environmental conditions. The team can then design the network architecture and choose relevant parameters based on the specific features of the target plant pests and diseases. Alternatively, with transfer learning, a CNN model that has already been trained can be adapted using data from the specific plant pest detection task. This method is less computationally intensive and requires less labeled data, because the pre-trained network has already acquired generic features from huge datasets. Notably, transfer learning enables teams to harness a model trained on extensive, varied datasets that has been demonstrated to perform well on similar tasks.

The weight parameters of multi-objective disease and pest classification networks, obtained through binary learning between healthy and infected samples as well as pests, are shared across objectives. A CNN model that integrates basic metadata was designed, allowing a single multi-crop model to be trained to identify 17 diseases across five crops ( Picon et al., 2019 ). The proposed model accomplishes the following goals:

1. Achieves richer and more stable shared visual features than a single-crop model.

2. Is unaffected by diseases that cause similar symptoms in different crops.

3. Seamlessly integrates the crop context when classifying crop diseases.

Experiments show that the proposed model eliminates 71% of classification errors and mitigates data imbalance; with balanced data, it reaches an average accuracy of 98%, surpassing the performance of other models.

3.5 Identifying lesion locations through neural network analysis

Images are typically processed and labeled using a classification network. However, it is also possible to combine various strategies to determine the location of affected areas and perform pixel-level classification. Commonly used methods for this purpose include the sliding window approach, the heat map technique, and the multitask learning network. These methods analyze the input image and identify the specific regions that correspond to lesions.

The sliding window method is a widely utilized technique for locating and classifying elements within an image. A small window is moved across the image, and each window is analyzed by a classification network. The technique is particularly useful for detecting localized features, such as lesions in plant photos. In one study, a CNN classification network incorporating the sliding window method was used to develop a system for the identification of plant diseases and pests ( Tianjiao et al., 2019 ). The system combines ML, feature fusion, identification, and location regression estimation through sliding window technology, and it identified 70-82% of 29 typical symptoms when used in the field.
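The window-scanning step above can be sketched in a few lines. The "detector" here is a trivial brightness test standing in for a CNN classifier applied to each patch; the window size, stride, and image are illustrative values.

```python
import numpy as np

def sliding_windows(image, win=64, stride=32):
    """Yield (y, x, patch) for each window position over a 2-D image."""
    H, W = image.shape[:2]
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            yield y, x, image[y:y + win, x:x + win]

# Toy "lesion" detector: flag windows whose mean intensity is low,
# standing in for a CNN classifier scoring each patch.
img = np.ones((128, 128))
img[40:90, 40:90] = 0.1  # dark square standing in for a lesion

hits = [(y, x) for y, x, patch in sliding_windows(img)
        if patch.mean() < 0.5]
print(hits)  # [(32, 32)] -- the window centered on the lesion
```

A real system would batch the patches through the trained network and map window coordinates with high scores back to lesion locations in the original image.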

The heat map illustrates the importance of various regions within an image: the darker the hue, the greater the importance of that region. Specifically, darker tones on the heat map indicate a higher likelihood of lesion presence in plants affected by diseases and pests. In a study by ( Dechant et al., 2017 ), a convolutional neural network (CNN) was trained to generate heat maps of maize disease images, which were then used to classify the entire image as infected or non-infected. Creating a heat map for a single image takes approximately 2 minutes and requires 2 GB of memory, whereas classification from a set of three heat maps takes less than a second and requires 600 bytes of memory. The results showed an accuracy rate of 98.7% on the test data set. In a separate study, ( Wiesner-hanks et al., 2019 ) used the heat map approach to identify contour zones of maize disease with 96.22% accuracy. This method is highly precise and can identify lesions as small as a few millimeters, making it the most advanced method of aerial plant disease detection to date.
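A lesion heat map of this kind can be built by aggregating per-patch scores back onto the pixel grid. This is a generic sketch, not the cited papers' method: a brightness-based score stands in for a CNN's per-patch lesion probability, and all sizes are illustrative.

```python
import numpy as np

def patch_heatmap(image, win=32, stride=16):
    """Aggregate per-patch lesion scores into a per-pixel heat map."""
    H, W = image.shape
    heat = np.zeros((H, W))
    count = np.zeros((H, W))
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            # Darker patch => more lesion-like (stand-in for a CNN score).
            score = 1.0 - image[y:y + win, x:x + win].mean()
            heat[y:y + win, x:x + win] += score
            count[y:y + win, x:x + win] += 1
    return heat / np.maximum(count, 1)  # average overlapping contributions

img = np.ones((96, 96))
img[30:60, 30:60] = 0.0  # dark region standing in for a lesion
heat = patch_heatmap(img)
print(heat[45, 45] > heat[5, 5])  # True: map peaks inside the dark region
```

Averaging over overlapping windows is what smooths the map and lets lesion boundaries emerge at finer resolution than any single window.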

A multitask learning network is capable of both classifying and segmenting plant diseases and pests. Unlike a pure predictive model, which can only categorize images at the image level, multitask networks add a branch that can accurately locate the affected region of plant diseases. This is achieved by sharing the results of feature extraction between the two branches. As a result, the multitask learning network uses a detection hierarchy to generate precise lesion detection results, which reduces the sampling requirements for the classification network. In a study by ( Shougang et al., 2020 ), a deconvolution-guided VGGNet model (DGVGNet) was developed to detect plant leaf diseases under shadows, occlusions, and varying light levels. The deconvolution step redirects the CNN classifier’s attention to the precise locations of the lesions, resulting in a highly robust model with a disease class identification accuracy of 97.81%, a lesion segmentation pixel accuracy of 96.44%, and a disease class recognition accuracy of 98.15%.

Figure 3 presents the architecture of the CANet neural network utilized for plant lesion detection and segmentation. The figure provides a visual representation of the network’s components and structure, such as the input layer, intermediate hidden layers, and the final output layer. This information is valuable for researchers and practitioners who are interested in understanding the underlying mechanics of the CANet network and how it performs lesion detection and segmentation.


Figure 3 CANet neural network-based disease detection and ROI segmentation ( Shoaib et al., 2022b ).

Table 3 provides a comparison of the pros and cons of various object detection and classification methods for identifying diseases in the leaves of plants. The table compares five methods including Convolutional Neural Networks (CNNs), Transfer learning with CNNs, Multitasking learning networks, Deconvolution-guided VGNet (DGVGNet), and traditional methods such as manual inspection and microscopy. This information is valuable for researchers and practitioners in the area of identifying plant lesions, as it provides a comprehensive comparison of the strengths and limitations of each method, enabling them to make informed decisions about which method is most suitable for their needs. The data presented in Table 3 can act as a guide for future studies and development in the field of plant disease detection.

The research community as a whole has come to acknowledge the utility of classification network systems for the detection of plant pests, and a significant amount of research is currently being carried out in this field. Table 3 offers a full comparison of the several sub-methods that make up the classification network family, showing the benefits and drawbacks of each option ( Mohanty et al., 2016 ; Brahimi et al., 2018 ; Garcia and Barbedo, 2019 ). It is essential to keep in mind that the most effective method will change depending on the particular use case and the available resources. Note also that while this table illustrates the performance of each approach, it should not be considered an exhaustive comparison, as the results may differ depending on the particular data sets and environmental conditions used.


Table 3 Comparison of pros and cons of various object detection and classification methods for plant leaf disease detection.

3.5.1 Object detection networks for plant lesion detection

Object localization is a fundamental task in computer vision and is closely associated with the traditional detection of plant pests. The objective of this task is to learn the location of objects and their corresponding categories. In recent years, various algorithms for object detection based on DL have been developed. These include single-stage networks such as SSD ( W. Liu et al., 2016 ) and YOLO ( Dumitrescu et al., 2022 ; Peng and Wang, 2022 ; Shoaib and Sayed, 2022 ), as well as multi-stage (two-stage) networks such as Faster R-CNN ( Nasirahmadi et al., 2021 ). These techniques are commonly employed in the identification of plant lesions and pests. The single-stage network uses network features to directly predict the location and class of lesions, whereas the two-stage network first generates candidate boxes (proposals) that may contain lesions before proceeding to object detection.
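Whichever detector is used, its predicted boxes are matched to ground-truth lesion boxes by bounding-box overlap, the intersection over union (IoU); a detection typically counts as correct when the IoU exceeds a threshold such as 0.5. A minimal implementation with illustrative boxes:

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

pred, truth = (10, 10, 50, 50), (30, 30, 70, 70)
print(round(box_iou(pred, truth), 4))  # 0.1429 -- would be rejected at IoU >= 0.5
```

The same measure, averaged over detections and thresholds, underlies the mean average precision (mAP) figures quoted for the detectors discussed below.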

3.5.2 Pest and plant lesion localization using multi-stage network

Faster R-CNN is a two-stage object detection system that uses a common feature extractor to obtain a feature map from an input image. The network then utilizes a Region Proposal Network (RPN) to calculate anchor box confidences and generate proposals. The feature maps of the proposed regions are then passed to the ROI pooling layer to refine the initial detection results and finally determine the location and type of the lesion. This method improves upon traditional structures by incorporating modifications to the feature extractor, anchor ratios, ROI pooling, and loss functions that are tailored to the specific characteristics of plant disease and pest infestation detection. In a study by ( Fuentes et al., 2017a ), the Faster R-CNN was used for the first time to accurately locate tomato diseases and pest infestations in a dataset containing 4800 images of 11 different categories. When using deep feature extractors like VGG-Net and ResNet, the mean average precision (mAP) reached 88.66%.

The YOLOv5 architecture is visually represented in Figure 4 , which depicts its structure and organization. The network comprises three primary components: the input layer, the hidden layers, and the output layer. The input layer is where data is initially fed into the network for processing. The hidden layers execute complex computations and transformations on the input data, and their design plays a critical role in determining the network’s accuracy. The output layer generates the final predictions, outputting bounding boxes and class probabilities for the objects detected in the input image. The figure’s labels and annotations explain how the network’s components interact, helping researchers and developers understand the network’s mechanics and identify areas for performance enhancement. ( Liu and Wang, 2021b ) suggested a modification of the Faster R-CNN framework to automatically detect beet spot lesions by altering the parameters of the CNN model; 142 images were used for testing and validation, resulting in an overall correct classification rate of 96.84%. ( Zhou et al., 2019 ) proposed a rapid detection system for rice diseases by integrating the FCM-Kmeans and YOLOv2 algorithms. Evaluated on 3,010 images, the system showed a detection accuracy of 97.33% with a processing time of 0.18 s for rice blast, 93.24% accuracy and 0.22 s for bacterial blight, and 97.75% accuracy and 0.32 s for sheath burn. ( Xie et al., 2020 ) proposed the DR-IACNN model based on the Faster R-CNN mechanism: a custom dataset of grape leaf lesions (GLDD) was developed, and the detector employed the Inception-v2 and Inception-ResNet-v2 architectures.
The proposed model showed a mean average precision (mAP) of 83.7% and a detection rate of 12.09 frames per second. The two-stage detection network was designed to improve the real-time performance and practicality of the detection system, but it still lags behind one-stage detection models in speed.


Figure 4 YOLOv5 architecture ( Li et al., 2022 ).

3.5.3 One-stage network based plant lesion detection

In recent years, object detection has become an essential tool for diagnosing plant afflictions and pests. YOLO (You Only Look Once) is one of the most widely used object detection techniques. It is a real-time, single-pass object detector that utilizes a single CNN to predict the category and position of objects in an image. Variations of the YOLO algorithm, such as YOLOv2 and YOLOv3, among other methods, have been developed to enhance the accuracy of object recognition while maintaining real-time performance. Another popular object detection technique is SSD (Single Shot MultiBox Detector), which, like YOLO, uses a single CNN to predict the type and position of objects in an image. However, SSD makes its predictions from multiple feature maps at different scales, making it better suited to identifying small objects with greater precision than YOLO.

Faster R-CNN is a two-stage object detection system that generates a set of potential object regions using a Region Proposal Network (RPN), and then uses a separate CNN to classify and locate objects within these proposals. Despite being slower than YOLO and SSD, Faster R-CNN has been shown to achieve a higher level of accuracy. When it comes to detecting plant diseases and pests, YOLO, SSD, and Faster R-CNN are all commonly used methods. The choice of algorithm will depend on the specific requirements of the application, such as accuracy, speed, and memory consumption. For real-time applications that prioritize speed, YOLO may be the best option, but for applications that require a higher level of accuracy, SSD and Faster R-CNN may be more suitable.

In this study ( Singh et al., 2020 ), the authors explore the potential of utilizing computer vision techniques for the early and widespread detection of plant diseases. To aid in this effort, a custom dataset, named PlantDoc, was developed for visual plant disease identification. The dataset includes 3,451 data points across 12 plant species and 14 disease categories and was created through a combination of web scraping and human annotation, requiring 352 hours of effort. To demonstrate the effectiveness of the dataset, three plant disease classification models were trained and results showed an improvement in accuracy of up to 29%. The authors believe that this dataset can serve as a valuable resource in the implementation of computer vision methods for plant disease detection.

( Zhang et al., 2019 ) proposed a novel approach to the detection of small agricultural pests by combining an improved version of the YOLOv3 algorithm with a spatial pyramid pooling technique. This method addresses the issue of low recognition accuracy caused by the variable posture and scale of crop pests by applying deconvolution, combining oversampling and convolution operations. This approach allows for the detection of small samples of pests in an image, thus enhancing the accuracy of the detection. The method was evaluated using 20 different groups of pests collected in real-world conditions, resulting in an average identification accuracy of 88.07%. In recent years, many studies have employed detection networks to classify pathogens and pests ( Fuentes et al., 2017a ). It is expected that in the future, more advanced detection models will be utilized for the identification of plant maladies and infestations, as object segmentation networks in computer vision continue to evolve.

In recent times, the detection of plant maladies and infestations has increasingly relied upon two-stage models, which prioritize accuracy. However, there is a growing trend towards single-stage models, which prioritize speed. There has been debate over whether detection networks can replace classification networks in this field. The primary goal of a detection network is first to identify the presence and location of plant maladies and infestations, whereas the goal of a classification network is to categorize these diseases and pests. Note that the detection network also provides the specific category of the diseases and pests it identifies, and accurately locating areas of plant disease and pest infestation requires detailed annotation. From this perspective, the detection network appears to subsume the steps of the classification network. However, the predetermined categories of plant diseases and pests do not always align with actual conditions: a detection network may respond to patterns that merely indicate the presence of some kind of disease or pest in an area, without accurately identifying which one. In such cases, the use of a classification network remains necessary. In conclusion, both classification networks and detection networks are important for efficient plant disease and pest detection, but classification networks have more capabilities than detection networks.

3.6 Deep learning-based segmentation network

The segmentation network transforms the task of detecting plant and pest diseases into semantic segmentation, which separates lesions from healthy areas. By delineating the lesion area, it obtains the position, class, and associated geometric properties (including length, width, area, contour, center, etc.). Common segmentation networks include Mask R-CNN ( Lin et al., 2020 ) and fully convolutional networks (FCNs) ( Shelhamer et al., 2017 ).
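Once a segmentation network produces a binary lesion mask, the geometric properties listed above follow directly from the mask. A minimal numpy sketch with a hypothetical rectangular lesion:

```python
import numpy as np

mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 30:80] = True  # hypothetical lesion region

ys, xs = np.nonzero(mask)
area = int(mask.sum())                                # lesion area in pixels
y1, y2, x1, x2 = ys.min(), ys.max(), xs.min(), xs.max()
length, width = int(y2 - y1 + 1), int(x2 - x1 + 1)    # bounding-box extent
center = (float(ys.mean()), float(xs.mean()))          # centroid

print(area, length, width, center)  # 2000 40 50 (39.5, 54.5)
```

For real, irregular lesions the same quantities are computed per connected component, and the contour is recovered from the mask boundary.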

3.6.1 Fully convolutional network

A fully convolutional network (FCN) is used for semantic segmentation of the image. The FCN uses convolution to extract and encode the input image features, then deconvolution or upsampling to gradually restore the feature map to the original image size. FCNs underlie almost all semantic segmentation models today. DL-based plant and pest disease segmentation methods can be categorized as conventional FCN, U-Net ( Navab et al., 2015 ), and SegNet ( Badrinarayanan et al., 2017 ), according to variations in the FCN architecture.

A proposed technique for the segmentation of maize leaf disease employs a fully convolutional network (FCN) ( Wang and Zhang, 2018 ). The process begins with preprocessing and enhancing the captured image data, followed by the creation of training and test sets for DL. The centralized image is then input into the FCN, where feature maps are generated through multiple layers of convolution, pooling, and activation. The feature map is then upsampled to match the dimensions of the input image. The final step restores the segmented image’s resolution through deconvolution, producing the segmentation output. This method was applied to segment common maize leaf disease images, and the segmentation effect was found to be satisfactory, with an accuracy rate exceeding 98%.

The proposed approach employs an improved fully convolutional network (FCN) to precisely segment spot regions from crop leaf images with complicated backgrounds ( Wang et al., 2019 ). The strategy addresses the difficulty of reliably identifying diseased spots in complicated field situations. The proposed system is trained on a collection of crop leaf pictures with healthy and diseased sections. The algorithm’s performance is tested using measures such as accuracy and intersection over union (IoU) to determine its ability to effectively partition lesion regions from pictures. The experimental findings demonstrate that the algorithm segments the spot area in complicated-background crop leaf images with great precision.

U-Net is a popular CNN architecture for image segmentation tasks. The architecture is named U-Net because it is U-shaped, with encoder and decoder sections connected by a bottleneck ( Shoaib et al., 2022a ). The encoder section of the network consists of a series of convolutional and pooling layers that extract features from the input image. These features pass through the bottleneck into the decoder, where they are upsampled and concatenated with the corresponding feature maps from the encoder via skip connections. This allows the network to use both low-level and high-level image features when making predictions. The decoder then uses these concatenated feature maps to generate the final segmentation map. The U-Net architecture is particularly useful for image segmentation tasks because it handles class imbalance well, where some areas of the image contain more target objects than others.
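The encoder/bottleneck/decoder layout with skip connections can be made concrete with a deliberately tiny PyTorch sketch (two resolution levels, small channel counts); a real U-Net is deeper and wider, but the wiring is the same.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 upsampled + 32 skip channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                 # encoder, full resolution
        e2 = self.enc2(self.pool(e1))     # encoder, 1/2 resolution
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)              # per-pixel class logits

net = TinyUNet()
out = net(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 2, 64, 64])
```

The `torch.cat` calls are the skip connections: they are what inject the encoder's low-level detail back into the decoder so lesion boundaries stay sharp in the output map.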

This paper proposes a semantic segmentation model that uses CNNs to recognize and segment powdery mildew at the individual pixel level in images of cucumber leaves ( Lin et al., 2019 ). The suggested model obtains an average pixel accuracy of 97.12%, an intersection over union (IoU) score of 79.54%, and a Dice accuracy of 81.54% on 20 test samples. These results demonstrate that the proposed model outperforms established segmentation techniques such as the Gaussian mixture model, random forests, and fuzzy c-means. Overall, the proposed model can accurately detect powdery mildew on cucumber leaves at the pixel level, making it a valuable tool for cucumber breeders to assess the severity of powdery mildew.

A novel approach to detect vineyard mildew is proposed, which utilizes DL segmentation on Unmanned Aerial Vehicle (UAV) images ( Kerkech et al., 2022 ). The method involves combining visible and infrared images from two different sensors and using a newly developed image registration technique to align and fuse the information from the two sensors. A fully convolutional neural network is then applied to classify each pixel into different categories, such as shadow, ground, healthy, or symptom. The proposed method achieved an impressive detection rate of 89% at the vine level and 84% at the leaf level, indicating its potential for computer-aided disease detection in vineyards.

3.6.2 Mask regional-CNN

Mask R-CNN is an effective DL model that is well suited to plant pest detection. It is an extension of the Faster R-CNN model and can recognize objects and segment instances ( Permanasari et al., 2022 ). The primary advantage of Mask R-CNN over models such as YOLO and SSD is its capacity to produce object masks that allow more precise localization of objects in the image. This is especially beneficial for detecting plant pests, as it enables more precise identification of afflicted areas. In addition, Mask R-CNN is able to handle overlapping object instances, which is a common issue in plant pest detection due to the presence of several instances of the same pest and disease in a single image. This makes Mask R-CNN a highly adaptable model that is appropriate for a variety of plant pest identification applications.

In this study ( Stewart et al., 2019 ), a Mask R-CNN was utilized to segment northern leaf blight lesions in UAV-captured images. The model was trained on a specific data set and recognizes and segments individual lesions in the test set with precision. The average intersection over union (IoU) between the ground truth and the predicted lesions was 79.31%, and the average precision was 97.24% at an IoU threshold of 60%. In addition, the average precision over IoU thresholds ranging from 55% to 90% was 65%. This study illustrates the potential of combining drone technology with advanced DL-based instance segmentation techniques to offer precise, high-throughput quantitative measures of plant diseases.

Using deep CNNs and object detection models, the authors of this paper offer two strategies for tomato disease detection ( Wang et al., 2019 ), based on two distinct detectors, YOLO and SSD. The YOLO detector is used to categorize tomato disease types, while the SSD model is used to classify and separate the ROI-contaminated areas on tomato leaves. Four distinct deep CNNs are combined with the two object detection models in order to obtain the optimal model for tomato disease detection. A dataset is gathered from the Internet and then split for experimental purposes into training, validation, and test sets. The experimental findings demonstrate that the proposed approach can accurately and effectively identify eleven tomato diseases and segment contaminated leaf areas.

4 Comparing datasets and evaluating performance

This section starts by providing an overview of the evaluation metrics for DL models, specifically focusing on those that pertain to plant disease and pest detection. It then delves into the various datasets that are relevant to this field, and subsequently, conducts a thorough analysis of the recent DL models that have been proposed for the detection of plant diseases and pests.

4.1 Evaluating plant disease detection using benchmark datasets

The PlantVillage dataset is a compilation of crop photos with labels indicating the presence of various illnesses ( Hughes and Salathé, 2015 ). It features 38,000 photos of 14 distinct crops, including, among others, tomatoes, potatoes, and peppers. The photographs were gathered from many sources, including public databases, research institutions, and individual contributors. The dataset is divided into a training set, a validation set, and a test set, with the training set including the majority of the photos. The scientific community uses this dataset extensively to develop and evaluate DL models for plant disease detection. Figure 5 showcases a selection of images obtained from the PlantVillage dataset, which is a comprehensive dataset containing thousands of images of various plant species. These images depict a wide range of plant conditions, such as healthy plants, plants affected by pests, and plants afflicted by various diseases, which enables researchers and practitioners to gain a comprehensive understanding of the variability in plant growth and development. Moreover, the diverse range of plant species represented in this figure provides an in-depth and realistic representation of the variability in plant types. The images included in this figure capture the nuanced differences in plant morphology, such as leaf shape, color, and texture, which can be useful for developing and validating deep learning models for plant disease detection. The AgriVision collection ( Chiu et al., 2020 ), which contains photos of numerous crops and their diseases, and the Plant Disease Identification dataset, which contains photographs of damaged and healthy plant leaves, are two other significant datasets.


Figure 5 Some random images from the PlantVillage dataset ( Hughes and Salathé, 2015 ).

Figure 6 showcases a selection of random images obtained from the Agri-Vision dataset. These images depict various crops and their growth conditions, including both healthy and diseased plants. This figure serves as a visual representation of the types of data available in the Agri-Vision dataset, providing insight into the range and diversity of data contained within the dataset. The Crop Disease dataset comprises photos of 14 crops affected by 27 diseases, whereas the Plant-Pathology-2020 dataset provides images of plant leaves damaged by 38 diseases. All of these datasets are widely utilized by the research community and contribute to the creation and evaluation of DL models for plant disease detection.


Figure 6 Some random images from the Agri-Vision dataset ( Chiu et al., 2020 ).

Table 4 provides a summary of benchmark datasets commonly used for plant disease and pest detection. The table includes information on the name of the dataset, a brief description, the type of data contained within the dataset, and the types of diseases and pests covered. This information is valuable for researchers and practitioners who are looking to evaluate or compare their algorithms or models against existing datasets.


Table 4 Plant disease and pest detection from benchmark datasets.

4.2 Evaluation indices

There are several performance metrics commonly used for evaluating the performance of plant disease classification, detection, and segmentation models. Figure 7 displays an example of a confusion matrix, a widely used evaluation metric in machine learning. The matrix represents the results of a classification algorithm, where each row represents the predicted class of a given sample and each column represents the actual class of that sample. The entries in the matrix show the number of samples that have been correctly or incorrectly classified. By examining the entries in the confusion matrix, it is possible to gain insight into the performance of the classification algorithm and identify areas for improvement.
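A confusion matrix with the row/column convention used in Figure 7 (rows = predicted class, columns = actual class) can be built in a few lines; the label vectors below are illustrative.

```python
import numpy as np

def confusion_matrix(y_pred, y_true, n_classes):
    """Rows = predicted class, columns = actual class (as in Figure 7)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for p, t in zip(y_pred, y_true):
        cm[p, t] += 1
    return cm

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_pred, y_true, 3)
print(cm)
# [[1 0 1]
#  [1 2 0]
#  [0 0 1]]
```

Correct classifications sit on the diagonal (4 of 6 samples here); every off-diagonal entry names a specific confusion, e.g. `cm[0, 2] == 1` means one class-2 sample was predicted as class 0.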


Figure 7 An example of a confusion matrix where the rows show the predicted results while columns represent actual classes.
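As a minimal illustration of this convention (a sketch with hypothetical labels, not code from any surveyed paper), a confusion matrix can be accumulated directly from predicted and actual class indices:

```python
def confusion_matrix(predicted, actual, n_classes):
    """Rows index the predicted class, columns the actual class,
    matching the convention of Figure 7."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for p, a in zip(predicted, actual):
        cm[p][a] += 1
    return cm

# 0 = healthy, 1 = diseased (hypothetical labels)
predicted = [0, 1, 1, 0, 1, 0]
actual    = [0, 1, 0, 0, 1, 1]
cm = confusion_matrix(predicted, actual, n_classes=2)
```

The diagonal entries hold the correctly classified samples; off-diagonal entries locate the errors.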

Accuracy: This is the proportion of correctly classified instances out of the total number of instances. Mathematically, it is represented as Accuracy = (TP + TN)/(TP + TN + FP + FN), where TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives, respectively.

Precision: This is the proportion of correctly classified positive instances out of the total number of predicted positive instances. Mathematically, it is represented as Precision = TP/(TP + FP).

Recall (Sensitivity): This is the proportion of correctly classified positive instances out of the total number of actual positive instances. Mathematically, it is represented as Recall = TP/(TP + FN).

F1 Score: This is the harmonic mean of precision and recall. Mathematically, it is represented as F1 = 2 × (Precision × Recall)/(Precision + Recall).
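The four classification metrics above can be computed directly from confusion-matrix counts. The sketch below uses hypothetical counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F1 from raw TP/FP/TN/FN counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts from a binary disease classifier
acc, prec, rec, f1 = classification_metrics(tp=80, fp=20, tn=90, fn=10)
```

Note that precision and recall trade off against each other; F1 penalizes a model that is strong on only one of the two.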

Intersection over Union (IoU): This is used to evaluate the performance of segmentation models. It is the ratio of the area of intersection of the predicted segmentation and the ground truth segmentation to the area of the union of the two. Mathematically, it is represented as IoU = |A ∩ B|/|A ∪ B|, where A is the predicted segmentation and B is the ground truth segmentation.

Dice coefficient: This is another metric used for evaluating segmentation performance. It is a measure of the similarity between the predicted segmentation and the ground truth segmentation, and it ranges from 0 to 1. Mathematically, it is represented as Dice = 2|A ∩ B|/(|A| + |B|), where A is the predicted segmentation and B is the ground truth segmentation.

Jaccard index: This is another metric used for evaluating segmentation performance. It is the ratio of the area of intersection of the predicted segmentation and the ground truth segmentation to the area of the union of the two. Mathematically, it is represented as Jaccard = |A ∩ B|/|A ∪ B|, where A is the predicted segmentation and B is the ground truth segmentation; for binary masks this is identical to IoU.
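For binary segmentation masks, IoU/Jaccard and Dice reduce to simple overlap counts. A minimal sketch on flattened 0/1 masks (hypothetical values):

```python
def iou_and_dice(pred, truth):
    """IoU (= Jaccard index) and Dice coefficient for binary masks
    given as flat lists of 0/1 pixels."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    iou = inter / union
    dice = 2 * inter / (sum(pred) + sum(truth))
    return iou, dice

pred  = [1, 1, 0, 0, 1, 0]  # predicted lesion pixels
truth = [1, 0, 0, 1, 1, 0]  # ground-truth lesion pixels
iou, dice = iou_and_dice(pred, truth)
```

The two metrics are monotonically related (Dice = 2·IoU/(1 + IoU)), so they rank models identically but differ in scale.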

Receiver Operating Characteristic: The receiver operating characteristic (ROC) curve is a graphical representation of the performance of a binary classifier. It plots the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings, summarizing the trade-off between the two and allowing practitioners to compare models at different operating points. The TPR, also known as sensitivity, recall, or hit rate, is the number of true positive predictions divided by the number of actual positive cases; the FPR, also known as the fall-out or probability of false alarm, is the number of false positive predictions divided by the number of actual negative cases. Mathematically, TPR = TP/(TP + FN) and FPR = FP/(FP + TN), where TP, FP, TN, and FN are true positives, false positives, true negatives, and false negatives, respectively. Figure 8 presents an example of a performance comparison between three models using ROC curves. The area under the ROC curve (AUC) summarizes the classifier’s performance, with a value of 1 indicating perfect performance and a value of 0.5 indicating performance no better than random.


Figure 8 An example of performance comparison between three models using the ROC curve ( Shoaib et al., 2022a ).

Area Under the Curve: The AUC is also a performance measure used to evaluate a binary classifier. It is obtained by integrating the true positive rate (TPR) with respect to the false positive rate (FPR) over all thresholds, where TPR = TP/(TP + FN) and FPR = FP/(FP + TN). AUC ranges from 0 to 1, where 1 corresponds to a perfect classifier and 0.5 corresponds to a random classifier; a greater AUC value indicates superior classification ability.
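A ROC curve and its AUC can be computed by sweeping a threshold over the classifier’s scores. The following sketch (pure Python, with hypothetical scores and labels) collects (FPR, TPR) points and integrates the area with the trapezoidal rule:

```python
def roc_auc(scores, labels):
    """ROC points and AUC for a binary classifier.
    Every distinct score is used as a threshold; the area is
    accumulated with the trapezoidal rule."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    points.append((1.0, 1.0))
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc

# Hypothetical classifier scores and true labels (1 = diseased)
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
_, auc = roc_auc(scores, labels)
```

In practice a library routine (e.g. scikit-learn’s `roc_curve`/`auc`) would be used; the sketch only makes the threshold sweep explicit.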

4.3 Performance comparison of existing algorithms

This article examines in depth the most recent developments in DL-based plant pest identification. The papers examined, published between 2015 and 2022, focus on the detection, classification, and segmentation of plant pests and lesions using ML and DL approaches. This research employs several methodologies, including image processing, feature extraction, and classifier creation. In addition, DL models, namely CNNs, have been widely applied to accurately detect and categorize plant diseases. This article addresses the problems and limits of utilizing ML and DL algorithms for plant lesion identification, including data availability, image quality, and subtle differences between healthy and diseased plants. It also examines the current state of practical applications of ML and DL techniques in detecting abnormal plant regions and provides viable solutions to address the obstacles and limits of these technologies.

The research covered in this article indicates that the employment of ML and DL approaches greatly enhances the accuracy and efficiency of plant lesion detection. The most prevalent evaluation criteria are mean average precision (mAP), F1 score, and frames per second (FPS). However, a gap still exists between the complexity of the disease and pest images used in these studies and the use of mobile devices to identify pest and lesion infestations in the field in real time. This paper is a valuable resource for plant lesion detection researchers, practitioners, and industry experts. It provides a comprehensive understanding of the current state of research utilizing ML and DL techniques for plant lesion detection, highlights the benefits and limitations of these methods, and proposes potential solutions to overcome the challenges of their implementation. In addition, the need for larger and more intricate experimental datasets was identified as a subject for further investigation.

5 Challenges in existing systems

5.1 Overcoming the small dataset challenge

One method is to use data augmentation techniques to artificially expand the dataset. Another strategy is to transfer knowledge from models that have already been trained on larger datasets to smaller ones. A third approach addresses the small-sample problem by combining the first two. Despite these achievements, the limited-dataset problem remains a significant obstacle in the field of DL-based plant pest identification. Future research should therefore concentrate on creating new tools and techniques to address this issue and enhance the performance of DL models in this domain.

5.2 Plant image amplification for lesions segmentation

In recent years, data amplification technology has been utilized extensively in the field of plant pest detection to circumvent the issue of small dataset size. These techniques involve image manipulation operations, including mirroring, translation, shearing, scaling, and contrast alteration, to create additional training examples for a DL model. To enrich small datasets, generative adversarial networks (GANs) ( Goodfellow et al., 2020 ) and autoencoders ( Pu et al., 2016 ) have also been utilized to generate fresh, diverse samples. These strategies have been demonstrated to considerably enhance the performance of DL models for plant pest detection. It is essential to emphasize, however, that their efficacy is contingent on the quality and diversity of the original dataset, and the produced samples must be thoroughly analyzed to confirm their suitability for DL model training. Data amplification, synthesis, and generative approaches are crucial components of training plant pest detection models with DL.
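The geometric operations mentioned above (mirroring, rotation) can be sketched on a tiny grayscale image represented as nested lists. Real pipelines would use a library such as torchvision or Albumentations, but the principle is the same:

```python
def augment(image):
    """Generate simple augmented variants of a 2D grayscale image
    (nested lists of pixel values): horizontal flip, vertical flip,
    and a 90-degree clockwise rotation."""
    h_flip = [row[::-1] for row in image]          # mirror left-right
    v_flip = image[::-1]                            # mirror top-bottom
    rot90 = [list(row) for row in zip(*image[::-1])]  # rotate 90° CW
    return [h_flip, v_flip, rot90]

img = [[1, 2],
       [3, 4]]
variants = augment(img)
```

Each variant is a plausible new training example, tripling the effective sample count for this image without changing its disease label.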

5.3 Transfer learning for plant disease and pest detection

Transfer learning is a technique that applies models trained on large, generic datasets to more specific tasks with fewer data. This method is especially beneficial in the field of plant pest detection, where annotated data is frequently sparse. Pretrained models can be customized for specific localized plant pest and abnormality detection tasks by refining parameters or fine-tuning certain components. Studies show that transfer learning can increase model performance and minimize model development expenses. For example, ( Oppenheim et al., 2019 ) used the VGG network to recognize natural-light images of diseased potatoes of various sizes, colors, and forms. ( Too et al., 2019 ) found that the accuracy of DenseNets improved as the number of fine-tuning iterations grew. In addition, ( Chen et al., 2020 ) demonstrated that transfer learning can accurately diagnose images of rice lesions in complicated situations with an average accuracy of 94 percent, exceeding standard training.
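The core idea of fine-tuning — freeze a pretrained backbone and train only a small task-specific head — can be illustrated without any DL framework. The sketch below simulates frozen backbone features and fits only a logistic-regression head on them; the feature values and labels are hypothetical, not drawn from any of the cited studies:

```python
import math

def finetune_head(features, labels, lr=0.5, epochs=200):
    """Train only a new classification head on features produced by a
    frozen, pretrained backbone (simulated here). Logistic regression
    fitted by plain stochastic gradient descent on the log-loss."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# "Frozen backbone" outputs: 2-D feature vectors, healthy (0) vs diseased (1)
features = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels = [0, 0, 1, 1]
w, b = finetune_head(features, labels)
```

Because only the head’s few parameters are updated, far less labeled data is needed than for end-to-end training, which is exactly the appeal for sparse plant-pest datasets.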

5.4 Optimizing network structure for plant lesion segmentation

A properly designed network structure can greatly reduce the number of samples required for plant pest and lesion segmentation. Utilizing several color channels, merging depthwise-separable convolution, and adding inception structures are some of the strategies employed by researchers to improve feature extraction. For example, ( Zhang et al., 2019 ) identified plant leaf diseases using RGB images and a three-channel convolutional neural network (TCCNN). An enhanced CNN approach that uses depthwise-separable convolution to detect diseases in grapevine leaves is proposed in ( Liu et al., 2020 ), with 94.35% accuracy and faster convergence than classic ResNet and GoogLeNet structures. These examples illustrate the significance of examining network designs for detecting plant pests and diseases with limited sample numbers.

5.5 Small-size lesions in early identification

The primary role of the attention mechanism is to pinpoint the region of interest and swiftly discard unnecessary data. A weighted-sum approach with learned coefficients can separate the features and reduce background noise in plant and pest images. Specifically, an attention module can build a noise-reducing fusion function using the Softmax function by capturing the salient regions of the image, isolating the object from its context, and fusing the attention-weighted feature map with the original feature map. The attention mechanism can efficiently select data and assign additional resources to the ROIs, allowing more precise identification of minor lesions during the early stages of pest infestations and diseases. Numerous studies, such as ( Karthik et al., 2020 ), have demonstrated the efficacy of attention-based prediction systems; a residual attention network evaluated on the PlantVillage dataset achieved an overall accuracy of 98%. In addition, to improve the precision of tiny-lesion detection, research can concentrate on creating more robust preprocessing algorithms to reduce background noise and enhance image resolution, involving techniques such as image enhancement, image denoising, and image super-resolution.
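The weighted-sum idea described above can be sketched as softmax-normalized saliency weights pooling regional feature vectors, so that a small but salient lesion region dominates the pooled representation. The feature vectors and saliency scores below are hypothetical:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(features, saliency):
    """Weight regional feature vectors by softmax-normalized saliency
    scores: salient regions (e.g. small lesions) dominate the pooled
    feature while background regions are suppressed."""
    weights = softmax(saliency)
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features))
            for d in range(dim)]

# Four hypothetical image regions; the third has a high lesion saliency
features = [[0.1, 0.1], [0.2, 0.0], [0.9, 0.8], [0.1, 0.2]]
saliency = [0.1, 0.2, 3.0, 0.1]
pooled = attention_pool(features, saliency)
```

Without the saliency weighting, a uniform average would dilute the lesion region’s features by the three background regions; the softmax weighting keeps the pooled vector close to the salient region’s features.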

5.6 Fine-grained identification

The identification of plant diseases and pests is a challenging task that is often made more complex by variations in the visual characteristics of affected plants. These variations can be attributed to external factors such as uneven lighting, extensive occlusion, and fuzzy details ( Wang et al., 2017 ). Furthermore, variations in the presence of illness and the growth of a pest can lead to subtle differences in the characterization of the same diseases and pests in different regions, resulting in “intra-class distinctions” ( Barbedo, 2018 ). Additionally, there is a problem of “inter-class resemblance,” which arises from similarities in the biological morphology and lifestyles of subclasses of diseases and pests, making it difficult for plant pathologists to differentiate between them.

In actual agricultural settings, the presence of background disturbances can make it harder to detect plant pests and diseases ( Garcia and Barbedo, 2018 ). Environmental complexity and interactions with other objects can further complicate the detection procedure. It is essential to highlight, however, that images obtained under controlled conditions may not truly depict the difficulties of spotting pests and illnesses in their natural habitats. Despite advancements in DL techniques, identifying pests and diseases in real-world contexts remains a technological challenge with accuracy and robustness constraints. Current research focuses mostly on the fine-grained identification of individual pest populations, and it is challenging to apply these methods to mobile, intelligent agricultural equipment for large-scale identification. Therefore, additional study is required to address these obstacles and enhance the effectiveness of agricultural decision management.

5.7 Low and high illumination problem

In the past, researchers captured images of plant pests and diseases using indoor lightboxes ( Martinelli et al., 2015 ). Although this method efficiently eliminates the effects of outdoor lighting, thereby simplifying image processing, photographs captured under natural lighting conditions can vary significantly. The dynamic nature of natural light and the camera’s limited dynamic range can create color distortion if the camera settings are not appropriately adjusted. Moreover, the visual attributes of plant diseases and infestations may be affected by factors such as viewing angle and distance, posing a formidable challenge to visual recognition algorithms. This emphasizes the significance of addressing lighting conditions and image capture techniques when researching pests and plant diseases, as these factors can significantly impact the accuracy and dependability of results.

5.8 Challenges posed by obstruction

Currently, the majority of scientists tend to concentrate on detecting plant pests and diseases in particular ecosystems rather than addressing the setting as a whole. Frequently, they directly crop regions of interest from the gathered photos without fully resolving the occlusion issue, which results in low recognition accuracy and restricted applicability. There are numerous types of occlusion, including differences in leaf location, branches, external lighting, and hybrid patterns. These occlusion issues are ubiquitous in the natural environment, where a lack of distinguishing characteristics and overlapping noise make it difficult to identify plant pests and illnesses. In addition, varying degrees of occlusion may have varying effects on the recognition process, leading to errors or missed detections. Some researchers have found it challenging to identify plant pests and diseases under extreme conditions, such as in shadow, despite recent breakthroughs in DL algorithms ( Liu and Wang, 2020 ; Liu and Wang, 2021a ). Nevertheless, in recent years, a solid foundation has been established for plant disease and pest identification in real-world situations.

To improve the performance of plant pest and disease detection, it is necessary to increase the originality and efficiency of the underlying architecture, which must be optimized for lightweight network topologies. The difficulty of constructing a core framework frequently depends on the performance of the hardware system; consequently, optimizing the underlying framework is crucial for enhancing efficiency and performance. Moreover, occlusion can be unpredictable and difficult to anticipate. Therefore, it is essential to lower the complexity of model construction while enhancing GAN exploration and preserving detection precision. GANs have the capacity to handle postural shifts and cluttered settings well. However, GAN architecture is still in its infancy and prone to issues during the learning and training phase. To aid in the evaluation of the model’s efficacy, additional research on the network’s outcomes is essential.

5.9 Challenges in detection efficiency

DL algorithms have proven more effective than conventional approaches, although they are computationally intensive. This causes slower inference and challenges in satisfying real-time requirements, particularly when a high level of detection precision is required. Frequently, resolving this issue requires minimizing the amount of data used, which can result in poor planning and erroneous or missed identification. Therefore, it is vital to create an accurate and efficient detection algorithm. In agricultural applications, detecting pests and diseases using DL approaches requires three main steps: data labeling, model training, and model inference, of which model inference is particularly relevant to real-time agricultural applications. However, it should be highlighted that the majority of current mechanisms for disease and pest detection in plants emphasize accurate identification, while less attention has been paid to the efficiency of model inference. For instance, ( Kc et al., 2019 ) employed an ensemble convolutional structural framework to identify plant foliar diseases in order to improve the efficiency of model calculation and satisfy real agricultural needs. Compared with various other models, the reduced MobileNet achieved a classification accuracy of 92.12%, with 31 times fewer parameters than VGG and 6 times fewer than MobileNet. This demonstrates that real-time crop disease diagnostics on mobile devices with limited resources can strike a solid balance between speed and accuracy.

6 Discussion

6.1 Datasets for identifying plant diseases and pests

The advancement of DL technology has greatly improved the identification and management of infestations in crops and plants. Theoretical developments in image identification mechanisms have paved the way for identifying complex diseases and pests. However, it should be noted that the majority of research in this field is limited to laboratory studies and relies heavily on collected photographs of plant diseases and pests. Previous research often focused on identifying specific features such as disease spots, insect appearance, and leaf identification. However, plant growth is cyclical, seasonal, and regional in nature. Therefore, it is crucial to gather sample images from various stages of plant growth, different seasons, and different regions to ensure a more comprehensive understanding of plant diseases and pests. This will improve the robustness and generalization of the model.

It is essential to keep in mind that the properties of plant diseases or insects may vary across phases of crop development. Moreover, photos of different plant species may differ by location. Consequently, the majority of current research findings may not be universally relevant: even if the recognition rate of a single test is high, the reliability of data collected at other times or locations cannot be confirmed. Much of the present research has concentrated on images in the visible spectrum, but it is crucial to remember that vast amounts of data lie outside the visible spectrum. It is necessary to merge data from multiple sources, such as visible, near-infrared, and multispectral imagery, to generate a comprehensive dataset on plant diseases. Future studies will emphasize the use of multi-dimensional fusion techniques to gather and recognize information on plant insects. It should also be highlighted that a database containing photographs of many wild plant pests and illnesses is currently being compiled. Future studies can use wearable automatic field spore traps, drone aerial photography systems, agricultural Internet of Things monitoring devices, and similar tools to survey wide regions of farmland, compensating for the lack of randomness in the image samples of prior studies. Ensuring the dataset is complete and accurate will improve the overall performance of the algorithm.

6.2 Pre-emptive detection of plant diseases and pests

Early identification of the various forms of plant diseases and pests can be difficult because symptoms are not always apparent, either through visual inspection or computer analysis. In terms of research and necessity, however, early identification is essential, since it helps prevent and control the spread and growth of pests and diseases. Capturing photographs under favorable lighting conditions, such as sunny weather, enhances image quality, whereas capturing images on overcast days complicates preprocessing and decreases identification accuracy. In addition, even high-resolution photos can be difficult to interpret during the first phases of plant pests and diseases. It is necessary to incorporate meteorological and plant health data, such as temperature and humidity, to efficiently identify and predict pests and diseases; this technique has rarely been utilized to diagnose early plant pests and diseases.

6.3 Neural network learning and development

Manual pest and disease testing is difficult because it is hard to sample for all pests and diseases, and oftentimes only accurate data are available (positive samples). However, the majority of existing systems for plant pest and disease identification utilizing DL are based on supervised learning, which involves the time-consuming collection of huge labeled datasets. Consequently, it is worthwhile to research methods of unsupervised learning. In addition, DL can be a “black box” with little explanatory power, necessitating the labeling of many learning samples for end-to-end learning. To assist training and network learning, it may be advantageous to combine prior knowledge of brain-like computing with human visual cognitive models.

However, depth models demand a great deal of memory and testing time, making them inappropriate for mobile platforms with limited resources. Therefore, it is necessary to find solutions to reduce model complexity and speed without sacrificing precision. Choosing appropriate hyperparameters, such as learning rate, filter size, step size, and number, has proven to be a significant challenge when applying DL models to new tasks. These hyperparameters have high internal dependencies, so even small changes can have a substantial effect on the final training results.

6.4 Cross-disciplinary study

Scientific evidence and agronomic plant-defense theory will be merged to produce more effective field diagnostic models for crop growth and disease identification. Using this technology, plant and pest diseases can be diagnosed with greater speed and precision. In the future, it will be important to move beyond simple surface image analysis to determine the underlying mechanisms by which pests and diseases occur, together with a full understanding of crop growth patterns, environmental conditions, and other pertinent elements. DL approaches have been demonstrated to address complicated problems that regular image processing and ML methods cannot. Although the practical implementation of this technology is still in its infancy, it has enormous development and application potential. To reach this potential, specialists from a variety of fields, such as agriculture and plant protection, must combine their knowledge and experience with DL algorithms and models. In addition, the outcomes of this research will need to be incorporated into agricultural machinery and equipment to accomplish the desired theoretical effect.

6.5 Deep learning for plant stress phenotyping: Trends and perspectives

DL and ML technologies are successful in detecting and analyzing lesions from severe abiotic stresses, such as drought. In the past decade, global crop production losses due to drought have totaled approximately $30 billion ( Agarwal et al., 2020 ). In 2012, a severe drought impacted 80% of agricultural land in the US, resulting in over two-thirds of counties being declared disaster areas. According to FAO (UN) reports, drought is the primary cause of agricultural production loss. Drought stress causes 34% of crop and livestock production loss in LDCs and LMICs, costing 37 billion USD, and agriculture sustains 82% of all drought impact. Understanding how plants adapt to stress, especially drought, is essential for securing crop yields in agriculture. DL and ML approaches are therefore a major advance in the field of plant stress biology. ML and DL can be used to categorize plant stress phenotyping problems into four categories: identification, classification, quantification, and prediction ( Singh et al., 2020 ). These categories represent a progression from simple feature extraction to increasingly complex information extraction from images. Identification involves detecting specific stress types, such as sudden death syndrome in soybeans or rust in wheat. Classification uses ML to categorize images based on stress symptoms and signatures, dividing the visual data into distinct stress classes, such as low, medium, or high stress. Quantification goes further by measuring the severity or extent of the stress, for instance the proportion of affected plant tissue. The final category, prediction, involves anticipating plant stress before visible symptoms appear, providing a timely and cost-effective way to control stress and advancing precision and prescriptive agriculture.

6.6 Limitations of this study

The study presented in this paper has some limitations that are attributed to its research methodology. Firstly, the study’s scope is confined to publications from 2015 to 2022, implying that more recent developments in plant disease detection may not be covered. Moreover, the review does not encompass an all-inclusive list of Machine Learning (ML) and Deep Learning (DL) techniques for plant disease detection. Nevertheless, the study provides an overview of the most commonly used techniques, their advantages, limitations, and probable solutions to overcome implementation challenges. Finally, the study does not include an extensive examination of the economic and environmental impacts of ML and DL techniques on plant disease detection. Hence, additional research is necessary to scrutinize the potential benefits and disadvantages of these techniques regarding production losses and resource utilization.

6.7 Practical implications of study

The practical implications of our research include:

● Improved plant disease detection: Our research highlights the effectiveness of using ML and DL techniques for plant disease detection, which can help improve the accuracy and efficiency of disease detection compared to traditional manual methods. By adopting these advanced technologies, farmers and plant disease specialists can detect diseases at an early stage, preventing further spread and reducing the risk of crop losses.

● Development of generalizable models: Our research emphasizes the need for developing generalizable models that can work for different plant species and diseases. The development of such models can save time and effort for researchers and practitioners, making it easier to detect and classify plant diseases in various settings.

● Accessible datasets for training and evaluation: The research emphasizes the need for more publicly available datasets for training and evaluating ML and DL models for plant disease detection. The availability of such datasets can help researchers and practitioners develop more accurate and robust models, enhancing the performance of disease detection systems.

● Potential for cost reduction: The use of ML and DL techniques in plant disease detection can reduce the need for manual labor and the cost of plant disease detection. This can be especially useful for farmers and small-scale agricultural operations who may not have access to expensive equipment or specialized expertise.

● Transferable knowledge to other fields: Our research also has the potential to inform research and development in other fields, such as medical imaging and remote sensing. The techniques and methodologies used in plant disease detection can be applied to other fields, providing insights into the potential applications of ML and DL in various domains.

7 Conclusions

The DL and ML technologies have greatly improved the detection and management of crop and plant infestations. Advances in image recognition have made it possible to identify complicated diseases and pests. However, most research in this area is limited to lab-based studies and heavily relies on collected plant disease and pest photos. To enhance the robustness and generalization of the model, it’s important to gather images from various plant growth stages, seasons, and regions. Early identification of plant diseases and pests is crucial in preventing and controlling their spread and growth, thus incorporating meteorological and plant health data, such as temperature and humidity, is necessary for efficient identification and prediction.

Unsupervised learning and integrating past knowledge of brain-like computers with human visual cognition can aid in DL model training and network learning. Achieving the full potential of this technology requires collaboration between specialists from agriculture and plant protection, combining their knowledge and experience with DL algorithms and models, and integrating the results into farming equipment.

The paper explores the recent progress in using ML and DL techniques for plant disease identification, based on publications from 2015 to 2022. It demonstrates the benefits of these techniques in increasing the accuracy and efficiency of disease detection, but also acknowledges the challenges, such as data availability, imaging quality, and distinguishing healthy from diseased plants. The study finds that the use of DL and ML has significantly improved the ability to identify and detect plant diseases. The novelty of this research lies in its comprehensive analysis of the recent developments in using ML and DL techniques for plant disease identification, along with proposed solutions to address the challenges and limitations associated with their implementation.
By exploring the benefits and drawbacks of various methods, and offering valuable insights for researchers and industry professionals, this study contributes to the advancement of plant disease detection and prevention.

Author contributions

MS, BS, SE-S, AA, AU, FayA, TG, TH, and FarA performed the data analysis, conceptualized this study, designed the experimental plan, conducted experiments, wrote the original draft, and revised the manuscript. All authors contributed to the article and approved the submitted version.

AA acknowledges project CAFTA, funded by the Bulgarian National Science Fund. TG acknowledges the European Union’s Horizon 2020 research and innovation programme, project PlantaSYST (SGA-CSA No. 739582 under FPA No. 664620), and the BG05M2OP001-1.003-001-C01 project, financed by the European Regional Development Fund through the Bulgarian Operational Programme Science and Education for Smart Growth. This research work was also supported by the Cluster grant R20143 of Zayed University, UAE.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: machine learning, deep learning, plant disease detection, image processing, convolutional neural networks, performance evaluation, practical applications

Citation: Shoaib M, Shah B, EI-Sappagh S, Ali A, Ullah A, Alenezi F, Gechev T, Hussain T and Ali F (2023) An advanced deep learning models-based plant disease detection: A review of recent research. Front. Plant Sci. 14:1158933. doi: 10.3389/fpls.2023.1158933

Received: 04 February 2023; Accepted: 27 February 2023; Published: 21 March 2023.

Copyright © 2023 Shoaib, Shah, EI-Sappagh, Ali, Ullah, Alenezi, Gechev, Hussain and Ali. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Farman Ali, [email protected] ; Tariq Hussain, [email protected]

† These authors have contributed equally to this work and share first authorship

Plant Leaf Disease Detection using CNN model

Shreyansh Patil at Vishwakarma Institute of Technology

  • Vishwakarma Institute of Technology

Ashutosh Kulkarni at Vishwakarma Institute of Technology

  • This person is not on ResearchGate, or hasn't claimed this research yet.

Discover the world's research

  • 25+ million members
  • 160+ million publication pages
  • 2.3+ billion citations
  • Recruit researchers
  • Join for free
  • Login Email Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google Welcome back! Please log in. Email · Hint Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google No account? Sign up

IEEE Account

  • Change Username/Password
  • Update Address

Purchase Details

  • Payment Options
  • Order History
  • View Purchased Documents

Profile Information

  • Communications Preferences
  • Profession and Education
  • Technical Interests
  • US & Canada: +1 800 678 4333
  • Worldwide: +1 732 981 0060
  • Contact & Support
  • About IEEE Xplore
  • Accessibility
  • Terms of Use
  • Nondiscrimination Policy
  • Privacy & Opting Out of Cookies

A not-for-profit organization, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. © Copyright 2024 IEEE - All rights reserved. Use of this web site signifies your agreement to the terms and conditions.

IMAGES

  1. (PDF) IJERT-Plant Disease Detection using CNN Model and Image

    research paper on plant disease detection using cnn

  2. Plant Leaf Disease Detection Using Convolutional Neural Network Cnn In

    research paper on plant disease detection using cnn

  3. Plants

    research paper on plant disease detection using cnn

  4. (PDF) Diverse Plant Leaf Disease Detection Using CNN

    research paper on plant disease detection using cnn

  5. plant disease detection using CNN

    research paper on plant disease detection using cnn

  6. (PDF) Automated Detection of Plant Diseases Using Image Processing and

    research paper on plant disease detection using cnn

COMMENTS

  1. Plant Disease Detection using CNN

    Plant Disease Detection using CNN. Emma Harte. School of Computing. National College of Ireland. Mayor Street, IFSC, Dublin 1. Dublin, Ireland. [email protected]. Abstract — Plant ...

  2. Recent advances in plant disease severity assessment using

    Finally, this study discussed the major challenges faced by CNN-based plant disease severity assessment methods in practical applications, and provided feasible research ideas and possible ...

  3. Convolutional Neural Networks in Detection of Plant Leaf Diseases: A Review

    A survey of research initiatives that use convolutional neural networks (CNN), a type of DL, to address various plant disease detection concerns was undertaken in the current publication. In this work, we have reviewed 100 of the most relevant CNN articles on detecting various plant leaf diseases over the last five years.

  4. Identification of Plant-Leaf Diseases Using CNN and Transfer-Learning

    The timely identification and early prevention of crop diseases are essential for improving production. In this paper, deep convolutional-neural-network (CNN) models are implemented to identify and diagnose diseases in plants from their leaves, since CNNs have achieved impressive results in the field of machine vision. Standard CNN models require a large number of parameters and higher ...

  5. Research on plant disease identification based on CNN

    The plant disease identification and classification method based on the FL-EfficientNet network proposed in this paper solves the problem of imbalance in the number of samples of different kinds of plant diseases by introducing the Focal loss function in the task of multi-class plant disease classification, and effectively improves the accuracy ...

  6. Construction of deep learning-based disease detection model in plants

    To develop this system, construction of a stepwise disease detection model using images of diseased-healthy plant pairs and a CNN algorithm consisting of five pre-trained models. The disease ...

  7. (PDF) Identification of Plant-Leaf Diseases Using CNN and Transfer

    In this paper, deep convolutional-neural-network (CNN) models are implemented to identify and diagnose diseases in plants from their leaves, since CNNs have achieved impressive results in the ...

  8. Role of Convolutional Neural Networks in Plant Leaf Disease Detection

    Convolutional Neural Networks, or CNNs, have be- come a potent tool for image recognition applications, such as the identification of plant leaves. CNNs showed innovative capabilities on a range of image classification and segmentation tasks, demonstrating their ability to extract significant information from pictures. In this work, we offer a CNN-based approach to identify leaves on plants ...

  9. Early Poplar (Populus) Leaf-Based Disease Detection through Computer

    One of the key advantages of using CNN architectures in disease detection is their ability to process and analyze complex ... or pattern. There has been significant research conducted to detect plant diseases, ... The structure of this paper on poplar disease detection is methodically organized to facilitate understanding and exploration ...

  10. Plant Disease Detection and Classification Using a Deep ...

    The future of plant disease detection using CNNs involves expanding and di-versifying training datasets, optimizing CNN architectures for enhanced performance, integrating real-time monitoring technologies, exploring multimodal analysis techniques, improving the interpretability of CNN models, and addressing practical challenges for widespread ...

  11. Disease Recognition in Plant Leaves Using CNN-Based Algorithm

    In the agricultural sector, the scourge of plant diseases casts a long shadow, leading to substantial financial losses and threatening both crop quality and quantity. With global food demands on the rise and the expansion of crop cultivation to meet these needs, the urgency for effective, scalable, and prompt plant disease detection methods cannot be overstated. Traditional approaches reliant ...

  12. Plant Disease Detection Using Cnn

    Plant disease detection can be done by looking for a spot on the diseased plant's leaves. The goal of this paper is to create a Disease Recognition Model that is supported by leaf image ...

  13. Plant Disease Detection Using CNN

    Every year crops succumb to several diseases. Due to inadequate diagnosis of such diseases and not knowing symptoms of the disease and its treatment many plants die. This study provides insights into an overview of the plant disease detection using different algorithms. A CNN based method for plant disease detection has been proposed here.

  14. Leaf disease detection using convolutional neural networks: a proposed

    A proper system for identifying leaf disease, which is crucial for developing agricultural areas, has been addressed in this research using a neural network approach. This study helps find plant illnesses and their stages whenever they occur. Fungal, bacterial, and viral infections are very harmful to plants. Five major tomato diseases have been classified in this research: bacterial spot ...

  15. Plant Disease Detection Using CNN Through Segmentation and ...

    Transfer learning has been used in our proposed system for training the CNN models to detect diseases. Segmentation and object detection methods have been employed to identify individual plant leaves from images containing multiple leaves and complex backgrounds, thus facilitating the model to work on real-world images.

  16. Early Detection and Classification of Tomato Leaf Disease Using High

    In this paper, the authors reviewed all CNN variants for plant disease classification. The authors also briefed all deep learning principles used for leaf disease identification and classification.

  17. PDF Plant Leaf Disease Detection Using Convolutional Neural Network

    automated plant disease detection and classification. The convolutional neural network (CNN) model, trained on the Plant Village dataset, demonstrated high accuracy in classifying leaf images into 39 different disease categories. Through rigorous testing and validation, our system consistently provided reliable diagnoses, offering farmers an ...

  18. ToLeD: Tomato Leaf Disease Detection using Convolution ...

    Performance of various CNN for plant disease identification depends on various factors: availability of limited number of annotated; poor representation of disease symptoms, image background and capturing conditions; limited variations in disease symptoms [21].

  19. A novel groundnut leaf dataset for detection and ...

    Additionally, we provide baseline results of applying state-of-the-art CNN architectures on the dataset for groundnut disease classification, demonstrating the potential of the dataset for advancing groundnut-related research using deep learning.

  20. An Improve Method for Plant Leaf Disease Detection and Classification

    This research paper presents an enhanced CNN based MCC-ECNN model with fine-tuned hyper-parameters and various batch sizes for accurate plant leaf disease classification. In countries like India, whose important occupation is agriculture, face a huge loss when the crops get affected by any type of disease. These diseases attack the crops in various stages and can destroy the entire production.

  21. Cotton Plant Disease Detection Using CNN

    However, with the advent of intelligent and science-based technologies like machine learning and artificial intelligence (AI), it is now possible to develop quick and accurate methods for diagnosing agricultural diseases. In this study, we focus on using CNN modeling to identify and classify diseases affecting cotton plants.

  22. Plant disease detection using hybrid model based on convolutional ...

    This paper proposes a novel hybrid model for automatic plant disease detection based on a convolutional autoencoder (CAE) and a CNN, with fewer training parameters than other state-of-the-art systems in the literature.
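
    The CAE-plus-CNN idea can be sketched roughly as follows, assuming PyTorch. The layer sizes and the shared encoder feeding both a reconstruction decoder and a classification head are illustrative assumptions about the general technique, not the paper's actual architecture:

    ```python
    import torch
    import torch.nn as nn

    class CAEClassifier(nn.Module):
        """Hypothetical sketch: a small convolutional autoencoder whose
        encoder doubles as the feature extractor for a classification head,
        keeping the total parameter count small."""
        def __init__(self, num_classes: int):
            super().__init__()
            # Encoder: two strided convolutions compress the image 4x.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: mirrors the encoder to reconstruct the input image.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                                   output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1,
                                   output_padding=1), nn.Sigmoid(),
            )
            # Classifier head on the compressed features.
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, num_classes),
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), self.classifier(z)

    model = CAEClassifier(num_classes=10)
    x = torch.randn(2, 3, 64, 64)        # two dummy RGB leaf images
    recon, logits = model(x)
    print(recon.shape, logits.shape)     # torch.Size([2, 3, 64, 64]) torch.Size([2, 10])
    ```

    Training would typically combine a reconstruction loss on `recon` with a cross-entropy loss on `logits`, so the encoder learns compact features that serve both tasks.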

  23. PLANT DISEASE DETECTION USING CONVOLUTIONAL NEURAL NETWORK

    This paper proposes a new approach to detecting plant diseases using a deep convolutional neural network trained and fine-tuned to fit a database of plant leaves that was ...

  24. An advanced deep learning models-based plant disease detection: A ...

    The research focuses on publications between 2015 and 2022, and the experiments discussed in this study demonstrate the effectiveness of using these techniques in improving the accuracy and efficiency of plant disease detection.

  25. Plant Disease Detection using Convolutional Neural Network

    The proposed CNN model was trained on images from the PlantVillage dataset and attained an accuracy of 94.87% in identifying diseased plants, with image preprocessing performed using OpenCV. Finally, the paper presents a detailed analysis of the proposed scheme and the results obtained by the model.

  26. Plant Disease Detection Using CNN

    Plant disease detection can be performed by looking for spots on a diseased plant's leaves. The goal of this paper is to create a disease recognition model supported by leaf image classification. To detect plant diseases, the authors use image processing with a convolutional neural network (CNN).

  27. Plant Leaf Disease Detection using CNN model

    This paper proposes a method to classify leaf diseases using the CNN model. Thus, in accordance with the database provided, leaf diseases can be classified. Every plant disease can be separately ...

  28. Plant Disease Detection and Classification by Deep Learning—A Review

    This review surveys the research progress of deep learning technology in crop leaf disease identification in recent years. The paper presents the current trends and challenges in detecting plant leaf disease using deep learning and advanced imaging techniques.