Use this identifier to cite or link to this item: http://www.alice.cnptia.embrapa.br/alice/handle/doc/1134326
Full metadata record
DC Field | Value | Language
dc.contributor.author: GONÇALVES, J. P.
dc.contributor.author: PINTO, F. A. C.
dc.contributor.author: QUEIROZ, D. M.
dc.contributor.author: VILLAR, F. M. M.
dc.contributor.author: BARBEDO, J. G. A.
dc.contributor.author: DEL PONTE, E. M.
dc.date.accessioned: 2021-09-14T13:01:47Z
dc.date.available: 2021-09-14T13:01:47Z
dc.date.created: 2021-09-14
dc.date.issued: 2021
dc.identifier.citation: Biosystems Engineering, v. 210, p. 129-142, Oct. 2021.
dc.identifier.uri: http://www.alice.cnptia.embrapa.br/alice/handle/doc/1134326
dc.description: Colour-thresholding digital imaging methods are generally accurate for measuring the percentage of foliar area affected by disease or pests (severity), but they perform poorly when scene illumination and background are not uniform. In this study, six convolutional neural network (CNN) architectures were trained for semantic segmentation in images of individual leaves exhibiting necrotic lesions and/or yellowing, caused by the insect pest coffee leaf miner (CLM) and two fungal diseases: soybean rust (SBR) and wheat tan spot (WTS). All images were manually annotated for three classes: leaf background (B), healthy leaf (H) and injured leaf (I). Precision, recall, and Intersection over Union (IoU) metrics in the test image set were the highest for B, followed by the H and I classes, regardless of the architecture. When the pixel-level predictions were used to calculate percent severity, Feature Pyramid Network (FPN), Unet and DeepLabv3+ (Xception) performed the best among the architectures: concordance coefficients were greater than 0.95, 0.96 and 0.98 for the CLM, SBR and WTS datasets, respectively, when comparing predictions with the annotated severity. The other three architectures tended to misclassify healthy pixels as injured, leading to overestimation of severity. The results highlight the value of a CNN-based automatic segmentation method for determining severity in images of foliar diseases obtained under challenging conditions of brightness and background. The accuracy levels of the severity estimated by FPN, Unet and DeepLabv3+ (Xception) were similar to those obtained by standard commercial software, which requires adjustment of segmentation parameters and removal of the complex background of the images, tasks that slow down the process.
dc.language.iso: eng
dc.rights: openAccess
dc.subject: Deep learning
dc.subject: Phytopathometry
dc.subject: Artificial intelligence
dc.subject: Machine learning
dc.subject: Convolutional neural network
dc.subject: Image segmentation
dc.title: Deep learning architectures for semantic segmentation and automatic estimation of severity of foliar symptoms caused by diseases or pests.
dc.type: Journal article
dc.subject.thesagro: Doença de Planta
dc.subject.nalthesaurus: Artificial intelligence
dc.subject.nalthesaurus: Plant diseases and disorders
dc.subject.nalthesaurus: Neural networks
riaa.ainfo.id: 1134326
riaa.ainfo.lastupdate: 2021-09-14
dc.identifier.doi: https://doi.org/10.1016/j.biosystemseng.2021.08.011
dc.contributor.institution: JULIANO P. GONÇALVES, UFV; FRANCISCO A. C. PINTO, UFV; DANIEL M. QUEIROZ, UFV; FLORA M. M. VILLAR, UFV; JAYME GARCIA ARNAL BARBEDO, CNPTIA; EMERSON M. DEL PONTE, UFV.
Appears in collections: Artigo em periódico indexado (CNPTIA)
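The percent-severity computation described in the abstract (aggregating pixel-level class predictions into a per-leaf percentage of injured area) can be sketched as below. This is an illustrative sketch, not the authors' code; the integer class labels (0 = background, 1 = healthy, 2 = injured) and the function name are assumptions.

```python
import numpy as np

# Assumed class labels for the three annotated classes (B, H, I)
BACKGROUND, HEALTHY, INJURED = 0, 1, 2

def percent_severity(mask: np.ndarray) -> float:
    """Percent of leaf area classified as injured.

    `mask` is a 2-D array of per-pixel class predictions from a
    semantic-segmentation model. Background pixels are excluded,
    so severity = injured / (healthy + injured) * 100.
    """
    healthy = np.count_nonzero(mask == HEALTHY)
    injured = np.count_nonzero(mask == INJURED)
    leaf = healthy + injured
    return 100.0 * injured / leaf if leaf else 0.0

# Example: a tiny 3x3 prediction with 4 leaf pixels, 1 of them injured
mask = np.array([[0, 0, 0],
                 [0, 1, 1],
                 [0, 1, 2]])
print(percent_severity(mask))  # → 25.0
```

Excluding background pixels from the denominator is what makes the measure robust to how much of the scene the leaf occupies, which is the point of segmenting the leaf before computing severity.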

Files associated with this item:
File | Description | Size | Format
AP-Predictive-models-Forests-2021.pdf | | 3.68 MB | Adobe PDF
