Comparison of Plant Detection Performance of CNN-based Single-stage and Two-stage Models for Precision Agriculture

Recai Özcan, Kemal Tütüncü, Murat Karaca


The fact that arable land is not increasing in proportion to the ever-growing population will increase the demand for food in the coming years. For this reason, crop yields must be increased to make optimum use of arable land. One of the most important causes of reduced crop yield and quality is weeds. Herbicides are generally preferred for weed management; however, due to deficiencies in application methods, only 0.015-6% of applied herbicide reaches its target. Given the risks of residue and environmental damage, herbicide use, an important part of the agricultural system, is an issue that deserves careful attention. In parallel with the rapid development of electronic and computer technologies, artificial intelligence applications have also advanced. In this context, using artificial intelligence for plant detection in the subsystems of herbicide application machines will contribute to the development of precision agriculture techniques. In this study, the plant detection performances of single-stage and two-stage Convolutional Neural Network (CNN)-based deep learning (DL) models are evaluated. To this end, a dataset was created from images of Zea mays, Rhaponticum repens (L.) Hidalgo, and Chenopodium album L. plants taken in agricultural fields in Konya. The models were trained on this dataset using transfer learning. Evaluation metrics for the trained models were calculated from the error (confusion) matrix; training time and prediction time were also used as quantitative metrics. The plant detection accuracy, training time, and prediction time were 85%, 8 h, and 1.21 s for SSD MobileNet v2, and 99%, 22 h, and 2.32 s for Faster R-CNN Inception v2, respectively. According to these results, Faster R-CNN Inception v2 is superior in terms of accuracy. However, in cases where training time and prediction time are critical, the SSD MobileNet v2 model can be trained with more data to increase its accuracy.
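The abstract notes that evaluation metrics were derived from the error (confusion) matrix. A minimal sketch of that derivation is shown below; the 3×3 matrix values are hypothetical illustrations (rows are actual classes, columns are predicted classes, ordered as Zea mays, Rhaponticum repens, Chenopodium album), not the paper's reported data.

```python
# Sketch: deriving evaluation metrics from a confusion (error) matrix.
# The matrix counts are hypothetical, for illustration only.

def metrics_from_confusion(matrix):
    """Return overall accuracy and per-class (precision, recall, F1)."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    accuracy = correct / total
    per_class = []
    for i in range(n):
        tp = matrix[i][i]
        fp = sum(matrix[r][i] for r in range(n)) - tp  # predicted i, actually another class
        fn = sum(matrix[i]) - tp                       # actually i, predicted another class
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        per_class.append((precision, recall, f1))
    return accuracy, per_class

# Hypothetical detection counts for the three plant classes:
cm = [
    [95, 3, 2],
    [4, 90, 6],
    [1, 5, 94],
]
acc, stats = metrics_from_confusion(cm)
```

The same per-class precision/recall pattern extends to any number of classes; only the matrix size changes.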


Precision Agriculture, Plant Detection, SSD, Faster R-CNN, Performance Evaluation




Ali A, Streibig JC, Christensen S, Andreasen C (2015). Image-based thresholds for weeds in maize fields. Weed Research 55(1): 26-33.

Asad MH, Bais A (2020). Weed detection in canola fields using maximum likelihood classification and deep convolutional neural network. Information Processing in Agriculture 7(4): 535-545.

Bàrberi P (2002). Weed management in organic agriculture: are we addressing the right issues? Weed Research 42(3): 177-193.

dos Santos Ferreira A, Freitas DM, Silva GG, Pistori H, Folhes MT (2017). Weed detection in soybean crops using ConvNets. Computers and Electronics in Agriculture 143: 314-324.

Girshick R, Donahue J, Darrell T, Malik J (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, June 2014, pp. 580-587.

Girshick R (2015). Fast R-CNN. In: Proceedings of the IEEE international conference on computer vision, December 2015, Piscataway, New Jersey, United States, pp. 1440-1448.

Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861.

Jiao L, Zhang F, Liu F, Yang S, Li L, Feng Z, Qu R (2019). A survey of deep learning-based object detection. IEEE Access 7: 128837-128868.

Liu B, Bruch R (2020). Weed Detection for Selective Spraying: a Review. Current Robotics Reports 1(1): 19-26.

Martínez SS, Gila DM, Beyaz A, Ortega JG, García JG (2018). A computer vision approach based on endocarp features for the identification of olive cultivars. Computers and Electronics in Agriculture 154: 341-346.

Pérez-Ortiz M, Peña JM, Gutiérrez PA, Torres-Sánchez J, Hervás-Martínez C, López-Granados F (2016). Selecting patterns and features for between- and within- crop-row weed mapping using UAV-imagery. Expert Systems with Applications 47: 85-94.

Ren S, He K, Girshick R, Sun J (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems 28.

Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC (2018). MobileNetV2: Inverted Residuals and Linear Bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, June 2018, pp. 4510-4520.

Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016). Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, June 2016, pp. 2818-2826.

TensorFlow (2021). TensorFlow 2 Detection Model Zoo. (Access date: 4 May 2022).

Tzutalin (2015). LabelImg. Git code. (Access date: 18 Sep 2022).

Vasileiadis VP, Otto S, Van Dijk W, Urek G, Leskovšek R, Verschwele A, Furlan L, Sattin M (2015). On-farm evaluation of integrated weed management tools for maize production in three different agro-environments in Europe: Agronomic efficacy, herbicide use reduction, and economic sustainability. European Journal of Agronomy 63: 71-78.

Zaidi SSA, Ansari MS, Aslam A, Kanwal N, Asghar M, Lee B (2022). A survey of modern deep learning based object detection models. Digital Signal Processing 126: 103514.

Zheng Y, Zhu Q, Huang M, Guo Y, Qin J (2017). Maize and weed classification using color indices with support vector data description in outdoor fields. Computers and Electronics in Agriculture 141: 215-222.



Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.