In recent years, olive trees have been increasingly threatened by “Olive Quick Decline Syndrome” (OQDS), a disease caused by the bacterium Xylella fastidiosa. The disease poses a significant challenge in Europe, especially in Italy, where olive oil production plays a major economic role. The bacterium colonizes and clogs the xylem tissue, disrupting the flow of water inside the plant. This impairment of the vascular system causes leaf necrosis along the margins or tips, often followed by chlorosis and, in many cases, premature leaf drop. Symptoms usually begin in isolated sections of the foliage but gradually spread until the entire canopy is affected. Early detection of Xylella fastidiosa is therefore essential to protect local plant life and prevent the disease from spreading to other plants.
How SENSOR 2.0 uses AI-driven drones to detect Xylella fastidiosa
To assess and quantify a plant’s health status, the NDVI (Normalized Difference Vegetation Index) is widely used as a metric. The index is computed as the normalized difference between two spectral bands (red and near-infrared) acquired through a multispectral camera. Recent advancements in technology have made it possible to develop commercially available mobile platforms equipped with multispectral camera systems, ideal for large-scale data acquisition. Using data captured by drone-mounted multispectral cameras, we propose a deep learning approach to identify and classify olive trees in rural landscapes, enabling the assessment of each plant’s health status. Specifically, data was captured using both RGB and multispectral cameras during drone flights over olive tree fields. Once individual trees were identified in the RGB image, the NDVI was calculated and combined with deep learning techniques to determine the health status of each olive tree.
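For reference, the index is defined as NDVI = (NIR − Red) / (NIR + Red), yielding values between −1 and 1; dense, healthy vegetation typically scores high, while stressed or sparse canopy scores lower. A minimal sketch of the per-pixel computation, assuming two co-registered reflectance arrays from the multispectral camera:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI from co-registered red and near-infrared bands.

    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        # Pixels where both bands are zero get NDVI 0 instead of NaN.
        return np.where(denom == 0.0, 0.0, (nir - red) / denom)
```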
Challenges of detecting tree health and optimizing AI models for accurate classification
Several challenges emerged during the development of this project, particularly regarding image acquisition via drone and the subsequent image processing needed to obtain usable data. A drone equipped with a multispectral camera was assessed as a cost-effective tool for developing such applications. However, drones introduce constraints, notably with respect to flying altitude. Regulations require drones to operate above a minimum altitude for safety, which directly impacts image resolution and, consequently, the level of detail we can capture on individual trees. Therefore, a key initial step was to define the data acquisition parameters and determine the altitude needed to acquire useful data for tree health assessment.
Operating the drone at a higher altitude offers the advantage of capturing a larger area within a single image, allowing more trees to be monitored simultaneously. This approach streamlines data collection by reducing the number of images needed to cover the study area, which in turn shortens the overall data acquisition time. However, this higher vantage point comes at the cost of image resolution, as details on individual trees are less distinct.
Lower altitudes, by contrast, improve image clarity and enhance the quality of multispectral data, allowing for finer analysis of tree characteristics, such as leaf health, chlorophyll content, and other indicators critical for assessing tree vitality. To determine the optimal balance between area coverage and data quality, we conducted a preliminary acquisition campaign where images were taken at various legally permissible altitudes. This process involved capturing and comparing images at different heights to evaluate the trade-offs between field-of-view coverage and resolution quality. After assessing the images acquired at each altitude, we found that maintaining the drone at an altitude of 25 meters above ground yielded a high level of detail in the images, particularly beneficial for multispectral analysis. This altitude provided sufficiently high-resolution data to discern individual tree characteristics while still allowing for a practical level of area coverage, making it ideal for our project requirements.
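The altitude/resolution trade-off can be made concrete through the ground sampling distance (GSD), the ground footprint of a single pixel for a nadir-pointing camera. A small sketch under hypothetical camera parameters (the project’s actual sensor specifications are not stated here):

```python
def ground_sampling_distance(altitude_m: float, focal_length_mm: float,
                             sensor_width_mm: float, image_width_px: int) -> float:
    """GSD in cm/pixel: the ground footprint of one pixel at nadir."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical multispectral-camera parameters, for illustration only.
gsd = ground_sampling_distance(altitude_m=25.0, focal_length_mm=5.4,
                               sensor_width_mm=4.8, image_width_px=1280)
print(f"{gsd:.1f} cm/pixel")  # ~1.7 cm/pixel at 25 m
```

At a GSD of a couple of centimeters per pixel, individual-canopy detail remains resolvable, which is consistent with the level of detail observed at the 25-meter flight altitude chosen above.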
Identifying individual trees in drone-captured images
The second major challenge was to determine how to accurately identify individual trees in drone-captured images. Various approaches have been proposed in the literature, such as reconstructing a 3D point cloud from a sequence of images taken during flight. Although this method can distinguish tree height, point cloud reconstruction requires substantial processing time and computational power. To optimize resource usage, we focused on applying deep learning techniques to develop an object detection model capable of real-time operation. Data acquisition is typically time-intensive, and with drone operations limited by battery life, it is crucial to maximize the amount of data collected during each acquisition campaign. Data augmentation, a valuable machine learning technique, can address this need. While certain approaches employ mathematical methods like autoencoders, we opted for a different strategy to streamline the process and enhance the training dataset.
Rather than increasing the number of images used to train tree recognition, we enhanced the labeling of the training images to boost training efficiency. A pre-trained object detection network was used initially to identify as many trees as possible. This training set was then refined through 3D reconstruction to differentiate individual trees within each image. We subsequently re-trained the network on the enriched dataset, improving its ability to accurately identify individual trees in each image and, with it, its overall performance. A schematic of this loop is sketched below.
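The sketch takes the pre-trained detector, the 3D-reconstruction-based label refinement, and the retraining step as callables, since they stand in for the components described above rather than for any specific library API:

```python
from pathlib import Path
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height) in pixels

def refine_training_set(
    image_dir: Path,
    detect: Callable[[Path], List[Box]],              # pre-trained detector
    refine: Callable[[Path, List[Box]], List[Box]],   # 3D-reconstruction refinement
    train: Callable[[List[Tuple[Path, List[Box]]]], object],  # retraining step
):
    """Bootstrap labels with a pre-trained detector, refine them so each
    box covers exactly one tree, then retrain on the enriched labels."""
    labeled = []
    for image_path in sorted(image_dir.glob("*.jpg")):
        boxes = detect(image_path)         # step 1: coarse boxes
        boxes = refine(image_path, boxes)  # step 2: one box per tree
        labeled.append((image_path, boxes))
    return train(labeled)                  # step 3: retrain the detector
```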
To assess the health status of individual trees, a supervised approach was employed: health conditions were labeled within the training dataset, and the network was trained to recognize the health status of previously unseen trees. For the classification of each tree’s health, we used the NDVI indicator, calculated from the drone’s multispectral images. Since we only had overhead images of the trees and could not accurately reconstruct the volume of each tree in real time, a supervised approach was essential for training a neural network to make these assessments. We specifically implemented a Dual YOLO network architecture. In this setup, one branch of the network processes the RGB image of each individual tree, while the second branch processes the NDVI value associated with that tree. This dual-input approach serves two purposes: it lets the network access the maximum amount of information without reducing image resolution, and it automatically excludes unnecessary background data, focusing solely on the features relevant to classification. During the training phase, RGB images and NDVI values for each tree, labeled with the correct health status, are fed into the network. The Dual YOLO network then combines the outputs from both branches to deliver a final classification, effectively leveraging both visual and spectral data to determine the health status of individual trees.
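As a rough approximation of that architecture, the sketch below implements a simplified dual-branch classifier in PyTorch: one branch embeds the per-tree RGB crop, the other embeds the tree’s NDVI value, and the two embeddings are concatenated before the final health classification. It is an illustrative stand-in, not the project’s actual Dual YOLO network:

```python
import torch
import torch.nn as nn

class DualBranchHealthClassifier(nn.Module):
    """Simplified stand-in for the Dual YOLO setup: an RGB branch and an
    NDVI branch whose features are fused before classification."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Branch 1: small CNN over the per-tree RGB crop.
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Branch 2: tiny MLP over the scalar NDVI value of the same tree.
        self.ndvi_branch = nn.Sequential(nn.Linear(1, 8), nn.ReLU())
        # Fusion head: combine both embeddings into health-status logits.
        self.head = nn.Linear(32 + 8, num_classes)

    def forward(self, rgb: torch.Tensor, ndvi: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.rgb_branch(rgb), self.ndvi_branch(ndvi)], dim=1)
        return self.head(features)

# Usage: one 128x128 RGB crop per tree plus its mean NDVI value.
model = DualBranchHealthClassifier()
logits = model(torch.rand(4, 3, 128, 128), torch.rand(4, 1))
print(logits.shape)  # torch.Size([4, 2])
```

Fusing the two inputs only at the feature level reflects the design goal described above: each branch works on its own modality at full resolution, and the background around the tree never enters the NDVI branch at all.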
Results from the trials and their impact on olive tree management
The SENSOR 2.0 project delivers groundbreaking advancements in olive grove management, particularly in olive tree identification and the early detection of the devastating Xylella fastidiosa pathogen. The project’s AI model integrates olive tree recognition with disease monitoring, employing machine learning algorithms that not only recognize individual olive trees but also assess their health with an anticipated accuracy rate between 70% and 80%. The model detects early signs of Xylella fastidiosa by analyzing both RGB and NDVI imagery to identify subtle but significant shifts in leaf color and spectral reflectance, which may signal early infection. By providing olive farmers and agricultural experts with detailed, data-driven insights, SENSOR 2.0 enhances decision-making related to disease prevention and treatment strategies. Furthermore, the project generates a comprehensive database that captures imagery and environmental data over time, building a valuable resource for long-term monitoring of olive tree health. SENSOR 2.0 also prioritizes the development of a robust and user-friendly data collection methodology. This includes defining optimal drone flight parameters, such as altitude, to capture high-quality images essential for health assessments and Xylella detection. The system’s emphasis on ease of use ensures that even users with limited technical knowledge can operate it effectively, making high-quality agricultural insights accessible to a broader audience.
Two fundamental aspects should be taken into account in the development of the project: socio-economic impact and environmental impact. The SENSOR 2.0 project is poised to deliver substantial socio-economic benefits. By improving olive grove management and increasing crop health and yields, it enhances economic returns for farmers, with positive effects rippling through the broader agricultural sector. Additionally, the adoption of this advanced technology creates new job opportunities, particularly in tech-focused roles such as drone piloting and data analysis, and fosters skills development in rural areas, empowering local communities. The project also addresses environmental sustainability by enabling more precise and efficient use of resources, notably pesticides and water. This targeted approach reduces environmental impact while maintaining high crop standards. The project further supports biodiversity conservation: by controlling Xylella fastidiosa, SENSOR 2.0 contributes to preserving olive trees, a vital part of regional agricultural biodiversity. Finally, the extensive environmental data collected by SENSOR 2.0 offers invaluable insights into how environmental factors affect olive grove health, potentially guiding broader environmental management and conservation efforts.