Abstract
Automatic in-field fruit recognition techniques can be used to estimate fruit number, fruit size, fruit skin color, and yield in fruit crops. Fruit color and size represent two of the most important fruit quality parameters in stone fruit (Prunus sp.). This study aimed to evaluate the reliability of a commercial mobile platform, sensors, and artificial intelligence software system for fast estimates of fruit number, fruit size, and fruit skin color in peach (Prunus persica), nectarine (P. persica var. nucipersica), plum (Prunus salicina), and apricot (Prunus armeniaca), and to assess their spatial and temporal variability. An initial calibration was needed to obtain estimates of absolute fruit number per tree and a forecasted yield. However, the technology can also be used to produce fast relative density maps in stone fruit orchards. Fruit number prediction accuracy was ≥90% in all the crops and training systems under study. Overall, predictions of fruit number in two-dimensional training systems were slightly more accurate. Estimates of fruit diameter (FD) and color did not need an initial calibration. The FD predictions had percent standard errors <10% and root mean square errors <5 mm across different training systems, row spacings, crops, and fruit positions within the canopy. Hue angle, a color attribute previously associated with fruit maturity in peach and nectarine, was the color attribute best predicted by the mobile platform. A new color parameter, the color development index (CDI), ranging from 0 to 1, was derived from hue angle. The CDI, which represents the color progression or distance from green, made color measurements easier for end-users to interpret than hue angle and generated more robust color estimations in fruit that turn purple when ripe, such as dark plum.
Spatial maps of fruit number, FD, and CDI obtained with the mobile platform can be used to inform orchard decisions such as thinning, pruning, spraying, and harvest timing. The importance and application of crop yield and fruit quality real-time assessments and forecasts are discussed.
Accurate in-field estimations of crop characteristics can support orchard management and improve packout. Stone fruit (Prunus sp.) customer satisfaction is greatly influenced by the color and size of fruit, two of the most important fruit quality parameters. A variety of proximal sensors deployed onto mobile platforms have the potential to estimate fruit size, fruit skin color, and yield in orchards. There has been an increasing number of scientific publications on using sensors in orchards for yield estimation and decision support (Anderson et al. 2021a). The ability to locate fruit in the tree canopy is a fundamental requirement. Gongal et al. (2015) presented a comprehensive review of systems and technologies used in machine vision research for fruit detection and localization. Variable lighting conditions, occlusions, and fruit clustering are some of the challenges in detection accuracy in an orchard environment. Spectral cameras appear to have good fruit segmentation accuracy (Gongal et al. 2015) but require high computation power and are not superior to color cameras (Gutiérrez et al. 2019b). Color cameras provide color, geometric, and texture information that is used in retrieving features for fruit detection.
Crop load can be calculated as the total fruit number per tree or per unit of canopy or trunk cross-sectional area. Orchard crop load measurements require fruit detection and location to support fruit count metrics. Yield can then be calculated by multiplying an average fruit weight (FW) by fruit number. Stein et al. (2016) described an image-based mango (Mangifera indica) fruit detection, localization, and yield estimation approach. The Shrimp robot, a ground-based vehicle developed by the Australian Center for Field Robotics (University of Sydney, Sydney, Australia), was used to estimate yield in mango (Stein et al. 2016), almond [Prunus dulcis (Underwood et al. 2016)], and apple [Malus domestica (Hung et al. 2015)] orchards. Gutiérrez et al. (2019a) described a ground-based hyperspectral imaging approach for extensive mango yield estimation. Ji et al. (2012) developed a computer vision system consisting of a two-camera stereo rig mounted on an autonomous vehicle to drive between rows and scan both sides in apple orchards. A computer vision algorithm was used to detect and register apples from the sequential images acquired to generate apple counts as a proxy for crop yield. Crop yield estimation errors for red apple and green apple were 3.2% and 1.2%, respectively, showing the system worked well for both skin colors. A multiview geometry machine vision technique to improve mango fruit detection in the presence of occlusion was first developed by Stein et al. (2016). Most recently, a similar machine vision technique was presented by Anderson et al. (2021b), who reported high repeatability of mango detection in night-time imaging with an absolute error of 2% or less.
Only a few studies have been carried out to estimate fruit size in orchards using machine vision systems (Apolo-Apolo et al. 2020; Cheng et al. 2017; Gongal et al. 2018; Regunathan and Lee 2005; Stajnko et al. 2004; Wang et al. 2017). Gongal et al. (2018) presented a mobile platform for apple fruit size estimation in orchards using a three-dimensional (3D) machine vision system and compared prediction accuracy with estimations from two-dimensional (2D) pixel-based images. The accuracy of estimating the major axis of apple fruit using 3D coordinates was 69%. This increased to 85% when a 2D pixel-based method was used, indicating good potential for using this method to estimate fruit size in orchards. Méndez et al. (2019) presented a yield and fruit size estimation system using a 3D light detection and ranging (LiDAR) sensor with color capture of oranges, although this yield estimation approach required significant scanning time and was not practical for large commercial orchards. More recently, Gené-Mola et al. (2021) developed an in-field apple fruit size estimation method using photogrammetry-derived 3D point clouds that again required significant processing time to analyze trees in large commercial orchards.
Fruit color sensors are mostly used post-harvest in commercial grading lines for sorting fruit. In-field fruit color estimates captured by imaging sensors mounted on ground-based mobile platforms can be used to estimate fruit maturity or simply to prioritize areas of the orchard block that are ready to harvest when fruit reach their full coloration. Wendel et al. (2018) showed that ground-based mobile hyperspectral measurements of fruit skin color can be used to estimate mango fruit maturity. Gutiérrez et al. (2019b) further showed that dry matter concentration (i.e., an indicator of maturity) in mango could also be estimated in the orchard using only three color channels akin to normal color (and not hyper/multispectral), but with slightly lower performance compared with a hyperspectral technique. This indicates it is feasible to map fruit ripeness throughout the orchard using a mobile platform.
Handheld color measurement devices often use reflectance spectra from the object of interest (fruit) to derive color attributes in different color scales [e.g., red-green-blue (RGB), cyan-magenta-yellow-key (CMYK), hexadecimal (HEX), Commission Internationale de l’Éclairage (CIE) XYZ (CIEXYZ) and CIE lightness (L*), a*, b*, chroma (C*), and hue angle (h°) (CIELab/LCh)]. The CIELab/LCh color space (International Organization for Standardization 1976) primarily expresses color using five color parameters: L*, a*, b*, C*, and h°. According to McGuire (1992), C* and h° are better indicators of fruit color than L*, a*, and b*. Different authors have linked h° to stone fruit maturity (Ferrer et al. 2005; Robertson et al. 1990; Scalisi et al. 2020), and in a recent work, measurements of CIELab/LCh color attributes with a handheld color sensor (Rubens Technologies Pty Ltd, Knoxfield, Victoria, Australia) were used to describe the relationship between fruit color development and time from harvest in ‘Majestic Pearl’ nectarine [Prunus persica var. nucipersica (Scalisi et al. 2021a)].
A mobile platform that quickly scans large commercial orchards and predicts crop parameters, such as crop load, fruit size, and color, would be beneficial for the stone fruit industry. There are only a few commercial proximal and mobile sensor systems available that focus on locating and counting fruit that also measure their quality attributes. Green Atlas Pty Ltd (Sydney, NSW, Australia) is a commercial provider that recently developed Cartographer, a mobile platform that has been successfully used to estimate fruitlets and fruit in many different fruit and nut crops. The mobile platform uses a combination of RGB cameras and processing software that allow fruit count densities to be quickly mapped over entire orchards. The mobile platform has been recently scientifically validated to estimate tree size, flower cluster number, and crop load (fruit number) in apple (Scalisi et al. 2021b). This new technology provides an opportunity to address the aforementioned requirements of the stone fruit industry.
The objective of this study was to evaluate the mobile platform as a rapid stone fruit orchard assessment platform to estimate fruit number, fruit size, and fruit skin color. More specifically, this work aimed to calibrate fruit counts detected by the mobile platform against ground truth measurements (i.e., observed variables) of fruit number, and validate FD and fruit color development estimates in peach (P. persica), nectarine, plum (P. salicina), and apricot (P. armeniaca).
Materials and methods
Study sites
This study was conducted in the stone fruit experimental orchard and the Sundial orchard at the Tatura SmartFarm, Tatura, Victoria, Australia (lat. 36°26′7″S, long. 145°16′8″ E, elevation 113 m) and in three nearby commercial orchard blocks (Varapodio S T & R, Ardmona, Victoria, Australia) during the 2020–21 season.
The stone fruit experimental orchard (3.5 ha) has 5813 trees consisting of apricot, nectarine, peach, and plum cultivars with diverse rootstocks, training systems, and tree spacings, and is used by Agriculture Victoria researchers for a range of agronomic experiments. The orchard layout has north-south rows spaced at 4.5 m and trees planted at 1.0 to 2.5 m along each row. The Sundial orchard (1.4 ha) is a circular orchard consisting of eight radial arms, four of which are planted with 720 nectarine trees and the other four arms with 720 apple trees at 1- and 3.5-m tree and row spacing, respectively. The ‘Majestic Pearl’ nectarine is grown on four training systems: Tatura trellis, vertical leader, cantilever trellis 1 (trees leaning 30° to the left-hand side when moving from the center to the outer circle of the Sundial orchard), and cantilever trellis 2 (trees leaning 30° to the right-hand side). Data on ‘Glacier’ peach (2.6 ha), ‘Summer Flare 34’ nectarine (1.7 ha), and ‘October Sun’ plum (0.6 ha) trees in three commercial orchard blocks were collected for technology evaluation purposes.
Mobile platform and predictive algorithms
The mobile platform fruit detector (Cartographer) was used to scan trees in the stone fruit and Sundial orchards at the Tatura SmartFarm at different times (1 month before harvest and at harvest) during fruit development. The mobile platform is equipped with RGB-D cameras that log images at a rate of five images/s. The images are simultaneously georeferenced using an on-board global positioning system (GPS). GPS locations were adjusted using a real-time kinematic global navigation satellite system [RTK GNSS (Reach RS+; EMLID, Budapest, Hungary)] receiver interfaced with the GPS unit mounted on the mobile platform via long-range wide-area network (LoRaWAN) connectivity. Convolutional neural network models were used for fruit detection.
A smartphone interface was used to control the logging of data and metadata related to the corresponding scans. Two different types of scans were performed (i.e., stationary and mobile scans). Stationary scans were carried out for initial training of the mobile platform to predict FD and color. Full orchard mobile scans were used to collect images that were then used to train the detection models for fruit number estimation. Mobile scans were carried out by driving the mobile platform at a constant speed of ∼8 to 10 km·h−1. Logging was switched on a few meters before the start of the measurement section and off a few meters past the end of the measurement section. Short mobile scans were also carried out on three 10-m tree-line zones (∼30 to 45 trees per orchard) spanning a range from low to high fruit number per tree. Here, all fruit were manually counted to obtain ground truth fruit number in each zone for subsequent calibration of fruit detections generated by the mobile platform. Each zone was scanned from both tree sides (west and east) of the north-south rows. The distances from the mobile platform to fruit and tree were assessed using a combination of RGB-D and LiDAR technology. Averages of the two row sides were obtained to improve the calibration in each zone and counter false negative and false positive detections.
Scans were also conducted at Varapodio S T & R commercial orchard blocks on ‘Glacier’ peach (7 Jan 2021), ‘Summer Flare 34’ nectarine (28 Jan 2021), and ‘October Sun’ plum [26 Jan and 25 Feb 2021 (i.e., 1 month before and at harvest, respectively)].
Estimations of crop parameters
Fruit number
Fruit detections obtained from the mobile scans of all the measurement plots in the stone fruit and Sundial experimental orchards were reprocessed using a calibration factor derived from predictions against ground truth values. This calibration factor was equivalent to the slope of a linear regression with intercept = 0 obtained by the mobile platform counts (predicted) in experimental plots against total fruit counts per plot obtained with a commercial fruit grader equipped with optical sensors (InVision 9000; Compac Sorting Equipment Ltd, Colliver, Victoria, Australia) at harvest (ground truth). Fruit number was obtained on different crop × training system combinations: ‘Golden May’ apricot and ‘Angeleno’ plum under Tatura trellis and vase training systems; ‘August Flame’ peach under Tatura trellis and vertical leader training systems; and ‘Majestic Pearl’ nectarine under Tatura trellis, vertical leader, cantilever trellis 1, and cantilever trellis 2 training systems. Calibrated spatial maps were used to portray fruit densities (e.g., fruit per tree, fruit per linear meter) throughout the orchard.
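The calibration step above reduces to fitting a zero-intercept line to platform detections versus ground truth counts. A minimal sketch in plain Python (not the Green Atlas implementation; the axis convention, mapping detections to ground-truth counts, is an assumption):

```python
def calibration_factor(detected, ground_truth):
    """Least-squares slope of the zero-intercept line y = k * x,
    with x = platform fruit detections per plot and y = ground-truth
    counts per plot (e.g., from the optical fruit grader)."""
    sxy = sum(x * y for x, y in zip(detected, ground_truth))
    sxx = sum(x * x for x in detected)
    return sxy / sxx
```

Calibrated counts are then obtained by multiplying detections in each plot or spatial bin by the factor before mapping.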
FD and color
Ground truth FD and color for validations of stationary mobile platform predictions were obtained on fruit from different crops (apricot, peach, nectarine, plum), training systems (Tatura trellis, vertical leader, vase, cantilever trellis 1, and cantilever trellis 2), row spacing (4.5 and 3.5 m), and canopy height [position of the fruit within the canopy: low (< 1.2 m), medium (1.2 to 2.0 m), and high (> 2.0 m)]. The stone fruit cultivars scanned at the Tatura SmartFarm were Majestic Pearl and Autumn Bright nectarine, Golden May apricot, Snow Flame 25 and August Flame peach, and Angeleno plum. Both FD and color were measured at different times (1 month before harvest and at harvest) during the season to increase the variability of the measures in the collected data. Data from different collection times were then pooled together and ground truth FD and color data were compared with stationary predictions. The mobile platform predictions were generated based on the information within the fruit bounding boxes generated by the mobile platform fruit detector (Fig. 1A).
Ground truth FDs were measured perpendicular to the main axis of the fruit with a digital Bluetooth caliper (OriginCal; iGAGING, San Clemente, CA, USA) (Fig. 1B). Estimates of FD were obtained from stationary and mobile scans at different stages (1 month before harvest and at harvest) throughout the season. Two independent reliability tests were carried out to determine the reliability of FD estimates obtained with 1) stationary and 2) mobile scans, due to the potential effect of the mobile platform motion on shape distortion of detected objects in images. Estimating size involved two main steps. First, fruit was detected by the same neural network deployed for fruit counts. Second, LiDAR and vision data were combined to estimate the metric width of the bounding box in meters. These two steps were intentionally separated to verify the accuracy of the approach in each of them. In the first phase, the ability to measure the metric width of fruit in the images was verified based on the assumption that fruit had perfect bounding boxes placed around them (Fig. 1A). Every image obtained from stationary scans was hand labeled for fruit with pixel precision. The labeling and the choice of which fruit to label were carried out manually rather than using software. A first reliability test was then carried out to determine the accuracy of FD predictions on stationary scans. Subsequently, a proprietary Green Atlas algorithm was used to automatically select a subset containing the most accurate FD estimates to use in mobile scans. This algorithm reduced the number of fruit on which FD was measured compared with the total detected fruit. However, in common orchard scans, thousands of fruit are measured, and it is not necessary to measure the size of every single fruit to obtain an accurate estimate of FD for the orchard- or row-subset. Mobile scan data were used to validate FD estimation. 
A special-purpose computer vision technique was used to combine camera images and LiDAR data to improve the accuracy of FD estimation.
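The technique itself is proprietary and not detailed here; as a hedged illustration of the underlying geometry, a pinhole-camera approximation converts a bounding-box width in pixels to meters given the LiDAR range to the fruit and the camera focal length in pixels (both assumed known):

```python
def bbox_width_m(bbox_width_px, range_m, focal_px):
    """Pinhole-camera approximation: object width (m) = pixel width
    * range to object (m, from LiDAR) / focal length (px)."""
    return bbox_width_px * range_m / focal_px
```

For example, a 60-px-wide bounding box at a 2.0-m range with a 1600-px focal length corresponds to a 75-mm FD.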
Validation of FD predictions in peach, nectarine, and plum commercial orchards obtained from mobile scans was carried out by comparing 1) the average FD of the detected fruit in three row sections per crop (nectarine row section length = 25 m; peach row section length = 10 m; plum row section length = 12 m) against 2) the average FD in sub-samples (n = 60 to 200) measured in the same row sections.
Spatial maps of CDI were generated from the mobile platform scans in the commercial orchards planted with ‘Glacier’ peach (7 Jan 2021), ‘Summer Flare 34’ nectarine (28 Jan 2021), and ‘October Sun’ plum [26 Jan and 25 Feb 2021 (i.e., 1 month before and at harvest, respectively)] at different times of the day.
Geographic processing and data analysis
An open-source geographic information system software [QGIS ver. 3.10 (QGIS Development Team 2021)] was used to process data obtained with the mobile platform at the Tatura SmartFarm. Plot-scale shapefiles of each plot within each experimental orchard were generated using QGIS and plot polygons were used as the intersecting layer for the extraction of data. Alignment was cross-checked by verifying the positions of experimental plot delimiters (i.e., trellis posts) in the RGB images collected by the mobile platform. Spatial maps were generated with a combination of QGIS and a Green Atlas proprietary online tool that provides an interface to extract data and quick visualization of spatial maps for registered users. Spatial variability of fruit number, fruit CDI, and FD in commercial orchards was assessed using coefficients of variation (CV).
Calibrations of detected fruit were carried out as described above to adjust for the effects of fruit hidden in the canopy, fruit detections from trees in the neighboring row(s), and the transformation from fruit no./image to fruit no./tree. The root mean square error (RMSE) was used to quantify prediction errors in the same unit as the predicted variables. Coefficients of determination (R2) and percent standard errors were used to assess the robustness of linear models. The FD and fruit color predictions were not calibrated, as their value was directly assessed against ground truth. Lin’s concordance correlation coefficient (rc) (Lin 1989) was used to validate predictions by returning a value that represents the degree of agreement between predicted and ground truth FD and fruit color.
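The error and agreement statistics named here follow standard definitions; a sketch of RMSE and Lin's rc (Lin 1989) in plain Python:

```python
import math

def rmse(pred, obs):
    """Root mean square error, in the unit of the predicted variable."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def lin_ccc(pred, obs):
    """Lin's concordance correlation coefficient: agreement between
    predictions and ground truth around the y = x line."""
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    vp = sum((p - mp) ** 2 for p in pred) / n
    vo = sum((o - mo) ** 2 for o in obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs)) / n
    return 2 * cov / (vp + vo + (mp - mo) ** 2)
```

Unlike Pearson's r, rc penalizes both location and scale shifts, so a biased but perfectly correlated predictor still scores below 1.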
Datasets were analyzed with a statistical software (Genstat 21st ed.; VSN International Ltd., Hemel Hempstead, Hertfordshire, England) and graphs were generated using a scientific graphing and data analysis software (SigmaPlot ver. 12.5; Systat Software, San Jose, CA, USA).
Results
Reliability of fruit number, FD, and fruit color predictions
Fruit number
The relationships of detected against ground truth fruit number in apricot, plum, peach, and nectarine are shown in Fig. 3. Distinct clusters were observed for different training systems with clearly separate linear relationships within each cluster. The only exception occurred in plum, where the linear fits for Tatura trellis and vase had similar slopes (Fig. 3B), although Tatura trellis trees bore more fruit than vase trees. Linear model statistics and calibration factors for each crop and training system are reported in Table 1. Tatura trellis and vase systems both had a good linear association in apricot (Fig. 3A) and peach (Fig. 3C), but the slopes of the relationships were visibly different between training systems. Different calibration factors were used for each training system, a standard practice advised by Green Atlas to improve the performance of the prediction. Prediction of fruit number in plum was similar between Tatura trellis and vase trees (i.e., similar calibration factor), although percent standard error (SE) was slightly higher in the latter. Overall, the percent SE was always ≤ 10%. In apricot, plum, and peach, predictions in Tatura trellis systems had lower errors compared with vase (apricot and plum) and vertical leader (peach) (Table 1). The highest errors were observed in nectarine systems due to a smaller sample size (number of plots, n) compared with apricot, plum, and peach (Table 1). The highest RMSE was 14 fruit/tree and was obtained in experimental plots of Tatura trellis plum that had a high average fruit number per tree (313) and a percent SE of only ∼2.5%.
Calibration factor, root mean square error (RMSE), and percent standard errors (SE) of the estimates of fruit number detected with the mobile platform in apricot, plum, peach, and nectarine cultivars.
Fruit diameter
Estimations of FD with the stationary mobile platform were tightly associated with ground truth measurements, regardless of row spacing (Fig. 4A), training system (Fig. 4B), canopy height (Fig. 4C), and crop (Fig. 4D). These factors, although thought to influence the prediction models, did not form distinct clusters in the linear association. Therefore, data of fruit from all the crops, training systems, row spacings, and canopy heights were pooled together (Fig. 4E). The best linear fit was very close to the y = x line and the prediction model had an rc > 0.90 and an RMSE < 5.0 mm.
The validation of the mobile platform predictions of FD from mobile scans in commercial orchards generated confidence similar to that of the predictions from stationary scans (Fig. 5). The best-fit line was very close to the line of perfect fit (y = x), the agreement (rc) was also >0.90, and the RMSE was <2.0 mm.
Fruit color
Predictions of CIELab/LCh color parameters measured by the mobile platform had different associations with ground truth measurements obtained with the portable colorimeter (Fig. 6). Linear models and statistics of predicted vs. observed CIELab/LCh color parameters are reported in Table 2. In peach and nectarine fruit, that is, fruit in which h° is a well-documented quality parameter in the literature (Ferrer et al. 2005; Robertson et al. 1990; Scalisi et al. 2020), the rc of the h° prediction models was 0.858 and 0.848, respectively, and the estimation error was below 10% (Table 2). In apricot and plum, despite h° being the better predicted color attribute, the reliability of the measure was much poorer (rc < 0.40) than in peach and nectarine, with apricot predictions showing large noise and plum predictions highlighting distinct clusters of data in the linear association (Fig. 6). Relationships between other CIELab color attributes (i.e., L*, a*, b*, and C*) measured by the mobile platform and by the portable colorimeter were poor (rc < 0.40) in all the stone fruit crops (Table 2).
Linear regression models of predicted fruit color attributes for peach, nectarine, apricot, and plum shown in Fig. 6. Linear models, probability (P), coefficient of determination (R2), Lin’s concordance correlation coefficient (rc), root mean square error (RMSE), and percent standard error (SE) of color attributes in stone fruit cultivars. Sample number reported in brackets under cultivar names.
Relationships between predicted and ground truth fruit CDI for ‘Snow Flame’ peach, ‘Majestic Pearl’ nectarine, ‘Golden May’ apricot, and ‘Angeleno’ plum are presented in Fig. 7A–D, respectively. The CDI prediction accuracies in peach, nectarine, and apricot were similar to the ones observed for h°. The use of CDI corrected the discontinuity that created distinct point clusters in predictions of h° in plum (Fig. 6D). Nevertheless, the relationship between predicted and observed CDI did not improve on that obtained with h° (Fig. 6D).
Spatial mapping of validated fruit number, FD, and fruit color
Density maps representing the spatial distribution of calibrated fruit number per tree and FD within the commercial orchard of ‘October Sun’ plum are shown in Fig. 8A and B, respectively. Summary statistics obtained at harvest in the ‘Glacier’ peach, ‘Summer Flare 34’ nectarine, and ‘October Sun’ plum commercial orchards are reported in Table 3. The prediction of plum fruit number had the smallest error, but overall, errors below 8% were obtained. Within-orchard variability of fruit number per tree was higher in plum (CV = 47%) than in other crops (Table 3), and was affected by the considerably higher fruit number in the south end of the orchard block (Fig. 8A). Both FD and CDI were also slightly more variable in plum than in nectarine and peach, although the overall spatial variability was low (CV ≤ 12% and ≤ 7% for FD and CDI, respectively). The mobile platform predictions indicated that ‘Summer Flare 34’ nectarine had similar FD metrics to ‘Glacier’ peach (Table 3).
Summary block statistics obtained in Varapodio S T & R (Ardmona, Victoria, Australia) commercial orchards at harvest. Standard errors (SE) of calibration factors are shown in brackets and block coefficients of variation (CV) are reported for fruit number, fruit diameter (FD), and color development index (CDI).
Spatial maps of fruit color were generated using the average CDI for each spatial point measured by the mobile platform. Figure 9A and B show CDI spatial color maps at harvest in the peach and nectarine commercial orchards, respectively. Visually, little variation of color across the orchard was noticeable in ‘Glacier’ peach and ‘Summer Flare 34’ nectarine, in line with the CVs reported in Table 3. Figure 9C and D show CDI in ‘October Sun’ plum 1 month before harvest and at harvest, respectively. Figure 9C revealed the position of pollenizers (a different cultivar whose fruit matured earlier than ‘October Sun’ plum) as dark-purple points, whereas ‘October Sun’ fruit was still greener. At harvest, fruit reached higher CDIs across the block and differences in fruit color between the main cultivar and the pollenizers were almost eliminated. The temporal change in coloration is also clearly visible in Fig. 9C and D: plums scanned 1 month before harvest appeared much greener (Fig. 9C) than they did at harvest (Fig. 9D).
Discussion
The mobile platform fruit detector was tested for its capability to predict fruit number, FD, and fruit color in peach, nectarine, plum, and apricot. Predictions of fruit number per tree were possible by calibrating detections of fruit number per image. To obtain absolute values of fruit number per tree, it is important to apply customized calibration factors for each tree training system and overall orchard conditions (i.e., the standard commercial practice for Green Atlas). Without calibration, the mobile platform currently generates relative fruit number densities that can be spatially mapped to support orchard management decisions at a block level.
The percentage of fruit number prediction error in the models was consistently between 2% and 6% in apricot and plum trained to Tatura trellis or vase and in peach trained to vertical leader or Tatura trellis (Table 1). The higher errors observed in nectarine were due to the smaller sample size but remained <10.2%. Fruit number calibration factors varied under different crops and training systems (Table 1). Overall, there was a tendency to lower prediction errors in tree systems with canopies trained on trellises, where fruit is typically found in configurations more comparable to 2D walls. This observation is in line with Bortolotti et al. (2021), who found that fruit detections are improved in 2D systems. In our study, predictions of fruit number in Tatura trellis systems had lower errors than in other training systems in all the crops (Table 1). Some of the reasons behind the occurrence of false negatives (i.e., fruit not detected) and false positives (i.e., single fruit detected as doubles, or non-fruit objects identified as fruit) were as follows: 1) higher leaf area density that caused high fruit occlusion, 2) smaller fruit size relative to leaf area, 3) the three-dimensionality of canopy architecture, 4) dark fruit skin color that makes fruit appearance similar to shadows and dark areas in the images, and 5) fruit detections from adjacent rows. To the best of our knowledge, literature on orchard fruit detections in peach, nectarine, plum, or apricot is limited (Kurtulmus et al. 2014; Saedi and Khosravi 2020).
FD estimations were very accurate (rc > 0.90) in images collected while the platform was stationary (Fig. 4) or mobile (Fig. 5). Crop, row spacing, training system, and canopy height had no significant effects on the estimation of fruit size (Fig. 4). These results suggest that no calibration is necessary for accurate estimations of fruit size. Fruit size data at harvest from the commercial orchard blocks were not available, so the orchard average FD (and total fruit number) predicted by the mobile platform could not be compared with packhouse results to further validate the prediction method. However, the validations carried out in this study showed fruit size estimation errors <5 mm, in line with errors previously reported in mango by Wang et al. (2017).
After fruit number calibration, it is possible to estimate yield (e.g., tonnes per hectare, kilograms per tree) by multiplying the total orchard block fruit number by the average FW derived from FD relationships, because final fruit fresh weight is strongly related to FD. Scalisi et al. (2019) observed a robust power relationship between FW and FD in ‘September Bright’ nectarine fruit at different fruit growth stages (FW = 0.0018 × FD2.709; R2 = 0.997, RMSE = 2.2 g, n = 150). Crop- or cultivar-specific FW to FD relationships need to be established for best yield predictions. Data obtained in a packing shed from a fruit grader where FD and FW are measured on individual pieces of fruit can be used to derive this relationship. A correct estimation of FD is important to calculate FW and in turn generate predictions of yield (Wang et al. 2017).
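Using the power relationship quoted above, a yield forecast from calibrated counts can be sketched as follows; the coefficients are those reported by Scalisi et al. (2019) for ‘September Bright’ nectarine and would need re-fitting for other crops and cultivars:

```python
def fruit_weight_g(fd_mm, a=0.0018, b=2.709):
    """FW (g) from FD (mm) via the power law FW = a * FD**b
    (coefficients for 'September Bright' nectarine, Scalisi et al. 2019)."""
    return a * fd_mm ** b

def block_yield_t(total_fruit_number, mean_fd_mm):
    """Block yield (tonnes) from the calibrated total fruit count and
    the platform-estimated mean FD."""
    return total_fruit_number * fruit_weight_g(mean_fd_mm) / 1e6
```

Under these coefficients, a block with 100,000 calibrated fruit at a 60-mm mean FD would forecast roughly 11.8 t.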
Estimation of fruit color is generally more complex than fruit number and FD because of the sensitivity of sensors to external environments and the typically subjective perception of color. McGuire (1992) highlighted how the use of CIELab and/or LCh color attributes guarantees an objective color measure. The literature shows that h° is associated with redness and maturity in peach and nectarine (Ferrer et al. 2005; Robertson et al. 1990; Scalisi et al. 2020, 2021a). Our results showed good accuracy of h° predictions in peach and nectarine (Fig. 6; Table 2). The clustering of points observed in the prediction of h° in ‘Angeleno’ plum was likely due to a combination of the range of the measured variable (0–360°) and the color properties of this cultivar. In fact, near harvest, most fruit have a dark purple coloration and high levels of wax coating, meaning that they will have h° between 0° and 90° and between 200° and 360°, leading to a gap in coloration between 90° and 200°. Small errors in the predictions of fruit skin color with h° ∼0° or 360° will project the linear regression points near the 0° or the 360° ticks in the graph (Fig. 6D), generating two additional distant clusters in the bottom right and top left corners of the graph. Poor relationships (rc < 0.40) between other CIELab color attributes (i.e., L*, a*, b*, and C*) measured by the mobile platform and by the portable colorimeter in all the stone fruit crops (Table 2) were most likely due to the fundamental difference between the contact sensing technology used for ground truthing and the prediction from proximal sensing. Proximal sensing might have been influenced by external light and skin reflectivity. However, our results suggest that predictions of h° were relatively sensor- and light-independent, as good accuracy was achieved in peach and nectarine without calibrations.
Our findings are in line with previous work in apricot, where two contact sensors with different illuminant settings measured similar h° values in ‘Golden May’ apricot fruit collected at different maturity stages (Scalisi et al. unpublished data). Therefore, among the color parameters examined, h° appears to be the most promising attribute for proximal machine vision.
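The hue angle discussed above is the angular LCh coordinate computed from the CIELab a* (redness-greenness) and b* (yellowness-blueness) axes. A minimal sketch of the conversion, with illustrative a*/b* values:

```python
import math

def hue_angle(a_star: float, b_star: float) -> float:
    """CIELCh hue angle h° in [0, 360) from CIELab a* (redness-greenness)
    and b* (yellowness-blueness): h° = atan2(b*, a*) in degrees."""
    return math.degrees(math.atan2(b_star, a_star)) % 360.0

# Illustrative values: negative a* (greenish skin) vs. positive a* (red skin)
print(round(hue_angle(-20.0, 40.0), 1))  # ~116.6 (green-yellow region)
print(round(hue_angle(35.0, 20.0), 1))   # ~29.7 (red-orange region)
```

Because h° depends only on the ratio of b* to a*, changes in lightness (L*) or chroma (C*) that scale both axes together leave it unchanged, which is consistent with its relative light-independence observed in this study.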
The new color parameter CDI, derived from h°, proved to be a useful index to predict the development of fruit color in peach (Fig. 7A) and nectarine (Fig. 7B) with relatively high accuracy (rc > 0.84) and can be successfully used for spatial mapping in these crops (Fig. 9A and B). In addition, spatial mapping of CDI over time, portraying the temporal and spatial variability in a plum orchard (Fig. 9C and D), revealed the position of pollenizers within the block. The relationship between predicted and observed CDI in plum (Fig. 7D) was no better than that obtained with h° ground truth. The lack of a significant relationship between ground truth CDI and CDI predicted from the mobile platform in plum was likely driven by 1) the disagreement between h° measured by the two devices and 2) interference from the waxy cuticle layer on plum skin when measuring with the contact sensor. Therefore, the poor agreement does not demonstrate that CDI predictions from the mobile platform failed to reflect the actual color development of plum. Indeed, the spatial maps in Fig. 9C and D suggest that CDI in plum followed a temporal evolution that reflected the change in fruit skin pigmentation. The use of CDI to monitor fruit color development, and potentially maturity, in fruit crops enables the estimation of fruit color, one of the most important fruit quality parameters. In terms of orchard management, spatial mapping of CDI can support decision making on 1) utilization of reflective mulching, 2) spraying of bio-stimulants that enhance anthocyanins and red pigmentation, 3) mechanical defoliation, and 4) thinning. Harvest timing could be determined once fruit achieve a pre-established CDI threshold that meets fruit quality standards for specific destination markets.
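The exact CDI formulation is given in Scalisi et al. (2022). As an illustration only of the underlying idea, the sketch below maps h° to a 0-to-1 “distance from green” that remains monotone through the purple hues of dark plum; the green and purple end-point hues are assumptions for this sketch, not the published constants:

```python
# Illustrative sketch of a hue-based colour development index, NOT the
# published CDI of Scalisi et al. (2022). Hues above 180 deg (purples) are
# unwrapped to negative angles so the green -> yellow -> red -> purple
# ripening trajectory is monotone, then rescaled to [0, 1] between assumed
# green (120 deg) and dark purple (-90 deg, i.e., 270 deg) end points.

def cdi_sketch(h_deg: float, green_h: float = 120.0,
               purple_h: float = -90.0) -> float:
    """Map h° (0-360) to a [0, 1] colour-progression index (illustrative)."""
    h = h_deg - 360.0 if h_deg > 180.0 else h_deg
    cdi = (green_h - h) / (green_h - purple_h)
    return min(max(cdi, 0.0), 1.0)

print(round(cdi_sketch(120.0), 2))  # green skin -> 0.0
print(round(cdi_sketch(30.0), 2))   # red-orange skin -> 0.43
print(round(cdi_sketch(290.0), 2))  # dark purple skin -> 0.90
```

The unwrapping step is what makes such an index more robust than raw h° for purple-skinned fruit: a small perturbation around 0°/360° no longer flips the value across the full scale.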
In this study, fruit color measurements were collected at different times of the day under different light environments. An influence of external light on color detection cannot be ruled out, as this study does not present enough evidence for a definitive recommendation. Uniform light conditions are generally achieved only at night, assuming the absence of fog. Nocturnal scans therefore have the potential to improve the prediction of CIELab/LCh parameters that may be more useful in other crops. Hue angle remains the single best predictor of redness development in fruit, as it is less affected by external light: it measures the angular relationship between yellowness-blueness (b*) and redness-greenness (a*), regardless of lightness and color intensity. This was empirically demonstrated in a previous study that compared measurements of CDI (i.e., a transformation of hue angle) under different light environments (Scalisi et al. 2022). Future evaluations of fruit skin color predictions in plum cultivars with yellow, orange, or red fruit skin will be needed to validate CDI as the best color predictor for machine vision applications in stone fruit.
Green Atlas currently supports an experimental feature that produces spatial maps of h°. Our results suggest that spatial maps of CDI over time are the preferred method of presenting variability in stone fruit skin color. The mobile platform can additionally deliver estimates of tree size, such as canopy area and density, that can be modeled in relation to fruit number, fruit size, and fruit color. This provides a powerful tool for researchers and growers to establish best practices to obtain uniform and high-quality fruit. Furthermore, this study highlighted the utility of the experimental orchards at the Tatura SmartFarm to test and evaluate cutting-edge technology in different crops and very diverse orchard conditions.
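As an illustration of how geolocated per-fruit estimates become a spatial map, the sketch below bins individual records into grid cells and averages CDI per cell. This is not the Green Atlas software; the record layout and cell size are assumptions for the sketch:

```python
# Illustrative spatial binning: aggregate geolocated per-fruit records
# (local x/y in metres plus a CDI value) into a grid of mean CDI per cell,
# the kind of summary that underlies a spatial colour map.
from collections import defaultdict

def grid_mean_cdi(records, cell_m: float = 10.0):
    """records: iterable of (x_m, y_m, cdi) tuples in local metres.
    Returns {(col, row): mean CDI} for each occupied grid cell."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, cdi in records:
        key = (int(x // cell_m), int(y // cell_m))
        sums[key][0] += cdi
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

pts = [(1.0, 2.0, 0.40), (4.0, 8.0, 0.60), (15.0, 3.0, 0.90)]
print(sorted(grid_mean_cdi(pts).items()))
```

The same aggregation applies unchanged to fruit counts (cell totals rather than means) or FD, which is what makes the combined number/size/color maps straightforward to produce from one scan.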
Conclusions
This study presented pioneering research on reliable, fast, and mobile predictions of fruit number, FD, and fruit color in stone fruit crops (peach, nectarine, plum, and apricot). In summary, fruit number estimations were accurate after an initial calibration process. Based on Green Atlas’ experience in commercial orchards, data processing takes on average between 1.5- and 2.0-fold the time needed to scan the orchard. Uncalibrated spatial maps can be used as a visual tool to assess spatial variability in several crop parameters (e.g., fruit number, fruit size, and fruit color). Future research should assess whether the technology can accurately predict fruit number across different crops, cultivars, row spacings, and training systems without the need for initial calibration. We found FD predictions were accurate under different measurement conditions (e.g., row spacing, crop, training system, and fruit position in the canopy). When used in combination, spatial maps of fruit number, FD, yield, and fruit color represent a practical tool for growers to better understand variability across an orchard block and to manage labor and orchard operations such as fruit thinning, pruning, spraying, and harvest logistics. Early fruit size, fruit quality, and yield predictions can inform and support decision making in the logistics and post-harvest handling of crops and can help forecast production revenues.
Future development of the mobile platform for stone fruit crops should focus on the following: 1) reducing the need for orchard-specific calibrations by using stone fruit datasets from different regions, crops, cultivars, tree and row spacings, and training systems; 2) evaluating the estimation of yield by applying established conversions from FD to FW for forecasting purposes; 3) comparing the accuracy of fruit color estimations in nocturnal and diurnal scans; and 4) streamlining the application of the spatial data to orchard operations and precision spatial management like flower and fruitlet thinning, pest and disease spraying, and fruit harvesting.
References
Anderson, N.T., Walsh, K.B. & Wulfsohn, D. 2021a Technologies for forecasting tree fruit load and harvest timing–from ground, sky and time Agronomy (Basel) 11 7 1409 https://doi.org/10.3390/agronomy11071409
Anderson, N.T., Walsh, K.B., Koirala, A., Wang, Z., Amaral, M.H., Dickinson, G.R., Sinha, P. & Robson, A.J. 2021b Estimation of fruit load in Australian mango orchards using machine vision Agronomy (Basel) 11 9 1711 https://doi.org/10.3390/agronomy11091711
Apolo-Apolo, O.E., Martínez-Guanter, J., Egea, G., Raja, P. & Pérez-Ruiz, M. 2020 Deep learning techniques for estimation of the yield and size of citrus fruits using a UAV Eur. J. Agron. 115 126030 https://doi.org/10.1016/j.eja.2020.126030
Bortolotti, G., Bresilla, K., Piani, M., Grappadelli, L.C. & Manfrini, L. 2021 2D tree crops training system improve computer vision application in field: A case study. In 2021 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor) IEEE 120 124 https://doi.org/10.1109/MetroAgriFor52389.2021.9628839
Cheng, H., Damerow, L., Sun, Y. & Blanke, M. 2017 Early yield prediction using image analysis of apple fruit and tree canopy features with neural networks J. Imaging 3 1 6 https://doi.org/10.3390/jimaging3010006
Ferrer, A., Remón, S., Negueruela, A.I. & Oria, R. 2005 Changes during the ripening of the very late season Spanish peach cultivar Calanda: Feasibility of using CIELAB coordinates as maturity indices Scientia Hort. 105 4 435 446 https://doi.org/10.1016/j.scienta.2005.02.002
Gené-Mola, J., Sanz-Cortiella, R., Rosell-Polo, J., Escolà, A. & Gregorio, E. 2021 In-field apple size estimation using photogrammetry-derived 3D point clouds: Comparison of 4 different methods considering fruit occlusions Comput. Electron. Agric. 188 106343 https://doi.org/10.1016/j.compag.2021.106343
Gongal, A., Amatya, S., Karkee, M., Zhang, Q. & Lewis, K. 2015 Sensors and systems for fruit detection and localization: A review Comput. Electron. Agric. 116 8 19 https://doi.org/10.1016/j.compag.2015.05.021
Gongal, A., Karkee, M. & Amatya, S. 2018 Apple fruit size estimation using a 3D machine vision system Inf. Process. Agric. 5 4 498 503 https://doi.org/10.1016/j.inpa.2018.06.002
Gutiérrez, S., Wendel, A. & Underwood, J. 2019a Ground based hyperspectral imaging for extensive mango yield estimation Comput. Electron. Agric. 157 126 135 https://doi.org/10.1016/j.compag.2018.12.041
Gutiérrez, S., Wendel, A. & Underwood, J. 2019b Spectral filter design based on in-field hyperspectral imaging and machine learning for mango ripeness estimation Comput. Electron. Agric. 164 104890 https://doi.org/10.1016/j.compag.2019.104890
Hung, C., Underwood, J., Nieto, J. & Sukkarieh, S. 2015 A feature learning based approach for automated fruit yield estimation 485 498 In: Mejias, L., Corke, P. & Roberts, J. (eds.). Field and service robotics. Springer, Cham, Switzerland https://doi.org/10.1007/978-3-319-07488-7_33
International Organization for Standardization 2008 ISO 11664-4:2008(en), Colorimetry — Part 4: CIE 1976 L*a*b* Color space https://www.iso.org/obp/ui/#iso:std:iso:11664:-4:ed-1:v1:en/ [accessed 14 Sep 2022]
Islam, M.S 2021 RGB-to-CIELab. GitHub repository https://github.com/sirajulislam/RGB-to-CIELab/blob/main/rgb2_labrgblch.py/ [accessed 14 Sep 2022]
Ji, W., Zhao, D., Cheng, F., Xu, B., Zhang, Y. & Wang, J. 2012 Automatic recognition vision system guided for apple harvesting robot Comput. Electr. Eng. 38 1186 https://doi.org/10.1016/j.compeleceng.2011.11.005
Kurtulmus, F., Lee, W.S. & Vardar, A. 2014 Immature peach detection in color images acquired in natural illumination conditions using statistical classifiers and neural network Precis. Agric. 15 1 57 79 https://doi.org/10.1007/s11119-013-9323-8
Lin, L.I 1989 A concordance correlation coefficient to evaluate reproducibility Biometrics 45 255 268 https://doi.org/10.2307/2532051
McGuire, R.G 1992 Reporting of objective color measurements HortScience 27 12 1254 1255 https://doi.org/10.21273/HORTSCI.27.12.1254
Méndez, V., Perez-Romero, A., Sola-Guirado, R., Miranda-Fuentes, A., Manzano-Agugliaro, F., Zapata-Sierra, A. & Rodríguez-Lizana, A. 2019 In-field estimation of orange number and size by 3D laser scanning Agronomy (Basel) 9 12 885 https://doi.org/10.3390/agronomy9120885
Regunathan, M. & Lee, W.S. 2005 Citrus fruit identification and size determination using machine vision and ultrasonic sensors 2005 ASAE Ann Intl Mtg. Am Soc Agric Biol Eng Tampa, FL, USA https://doi.org/10.13031/2013.19821
Robertson, J.A., Meredith, F.I., Horvat, R.J. & Senter, S.D. 1990 Effect of cold storage and maturity on the physical and chemical characteristics and volatile constituents of peaches (cv. Cresthaven) J. Agr. Food Chem. 38 3 620 624 https://doi.org/10.1021/jf00093a008
Saedi, S.I. & Khosravi, H. 2020 A deep neural network approach towards real-time on-branch fruit recognition for precision horticulture Expert Syst. Appl. 159 113594 https://doi.org/10.1016/j.eswa.2020.113594
Scalisi, A., O’Connell, M.G., Stefanelli, D. & Bianco, L.R. 2019 Fruit and leaf sensing for continuous detection of nectarine water status Front Plant Sci. 10 805 https://doi.org/10.3389/fpls.2019.00805
Scalisi, A., Pelliccia, D. & O’Connell, M.G. 2020 Maturity prediction in yellow peach (Prunus persica L.) cultivars using a fluorescence spectrometer Sensors (Basel) 20 22 6555 https://doi.org/10.3390/s20226555
Scalisi, A, O’Connell, M.G., Pelliccia, D., Plozza, T., Frisina, C., Chandra, S. & Goodwin, I. 2021a Reliability of a handheld Bluetooth colorimeter and its application to measuring the effects of time from harvest, row orientation and training system on nectarine skin color Horticulturae 7 8 255 https://doi.org/10.3390/horticulturae7080255
Scalisi, A., McClymont, L., Underwood, J., Morton, P., Scheding, S. & Goodwin, I. 2021b Reliability of a commercial platform for estimating flower cluster and fruit number, yield, tree geometry and light interception in apple trees under different rootstocks and row orientations Comput. Electron. Agric. 191 106519 https://doi.org/10.1016/j.compag.2021.106519
Scalisi, A, O’Connell, M.G., Islam, M.S. & Goodwin, I. 2022 A fruit colour development index (CDI) to support harvest time decisions in peach and nectarine orchards Horticulturae 8 5 459 https://doi.org/10.3390/horticulturae8050459
Stajnko, D., Lakota, M. & Hočevar, M. 2004 Estimation of number and diameter of apple fruits in an orchard during the growing season by thermal imaging Comput. Electron. Agric. 42 1 31 42 https://doi.org/10.1016/S0168-1699(03)00086-3
Stein, M., Bargoti, S. & Underwood, J. 2016 Image based mango fruit detection, localization and yield estimation using multiple view geometry Sensors (Basel) 16 11 1915 https://doi.org/10.3390/s16111915
Underwood, J.P., Hung, C., Whelan, B. & Sukkarieh, S. 2016 Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors Comput. Electron. Agric. 130 83 96 https://doi.org/10.1016/j.compag.2016.09.014
Wang, Z., Walsh, K.B. & Verma, B. 2017 On-tree mango fruit size estimation using RGB-D images Sensors (Basel) 17 12 2738 https://doi.org/10.3390/s17122738
Wendel, A., Underwood, J. & Walsh, K. 2018 Maturity estimation of mangoes using hyperspectral imaging from a ground based mobile platform Comput. Electron. Agric. 155 298 313 https://doi.org/10.1016/j.compag.2018.10.021