Abstract
In horticultural leafy vegetable production, continuously monitoring crop size indicators such as the leaf area index (LAI), leaf fresh weight (LFW), and leaf length (LL) is of practical value because these indicators are related to crop yields and harvest timing. The aim of this study was to develop a method that enables the continuous, automatic estimation of the LAI, LFW, and LL of a Chinese chive (Allium tuberosum) canopy by combining timelapse photography with allometric equations. LAI was estimated based on the gap fractions of nadir photographs (i.e., the fractions of nonleaf area), which were retrieved using the deep learning framework DeepLabv3+ with satisfactory accuracy (mean intersection over union, 0.71). This photographically estimated LAI (LAIphoto) corresponded well with the destructively measured LAI (LAIdest) (LAIphoto = 0.96LAIdest, R2 = 0.87). LAIphoto was then used as the input of allometric regression equations relating LAIphoto with LFW and LL. A power function (y = ax^b) fit the observed LAIphoto–LFW and LAIphoto–LL relationships well (R2 = 0.89 and 0.74, respectively). By combining nadir timelapse photography with the allometric equations, changes in the LFW and LL of a Chinese chive canopy were estimated successfully for a 9-month cultivation period. Our approach can replace time-consuming, labor-intensive manual measurements of these crop size indicators for Chinese chive and may be applicable to other crops with different parameter sets.
In the horticultural crop production of leafy vegetables, it is of practical importance to monitor crop size indicators such as the leaf area index (LAI), leaf fresh weight (LFW), and leaf length (LL). LAI, defined as half the total leaf area per unit ground surface area (Chen and Black, 1992), represents the amount of plant tissue available for photosynthesis and transpiration. LFW is closely related to grower income, as the marketable yields and prices of leafy vegetables are usually determined on a fresh-weight basis. LL determines the harvest timing, as some leafy vegetables are harvested when they reach the marketable LL specified in each market. These crop size indicators can be measured directly by the destructive sampling of crops, but destructive sampling is time-consuming and labor intensive, and is thus not practical to conduct regularly in commercial horticultural crop fields.
A red–green–blue (RGB) photograph contains visible information that is easily discernible by the human eye, and many attempts have been made to extract quantitative crop size indicators from RGB photographs. For example, RGB digital cameras were used to estimate the leaf area and LAI in cabbage and broccoli (Chien and Lin, 2005); lettuce (Jung et al., 2015; Yeh et al., 2014); wheat, soybeans, and corn (Liu et al., 2013); spinach (Nomura et al., 2020); and eggplant (Nomura et al., 2022). In addition, RGB digital cameras were used to estimate other crop size indicators, such as crop height (Sritarapipat et al., 2014) and weight (Sritarapipat et al., 2014; Van Henten and Bontsema, 1995). Compared with other sensors (e.g., stereo cameras, light detection and ranging, multispectral and hyperspectral cameras), RGB cameras are advantageous because of their affordability (Jin et al., 2020) and are thus suitable for application at small-scale crop production sites (i.e., by small-scale horticultural growers). RGB images can be taken using unmanned aerial vehicles (Córcoles et al., 2013; Raj et al., 2021; Roth et al., 2018), timelapse digital cameras (Kim et al., 2019; Larbi and Green, 2018; Nomura et al., 2022), and even smartphones (Bianchi et al., 2017; Confalonieri et al., 2013; Tichý, 2016). Among these devices, a timelapse digital camera is suitable for the automatic, long-term, continuous monitoring of plant canopies at fixed points (Ryu et al., 2012). In conventional horticultural greenhouses, timelapse cameras can be installed easily on horizontal beams above crop canopies and used to collect nadir-looking RGB photographs at a fixed interval (Nomura et al., 2022). Compared with other camera angles, nadir-looking photographs are convenient because the camera angle can be adjusted easily (e.g., by using a bubble level). In addition, the nadir camera angle can be advantageous for monitoring canopies with high LAI values: nadir photographs are less susceptible to saturation because, even at high LAI values, they tend to preserve nonleaf pixels (i.e., gaps), from which the LAI is estimated (Liu et al., 2013).
Among the many crop size indicators that may be computable from RGB images, LAI can be estimated based on a sound theoretical background. LAI can be estimated based on the Beer–Lambert law, which originally describes the attenuation of light in a canopy (Monsi et al., 2005; Yan et al., 2019). In photographic LAI estimation, the gap fraction (i.e., the nonleaf area ratio in a photograph) is used instead of light attenuation. Photographically estimated LAI values can then be related to other crop size indicators (LFW and LL), as an increase in leaf area is accompanied by increases in leaf weight and length. These allometric relationships in plants have been studied extensively in forests (Bloomberg et al., 2008; Kebede and Soromessa, 2018; Paul et al., 2016) and several crops (Bakhshandeh et al., 2012; Colaizzi et al., 2017; Reddy et al., 1998). Such allometric relationships enable us to estimate a difficult-to-measure objective variable (e.g., aboveground biomass) from easy-to-measure input variables (e.g., stem diameter) once regression equations are established based on the direct measurements of these variables. LAI values estimated from photography may be suitable as input variables of allometric equations because they can be obtained nondestructively, continuously, and automatically. Several studies have used allometric equations in combination with tree and field crop photography (Hays et al., 2020; Quint and Dech, 2010; Raj et al., 2021). A similar approach may be applicable for estimating leafy vegetable size indicators, which are of great concern to horticultural growers.
The aim of our study was to develop a method that enables the continuous, automatic estimation of LAI, LFW, and LL of a Chinese chive (Allium tuberosum) canopy by combining timelapse photography with allometric equations. Chinese chive is a popular herb in Japan and is cultivated for culinary and medicinal use (Imahori et al., 2007; Takagi, 1994). Its green leaves can be harvested multiple times in a cultivation period, as a tuft of leaves can regrow from the base, where the leaves are snipped during harvesting. The continuous estimation of LFW and LL of Chinese chive is of practical value because the harvestable yield and harvest timing of Chinese chive are determined based on LFW and LL, respectively. Here, we obtained regression equations relating photographically estimated LAI (LAIphoto) with LFW and LL based on destructive measurements and nadir photography. To derive LAIphoto, a state-of-the-art deep learning architecture, DeepLabv3+ (Chen et al., 2018), was applied to the leaf-to-nonleaf segmentation task. The derived regression equations were then applied to estimate the LFW and LL of Chinese chive canopies nondestructively from nadir photographs alone.
Materials and Methods
Field experiments
Plant material and cultivation.
Chinese chive (cv. Miracle Greenbelt; Musashino Seed Co., Ltd., Tokyo, Japan) plants were cultivated in a greenhouse (20 m long, 7.5 m wide, and 2.8 m high) located at the Kochi Agriculture Research Center (lat. 33°35′27.9″N, long. 133°38′38.8″E). In the greenhouse, four ridges, each measuring 14 m long, 1.5 m wide, and 0.3 m tall, were constructed (Fig. 1). Within each ridge, clumps of Chinese chive plants (four individual plants per clump) were planted in a row at a 28-cm interval in both the along-the-ridge and across-the-ridge directions (four rows of plants were formed along the ridges). The plants were transplanted into the greenhouse on 9 July 2020 after a 5-month nursery period in plastic seed trays, and the plants continued to grow until 16 May 2021.
Each ridge was covered with white plastic mulch that had holes 14 cm in diameter at 28-cm intervals. On the mulch, sprinkler hoses were placed at the center of each ridge and provided water (adjusted between 149 and 373 mL⋅d⁻¹ per clump depending on solar radiation and leaf area). Together with the irrigation water, liquid fertilizer (NPK = 10:4:6; Tomy Liquid Fertilizer Black, Katakura & Co-op Agri Corporation, Tokyo, Japan) was used to provide 53.6 g⋅m⁻² N from 29 Oct. 2020 until 11 May 2021, in addition to the base fertilizer provided before transplantation (NPK = 14:11:13, Ecolong® 413-100, Jcam Agri. Co., Ltd., Tokyo, Japan), which amounted to 27.8 g⋅m⁻² N. The top and side greenhouse windows were opened so that the greenhouse temperature did not exceed 24 °C. During the winter, a heater was operated to maintain greenhouse temperatures greater than 10 °C. Daytime CO2 concentration was maintained above 400 ppm using a CO2 generator from 25 Dec. 2020 until 21 Mar. 2021.
Acquisition of nadir photographs and corresponding LAI, LFW, and LL values for model construction.
To derive regression equations relating LAIphoto to LFW and LL, a dataset of nadir photographs and corresponding destructively sampled LAI, LFW, and LL values was prepared. Nadir canopy photographs were taken with a timelapse digital camera (TLC-200 Pro; Brinno, Taipei, Taiwan) (Fig. 2A and D). The timelapse camera was attached to a 1.6-m-tall aluminum frame placed above the Chinese chive canopy (see Fig. 1). After a photograph was taken, destructive measurements of LAI, LL, and LFW immediately below the timelapse camera were conducted, for which four clumps of plants in the row were snipped at the base just above ground level. To prevent LFW loss resulting from transpiration during the midday hours, the destructive measurements were conducted in the early morning (i.e., within 2 hours after dawn). LAI and LFW were calculated by dividing the raw measurements of the leaf area and fresh weight (per clump) by 0.28 × 0.28 m² (because the clumps of Chinese chive plants were planted at 28-cm intervals in a grid-like pattern). LL was measured as the maximum leaf length within a clump of Chinese chive plants. These measurements were conducted every week from 8 Aug. 2020 until 11 May 2021. The measurement locations were changed every week to avoid any disturbance from previous destructive samplings, and the aluminum frame holding the timelapse camera was moved every week. In addition to these weekly destructive measurements, all plants were harvested six times during the cultivation period. (Chinese chive can be harvested several times throughout the year because leaves regrow after harvesting.) The plants were harvested on 27 Oct. and 7 Dec. 2020; and 25 Jan., 9 Mar., 12 Apr., and 10 May 2021.
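The conversion from per-clump measurements to per-ground-area indicators is a simple division by the ground area allotted to each clump; the following minimal Python sketch illustrates the calculation with hypothetical per-clump values.

```python
# Convert per-clump measurements to per-ground-area crop size indicators.
# The 0.28-m grid spacing comes from the planting layout; the per-clump
# values below are hypothetical examples, not measured data.
GROUND_AREA = 0.28 * 0.28  # m2 of ground surface allotted to each clump

leaf_area_per_clump = 0.12      # hypothetical leaf area of one clump (m2)
fresh_weight_per_clump = 118.0  # hypothetical leaf fresh weight of one clump (g)

lai = leaf_area_per_clump / GROUND_AREA     # ~1.5 m2 leaf per m2 ground
lfw = fresh_weight_per_clump / GROUND_AREA  # ~1505 g per m2 ground
print(round(lai, 2), round(lfw, 1))
```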
The timelapse camera took one still photograph every 10 min and saved the series of photographs as a video in .avi format. The resolution of the photographs was 1280 × 720 pixels. Preinstalled optical lenses (BCS 019, Brinno) with view angles of 112° were used. The focus of each timelapse camera was adjusted manually, and the white balance was set to automatic. All other settings (e.g., exposure, saturation, contrast, sharpness) were the default values.
Long-term estimation of crop size indicators using timelapse photography.
Along with the data acquisition procedures described thus far, continuous estimations of LL and LFW were conducted in the same greenhouse throughout the cultivation period. A timelapse digital camera (TLC-200 Pro, Brinno) was installed on the horizontal beam of the greenhouse, ≈1.7 m above the top of the ridge. Timelapse photographs were taken every hour with the same settings described in the previous section. Daily changes in LL and LFW were estimated based on daily photographs taken between 8:00 and 9:00 am.
Image analysis
Figure 3 summarizes the flow of analysis in our study. The entire analytical procedure can be divided into three phases: 1) the training of a semantic segmentation model, 2) the estimation of LAIphoto, and 3) the development of allometric equations for estimating LFW and LL.
In phase 1, a semantic segmentation model, DeepLabv3+, was trained to segment leaf and nonleaf areas using a dataset of Chinese chive images with corresponding, manually prepared ground-truth labels (see section “Semantic segmentation using DeepLabv3+”). The performance of DeepLabv3+ was also evaluated using a test dataset.
In phase 2, LAIphoto was computed based on each photograph’s gap fraction (P0). To do this, the obtained timelapse videos were first decomposed into a series of still photographs using free software called FFmpeg (version 4.2.1). Distortion was removed from the photographs by the procedure described in section “Undistortion of photographs,” and the photos were then cropped to 400 × 400 pixels to exclude side pixels corresponding to furrows (i.e., LAI and LFW per unit of ground area were calculated based on the area of the ridges). Next, the cropped photographs were segmented into leaf and nonleaf pixels using DeepLabv3+. Last, LAIphoto was estimated using gap-fraction theory (see section “LAI estimation theory”).
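As a concrete illustration of this preprocessing, the sketch below decomposes a timelapse video into still frames with FFmpeg and crops each frame to a 400 × 400 pixel window; the file names and crop offsets are hypothetical placeholders rather than the exact values used in the study.

```python
# Minimal sketch of the phase-2 preprocessing: extract still frames from a
# timelapse .avi with FFmpeg, then crop each 1280 x 720 frame to a
# 400 x 400 pixel region of interest over the ridge.
import subprocess
from pathlib import Path
from PIL import Image

video = "timelapse.avi"          # hypothetical input file
frame_dir = Path("frames")
frame_dir.mkdir(exist_ok=True)

# Decompose the video into numbered PNG frames.
subprocess.run(["ffmpeg", "-i", video, str(frame_dir / "frame_%05d.png")], check=True)

# Crop each frame to the region of interest (offsets are hypothetical).
LEFT, TOP, SIZE = 440, 160, 400
for path in sorted(frame_dir.glob("frame_*.png")):
    img = Image.open(path)
    img.crop((LEFT, TOP, LEFT + SIZE, TOP + SIZE)).save(path)
```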
In phase 3, allometric equations relating LAIphoto with LFW and LL were derived based on the LAIphoto values and destructively measured LFW and LL values (see section “Allometric equations for estimating leaf length and fresh weight” and previous section “Acquisition of nadir photographs and corresponding LAI, LFW, and LL values for model construction”).
Semantic segmentation using DeepLabv3+.
Undistortion of photographs.
A photograph can contain distortion that depends on the angle and focal length of the lens. To calculate the P0 values correctly, the distortion in each photograph was removed using an open-source image analysis package, OpenCV (version 4.4.0) (Bradski, 2000), implemented in Python (version 3.6.7). The procedure was as follows (Zhang, 2000): 1) a chessboard pattern (7 × 7) attached to a planar surface was prepared, 2) 15 photographs of the plane were taken under different orientations, 3) the corners of the chessboard were detected, and 4) the five intrinsic camera parameters, the distortion coefficients, and the extrinsic parameters (i.e., the rotation and translation vectors) were estimated. The obtained intrinsic camera parameters and distortion coefficients were saved and used to undistort all the other images taken by the same camera.
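The following minimal sketch shows how this calibration and undistortion procedure can be implemented with OpenCV; the chessboard pattern size follows the text (interpreted here as the inner-corner grid), whereas the file names are hypothetical.

```python
# Minimal sketch of chessboard camera calibration (Zhang, 2000) and
# undistortion with OpenCV.
import glob
import cv2
import numpy as np

pattern = (7, 7)  # inner-corner grid of the chessboard (assumption)
# 3-D corner coordinates of the chessboard in its own plane (z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calibration/*.png"):  # the 15 chessboard photographs
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsic matrix, distortion coefficients, and extrinsics.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort any other photograph taken by the same camera.
img = cv2.imread("canopy_photo.png")
cv2.imwrite("canopy_photo_undistorted.png", cv2.undistort(img, mtx, dist))
```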
LAI estimation theory
Under the Beer–Lambert law, the gap fraction P0(θ) of a canopy viewed at zenith angle θ is related to the LAI as P0(θ) = exp[−G(θ)·Ω(θ)·LAI/cos θ] (Eq. [2]), where G(θ) is the leaf projection function and Ω(θ) is the clumping index. Inverting Eq. [2] yields the LAI estimator LAI = −cos θ·ln P0(θ)/[G(θ)·Ω(θ)] (Eq. [3]). When applying Eq. [3] to estimate LAIphoto, however, G(θ) should be known a priori. G(θ) is often assumed to have a constant value of 0.5, which corresponds to the spherical leaf-angle distribution in which the leaf normals are oriented in all directions with equal probability (de Wit, 1965). The spherical leaf-angle distribution has been considered a good first approximation for most crop canopies (Goudriaan, 1988; Spitters, 1986), and several previous studies have applied this assumption successfully to estimate the LAI values of crops (Liu and Pattey, 2010; Nomura et al., 2020). The camera view zenith angle was fixed at θ = 0° (nadir) because of the ease of camera installation at θ = 0°, given that horizontal beams are available in typical greenhouse facilities. Several studies have reported successful LAI estimations with the nadir angle (Liu and Pattey, 2010; Nomura et al., 2020).
For the estimation of LAIphoto, the clumping index Ω(θ) was further assumed to be 1, which corresponds to randomly distributed leaves within the canopy.
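To make the inversion concrete, the sketch below computes LAIphoto from a gap fraction under these assumptions (G(θ) = 0.5, Ω(θ) = 1, θ = 0°); it is a minimal illustration of Eq. [3], not the study's exact code.

```python
# Invert the Beer-Lambert gap-fraction relation (Eq. [3]) to estimate LAI.
import numpy as np

def gap_fraction(mask):
    """Fraction of nonleaf pixels in a binary mask (leaf = 1, nonleaf = 0)."""
    return 1.0 - mask.mean()

def lai_from_gap_fraction(p0, g=0.5, clumping=1.0, theta_deg=0.0):
    """LAI = -cos(theta) * ln(P0) / (G * Omega)."""
    theta = np.radians(theta_deg)
    return -np.cos(theta) * np.log(p0) / (g * clumping)

# Example: a nadir photograph with a gap fraction of 0.29 gives LAI ~ 2.5.
print(lai_from_gap_fraction(0.29))
```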
Allometric equations for estimating leaf length and fresh weight
[Table: Regression equations used to relate the photographically estimated leaf area index (LAIphoto) with the destructively estimated leaf area index (LAIdest), leaf fresh weight (LFW), and leaf length (LL).]
The relationship between LAIphoto and LFW was fitted with a power function of the form y = ax^b (Eq. [4]), where a and b are empirically determined parameters. The relationship between LAIphoto and LL was also fitted with Eq. [4] because the relationship was found to be highly nonlinear.
Linear regressions were performed based on least-squares fitting using the Python statsmodels library (version 0.13.2) (Seabold and Perktold, 2010). The nonlinear regression in Eq. [4] was performed by minimizing the sum of the squared errors between the measured and predicted values using the Python lmfit package (version 1.0.1) (Newville et al., 2014) based on the Levenberg–Marquardt method.
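A minimal sketch of such a power-function fit with lmfit's default Levenberg–Marquardt minimizer is shown below; the data arrays are hypothetical and serve only to illustrate the workflow.

```python
# Fit the allometric power function y = a * x**b (Eq. [4]) with lmfit.
import numpy as np
from lmfit import Model

def power_law(x, a, b):
    """Allometric power function."""
    return a * np.power(x, b)

lai_photo = np.array([0.3, 0.8, 1.5, 2.2, 3.0])         # hypothetical LAIphoto values
lfw = np.array([310.0, 820.0, 1490.0, 2150.0, 2900.0])  # hypothetical LFW (g/m2)

model = Model(power_law)
result = model.fit(lfw, x=lai_photo, a=1000.0, b=1.0)  # initial guesses for a and b
print(result.fit_report())
print(result.params["a"].value, result.params["b"].value)
```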
Results and Discussion
Leaf-to-background segmentation.
In our analysis, the canopy photographs were segmented into leaf and nonleaf areas using DeepLabv3+ (Fig. 2C and F). The segmentation performance was evaluated based on the intersection over union (IoU) values computed for the 10 test images with corresponding ground-truth labels. The mean IoU of the 10 test images was relatively high (0.71), indicating the overall good performance of DeepLabv3+. The true and estimated P0 values also agreed well, with small errors (Fig. 4). The regression line between the true and estimated P0 values had a high coefficient of determination (R2 = 0.917), with a slope value close to unity (y = 1.01x – 0.03).
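For reference, a mean IoU over the leaf and nonleaf classes can be computed from binary masks as in the sketch below; the exact averaging scheme (over classes and then over test images) is an assumption here.

```python
# Minimal sketch of the intersection-over-union (IoU) metric for a binary
# leaf (1) / nonleaf (0) segmentation, assuming NumPy masks of equal shape.
import numpy as np

def class_iou(pred, truth, cls):
    """IoU of a single class for one image."""
    p, t = (pred == cls), (truth == cls)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0

def mean_iou(pred, truth, classes=(0, 1)):
    """Mean of the per-class IoU values (leaf and nonleaf)."""
    return float(np.mean([class_iou(pred, truth, c) for c in classes]))

# The test-set score is then the average of mean_iou over all test images.
```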
When the crops were small and the leaves were narrow, some background pixels around the leaves were misclassified as leaf pixels (Fig. 2C). In addition, as shown in Fig. 2C, exposed soil pixels tended to be misclassified as leaves. These misclassifications at the early leaf growth stage, however, had little effect on the accuracy of the P0 estimates, because P0 values were high (≈0.8) at that stage. At later growth stages with high LAI values (i.e., P0 ≤ 0.3 in Fig. 4), P0 estimation errors tended to be larger because photographs of high-LAI canopies contain many small gaps, which are more difficult to detect than large gaps. The small-gap detection accuracy may be improved by using a camera with a higher spatial resolution (Chianucci, 2016).
In our study, most camera settings related to image quality (e.g., shutter speed, ISO, white balance, contrast, sharpness, and saturation) were adjusted automatically according to the internal algorithm of the camera device such that the captured photographs would appear natural to the human eye. Nevertheless, the captured photographs varied considerably in appearance because the background light conditions changed with the positions of the sun and clouds. Background light conditions have been a critical issue in conventional leaf-to-background segmentation methods [e.g., thresholding methods based on pixel colors (Leblanc et al., 2005)]. With these conventional methods, it is often necessary to take photographs under specific background light conditions (e.g., under diffuse light). In deep-learning-based segmentation [more specifically, segmentation based on convolutional neural networks (CNNs)], in contrast, background light conditions can be less critical (Díaz et al., 2021) because CNN models can perform leaf-to-nonleaf segmentation based not only on pixel color, but also on many other features (e.g., edges, shapes, textures) extracted through the convolution process (Gu et al., 2018). Therefore, in our study, the leaf-to-background segmentation performed by DeepLabv3+ seemed to be less affected by background light conditions. At later growth stages, however, the segmentation performance decreased when leaf pixels appeared white as a result of the reflection of sunlight, as these leaves were possibly misclassified as white plastic mulch. Segmentation performance may be improved by adding more training data to allow the deep-learning model to experience more image patterns (Ning et al., 2020; Shorten and Khoshgoftaar, 2019).
It was reported that the imbalance of segmentation classes (in our case, leaf and nonleaf areas) in a training dataset can be detrimental in the application of CNNs to classification problems (Buda et al., 2018). Thus, we examined the relative frequencies of the P0 values in the training images (Fig. 5). As shown in Fig. 5, the P0 values in the training images were distributed more or less uniformly between zero (i.e., completely filled with leaves) and one (i.e., no leaves), indicating little imbalance in the proportions of growth stages in the training dataset. Therefore, variations in the P0 estimation accuracies were probably not caused by imbalanced leaf and nonleaf areas in the training dataset.
LAI estimation accuracy.
Figure 6 shows a comparison between LAIdest and LAIphoto. LAIphoto was estimated based on the P0 values in the nadir photographs (Eq. [3]), whereas LAIdest was estimated based on the destructive sampling of four clumps of Chinese chive plants. LAIphoto and LAIdest showed a strong linear relationship (R2 = 0.87), with a slope close to unity (LAIdest = 0.96LAIphoto), indicating the applicability of the Beer–Lambert law (Eq. [2]) for estimating the LAI of a Chinese chive canopy. In addition, the nearly 1:1 relationship between LAIdest and LAIphoto indicates that the assumptions of G(θ) = 0.5 (i.e., a spherical leaf-angle distribution) and Ω(θ) = 1 (i.e., randomly distributed leaves) were reasonable in the LAI estimation of Chinese chive plants. This result is consistent with those reported in several previous studies, in which the same method was validated for the crop canopies of corn, soybeans, spinach, and wheat (Liu et al., 2013; Nomura et al., 2020). The relationship between LAIdest and LAIphoto was more scattered at higher LAI values, probably because of larger errors in the P0 estimations (see Fig. 4). In addition, when LAI is large (i.e., when P0 is small), small errors in P0 values can be converted to large errors in the LAI because of the high sensitivity of Eq. [3] to small P0(θ) values (Liu et al., 2013). In our case, the LAI estimation errors seemed to become larger when the LAI was ≥2.5, which corresponded to P0 = 0.29. These results highlight the importance of leaf-to-background segmentation in accurately estimating the LAI from photographs.
Regression equations for estimating LFW and LL.
Figure 7 shows the relationship between LAIphoto and LFW per unit of ground area. The relationship between LAIphoto and LFW appeared to be linear, as reported in several previous studies (Cho et al., 2007; Ghoreishi et al., 2012). However, a model evaluation based on the Akaike information criterion (AIC) indicated that the power function (LFW = 1005LAIphoto^0.96) would yield a smaller prediction error than linear functions (Table 2). The AIC of the power function was less than those of the linear functions with and without the y-intercept.
[Table 2: Performance of regression models in estimating the leaf fresh weight (LFW) from the photographically estimated leaf area index (LAIphoto).]
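For illustration, the sketch below compares a linear model and a power model with a least-squares AIC [n·ln(SSE/n) + 2k, additive constants omitted]; the data are hypothetical and the comparison demonstrates only the procedure, not the study's result.

```python
# Compare candidate regression models with the Akaike information criterion,
# computed from each model's residual sum of squares (SSE); smaller is better.
import numpy as np
import statsmodels.api as sm
from lmfit import Model

lai_photo = np.array([0.3, 0.8, 1.5, 2.2, 3.0])         # hypothetical LAIphoto
lfw = np.array([310.0, 820.0, 1490.0, 2150.0, 2900.0])  # hypothetical LFW (g/m2)

def aic_from_sse(sse, n, k):
    """Least-squares AIC: n*ln(SSE/n) + 2k."""
    return n * np.log(sse / n) + 2 * k

n = len(lfw)

# Linear model with a y-intercept (two parameters), ordinary least squares.
linear = sm.OLS(lfw, sm.add_constant(lai_photo)).fit()
sse_linear = float(np.sum(linear.resid ** 2))

# Power model LFW = a * LAIphoto**b (two parameters), Levenberg-Marquardt.
power = Model(lambda x, a, b: a * np.power(x, b)).fit(lfw, x=lai_photo, a=1000.0, b=1.0)
sse_power = float(np.sum(power.residual ** 2))

print("AIC linear:", aic_from_sse(sse_linear, n, 2))
print("AIC power: ", aic_from_sse(sse_power, n, 2))
```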
Figure 8 shows the relationship between LAIphoto and LL. In contrast to the apparent linear relationship between LAIphoto and LFW per ground area, the relationship between LAIphoto and LL was highly nonlinear; the slope of the LAIphoto–LL curve became less steep with increased LAIphoto. This result suggests that an increase in the leaf area of Chinese chive was caused by different mechanisms depending on the leaf growth stages. At a low LAI just after harvesting, an increase in the leaf area is caused mainly by leaf elongation (i.e., an increase in LL). As the leaf area increases, however, increases in leaf width and tillering (i.e., the production of side shoots) contribute more to the increase in leaf area, resulting in the saturated LAIphoto–LL curve in Fig. 8. The power function (Eq. [4]) fit well to the observed LAIphoto–LL relationship (LL = 339LAIphoto^0.29), with a high R2 value of 0.74. The standard deviations (i.e., error bars) of LL in the four clumps of Chinese chive plants (Fig. 8) were small compared with those of LFW (Fig. 7), suggesting that plants within a canopy tend to converge to a similar height despite their different weights (Nagashima and Terashima, 1995).
Long-term estimations of LFW and LL.
Figure 9 shows the daily changes in LFW and LL estimated from timelapse photography combined with the regression equations relating LAIphoto to LFW and LL. The timelapse photography captured six periodic growth curves, which ended with steep drops at the harvesting times. Each growth curve (solid lines) increased steadily with minimal fluctuations, indicating the capability of timelapse photography to monitor crop growth continuously. LFW increased somewhat linearly during each growth period, whereas the rate of increase in LL slowed toward the end of each growth period. Both the photographically estimated LFW and LL corresponded well with the direct measurements. Figure 9 highlights the usefulness of the photographic–allometric method in monitoring crop size indicators.
Limitations.
In our study, photographs were taken from a height less than 2 m. In many conventional greenhouses in Japan, horizontal beams are usually available at 2 m above the ground, and thus, the situation described in our study is not a special case, at least in Japan. However, in many cases, a timelapse digital camera is installed farther from the canopy, and the photographs cover a larger ground area, resulting in a decreased spatial resolution (i.e., the photographs become coarser). In these cases, the accuracy of LAI estimation deteriorates because of the increased number of mixed pixels (i.e., pixels that cannot be categorized as either leaf or nonleaf areas). This deterioration of the LAI estimation accuracy will be more severe in canopies with higher LAI values because of the associated increase in small gaps, which are difficult to detect (Chianucci, 2016). In addition, in a canopy photograph taken from a greater distance, the region of interest for LAI estimation (i.e., the cropped photographic area; 400 × 400 pixels in our study) must be reduced so that it still excludes furrow pixels, because the derived regression equations (i.e., the parameters in Figs. 6–8) were calibrated using cropped photographs that did not contain furrow pixels (i.e., the calculations were performed using only the parts of the photographs corresponding to ridge areas).
Conclusion
This study combined photography with allometry to estimate crop size indicators such as LAI, LFW, and LL nondestructively. The state-of-the-art deep-learning architecture DeepLabv3+ successfully segmented nadir photographs of a Chinese chive canopy into leaf and background pixels. From the segmented images, LAIphoto was estimated based on the Beer–Lambert law. LAIphoto was then used to estimate LFW and LL based on allometric relationships. The allometric relationships of LAIphoto to LFW and LL could be represented well by power functions. Once these relationships were established, nondestructive, continuous, automated estimations of LFW and LL became possible using a timelapse digital camera.
There are several crucial processes that need to be improved in future work. Leaf-to-nonleaf segmentation is one of these processes, and it can significantly influence the accuracy of LAI estimation. For this task, deep-learning frameworks (e.g., DeepLabv3+) show promise, and future studies should explore the optimal use of these techniques (e.g., model structure, model parameters, training data). Leaf-to-nonleaf segmentation can also be influenced by other factors such as the spatial resolution and illumination of photographs; thus, optimal camera use (e.g., device and camera settings) should be explored in future work. In addition, the conversion of LAIphoto to LFW and LL, which depended entirely on empirical regression equations in our study, may be improved with additional knowledge about the physical properties of leaves and canopies (Koyama and Smith, 2022). For example, LAIphoto may be converted to LFW by multiplying LAIphoto by the leaf mass per area, which can be obtained by leaf-scale destructive measurements. With such additional knowledge, it may be possible to add physical meaning and generalizability to the regression models and to decrease the need for labor-intensive data collection (see section “Field experiments”) for model construction. Once allometric models are established based on measurements, the proposed approach should be applicable to other leafy vegetable crop canopies, enabling the continuous, automated measurement of crop size indicators.
Literature Cited
Archontoulis, S.V. & Miguez, F.E. 2015 Nonlinear regression models and applications in agricultural research Agron. J. 107 786 798 https://doi.org/10.2134/agronj2012.0506
Bakhshandeh, E., Soltani, A., Zeinali, E. & Kallate-Arabi, M. 2012 Prediction of plant height by allometric relationships in field-grown wheat Cereal Res. Commun. 40 413 422 https://doi.org/10.1556/CRC.40.2012.3.10
Baret, F., de Solan, B., Lopez-Lozano, R., Ma, K. & Weiss, M. 2010 GAI estimates of row crops from downward looking digital photos taken perpendicular to rows at 57.5° zenith angle: Theoretical considerations based on 3D architecture models and application to wheat crops Agr. For. Meteorol. 150 1393 1401 https://doi.org/10.1016/j.agrformet.2010.04.011
Bianchi, S., Cahalan, C., Hale, S. & Gibbons, J.M. 2017 Rapid assessment of forest canopy and light regime using smartphone hemispherical photography Ecol. Evol. 7 10556 10566 https://doi.org/10.1002/ece3.3567
Black, T.A., Chen, J.-M., Lee, X. & Sagar, R.M. 1991 Characteristics of shortwave and longwave irradiances under a Douglas-fir forest stand Can. J. For. Res. 21 1020 1028 https://doi.org/10.1139/x91-140
Bloomberg, M., Mason, E.G., Jarvis, P. & Sedcole, R. 2008 Predicting seedling biomass of radiata pine from allometric variables New For. 36 103 114 https://doi.org/10.1007/s11056-008-9086-7
Bozdogan, H 1987 Model selection and Akaike’s information criterion (AIC): The general theory and its analytical extensions Psychometrika 52 345 370 https://doi.org/10.1007/BF02294361
Bradski, G 2000 The OpenCV Library Dr. Dobb’s J. Softw. Tools 120 122 125
Buda, M., Maki, A. & Mazurowski, M.A. 2018 A systematic study of the class imbalance problem in convolutional neural networks Neural Netw. 106 249 259 https://doi.org/10.1016/j.neunet.2018.07.011
Chen, J.M. & Black, T.A. 1992 Defining leaf area index for non-flat leaves Plant Cell Environ. 15 421 429 https://doi.org/10.1111/j.1365-3040.1992.tb00992.x
Chen, J.M. & Cihlar, J. 1995 Plant canopy gap-size analysis theory for improving optical measurements of leaf-area index Appl. Opt. 34 6211 https://doi.org/10.1364/ao.34.006211
Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F. & Adam, H. 2018 Encoder-decoder with atrous separable convolution for semantic image segmentation 833 851 Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). https://doi.org/10.1007/978-3-030-01234-2_49
Chianucci, F 2016 A note on estimating canopy cover from digital cover and hemispherical photography 50 1 10
Chien, C.F. & Lin, T.T. 2005 Non-destructive growth measurement of selected vegetable seedlings using orthogonal images Trans. ASAE 48 1953 1961 https://doi.org/10.13031/2013.20015
Cho, Y.Y., Oh, S., Oh, M.M. & Son, J.E. 2007 Estimation of individual leaf area, fresh weight, and dry weight of hydroponically grown cucumbers (Cucumis sativus L.) using leaf length, width, and SPAD value Scientia Hort. 111 330 334 https://doi.org/10.1016/j.scienta.2006.12.028
Colaizzi, P.D., Evett, S.R., Brauer, D.K., Howell, T.A., Tolk, J.A. & Copeland, K.S. 2017 Allometric method to estimate leaf area index for row crops Agron. J. 109 883 894 https://doi.org/10.2134/agronj2016.11.0665
Confalonieri, R., Foi, M., Casa, R., Aquaro, S., Tona, E., Peterle, M., Boldini, A., De Carli, G., Ferrari, A., Finotto, G., Guarneri, T., Manzoni, V., Movedi, E., Nisoli, A., Paleari, L., Radici, I., Suardi, M., Veronesi, D., Bregaglio, S., Cappelli, G., Chiodini, M.E., Dominoni, P., Francone, C., Frasso, N., Stella, T. & Acutis, M. 2013 Development of an app for estimating leaf area index using a smartphone: Trueness and precision determination and comparison with other indirect methods Comput. Electron. Agr. 96 67 74 https://doi.org/10.1016/j.compag.2013.04.019
Córcoles, J.I., Ortega, J.F., Hernández, D. & Moreno, M.A. 2013 Estimation of leaf area index in onion (Allium cepa L.) using an unmanned aerial vehicle Biosyst. Eng. 115 31 42 https://doi.org/10.1016/j.biosystemseng.2013.02.002
de Wit, C.T 1965 Photosynthesis of leaf canopies Agr. Res. Rep. 1 54
Díaz, G.M., Negri, P.A. & Lencinas, J.D. 2021 Toward making canopy hemispherical photography independent of illumination conditions: A deep-learning-based approach Agr. For. Meteorol. 296 https://doi.org/10.1016/j.agrformet.2020.108234
Ghoreishi, M., Hossini, Y. & Maftoon, M. 2012 Simple models for predicting leaf area of mango (Mangifera indica L.) J. Biol. Earth Sci. 2 45 53 https://doi.org/10.6084/m9.figshare.95498.v1
Goudriaan, J 1988 The bare bones of leaf-angle distribution in radiation models for canopy photosynthesis and energy exchange Agr. For. Meteorol. 43 155 169 https://doi.org/10.1016/0168-1923(88)90089-5
Gu, J., Wang, Z., Kuen, J., Ma, L., Shahroudy, A., Shuai, B., Liu, T., Wang, X., Wang, G., Cai, J. & Chen, T. 2018 Recent advances in convolutional neural networks Pattern Recogn. 77 354 377 https://doi.org/10.1016/j.patcog.2017.10.013
Hays, B.R., Riginos, C., Palmer, T.M., Gituku, B.C. & Goheen, J.R. 2020 Using photography to estimate above-ground biomass of small trees J. Trop. Ecol. 36 213 219 https://doi.org/10.1017/S0266467420000139
Hu, R., Yan, G., Mu, X. & Luo, J. 2014 Indirect measurement of leaf area index on the basis of path length distribution Remote Sens. Environ. 155 239 247 https://doi.org/10.1016/j.rse.2014.08.032
Imahori, Y., Suzuki, Y., Kawagishi, M., Ishimaru, M., Ueda, Y. & Chachin, K. 2007 Physiological responses and quality attributes of Chinese chive leaves exposed to CO2-enriched atmospheres Postharvest Biol. Technol. 46 160 166 https://doi.org/10.1016/j.postharvbio.2007.04.008
Jin, X., Zarco-Tejada, P.J., Schmidhalter, U., Reynolds, M.P., Hawkesford, M.J., Varshney, R.K., Yang, T., Nie, C., Li, Z., Ming, B., Xiao, Y., Xie, Y. & Li, S. 2020 High-throughput estimation of crop traits: A review of ground and aerial phenotyping platforms IEEE Geosci. Remote Sens. Mag. 9 200 231 https://doi.org/10.1109/MGRS.2020.2998816
Jung, A 2019 Imgaug documentation Readthedocs.io, 25 June. <https://buildmedia.readthedocs.org/media/pdf/imgaug/stable/imgaug.pdf>
Jung, D.-H., Park, S.H., Han, X.Z. & Kim, H.-J. 2015 Image processing methods for measurement of lettuce fresh weight J. Biosyst. Eng. 40 89 93 https://doi.org/10.5307/jbe.2015.40.1.089
Kebede, B. & Soromessa, T. 2018 Allometric equations for aboveground biomass estimation of Olea europaea L. subsp. cuspidata in Mana Angetu Forest Ecosyst. Health Sustain. 4 1 12 https://doi.org/10.1080/20964129.2018.1433951
Kim, J., Ryu, Y., Jiang, C. & Hwang, Y. 2019 Continuous observation of vegetation canopy dynamics using an integrated low-cost, near-surface remote sensing system Agr. For. Meteorol. 264 164 177 https://doi.org/10.1016/j.agrformet.2018.09.014
Koyama, K. & Smith, D.D. 2022 Scaling the leaf length-times-width equation to predict total leaf area of shoots Ann. Bot. https://doi.org/10.1093/aob/mcac043
Lang, A.R.G. & Xiang, Y. 1986 Estimation of leaf area index from transmission of direct sunlight in discontinuous canopies Agr. For. Meteorol. 37 229 243 https://doi.org/10.1016/0168-1923(86)90033-X
Larbi, P.A. & Green, S. 2018 Time series analysis of soybean response to varying atmospheric conditions for precision agriculture Precis. Agr. 19 1113 1126 https://doi.org/10.1007/s11119-018-9577-2
Leblanc, S.G., Chen, J.M., Fernandes, R., Deering, D.W. & Conley, A. 2005 Methodology comparison for canopy structure parameters extraction from digital hemispherical photography in boreal forests Agr. For. Meteorol. 129 187 207 https://doi.org/10.1016/j.agrformet.2004.09.006
Liu, J. & Pattey, E. 2010 Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops Agr. For. Meteorol. 150 1485 1490 https://doi.org/10.1016/j.agrformet.2010.08.002
Liu, J., Pattey, E. & Admiral, S. 2013 Assessment of in situ crop LAI measurement using unidirectional view digital photography Agr. For. Meteorol. 169 25 34 https://doi.org/10.1016/j.agrformet.2012.10.009
Monsi, M., Saeki, T. & Schortemeyer, M. 2005 On the factor light in plant communities and its importance for matter production Ann. Bot. 95 549 567 https://doi.org/10.1093/aob/mci052
Nagashima, H. & Terashima, I. 1995 Relationships between height, diameter and weight distributions of Chenopodium album plants in stands: Effects of dimension and allometry Ann. Bot. 75 181 188 https://doi.org/10.1006/anbo.1995.1010
Newville, M., Stensitzki, T., Allen, D.B. & Ingargiola, A. 2014 LMFIT: Non-linear least-square minimization and curve-fitting for python https://doi.org/10.5281/ZENODO.11813
Ning, H., Li, Z., Wang, C. & Yang, L. 2020 Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation Ann. GIS 26 329 342 https://doi.org/10.1080/19475683.2020.1803402
Nomura, K., Saito, M., Kitayama, M., Goto, Y., Nagao, K., Yamasaki, H., Iwao, T., Yamazaki, T., Tada, I. & Kitano, M. 2022 Leaf area index estimation of a row-planted eggplant canopy using wide-angle time-lapse photography divided according to view-zenith-angle contours Agr. For. Meteorol. 319 108930 https://doi.org/10.1016/j.agrformet.2022.108930
Nomura, K., Takada, A., Kunishige, H., Ozaki, Y., Okayasu, T., Yasutake, D. & Kitano, M. 2020 Long-term and continuous measurement of canopy photosynthesis and growth of spinach Environ. Control Biol. 58 21 29 https://doi.org/10.2525/ecb.58.21
Paul, K.I., Roxburgh, S.H., Chave, J., Jacqueline, R., Zerihun, A., Specht, A., Lewis, T.O.M., Lauren, T., Baker, T.G., Adams, M.A., Huxtable, D.A.N. & Kelvin, D. 2016 Testing the generality of above-ground biomass allometry across plant functional types at the continent scale Glob. Change Biol. 22 2106 2124 https://doi.org/10.1111/gcb.13201
Quint, T.C. & Dech, J.P. 2010 Allometric models for predicting the aboveground biomass of Canada yew (Taxus canadensis Marsh.) from visual and digital cover estimates Can. J. For. Res. 40 2003 2014 https://doi.org/10.1139/X10-146
Raj, R., Walker, J.P., Pingale, R., Nandan, R., Naik, B. & Jagarlapudi, A. 2021 Leaf area index estimation using top-of-canopy airborne RGB images Int. J. Appl. Earth Obs. Geoinf. 96 102282 https://doi.org/10.1016/j.jag.2020.102282
Reddy, V.R., Pachepsky, Y.A. & Whisler, F.D. 1998 Allometric relationships in field-grown soybean Ann. Bot. 82 125 131 https://doi.org/10.1006/anbo.1998.0650
Roth, L., Aasen, H., Walter, A. & Liebisch, F. 2018 Extracting leaf area index using viewing geometry effects: A new perspective on high-resolution unmanned aerial system photography ISPRS J. Photogramm. Remote Sens. 141 161 175 https://doi.org/10.1016/j.isprsjprs.2018.04.012
Ryu, Y., Verfaillie, J., Macfarlane, C., Kobayashi, H., Sonnentag, O., Vargas, R., Ma, S. & Baldocchi, D.D. 2012 Continuous observation of tree leaf area index at ecosystem scale using upward-pointing digital cameras Remote Sens. Environ. 126 116 125 https://doi.org/10.1016/j.rse.2012.08.027
Seabold, S. & Perktold, J. 2010 Statsmodels: Econometric and statistical modeling with Python 96 96 Proceedings of the 9th Python in Science Conference https://doi.org/10.25080/majora-92bf1922-011
Shorten, C. & Khoshgoftaar, T.M. 2019 A survey on image data augmentation for deep learning J. Big Data 6 https://doi.org/10.1186/s40537-019-0197-0
Spitters, C.J.T 1986 Separating the diffuse and direct component of global radiation and its implications for modeling canopy photosynthesis part II: Calculation of canopy photosynthesis Agr. For. Meteorol. 38 231 242 https://doi.org/10.1016/0168-1923(86)90061-4
Sritarapipat, T., Rakwatin, P. & Kasetkasem, T. 2014 Automatic rice crop height measurement using a field server and digital image processing Sensors (Switzerland) 14 900 926 https://doi.org/10.3390/s140100900
Takagi, H 1994 Vegetables peculiar to Japan 111 112 Konishi, K., Iwahori, S., Kitagawa, H. & Yakuwa, T. Horticulture in Japan. Asakura Publishing Co., Ltd. Tokyo, Japan
Tichý, L 2016 Field test of canopy cover estimation by hemispherical photographs taken with a smartphone J. Veg. Sci. 27 427 435 https://doi.org/10.1111/jvs.12350
Van Henten, E.J. & Bontsema, J. 1995 Non-destructive crop measurements by image processing for crop growth control J. Agr. Eng. Res. https://doi.org/10.1006/jaer.1995.1036
Weiss, M., Baret, F., Smith, G.J., Jonckheere, I. & Coppin, P. 2004 Review of methods for in situ leaf area index (LAI) determination part II: Estimation of LAI, errors and sampling Agr. For. Meteorol. 121 37 53 https://doi.org/10.1016/j.agrformet.2003.08.001
Yan, G., Hu, R., Luo, J., Weiss, M., Jiang, H., Mu, X., Xie, D. & Zhang, W. 2019 Review of indirect optical measurements of leaf area index: Recent advances, challenges, and perspectives Agr. For. Meteorol. 265 390 411 https://doi.org/10.1016/j.agrformet.2018.11.033
Yeh, Y.H.F., Lai, T.C., Liu, T.Y., Liu, C.C., Chung, W.C. & Te Lin, T. 2014 An automated growth measurement system for leafy vegetables Biosyst. Eng. 117 43 50 https://doi.org/10.1016/j.biosystemseng.2013.08.011
Zhang, Z 2000 A flexible new technique for camera calibration IEEE Trans. Pattern Anal. Mach. Intell. 22 1330 1334 https://doi.org/10.1109/34.888718
[Table: Parameter values used for the training of DeepLabv3+.]