Robotics in Agriculture
The Australian Centre for Field Robotics is currently pursuing exciting research and development
projects in agricultural robotics, which will have a large and long-term impact in Australia and
globally over the next decade. We are offering several masters projects to give an opportunity for
highly motivated students to join the team. Students will require a strong background in
mathematics and programming (mostly Matlab or Python) with an emphasis on lidar and camera
data processing, and a strong interest in research.
For more information about our work, please visit: http://sydney.edu.au/acfr/agriculture
Contact James Underwood: [email protected]
Investigating the use of multi-modal sensing for avocado, mango and
macadamia tree characterisation
The aim of this project is to investigate the use of multiple sensor modalities (including
multi/hyper-spectral vision, thermal and 3D Lidar) on the Shrimp mobile robotic ground vehicle to characterise
commercially grown avocado, mango and macadamia trees. The goal is to measure properties such
as geometry, light interception, health indicators, vigour and fruit yield, as part of a larger program
of research in tree crop management. The wider project, involving several universities, takes a multi-scale
approach in which satellite, aerial and ground data will be combined and compared with direct
manual measurements of the trees (e.g. a subjective tree health rating, harvested fruit
quantity and size distribution, canopy light interception, etc.).
The ability to map tree characteristics, and particularly fruit yield, prior to harvest assists growers in
managing production. While satellite-based methods are suitable for national-scale assessments,
and have even been used to estimate some properties for individual trees, it has not been
possible to estimate yield accurately at that scale; the cost can be high, and availability at key
seasonal times can be compromised by cloud cover. Terrestrial sensing offers a higher-resolution
data source at a potentially lower cost.
The robotic ground vehicle Shrimp has successfully been used to estimate yield in apple and almond
orchards using high-resolution camera and lidar data. However, avocado, mango and macadamia
trees have different canopy structures, with a high degree of occlusion, which necessitates new
approaches to data processing.
This project will investigate multiple modalities of sensor data, in conjunction with comprehensive
ground truth information, to determine which sensing modalities are most capable of estimating the
properties that are important to commercial growers.
The Shrimp mobile robot, equipped with a multi-modal sensor array
Improving image based yield estimates using multiple view-points
The aim of this project is to improve camera-based yield estimation algorithms by using images of trees taken from multiple viewpoints. Consistent and accurate yield estimation is an important component of precision agriculture: it allows farm managers to perform cost-efficient farm surveys at high spatial and temporal resolution, which can be used to diagnose and optimise farm operations.
In previous work (Hung 2013), an apple trellis yield estimation algorithm samples images at a fixed
0.5 m spacing along orchard rows to avoid overlapping data. This simple approach provides
consistent yield estimates (highly linearly correlated with per-row harvest yield); however, the estimates
are consistently low due to visually occluded fruit. In (Nuske 2014), image registration is performed
to discover the region of overlap, and the maximum count in overlapping regions is used to minimise
the effect of occlusion. These approaches perform registration at the image level rather than on
individual fruits and do not guarantee an absolute fruit count; the final yield is generated via a
calibration process.
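The max-count overlap strategy can be sketched in a few lines. The helper below is a simplification of the (Nuske 2014) idea, with hypothetical per-image fruit counts standing in for the output of image registration and fruit detection:

```python
# Sketch of the max-count overlap strategy, with invented per-image counts.
# Real systems register images to find the overlap region; here the counts
# inside each overlap are given directly for illustration.

def row_count(unique_counts, overlap_counts):
    """Combine per-image fruit counts along an orchard row.

    unique_counts[i]  -- fruit detected only in image i (non-overlapping part)
    overlap_counts[j] -- (count in image j, count in image j+1) for the region
                         shared by consecutive images; the max is kept so a
                         fruit occluded in one view is still counted.
    """
    total = sum(unique_counts)
    for left, right in overlap_counts:
        total += max(left, right)
    return total

# Three images along a row: counts outside any overlap, plus the two
# overlap regions between consecutive images.
print(row_count([12, 9, 14], [(4, 6), (3, 3)]))  # 12+9+14 + 6 + 3 = 44
```

Taking the maximum rather than the sum inside each overlap avoids double-counting fruit visible from both viewpoints while still recovering fruit occluded in one of them.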
Other approaches perform registration on individual fruits: for example, in (Wang 2013) stereo is
used to find the 3D positions of apples, allowing repeated detections from multiple images to be clustered.
However, stereo is not always applicable, particularly when the canopy geometry is highly complex.
A more suitable approach is described in (Song 2014), where parameter estimation is performed on
the observation model to derive the yield estimate.
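The per-fruit clustering idea can be illustrated with a minimal sketch, in the spirit of (Wang 2013) but not their implementation: triangulated detections from different viewpoints that fall within a small radius of an existing cluster are treated as the same fruit. The positions and the 5 cm radius are invented:

```python
import math

def count_unique_fruit(detections, radius=0.05):
    """Greedy distance clustering of 3D fruit detections (metres).

    Detections within `radius` of an existing cluster centre are assumed to be
    repeat sightings of the same fruit from another viewpoint.
    """
    clusters = []
    for p in detections:
        for c in clusters:
            if math.dist(p, c) <= radius:
                break  # same fruit, seen again
        else:
            clusters.append(p)  # new fruit
    return len(clusters)

# Two apples, each triangulated from two viewpoints with ~1 cm noise.
dets = [(0.50, 1.20, 2.00), (0.51, 1.19, 2.01),
        (0.90, 1.10, 2.05), (0.89, 1.11, 2.04)]
print(count_unique_fruit(dets))  # 2
```

A real system would need a radius tuned to fruit size and triangulation error, and a less greedy clustering, but the principle of de-duplicating by 3D position is the same.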
The objectives of this project are as follows:
1. Implement a simulated tree/fruit geometry to optimise the sensor and sampling strategy.
2. Investigate the effectiveness of existing approaches applied to multiple different fruit types, including apples, avocados and mangos.
3. Compare yield estimation performance with explicit occlusion management against linear correlation from fixed image spacing or multi-viewpoint selection.
4. Improve occlusion detection methods, for example by extending (Song 2014) to include the fundamental camera matrix.
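The calibration process referred to earlier is, in its simplest form, a least-squares line mapping visible fruit counts to harvested yield. The per-row numbers below are invented purely for illustration:

```python
# Fit y = a*x + b from per-row image counts to per-row harvest weight.
# All values are made up; a real calibration would use held-out rows.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # (a, b)

image_counts = [120, 150, 90, 200]   # fruit visible in images, per row
harvest_kg   = [180, 225, 135, 300]  # ground-truth harvest weight, per row
a, b = fit_line(image_counts, harvest_kg)
print(round(a * 170 + b, 1))  # predicted yield for a row with 170 visible fruit -> 255.0
```

Because occluded fruit lower the visible count consistently, such a linear map can correct the bias even though it never counts the hidden fruit explicitly.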
References
- Hung, Calvin, James Underwood, Juan Nieto, and Salah Sukkarieh. "A Feature Learning Based Approach for Automated Fruit Yield Estimation." 9th Conference on Field and Service Robotics (2013)
- Nuske, Stephen, Kyle Wilshusen, Supreeth Achar, Luke Yoder, and Sanjiv Singh. "Automated Visual Yield Estimation in Vineyards." Journal of Field Robotics 31, no. 5 (2014): 837-860.
- Song, Y., C. A. Glasbey, G. W. Horgan, G. Polder, J. A. Dieleman, and G. W. A. M. van der Heijden. "Automatic fruit recognition and counting from multiple images." Biosystems Engineering 118 (2014): 203-215.
- Wang, Qi, et al. "Automated crop yield estimation for apple orchards." Experimental Robotics. Springer International Publishing, 2013.
Apple detection and counting: the detected apples are highlighted by the red circles.
Learning orchard characteristics for universal tree segmentation
The aim of this project is to investigate the use of machine learning to model the geometric
characteristics of different orchards, for the purpose of automatic tree detection and segmentation
in three-dimensional lidar data and imagery.
In previous work (Wellington 2012, Jagbrant 2014, Underwood 2015), an algorithm based on a
hidden semi-Markov model (HSMM) was developed and used first in the context of citrus and then
almond tree segmentation. The method relies on hand-tuned models to represent the individual
tree and whole-orchard geometry. In theory, the method is applicable to tree segmentation in any
orchard with discrete trees, yet in practice, tuning the models by hand can be challenging and
suboptimal.
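As a toy illustration of the kind of modelling involved (a plain two-state HMM without the explicit duration models of the HSMM cited above), the Viterbi pass below labels bins of a 1-D lidar return-count profile along a row as "tree" or "gap". The counts and parameters are invented:

```python
import math

def segment_row(density, p_stay=0.9, tree_mean=40.0, gap_mean=5.0):
    """Viterbi decoding over per-bin lidar return counts; state 0=tree, 1=gap."""
    def score(count, mean):
        # Poisson log-likelihood up to a count-only constant
        return count * math.log(mean) - mean

    means = [tree_mean, gap_mean]
    log_stay, log_switch = math.log(p_stay), math.log(1.0 - p_stay)
    best = [score(density[0], m) for m in means]  # best log-score per state
    back = []                                     # backpointers per time step
    for count in density[1:]:
        prev, best, ptrs = best, [], []
        for s, mean in enumerate(means):
            cands = [prev[r] + (log_stay if r == s else log_switch)
                     for r in range(2)]
            r = max(range(2), key=lambda i: cands[i])
            best.append(cands[r] + score(count, mean))
            ptrs.append(r)
        back.append(ptrs)
    state = max(range(2), key=lambda i: best[i])
    path = [state]
    for ptrs in reversed(back):
        state = ptrs[state]
        path.append(state)
    return ["tree" if s == 0 else "gap" for s in reversed(path)]

density = [45, 38, 42, 4, 6, 3, 39, 41, 44]  # lidar returns per 0.1 m bin
print(segment_row(density))
```

In this sketch, `tree_mean`, `gap_mean` and `p_stay` are exactly the kind of hand-tuned parameters the project aims to learn automatically from data instead.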
The objectives of this project are:
1. To extend the existing almond orchard framework to orchards with different fruit varieties,
which have differing tree geometries (e.g. avocados, mangos, lychees, macadamias,
trellised apples, etc.) and different constraints (e.g. with or without GPS occlusion from the
canopies). By solving segmentation for these varieties, it is anticipated that an
understanding of the commonality between different orchard models will be gained, in
preparation for the second task:
2. To devise a method to automatically learn the most suitable models and model parameters
from the data.
References
- Underwood, J.P., Jagbrant, G., Nieto, J. and Sukkarieh, S. (2015). Lidar-Based Tree Recognition and Platform Localization in Orchards. Journal of Field Robotics (JFR).
- Jagbrant, G., Underwood, J.P., Nieto, J. and Sukkarieh, S. (2014). Lidar based localisation in almond orchards. In Proceedings of Field and Service Robotics (FSR).
- Wellington, C., Campoy, J., Khot, L. and Ehsani, R. (2012). Orchard tree modeling for advanced sprayer control and automatic tree inventory. In International Conference on Intelligent Robots and Systems (IROS) Workshop on Agricultural Robotics, pages 5-6.
Typical results of HSMM segmentation of almond trees. Different colours represent different trees, with red boundaries.
Left: Shrimp at a lychee orchard. Right: lidar point cloud of a single block of lychee-trees, coloured by elevation.
Investigating the use of multi-modal sensing from the Ladybird for
crop phenotyping
The aim of this project is to investigate the use of multiple sensor modalities (including
mono/stereo/hyper-spectral vision, thermal and 3D Lidar) on the Ladybird mobile robotic ground
vehicle to characterise the traits of crops that are grown for controlled scientific studies. The goal is
to estimate properties such as geometry, crop cover, health indicators, vigour and biomass, in the
context of carefully controlled experiments where factors such as the use of herbicides are
intentionally varied.
The ability to estimate and map these properties accurately and autonomously will create a new tool
for scientists around the world performing similar experiments, where to date field measurements
are made either with expensive (and restrictive) gantry cranes or by a labour-intensive hand-measurement
process (often by eye or with a ruler). Furthermore, many of these properties are
useful to commercial growers, where more accurate and timely management decisions
can save money and reduce environmental impact. The same system will therefore serve a dual role.
For this project, we have gathered data from all of the sensors mentioned above, and we are in the
process of acquiring professionally obtained hand measurements of the crop. Many combinations
can be explored, mapping from the various sensors (or sensor combinations) to the various traits
of the crop, for which we have ground truth for comparison.
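A first pass at exploring these sensor-to-trait mappings could be a simple correlation screen: Pearson r between each per-plot sensor feature and each hand-measured trait. The feature names and per-plot values below are entirely hypothetical:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

features = {  # invented per-plot summaries from two sensor modalities
    "lidar_height_m": [0.21, 0.35, 0.18, 0.40, 0.30],
    "thermal_mean_c": [24.1, 22.8, 25.0, 22.5, 23.2],
}
biomass_kg = [1.1, 1.9, 0.9, 2.2, 1.6]  # hand-measured ground truth per plot

for name, values in features.items():
    print(f"{name}: r = {pearson(values, biomass_kg):+.2f}")
```

Features with strong correlations (of either sign) would be the first candidates for the full regression and cross-validation study against each trait.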
Lidar data of beetroot leaves, coloured by elevation (pink=top of leaves, cyan=top of earth bed). Observations corresponding to the bed are occasionally seen through the leaves and can be used to form a bed/leaf model.
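The bed/leaf model suggested by the caption can begin as a simple elevation split: returns within a small margin of the bed surface are labelled "bed", the rest "leaf", yielding canopy cover and height per bin. The point elevations and the 0.05 m margin below are illustrative:

```python
def bed_leaf_split(elevations, bed_level=None, margin=0.05):
    """Split lidar return elevations (metres) into bed and leaf returns.

    A crude bed estimate (the lowest return) stands in for a fitted bed
    surface; returns more than `margin` above it are treated as leaves.
    """
    if bed_level is None:
        bed_level = min(elevations)
    leaf = [z for z in elevations if z - bed_level > margin]
    cover = len(leaf) / len(elevations)          # fraction of returns off the bed
    height = max(leaf) - bed_level if leaf else 0.0
    return cover, height

z = [0.02, 0.01, 0.03, 0.18, 0.22, 0.20, 0.25, 0.04]  # metres above datum
cover, height = bed_leaf_split(z)
print(round(cover, 2), round(height, 2))  # -> 0.5 0.24
```

A real model would fit the bed surface from the occasional through-canopy returns the caption mentions, rather than taking the single lowest point.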
Top view and side view of coloured 3D stereo point cloud of spinach. Towards the left, the earth bed is mostly exposed, towards the right the spinach can be seen.