SNAMP Pub #24: Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-based Segmentation Approaches
Article Title: Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-based Segmentation Approaches.
Authors: Marek K Jakubowski, Wenkai Li, Qinghua Guo, Maggi Kelly.
- Light detection and ranging (lidar) data is increasingly being used for forest mapping.
- One common task is to delineate from the lidar data the boundaries of individual trees; these mapped trees can be used to understand the way animals use the forest, or to estimate biomass, or to classify the forest into density classes.
- In this work we explored two different algorithms for delineating individual trees from high-density (9 pulses/m²) discrete lidar data and WorldView-2 imagery (WorldView-2 is a satellite sensor that provides imagery in the visible and infrared portions of the spectrum at 2 m resolution).
- We compared the two approaches to each other and to ground reference data across forest density classes (low, medium, and high) with respect to crown size, tree height, and crown shape, and considered whether trees were dominant.
- The tree height agreement was high between the two approaches and the ground data (r2: 0.93–0.96). Tree detection rates increased for more dominant trees (8–100 percent).
- The two approaches delineated tree boundaries that differed in shape: the lidar point-cloud approach produced fewer, more complex, and larger polygons that more closely resembled real forest structure.
The successful detection and delineation of individual trees is critical in forest science, allowing for multi-scale analysis of the role of trees in forest functioning. There are many possible methods to delineate trees with remotely sensed data. In this work we explored two different algorithms for delineating individual trees from high-density (9 pulses/m²) discrete lidar data combined with WorldView-2 imagery. The first was a segmentation algorithm that worked directly on the raw lidar point cloud (the “3D method”), and the second was an object-based image analysis (OBIA) approach that used a raster layer of canopy height derived from the raw lidar point cloud (the “OBIA method”). Both methods also used the WorldView-2 imagery. The two methods were compared in terms of their agreement with ground-referenced tree heights, as well as tree detection across crown class and tree density.
Both methods performed better in sparse forests than in dense forests. The overall tree height agreement with ground data was high for both the “OBIA method” and the “3D method” in sparse forests (r2 = 0.9309 and r2 = 0.9163, respectively) and decreased in densely vegetated areas. The tree heights derived by the two methods correlated very well with each other (r2 = 0.9545), indicating that the polygons detected the same trees. We also analyzed the rate of tree detection by both methods across crown classes. All dominant overstory trees (DBH > 19.5 cm) were detected by both segmentation approaches. However, the rate of detection dropped when trees were occluded by taller or bigger trees. The tree crowns delineated by the two methods differed in area and shape: the “OBIA method” produced crowns that were smaller and more numerous, while the crowns of the “3D method” were more similar in shape to real tree crowns.
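The OBIA workflow starts from a canopy height model (CHM): a raster whose cells hold the height of the tallest lidar return above ground, from which treetops can be found as local maxima. The paper's actual segmentation is far more sophisticated, but the core rasterize-and-find-peaks idea can be sketched as follows (a minimal illustration on synthetic points; the function names, grid size, window, and height threshold are all assumptions for this sketch, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def canopy_height_model(points, cell=0.5, shape=(40, 40)):
    """Rasterize (x, y, z) lidar returns into a canopy height model by
    keeping the maximum z per grid cell (z assumed height above ground)."""
    chm = np.zeros(shape)
    for x, y, z in points:
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            chm[i, j] = max(chm[i, j], z)
    return chm

def detect_treetops(chm, window=5, min_height=5.0):
    """Candidate treetops: CHM cells that equal the maximum of their local
    window and exceed a minimum-height threshold (skips ground and shrubs)."""
    is_local_max = ndimage.maximum_filter(chm, size=window) == chm
    return np.argwhere(is_local_max & (chm > min_height))

# Synthetic plot: two conical crowns (apexes at (5, 5) and (14, 14) m,
# heights 25 m and 18 m, crown radius 3 m), densely sampled.
rng = np.random.default_rng(0)
points = []
for cx, cy, h in [(5.0, 5.0, 25.0), (14.0, 14.0, 18.0)]:
    for _ in range(2000):
        dx, dy = rng.uniform(-3.0, 3.0, size=2)
        r = float(np.hypot(dx, dy))
        if r < 3.0:  # keep only samples inside the crown radius
            points.append((cx + dx, cy + dy, h * (1.0 - r / 3.0)))

chm = canopy_height_model(points)
tops = detect_treetops(chm)
print(f"detected {len(tops)} treetops")
```

In a full OBIA pipeline the detected peaks would seed a segmentation (e.g. region growing or watershed on the inverted CHM) to delineate crown polygons; the 3D method instead clusters the raw points directly, which is why its crowns retain more realistic shapes.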
- We compared two methods that delineate individual trees from high-density (9 pulses/m²) discrete lidar data combined with WorldView-2 imagery.
- The methods produced similar depictions of forest crowns, with some differences. Both methods performed better in sparse forests than in dense forests, and with taller trees than with understory trees.
- The method that delineated trees from the raw data produced tree crowns that were larger and with more realistic shapes than the method that derived tree crowns from a lidar-derived raster layer.
- Further research is necessary to automate both methods for data in dense forests.
Jakubowski, M. J., W. Li, Q. Guo, and M. Kelly. 2013. Delineating individual trees from lidar data: a comparison of vector- and raster-based segmentation approaches. Remote Sensing, 5: 4163-4186.
The full paper is available here.
For more information about the SNAMP project and the Spatial team, please see the: Spatial Team Website.
To learn more about lidar data, check out our lidar FAQs sheet. For more information on lidar data in SNAMP, see the spatial team website: http://snamp.cnr.berkeley.edu/teams/spatial, and our spatial team newsletters that focus on lidar: Vol. 2, No. 3, and Vol. 5, No. 1.