|Paper title||Improved deadwood mapping with UAVs and deep learning|
|Form of presentation||Poster|
Deadwood, both standing and fallen, is an important component of the biodiversity of boreal forests, as it offers a habitat for several endangered species (such as fungi, mosses, insects and birds). According to the State of Europe’s Forests 2020 report, Finland ranks near the bottom among European countries in the amount of both standing and fallen deadwood (m³/ha), with only 6 m³/ha of deadwood on average. There are, however, large differences between forest types, as non-managed old-growth forests have several times more decaying wood than managed forests. There is a severe lack of stand-level deadwood data in Finland, as the Finnish national forest inventory focuses on large-scale estimates, and in the forest inventories aiming for operative forest data, deadwood is not measured at all. As the amount of deadwood (t/ha) is proposed as one of the mandatory forest ecosystem condition indicators in both the Eurostat legal proposal and the national biodiversity strategy, there is an increasing need for accurate stand-level deadwood data.
Compared to most other forest variables, estimating the amount of deadwood is far more challenging, as the generation of deadwood in a forest is a stochastic process that is difficult to model. Building accurate models for deadwood estimation is especially difficult for managed forests, as harvesting affects how much deadwood is generated. Because of these factors, reliable estimates of the amount of deadwood require far more field observations than estimates for growing trees. At present, the only way to obtain accurate estimates of deadwood is direct measurement in the field, which is both time-consuming and expensive. Developing new and improved field data collection methods is therefore required.
In the recent decade, computer vision methods have advanced rapidly, and they can be used to automatically and accurately detect and classify individual trees from high-quality Unmanned Aerial Vehicle (UAV) imagery. This makes it possible to better utilize UAVs for field data collection, as UAV data are spatially continuous, already georeferenced and cover larger areas than traditional field work. UAVs are also the only method for remotely mapping small objects such as deadwood, as even the most spatially accurate commercial satellites provide a 30 cm ground sampling distance, compared to the less than 5 cm easily achievable with UAVs. It is worth noting, though, that the spatial coverage of UAVs is not feasible for operational, large-scale mapping, and that the information that can be extracted from aerial imagery is limited to what can be seen from above, as much of the forest floor is obscured by the canopy. Nevertheless, even with these shortcomings, we consider efficient usage of UAVs to be valuable for field data collection, especially when the variables of interest are, for instance, the distributions of different tree species and deadwood.
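The resolution argument above can be illustrated with a quick calculation; the 25 cm trunk diameter used here is a hypothetical figure chosen for illustration, not a value from our data:

```python
def pixels_across(object_size_m: float, gsd_m: float) -> float:
    """Number of pixels an object spans at a given ground sampling distance (GSD)."""
    return object_size_m / gsd_m

# A fallen trunk roughly 0.25 m in diameter:
sat = pixels_across(0.25, 0.30)  # at a 30 cm satellite GSD: under one pixel
uav = pixels_across(0.25, 0.04)  # at a 4 cm UAV GSD: several pixels wide
print(round(sat, 2), round(uav, 2))  # prints "0.83 6.25"
```

At satellite resolution such a trunk vanishes into a single mixed pixel, whereas at UAV resolution it is several pixels wide and thus detectable.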
Our first study area is in Hiidenportti, eastern Finland, from where we have collected 10 km² of UAV data with around 4 cm ground sampling distance, as well as extensive and accurately located field data for standing and downed deadwood. Our other study area is in Evo, southern Finland, from which we have several RGB UAV image datasets with ground sampling distances varying from 1.3 to 5 cm; the total area covered by the Evo data is around 20 km². In Evo, our field data consist of field plots with plot-level deadwood metrics among the collected features. Both study areas contain both managed forests and conservation areas, offering a representative sample of different Finnish forest types.
In this study, we apply a state-of-the-art instance segmentation method, Mask R-CNN, to detect both standing and fallen deadwood from RGB UAV imagery. The field plot data alone are not sufficient for our methods, as training deep learning models requires large amounts of training data. Instead, we utilize expert-annotated virtual plots to train our models: we extract 90 × 90 m square patches centered on the field plot locations, and all standing and fallen deadwood present in these patches is manually annotated. In the case of overlapping virtual plots, we extract a rectangular area that contains each of these plots. These data are then tiled into smaller images and used to train the object detection models. We use only the data from Hiidenportti to train our models, and the data from Evo to evaluate how these methods work outside the geographical training location.
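The patch-to-tile step can be sketched as follows. This is a minimal illustration assuming a plain NumPy array as the image and a 512 px tile size; the actual tile size and orthomosaic handling are not specified in the abstract:

```python
import numpy as np

def tile_image(img: np.ndarray, tile_size: int, overlap: int = 0):
    """Split an image array (H, W, C) into square tiles for model training.

    Tiles are generated with a fixed stride; tiles that would run past the
    image border are shifted back so every tile is exactly tile_size pixels.
    """
    stride = tile_size - overlap
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            y0 = min(y, h - tile_size)  # shift edge tiles back inside the image
            x0 = min(x, w - tile_size)
            tiles.append(img[y0:y0 + tile_size, x0:x0 + tile_size])
    return tiles

# A 90 m x 90 m virtual plot at 4 cm GSD is 2250 x 2250 px;
# tiling it into 512 px tiles yields a 5 x 5 grid of 25 tiles.
plot = np.zeros((2250, 2250, 3), dtype=np.uint8)
tiles = tile_image(plot, tile_size=512)
print(len(tiles))  # prints "25"
```

In practice the corresponding deadwood annotations must be clipped to each tile window in the same way, so that every training image carries its own instance masks.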
We compare our results with both the expert-annotated virtual plots and accurate field-measured plot-level data. We evaluate our models with common object detection metrics, such as Average Precision and Average Recall. We also compare the results with different plot-level metrics, such as the total number of deadwood instances and the total length of downed deadwood, and estimate how much of the deadwood present in the field can be detected from aerial UAV imagery and what factors (such as canopy cover, forest type, and deadwood dimensions and decay rate) affect the detections. According to our preliminary results, the models correctly detect around 68% of the annotated groundwood instances, and there are several cases where the model detects instances the experts have missed.
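A detection rate such as the 68% above can be computed by matching predicted instances to annotations. The sketch below assumes axis-aligned bounding boxes and greedy one-to-one IoU matching, a simplification of the standard evaluation protocol behind Average Precision and Average Recall:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_recall(annotations, detections, iou_thr=0.5):
    """Fraction of annotated instances matched by at least one detection."""
    matched, used = 0, set()
    for gt in annotations:
        for i, det in enumerate(detections):
            if i not in used and iou(gt, det) >= iou_thr:
                matched += 1
                used.add(i)  # each detection may match only one annotation
                break
    return matched / len(annotations) if annotations else 0.0

# Two annotated instances, one of which is found by the model:
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(1, 1, 10, 10)]
print(detection_recall(gts, dets))  # prints "0.5"
```

Plot-level metrics such as the total count or total length of downed deadwood can then be aggregated from the matched instances per field plot.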