Day 4

Detailed paper information

Paper title: DEM-guided Flood Segmentation in Sentinel-2 Images
Authors
  1. Ben Gaffinet, RSS-Hydro (Speaker)
  2. Guy Schumann, RSS-Hydro/WASDI
  3. Laura Giustarini, RSS-Hydro
  4. Ron Hagensieker, osir.io
Form of presentation: Poster
Topics
  • D1. Managing Risks
    • D1.03 Satellite EO for Disaster Risk Transfer & Insurance
Abstract text: In the aftermath of flood disasters, (re-)insurers have to make critical decisions about activating emergency funds in a timely manner. Rebuilding efforts rely on appropriate payouts of insurance policies. A fast assessment of flood extents based on EO data facilitates decision making for large-scale floods.

The local risk of flood damage to assets is essential information for setting flood insurance premiums appropriately, allowing both fair coverage and the financial sustainability of the insurance. Long historical archives of EO data can and should be exploited to provide (re-)insurers with a solid risk analysis and to validate their catastrophe models for high-impact events.

Flood segmentation in optical images is often hindered by the presence of clouds. As a consequence, a substantial volume of optical data is disregarded and excluded from risk analysis. We seek to address this problem by applying machine learning to reconstruct floods in partially clouded optical images. We present flood segmentation results for cloud-free scenarios and an analysis of the resulting algorithm's transferability to other geographic locations. For our investigation we use freely available satellite imagery from the Copernicus programme. In conjunction, DEM-based data is used, which forms the backbone for addressing the issue of cloud presence at a later stage.
The Sentinel-2 mission comprises a constellation of two identical polar-orbiting satellites with a revisit time of five days at the equator. For our study we use all bands available at 10 and 20 meters resolution, which cover RGB and various infrared wavelengths. All Sentinel-2 inputs are atmospherically corrected, either by choosing Level-2A images or by preprocessing with SNAP.
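Combining 10 m and 20 m bands into a single model input requires resampling onto a common grid. A minimal sketch of this step, assuming NumPy and nearest-neighbour upsampling (the actual preprocessing, e.g. in SNAP, may use a different resampling method):

```python
import numpy as np

def upsample_20m_to_10m(band_20m: np.ndarray) -> np.ndarray:
    """Nearest-neighbour upsampling of a 20 m band onto the 10 m grid:
    each 20 m pixel becomes a 2x2 block of 10 m pixels."""
    return np.repeat(np.repeat(band_20m, 2, axis=0), 2, axis=1)

def stack_inputs(bands_10m: list, bands_20m: list) -> np.ndarray:
    """Stack native 10 m bands and upsampled 20 m bands into one
    (channels, height, width) array for the network."""
    upsampled = [upsample_20m_to_10m(b) for b in bands_20m]
    return np.stack(bands_10m + upsampled, axis=0)

# Toy example: two 10 m bands (4x4 pixels) and one 20 m band (2x2 pixels).
b10 = [np.ones((4, 4)), np.zeros((4, 4))]
b20 = [np.arange(4, dtype=float).reshape(2, 2)]
x = stack_inputs(b10, b20)
print(x.shape)  # (3, 4, 4)
```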

The Copernicus Digital Elevation Model (DEM) with global coverage at 30 meter resolution (GLO-30m) is provided by ESA as a dataset openly available to any registered user.
From the DEM, additional quantities can be derived that support the identification of possibly flooded areas. The slope of the terrain helps in understanding the flow of water. Flow accumulation supports the delineation of flooded shorelines, helping the algorithm fill up the DEM according to the locations in which water accumulates, i.e., cells characterized by high values in the flow accumulation grid. The Height Above Nearest Drainage (HAND) is a drainage-normalized version of a DEM: it normalizes topography according to the local relative heights found along the drainage network and in this way presents the topology of the relative soil gravitational potentials, or local draining potentials. It has been demonstrated to correlate strongly with the depth of the water table. The Topographic Wetness Index (TWI) is a useful quantity to estimate where water will accumulate in an area with elevation differences; it is a function of slope and the upstream contributing area, i.e., flow accumulation.
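Two of these derivatives can be sketched directly from the DEM grid. The following is a minimal NumPy illustration of slope (via central differences) and TWI (ln(a / tan(beta))); HAND and flow accumulation itself require routing water along a drainage network and are beyond this sketch, so flow accumulation is taken as a given input here:

```python
import numpy as np

def slope_radians(dem: np.ndarray, cell_size: float = 30.0) -> np.ndarray:
    """Terrain slope (radians) from a DEM via central-difference gradients,
    using the 30 m cell size of the Copernicus GLO-30 DEM."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.arctan(np.hypot(dz_dx, dz_dy))

def twi(flow_accumulation: np.ndarray, slope: np.ndarray,
        cell_size: float = 30.0, eps: float = 1e-6) -> np.ndarray:
    """Topographic Wetness Index: ln(a / tan(beta)), with a the upstream
    contributing area per unit contour width and beta the local slope.
    eps avoids division by zero on flat cells."""
    a = (flow_accumulation + 1.0) * cell_size  # area per unit contour width
    return np.log(a / (np.tan(slope) + eps))

# Toy example: a plane rising 30 m per 30 m cell has a 45-degree slope.
dem = 30.0 * np.arange(5, dtype=float)[:, None] * np.ones((1, 5))
s = slope_radians(dem)
print(round(float(np.degrees(s).mean()), 1))  # 45.0
```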

We distinguish two scenarios for which the reference data is created differently, while the input data preparation is unchanged. The first case is the segmentation of permanent waters, for which the reference data is extracted directly from OpenStreetMap (OSM). The second is the case of real floods, where flood experts manually label the flood extent.

The study uses a combination of two popular neural network architectures to achieve two different purposes. Most importantly, a U-Net architecture is set up to address the image segmentation task; U-Net is, especially in remote sensing, a very popular architecture for this task. Initially the input goes through a sequential series of convolution blocks, consisting of repeated convolutions followed by ReLU layers and downsampling (max pooling), comparable to conventional LeNets. At the end of these iterations, the operations are reverted via deconvolutions and upsampling, while additionally the corresponding encoder feature maps are concatenated via skip connections. This is repeated until the original image shape is recovered, and optimization is performed to minimize the loss over the entire scene. We extend this architecture by inserting a Squeeze-and-Excitation (SE) block prior to the U-Net block. This block derives importance weights for the input channels, e.g. the Sentinel-2 bands as well as the DEM and its derivative bands, which are then used to estimate the importance of sensors via their contribution to the output. The squeeze step condenses the previous feature map (or, in our case, the input data) per channel into a single element via a global max pooling operation. In the excitation step, a fully connected layer, a ReLU, and another fully connected layer (with one output per channel), followed by a sigmoid, produce weights that multiply, and in effect weigh, the input features. This vector of weights can be interpreted as a measure of feature importance, aside from its positive effect on model accuracy. We hence propose a measure to validate the importance of different input datasets, which can also be visualized and correlated with different landscapes or surface features.
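The squeeze-and-excitation weighting described above can be sketched in plain NumPy. The channel count, reduction ratio, and random (untrained) weights below are illustrative only; in the actual model these weights are learned during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(x: np.ndarray, w1: np.ndarray, w2: np.ndarray):
    """Squeeze-and-Excitation over an input of shape (C, H, W).

    Squeeze: global max pooling per channel (as described in the abstract).
    Excite:  FC -> ReLU -> FC -> sigmoid, yielding one weight per channel.
    Returns the reweighted input and the channel-importance vector.
    """
    c = x.shape[0]
    z = x.reshape(c, -1).max(axis=1)   # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)        # FC + ReLU: (C // r,)
    s = sigmoid(w2 @ h)                # FC + sigmoid: (C,)
    return x * s[:, None, None], s

# Illustrative setup: 12 input channels (e.g. Sentinel-2 bands plus DEM
# derivatives), reduction ratio r = 4, random weights standing in for
# trained parameters.
C, r = 12, 4
x = rng.standard_normal((C, 32, 32))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y, weights = squeeze_excite(x, w1, w2)
print(y.shape, weights.shape)  # (12, 32, 32) (12,)
```

The `weights` vector is the quantity interpreted as per-channel feature importance in the analysis.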

Our entire pipeline is set up within the Microsoft Azure cloud to provide scalability and computational efficiency. We have created two pipelines: one for model training and validation, which also serves to enable retraining and future transferability, and a second to conduct inference.

Our work focuses on two study sites, Lake Balaton in Hungary and Thrace in Greece. The Hungarian site contains rivers, lakes and urban areas, representing a good diversity of the features to be expected in a flood scene. Only permanent waters are mapped in the Balaton case. The Greek case consists of a river flood that took place on 27 March 2018. The test set is created from the manual labels of the Greek case, while the Balaton OSM data is used as additional training data and for a preliminary study on purely permanent water scenarios.

Within the FloodSENS project we have the long-term goal of global operability. For this reason our training and test datasets, associated with different AOIs, are organized to enable the trackable creation of various models, e.g. to fulfill global or regional operability. Our data structure is organized in a modular fashion to facilitate this, yet at the current stage we provide accuracy metrics at the level of the distinct case studies introduced above, i.e., models are trained to specifically optimize the outcome based on training and test data from these AOIs. The proposed network yields meaningful accuracies for the separation of water and non-water areas, while in general the separation of permanent and non-permanent (flood) waters without the assistance of auxiliary data remains challenging.

Our current investigations into the weighting produced by the SE blocks offer clear, landscape-dependent indications of which input sources play a role under which terrain conditions. We quantify the significant advantage of Sentinel-2 over the DEM-based products, at least in a cloud-free scenario. We can further showcase relevance at the level of individual bands and channels, giving an indication of the usefulness of deriving different DEM metrics, such as slope and terrain roughness, in assisting the flood mapping effort.