
Detailed paper information


Paper title: Emulation of synthetic hyperspectral Sentinel-2-like images using Neural Networks
Authors
  1. Miguel Morata Dolz University of Valencia Speaker
  2. Adrian Perez University of Valencia
  3. Bastian Siegmann Forschungszentrum Juelich GmbH
  4. Juan Pablo Rivera-Caicedo CONACYT-UAN
  5. Jochem Verrelst University of Valencia
Form of presentation: Poster
Topics
  • C1. AI and Data Analytics
    • C1.07 ML4Earth: Machine Learning for Earth Sciences
Abstract text: Emulation of synthetic hyperspectral Sentinel-2-like images using Neural Networks

Miguel Morata, Bastian Siegmann, Adrian Perez, Juan Pablo Rivera Caicedo, Jochem Verrelst

Imaging spectroscopy provides unprecedented information for evaluating environmental conditions in soil, vegetation, agricultural and forestry areas. The use of imaging spectroscopy sensors and data is maturing, with research activities focused on proximal, UAV, airborne and spaceborne hyperspectral observations. However, only a few hyperspectral satellites are presently in operation. An alternative approach to approximating hyperspectral images acquired from space is to emulate synthetic hyperspectral data from multispectral satellites such as Sentinel-2 (S2). The principle of emulation is to approximate input-output relationships by means of a statistical learning model, also referred to as an emulator (O'Hagan, 2006; Verrelst et al., 2016). Emulation recently emerged as an appealing acceleration technique for computationally demanding imaging spectroscopy applications such as synthetic scene generation (Verrelst et al., 2019) and atmospheric correction routines. The core idea is that, once trained, the emulator can generate synthetic hyperspectral images consistent with an input multispectral signal, at a tremendous gain in processing speed. Emulating a synthetic hyperspectral image from multispectral data is challenging because of the one-to-many correspondence between input and output spectra. Nevertheless, thanks to dimensionality reduction techniques that exploit spectral redundancy, the emulator can reconstruct output hyperspectral patterns consistent with the input spectra. As such, emulators statistically capture the non-linear relationships between low and high spectral resolution data and learn the most common spectral patterns in the dataset, as illustrated in the sketch below.
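To make the emulation principle concrete, the following minimal Python sketch (scikit-learn) compresses hypothetical hyperspectral output spectra with PCA and regresses the components from multispectral inputs. All data, shapes and the choice of a linear regressor are illustrative assumptions, not the authors' implementation; any statistical learning model can take the regressor's place.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical coincident training pairs sampled per pixel:
# X = multispectral reflectance (e.g. ~13 bands), Y = hyperspectral reflectance (e.g. ~300 bands)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 0.6, size=(5000, 13))
Y = rng.uniform(0.0, 0.6, size=(5000, 300))

# Exploit spectral redundancy: compress the hyperspectral output to a few principal components.
pca = PCA(n_components=20)
Y_pc = pca.fit_transform(Y)

# Learn the multispectral -> component mapping (placeholder regressor).
model = LinearRegression().fit(X, Y_pc)

# Emulation: predict components for new multispectral pixels and
# back-project them to full hyperspectral spectra.
Y_emulated = pca.inverse_transform(model.predict(X[:10]))
print(Y_emulated.shape)  # (10, 300): one synthetic hyperspectral spectrum per input pixel
```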

In this work, we trained an emulator using two coincident reflectance subsets: an S2 multispectral spaceborne image as input and a HyPlant airborne hyperspectral image as output. The images were acquired on 26 and 27 June 2018, respectively, around the city of Jülich in western Germany. The S2 image provides multispectral information in 13 bands covering 430 to 2280 nm. It was acquired by the MSI sensor of S2A and provided as bottom-of-atmosphere (BOA) reflectance (L2A). The influence on emulator performance of spatial resampling to 10 or 20 m resolution and of excluding the aerosol and water vapour bands was assessed (see the preprocessing sketch below). The HyPlant DUAL image provides contiguous spectral information from 402 to 2356 nm with a spectral resolution of 3-10 nm in the VIS/NIR and 10 nm in the SWIR range. We used the BOA reflectance product of nine HyPlant flight lines mosaicked into one image and compared it with the S2 scene.
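A minimal sketch of the kind of preprocessing described above, assuming the S2 bands are already stacked as a NumPy array; the band list, array shapes and block-averaging resampling are illustrative assumptions, not the processing chain actually used.

```python
import numpy as np

# Hypothetical stack of Sentinel-2 bands at 10 m, shape (bands, rows, cols).
# The exact band set depends on the L2A product; this list is illustrative.
band_names = ["B01", "B02", "B03", "B04", "B05", "B06", "B07",
              "B08", "B8A", "B09", "B10", "B11", "B12"]
s2_10m = np.random.rand(len(band_names), 1000, 1000).astype(np.float32)

# Exclude the aerosol (B01) and water vapour (B09) bands, one of the assessed configurations.
keep = [i for i, name in enumerate(band_names) if name not in ("B01", "B09")]
s2_subset = s2_10m[keep]

# Spatially resample from 10 m to 20 m by simple 2x2 block averaging.
b, r, c = s2_subset.shape
s2_20m = s2_subset.reshape(b, r // 2, 2, c // 2, 2).mean(axis=(2, 4))
print(s2_20m.shape)  # (11, 500, 500)
```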

Regarding the choice of machine learning (ML) algorithm to serve as an emulator, kernel-based ML methods have proven accurate and fast when trained with few samples. However, with many training samples, kernel-based methods become computationally costly, whereas neural networks (NN) remain fast and accurate as the number of samples increases. For this reason, given a dense random sampling over the S2 image and the corresponding HyPlant data as output, an evaluation of multiple ML algorithms showed that NNs achieved superior accuracy in emulating hyperspectral data. Using the NN model, a final emulator was developed that converts an S2 image into a hyperspectral S2-like image. The texture of S2 is thus preserved, while the hyperspectral datacube has the spectral characteristics and quality of HyPlant data. Subsequently, the synthetic S2-like hyperspectral image was successfully validated against a reference dataset obtained by HyPlant, with an R2 of 0.85 and an NRMSE of 3.45% (see the validation sketch below). We observed that the emulator is able to generate S2-like hyperspectral images with high accuracy, including spectral ranges not covered by S2. Finally, it must be remarked that emulated images do not replace hyperspectral image data recorded by spaceborne sensors. However, they can serve as synthetic test data in the preparation of future imaging spectroscopy missions such as FLEX or CHIME. Furthermore, the emulation technique opens the door to fusing high spatial resolution multispectral images with high spectral resolution hyperspectral images.
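For reference, a short sketch of how the reported validation metrics can be computed from emulated and reference spectra. The abstract does not state how the NRMSE was normalised; the version below assumes normalisation by the reference data range, which is one common convention, and uses synthetic arrays in place of the real HyPlant data.

```python
import numpy as np
from sklearn.metrics import r2_score

def nrmse_percent(y_true, y_pred):
    """Root mean squared error normalised by the reference data range, in percent (assumed convention)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

# Hypothetical reference (HyPlant) and emulated reflectance values,
# flattened over pixels and bands.
rng = np.random.default_rng(0)
y_ref = rng.uniform(0.0, 0.6, size=10000)
y_emu = y_ref + rng.normal(0.0, 0.02, size=10000)

print("R2:    %.3f" % r2_score(y_ref, y_emu))
print("NRMSE: %.2f %%" % nrmse_percent(y_ref, y_emu))
```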

O’Hagan, A. Bayesian analysis of computer code outputs: A tutorial. Reliab. Eng. Syst. Saf. 2006, 91, 1290–1300.
Verrelst, J.; Sabater, N.; Rivera, J.P.; Muñoz Marí, J.; Vicent, J.; Camps-Valls, G.; Moreno, J. Emulation of Leaf, Canopy and Atmosphere Radiative Transfer Models for Fast Global Sensitivity Analysis. Remote Sens. 2016, 8, 673.
Verrelst, J.; Rivera Caicedo, J.P.; Vicent, J.; Morcillo Pallarés, P.; Moreno, J. Approximating Empirical Surface Reflectance Data through Emulation: Opportunities for Synthetic Scene Generation. Remote Sens. 2019, 11, 157.