Day 4

Detailed paper information


Paper title Relevance Extraction from Sentinel-2 Multispectral Images using Deep Support Vector Data Description
Authors
  1. Omid Ghozatlou University Politehnica of Bucharest Speaker
  2. Mihai Datcu DLR - German Aerospace Center
Form of presentation Poster
Topics
  • C1. AI and Data Analytics
    • C1.07 ML4Earth: Machine Learning for Earth Sciences
Abstract text Relevance extraction is an essential step in many image processing applications, such as image classification, active learning, sample labeling, and content-based image retrieval (CBIR). The core of CBIR, querying image content, involves two steps. The first is feature extraction, which builds a set of features describing and characterizing images; the second is relevance retrieval, which searches for and retrieves images similar to the query image. Relevance extraction therefore has a significant impact on image retrieval performance.
Support vector data description (SVDD) is a well-known classical approach to one-class classification and anomaly detection. The main idea of SVDD is to map samples into a feature space such that samples of the class of interest fall inside a hypersphere while samples of other classes fall outside it. Integrating state-of-the-art deep learning (DL) algorithms with such conventional models is essential for solving complex science and engineering problems. Over the last decade, DL has attracted considerable attention across many applications, and deep neural networks (DNNs) provide high-level feature extraction. In this study, LeNet, a well-known DNN in computer vision, is used to map samples from the input space into a latent feature space. The objective of the DNN is to minimize the Euclidean distance between the network outputs of the training samples and the center of the hypersphere.
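The training objective described above can be sketched as follows. This is a minimal NumPy illustration, assuming a single linear layer as a stand-in for LeNet, synthetic one-class data, and the common Deep SVDD choice of fixing the center as the mean of the initial embeddings; none of these specifics come from the abstract itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the DNN: one linear layer W (the paper uses LeNet).
n_features, latent_dim = 8, 4
W = rng.normal(size=(n_features, latent_dim))

# Synthetic training samples of the class of interest only.
X = rng.normal(loc=1.0, size=(100, n_features))

# Fix the hypersphere center c as the mean of the initial embeddings
# (a common Deep SVDD initialization; an assumption, not stated in the abstract).
c = (X @ W).mean(axis=0)

def loss(W):
    """Mean squared Euclidean distance of the embeddings to the center."""
    return float(np.mean(np.sum((X @ W - c) ** 2, axis=1)))

init_loss = loss(W)

# Minimize the objective by plain gradient descent on W.
lr = 1e-3
for _ in range(200):
    diff = X @ W - c                       # distance vectors to the center
    W -= lr * (2 / len(X)) * (X.T @ diff)  # gradient of the mean squared distance

final_loss = loss(W)  # the loss should have decreased
```

With a real LeNet the gradient step would be handled by an autodiff framework, but the objective being minimized is the same squared distance to the fixed center.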
To compare the method with the state of the art, we employ the benchmark EuroSAT dataset, captured by the Sentinel-2 satellite. The dataset contains 27,000 multispectral Sentinel-2 image patches in 10 classes, so there are 10 experimental setups, each with a different class of interest. During training, the DNN sees only samples of the class of interest; it never sees samples of the other classes. At test time, all samples of the dataset are used, covering both the class of interest and the other classes. The trained DNN predicts a score for each test sample, measuring the distance of the network output from the center of the hypersphere. A lower distance indicates a sample relevant to the class of interest, while the highest distances correspond to the most ambiguous samples in the dataset.
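At test time the network is used only through this distance score. A minimal sketch of the scoring and ranking step, again with a toy linear map in place of the trained network and synthetic stand-ins for the EuroSAT patches (both assumptions, not from the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

n_features, latent_dim = 8, 4
W = rng.normal(size=(n_features, latent_dim))  # stands in for the trained network

# Synthetic stand-ins: the class of interest clusters tightly in input space,
# samples of the other classes lie elsewhere.
relevant = rng.normal(loc=0.0, scale=0.3, size=(50, n_features))
others = rng.normal(loc=2.0, scale=0.3, size=(50, n_features))

# Hypersphere center estimated from embeddings of the class of interest.
c = (relevant @ W).mean(axis=0)

def score(x):
    """Squared Euclidean distance of the embedding to the center.

    Lower score = more relevant to the class of interest;
    the highest scores flag the most ambiguous samples.
    """
    return float(np.sum((x @ W - c) ** 2))

# Score the full mixed test set and rank it by relevance.
test_set = np.vstack([relevant, others])
scores = np.array([score(x) for x in test_set])
ranking = np.argsort(scores)  # most relevant samples first

mean_relevant = scores[:50].mean()
mean_others = scores[50:].mean()
```

Ranking the whole test set by this score is what supports the downstream uses mentioned in the abstract, such as sample labeling and content-based retrieval.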