Multi-Objective Deep Learning for Image Fusion in Remote Sensing

Date

2021-07-15

Abstract

Remote sensing image fusion plays an important role in Earth monitoring applications. Earth observation satellites capture remote images in different modalities, which fall into two main categories: (1) panchromatic images, which capture spatial detail, and (2) multispectral images, which capture spectral content. These two modalities are acquired at the same time but at different resolutions. The task of image fusion is to combine the two images so that the fused image reflects both spectral and spatial attributes. Deep learning models are increasingly used for this purpose. This dissertation discusses two major challenges in multispectral image fusion as they relate to deep learning solutions. The first challenge involves training deep neural networks with a task-specific loss function; it is addressed by designing a loss function that minimizes spectral and spatial distortions simultaneously. The second challenge arises because panchromatic and multispectral images are captured at different resolutions, so ground-truth data are not normally available; it is addressed by employing deep generative models that ease the dependency on reference data for training. Extensive experimental studies are conducted to show the effectiveness of the developed solutions, and the results indicate their superiority over recently developed deep learning models.
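The combined loss described in the abstract could take many forms; the dissertation's exact formulation is not given here. As a minimal illustrative sketch (all function names and the weighting scheme are assumptions, not the author's method), a spectral term might compare the fused bands against upsampled multispectral bands, while a spatial term might compare image gradients against the panchromatic image:

```python
import numpy as np

def spectral_loss(fused, ms_up):
    # Mean absolute difference between fused bands and the
    # upsampled multispectral bands (spectral fidelity term).
    return np.mean(np.abs(fused - ms_up))

def spatial_loss(fused, pan):
    # Compare gradients of the fused image's intensity channel
    # against gradients of the panchromatic image (spatial detail term).
    intensity = fused.mean(axis=0)            # bands-first layout assumed
    gx_f = np.diff(intensity, axis=1)
    gy_f = np.diff(intensity, axis=0)
    gx_p = np.diff(pan, axis=1)
    gy_p = np.diff(pan, axis=0)
    return np.mean(np.abs(gx_f - gx_p)) + np.mean(np.abs(gy_f - gy_p))

def fusion_loss(fused, ms_up, pan, alpha=0.5):
    # Weighted sum balancing spectral fidelity against spatial detail;
    # alpha is a hypothetical trade-off parameter.
    return alpha * spectral_loss(fused, ms_up) + (1 - alpha) * spatial_loss(fused, pan)
```

In practice such terms would be computed on network outputs inside a deep learning framework; the NumPy version above only shows the structure of a two-term objective.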

Keywords

Deep learning (Machine learning), Remote-sensing images, Multispectral imaging
