Authors: Azarang, Arian; Manoochehri, Hafez E.; Kehtarnavaz, Nasser
Date accessioned: 2019-10-31
Date available: 2019-10-31
Date issued: 2019-03-18
ISSN: 2169-3536
URI: https://hdl.handle.net/10735.1/7050

Abstract: This paper presents a deep learning-based pansharpening method for the fusion of panchromatic and multispectral images in remote sensing applications. The method can be categorized as a component substitution method in which a convolutional autoencoder network is trained to reconstruct original panchromatic images from their spatially degraded versions. Low-resolution multispectral images are then fed into the trained convolutional autoencoder network to generate estimated high-resolution multispectral images. The fusion is achieved by injecting the detail map of each spectral band into the corresponding estimated high-resolution multispectral band. Full-reference and no-reference metrics are computed for the images of three satellite datasets and compared with those of existing fusion methods whose codes are publicly available. The results obtained indicate the effectiveness of the developed deep learning-based method for multispectral image fusion. © 2019 IEEE.

Language: en
Rights: IEEE Open Access. Commercial reuse is prohibited. ©2019 IEEE. https://open.ieee.org/index.php/publishing-options/ieee-access-journal/
Subjects: Convolutions (Mathematics); Deep learning; Remote sensing; Multispectral imaging
Title: Convolutional Autoencoder-Based Multispectral Image Fusion
Type: article
Citation: Azarang, A., H. E. Manoochehri, and N. Kehtarnavaz. 2019. "Convolutional Autoencoder-Based Multispectral Image Fusion." IEEE Access 7: 35673-35683. doi: 10.1109/ACCESS.2019.2905511
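
Note: The abstract describes a three-step pipeline: train a convolutional autoencoder (CAE) to restore spatially degraded panchromatic (PAN) images, pass the low-resolution multispectral (MS) bands through the trained CAE to estimate high-resolution bands, and inject a detail map into each estimated band. The following is a minimal PyTorch sketch of that pipeline, not the authors' released implementation; the network depth and width, the bicubic degradation model, the single PAN-derived detail map, and the uniform injection gain are all illustrative assumptions.

# Minimal sketch of the CAE-based pansharpening pipeline described above.
# Not the authors' code: architecture, degradation model, and injection
# gain are assumptions chosen for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder-decoder that preserves spatial size, so degraded and
        # restored patches align pixel-to-pixel.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def degrade(img, factor=4):
    # Simulate spatial degradation: bicubic downsample, then upsample back.
    lr = F.interpolate(img, scale_factor=1 / factor, mode="bicubic",
                       align_corners=False)
    return F.interpolate(lr, size=img.shape[-2:], mode="bicubic",
                         align_corners=False)

def fuse(model, ms_up, pan, gain=1.0):
    # ms_up: upsampled LR MS bands (B, C, H, W); pan: PAN image (B, 1, H, W).
    # Each MS band is passed through the trained CAE, then a high-frequency
    # detail map (here derived from PAN, an assumption) is injected.
    with torch.no_grad():
        bands = [model(ms_up[:, c:c + 1]) for c in range(ms_up.shape[1])]
    detail = pan - degrade(pan)  # high-frequency detail map
    return torch.cat([b + gain * detail for b in bands], dim=1)

if __name__ == "__main__":
    model = ConvAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    pan = torch.rand(8, 1, 64, 64)  # stand-in PAN training patches
    for _ in range(5):              # a few training steps, for illustration only
        loss = F.mse_loss(model(degrade(pan)), pan)
        opt.zero_grad()
        loss.backward()
        opt.step()
    ms_up = torch.rand(1, 4, 64, 64)  # four upsampled MS bands
    fused = fuse(model, ms_up, torch.rand(1, 1, 64, 64))
    print(fused.shape)  # torch.Size([1, 4, 64, 64])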