Abstract
Pixel-level image fusion refers to the processing and synergistic combination of information gathered by various imaging sources to provide a better understanding of a scene. We formulate image fusion as an optimization problem and propose an information-theoretic approach in a multiscale framework to obtain its solution. A biorthogonal wavelet transform of each source image is first calculated, and a new Jensen-Rényi divergence-based fusion algorithm is developed to construct composite wavelet coefficients according to a measurement of the information patterns inherent in the source images. Experimental results on the fusion of multi-sensor navigation images, multi-focus optical images, multi-modality medical images, and multi-spectral remote sensing images are presented to illustrate the proposed fusion scheme.
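As a minimal sketch of the divergence measure named above (not the paper's full fusion algorithm), the Jensen-Rényi divergence of a set of distributions is the Rényi entropy of their weighted mixture minus the weighted sum of their individual Rényi entropies; the function names and parameter choices below are illustrative assumptions:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1)
    for a discrete probability distribution p."""
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi_divergence(dists, weights, alpha=0.5):
    """Jensen-Rényi divergence: Rényi entropy of the weighted
    mixture minus the weighted sum of individual entropies.
    Nonnegative for alpha in (0, 1), and zero when all the
    input distributions coincide."""
    dists = np.asarray(dists, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mixture = weights @ dists  # pointwise weighted mixture distribution
    individual = np.array([renyi_entropy(p, alpha) for p in dists])
    return renyi_entropy(mixture, alpha) - np.sum(weights * individual)
```

In a fusion setting, such a measure scores how much the local distributions of wavelet coefficients from the source images disagree; identical inputs give zero divergence, while disjoint inputs give a strictly positive value.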