Infrared and visible image fusion aims to synthesize a single image that combines the complementary information of the source images, such as thermal radiation and detailed texture. However, existing fusion methods tend to focus on improving visual quality and evaluation metrics while neglecting the semantic information required by high-level image-processing tasks. In this paper, we present a novel approach to image fusion in which fused images are synthesized using semantic layouts generated via unsupervised learning. By introducing an attention mechanism, the relationship between each pair of pixels is captured to construct soft semantic layouts, which in turn provide global context so that regions with the same semantics exhibit a consistent fusion effect. Leveraging this semantic information, our method automatically learns fusion weights for the two source images at each spatial position. Compared with other state-of-the-art image fusion methods, our approach achieves excellent performance in both qualitative results and quantitative metrics. Moreover, it preserves high-level semantic information to the greatest extent, which is one of its distinguishing characteristics.
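The idea of building a soft semantic layout from pixel-pair attention and then deriving per-position fusion weights can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature maps, the random projection standing in for a learned 1x1 convolution, and the function names are all assumptions introduced for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion_weights(feat_ir, feat_vis):
    """Toy sketch: build a soft semantic layout via pixel-pair attention
    and derive a per-pixel fusion weight for the infrared source.

    feat_ir, feat_vis: (H, W, C) feature maps of the two sources
    (hypothetical inputs; in practice these would come from an encoder).
    Returns w: (H, W) weights in (0, 1); the visible source gets 1 - w.
    """
    H, W, C = feat_ir.shape
    x = (feat_ir + feat_vis).reshape(H * W, C)          # joint features, N x C
    # pairwise affinities between every pair of pixels (the "soft layout")
    affinity = softmax(x @ x.T / np.sqrt(C), axis=-1)   # N x N
    # aggregate global context so same-semantic pixels get similar features
    context = affinity @ x                              # N x C
    # project context to a scalar weight per pixel; the random projection
    # is a placeholder for a learned 1x1 convolution
    rng = np.random.default_rng(0)
    proj = rng.standard_normal(C) / np.sqrt(C)
    w = 1.0 / (1.0 + np.exp(-(context @ proj)))         # sigmoid -> (0, 1)
    return w.reshape(H, W)

def fuse(img_ir, img_vis, w):
    # weighted blend of the two source images at each spatial position
    return w[..., None] * img_ir + (1.0 - w)[..., None] * img_vis
```

Because pixels with similar semantics attend to similar context, they receive similar weights, which yields the consistent per-region fusion effect described above.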