Open Access

| Issue | JNWPU, Volume 40, Number 6, December 2022 |
|---|---|
| Page(s) | 1404 - 1413 |
| DOI | https://doi.org/10.1051/jnwpu/20224061404 |
| Published online | 10 February 2023 |