Open Access
Volume 39, Number 4, August 2021
Page(s) 930 - 936
Published online 23 September 2021
