Open Access

JNWPU, Volume 42, Number 4, August 2024

Page(s): 744–752
DOI: https://doi.org/10.1051/jnwpu/20244240744
Published online: 08 October 2024
- SHAFER S A. Using color to separate reflection components[J]. Color Research & Application, 1985, 10(4): 210–218
- ZHU Xinli, ZHANG Yasheng, FANG Yuqiang, et al. Review of multi-exposure image fusion methods[J]. Laser & Optoelectronics Progress, 2023, 60(22): 2200003 (in Chinese)
- FERIS R S, RASKAR R, TAN K H, et al. Specular highlights detection and reduction with multi-flash photography[J]. Journal of the Brazilian Computer Society, 2006, 12(1): 35–42
- MA K, DUANMU Z, ZHU H, et al. Deep guided learning for fast multi-exposure image fusion[J]. IEEE Trans on Image Processing, 2020, 29(1): 2808–2819
- PRABHAKAR K R, SRIKAR V S, BABU R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//IEEE International Conference on Computer Vision, 2017: 4724–4732
- XU H, MA J, ZHANG X P. MEF-GAN: multi-exposure image fusion via generative adversarial networks[J]. IEEE Trans on Image Processing, 2020, 29(5): 7203–7216
- AMIN-NAJI M, AGHAGOLZADEH A, EZOJI M. Ensemble of CNN for multi-focus image fusion[J]. Information Fusion, 2019, 51: 201–214
- YAN X, GILANI S Z, QIN H, et al. Structural similarity loss for learning to fuse multi-focus images[J]. Sensors, 2020, 20(22): 6647–6664
- PENG F, ZHANG M, LAI S, et al. Deep HDR reconstruction of dynamic scenes[C]//IEEE International Conference on Image, Vision and Computing, 2018: 347–351
- ILG E, MAYER N, SAIKIA T, et al. FlowNet 2.0: evolution of optical flow estimation with deep networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2462–2470
- YAN Q, GONG D, SHI Q, et al. Attention-guided network for ghost-free high dynamic range imaging[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 1751–1760
- HAN D, LI L, GUO X, et al. Multi-exposure image fusion via deep perceptual enhancement[J]. Information Fusion, 2022, 79: 248–262
- RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[J/OL]. (2016-01-07)[2023-05-22]
- MA K, ZENG K, WANG Z. Perceptual quality assessment for multi-exposure image fusion[J]. IEEE Trans on Image Processing, 2015, 24(11): 3345–3356
- FU J, LI W, DU J, et al. DSAGAN: a generative adversarial network based on dual-stream attention mechanism for anatomical and functional image fusion[J]. Information Sciences, 2021, 576: 484–506
- HE K, SUN J, TANG X. Guided image filtering[J]. IEEE Trans on Pattern Analysis and Machine Intelligence, 2013, 35(6): 1397–1409
- LUO Lingjie. Research on image highlight removal based on deep learning[D]. Hangzhou: Hangzhou Dianzi University, 2020 (in Chinese)
- XU J, LI Z, DU B, et al. Reluplex made more practical: Leaky ReLU[C]//2020 IEEE Symposium on Computers and Communications, 2020: 1–7
- WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Trans on Image Processing, 2004, 13(4): 600–612
- HUANG H, LIN L, TONG R, et al. UNet 3+: a full-scale connected UNet for medical image segmentation[C]//2020 IEEE International Conference on Acoustics, Speech and Signal Processing, 2020: 1055–1059
- ZHANG Xiaoli, LI Xiongfei, LI Jun. Validation and correlation analysis of metrics for evaluating performance of image fusion[J]. Acta Automatica Sinica, 2014, 40(2): 306–315 (in Chinese)