Open Access
Volume 37, Number 2, April 2019
Page(s) 315 - 322
Published online 05 August 2019
  1. Cohen M F, Greenberg D P. The Hemi-Cube: A Radiosity Solution for Complex Environments[J]. ACM SIGGRAPH Computer Graphics, 1985, 19(3): 31–40
  2. Onoe Y, Yamazawa K, Takemura H, et al. Telepresence by Real-Time View-Dependent Image Generation from Omnidirectional Video Streams[J]. Computer Vision and Image Understanding, 1998, 71(2): 154–165
  3. Krishnan A, Ahuja N. Panoramic Image Acquisition[C]//IEEE Conference on Computer Vision and Pattern Recognition, 1996: 379
  4. Lawrence S, Giles C L, Tsoi A C, et al. Face Recognition: A Convolutional Neural-Network Approach[J]. IEEE Trans on Neural Networks, 1997, 8(1): 98–113
  5. LeCun Y, Bottou L, Bengio Y, et al. Gradient-Based Learning Applied to Document Recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278–2324
  6. LeCun Y, Boser B, Denker J S, et al. Backpropagation Applied to Handwritten Zip Code Recognition[J]. Neural Computation, 1989, 1(4): 541–551
  7. Zhang L, Tam W J. Stereoscopic Image Generation Based on Depth Images for 3D TV[J]. IEEE Trans on Broadcasting, 2005, 51(2): 191–199
  8. Park J H, Baasantseren G, Kim N, et al. View Image Generation in Perspective and Orthographic Projection Geometry Based on Integral Imaging[J]. Optics Express, 2008, 16(12): 8800–8813
  9. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative Adversarial Nets[C]//Advances in Neural Information Processing Systems, 2014: 2672–2680
  10. Bengio Y, Laufer E, Alain G, et al. Deep Generative Stochastic Networks Trainable by Backprop[C]//International Conference on Machine Learning, 2014: 226–234
  11. Jarrett K, Kavukcuoglu K, LeCun Y. What is the Best Multi-Stage Architecture for Object Recognition?[C]//IEEE 12th International Conference on Computer Vision, 2009: 2146–2153
  12. Chen X, Duan Y, Houthooft R, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets[C]//Advances in Neural Information Processing Systems, 2016: 2172–2180
  13. Isola P, Zhu J Y, Zhou T, et al. Image-to-Image Translation with Conditional Adversarial Networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017
  14. Karras T, Aila T, Laine S, et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation[EB/OL]. (2017-10-27)[2018-05-09]
  15. Radford A, Metz L, Chintala S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks[EB/OL]. (2015-11-19)[2018-05-09]
  16. Liu M Y, Tuzel O. Coupled Generative Adversarial Networks[C]//Advances in Neural Information Processing Systems, 2016: 469–477
  17. Denton E L, Chintala S, Fergus R. Deep Generative Image Models Using a Laplacian Pyramid of Adversarial Networks[C]//Advances in Neural Information Processing Systems, 2015: 1486–1494
  18. Arjovsky M, Chintala S, Bottou L. Wasserstein Generative Adversarial Networks[C]//International Conference on Machine Learning, 2017: 214–223
  19. Mirza M, Osindero S. Conditional Generative Adversarial Nets[EB/OL]. (2014-11-06)[2018-05-09]
  20. Dziugaite G K, Roy D M, Ghahramani Z. Training Generative Neural Networks via Maximum Mean Discrepancy Optimization[EB/OL]. (2015-05-14)[2018-05-09]
  21. Ioffe S, Szegedy C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift[EB/OL]. (2015-02-11)[2018-05-09]
  22. Clevert D A, Unterthiner T, Hochreiter S. Fast and Accurate Deep Network Learning by Exponential Linear Units[EB/OL]. (2015-11-23)[2018-05-09]
  23. Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting[J]. The Journal of Machine Learning Research, 2014, 15(1): 1929–1958
  24. York D G, Adelman J, Anderson J E Jr, et al. The Sloan Digital Sky Survey: Technical Summary[J]. The Astronomical Journal, 2000, 120(3): 1579–1587
  25. Smith J A, Tucker D L, Kent S, et al. The ugriz Standard-Star System[J]. The Astronomical Journal, 2002, 123: 2121–2144
  26. Kingma D P, Ba J. Adam: A Method for Stochastic Optimization[EB/OL]. (2014-12-22)[2018-05-09]
  27. Lin J. Divergence Measures Based on the Shannon Entropy[J]. IEEE Trans on Information Theory, 1991, 37(1): 145–151
  28. Joyce J M. Kullback-Leibler Divergence[M]//International Encyclopedia of Statistical Science. Berlin: Springer, 2011: 720–722
