Open Access
Volume 41, Number 4, August 2023
Pages 784–793
Published online 08 December 2023
  1. YOU Q, LUO J, JIN H, et al. Cross-modality consistent regression for joint visual-textual sentiment analysis of social multimedia[C]//Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, 2016: 13–22
  2. ZHANG Y, LAI G, ZHANG M, et al. Explicit factor models for explainable recommendation based on phrase-level sentiment analysis[C]//Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, 2014: 83–92
  3. ZHAO S, YAO X, YANG J, et al. Affective image content analysis: two decades review and new perspectives[J]. IEEE Trans on Pattern Analysis and Machine Intelligence, 2021, 44(10): 6729–6751
  4. OLKIEWICZ K A, MARKOWSKA-KACZMAR U. Emotion-based image retrieval: an artificial neural network approach[C]//Proceedings of the International Multiconference on Computer Science and Information Technology, 2010: 89–96
  5. ZHANG H, YANG Z, GÖNEN M, et al. Affective abstract image classification and retrieval using multiple kernel learning[C]//International Conference on Neural Information Processing, 2013: 166–175
  6. ZHAO S, YAO H, YANG Y, et al. Affective image retrieval via multi-graph learning[C]//Proceedings of the 22nd ACM International Conference on Multimedia, 2014: 1025–1028
  7. YOU Q, LUO J, JIN H, et al. Robust image sentiment analysis using progressively trained and domain transferred deep networks[C]//Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015
  8. YANG J, SHE D, SUN M. Joint image emotion classification and distribution learning via deep convolutional neural network[C]//International Joint Conference on Artificial Intelligence, 2017: 3266–3272
  9. YANG J, SHE D, LAI Y K, et al. Retrieving and classifying affective images via deep metric learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2018
  10. YAO X, SHE D, ZHANG H, et al. Adaptive deep metric learning for affective image retrieval and classification[J]. IEEE Trans on Multimedia, 2020, 23: 1640–1653
  11. YAO X, SHE D, ZHAO S, et al. Attention-aware polarity sensitive embedding for affective image retrieval[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 1140–1150
  12. YAO X, ZHAO S, LAI Y K, et al. APSE: attention-aware polarity-sensitive embedding for emotion-based image retrieval[J]. IEEE Trans on Multimedia, 2020, 23: 4469–4482
  13. GOI M T, KALIDAS V, YUNUS N. Mediating roles of emotion and experience in the stimulus-organism-response framework in higher education institutions[J]. Journal of Marketing for Higher Education, 2018, 28(1): 90–112
  14. CHEN T, BORTH D, DARRELL T, et al. DeepSentiBank: visual sentiment concept classification with deep convolutional neural networks[J/OL]. (2014-10-30)[2022-08-15]
  15. BORTH D, JI R, CHEN T, et al. Large-scale visual sentiment ontology and detectors using adjective noun pairs[C]//Proceedings of the 21st ACM International Conference on Multimedia, 2013: 223–232
  16. HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770–778
  17. GROVER A, LESKOVEC J. Node2vec: scalable feature learning for networks[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016: 855–864
  18. XU H, QI G, LI J, et al. Fine-grained image classification by visual-semantic embedding[C]//International Joint Conference on Artificial Intelligence, 2018: 1043–1049
  19. YOU Q, LUO J, JIN H, et al. Building a large scale dataset for image emotion recognition: the fine print and the benchmark[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2016
  20. MIKELS J A, FREDRICKSON B L, LARKIN G R, et al. Emotional category data on images from the International Affective Picture System[J]. Behavior Research Methods, 2005, 37(4): 626–630
  21. PENG K C, CHEN T, SADOVNIK A, et al. A mixed bag of emotions: model, predict, and transfer emotion distributions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 860–868
  22. EKMAN P. What emotion categories or dimensions can observers judge from facial behavior?[J/OL]. (2013-11-17)[2022-08-15]
  23. GAO Y, WANG M, TAO D, et al. 3-D object retrieval and recognition with hypergraph analysis[J]. IEEE Trans on Image Processing, 2012, 21(9): 4290–4303
  24. MANJUNATH B S, OHM J R, VASUDEVAN V V, et al. Color and texture descriptors[J]. IEEE Trans on Circuits and Systems for Video Technology, 2001, 11(6): 703–715
  25. DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009: 248–255
  26. SCHROFF F, KALENICHENKO D, PHILBIN J. FaceNet: a unified embedding for face recognition and clustering[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 815–823
  27. KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 83–90
  28. SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]//3rd International Conference on Learning Representations, San Diego, 2015
  29. VAN DER MAATEN L, HINTON G. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9(11): 2579–2605
