JNWPU, Volume 43, Number 2, April 2025
Pages 388–397
DOI: https://doi.org/10.1051/jnwpu/20254320388
Published online 4 June 2025
Open Access
  1. MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]//Advances in Neural Information Processing Systems, 2013
  2. PENNINGTON J, SOCHER R, MANNING C D. GloVe: global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014: 1532–1543
  3. JOULIN A, GRAVE E, BOJANOWSKI P, et al. Bag of tricks for efficient text classification[C]//Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, 2017: 427–431
  4. BARKAN O, KOENIGSTEIN N. Item2vec: neural item embedding for collaborative filtering[C]//2016 IEEE 26th International Workshop on Machine Learning for Signal Processing, 2016: 1–6
  5. CONG S, ZHOU Y. A review of convolutional neural network architectures and their optimizations[J]. Artificial Intelligence Review, 2023, 56: 1905–1969
  6. ORVIETO A, SMITH S L, GU A, et al. Resurrecting recurrent neural networks for long sequences[C]//International Conference on Machine Learning, 2023: 26670–26698
  7. KIM Y. Convolutional neural networks for sentence classification[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014: 1746–1751
  8. BANSAL M, GOYAL A, CHOUDHARY A. A comparative analysis of K-nearest neighbor, genetic, support vector machine, decision tree, and long short term memory algorithms in machine learning[J]. Decision Analytics Journal, 2022, 3: 100071
  9. WEERAKODY P B, WONG K W, WANG G, et al. A review of irregular time series data handling with gated recurrent neural networks[J]. Neurocomputing, 2021, 441: 161–178
  10. YANG Z, YANG D, DYER C, et al. Hierarchical attention networks for document classification[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016: 1480–1489
  11. HAN K, WANG Y, CHEN H, et al. A survey on vision transformer[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(1): 87–110
  12. OYEDOTUN O K, PAPADOPOULOS K, AOUADA D. A new perspective for understanding generalization gap of deep neural networks trained with large batch sizes[J]. Applied Intelligence, 2023, 53(12): 15621–15637
  13. BARTOLDSON B R, KAILKHURA B, BLALOCK D. Compute-efficient deep learning: algorithmic trends and opportunities[J]. Journal of Machine Learning Research, 2023, 24(1): 77
  14. NOKHWAL S, CHILAKALAPUDI P, DONEKAL P, et al. Accelerating neural network training: a brief review[C]//Proceedings of the 2024 8th International Conference on Intelligent Systems, 2024: 31–35
  15. ABADI M, AGARWAL A, BARHAM P, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems[J/OL]. (2016-03-16)[2024-03-21]
  16. PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library[C]//Advances in Neural Information Processing Systems, 2019
  17. RÉDEI M, STÖLTZNER M. John von Neumann and the foundations of quantum physics[M]. Berlin: Springer, 2003
  18. MAAS A, DALY R E, PHAM P T, et al. Learning word vectors for sentiment analysis[C]//Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 2011: 142–150
  19. PANG B, LEE L, VAITHYANATHAN S. Thumbs up? Sentiment classification using machine learning techniques[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2002: 79–86
  20. SOCHER R, PERELYGIN A, WU J, et al. Recursive deep models for semantic compositionality over a sentiment treebank[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013: 1631–1642
  21. ZHANG X, ZHAO J, LECUN Y. Character-level convolutional networks for text classification[C]//International Conference on Neural Information Processing Systems, 2015
  22. WANG Y, SUN A, HAN J, et al. Sentiment analysis by capsules[C]//Proceedings of the 2018 World Wide Web Conference, 2018: 1165–1174
  23. ALBAWI S, MOHAMMED T A, AL-ZAWI S. Understanding of a convolutional neural network[C]//2017 International Conference on Engineering and Technology, 2017: 1–6
