Open Access
Volume 39, Number 5, October 2021
Page(s) 1077 - 1086
Published online 14 December 2021
  1. Bao J J, Ji L J. Frequency hopping sequences with optimal partial Hamming correlation[J]. IEEE Trans on Information Theory, 2016, 62(6): 3768–3783
  2. Wang X J, Lei M J, Zhao M J, et al. Cooperative anti-jamming strategy and outage probability optimization for multi-hop ad-hoc networks[C]//2017 IEEE 86th Vehicular Technology Conference, 2017: 24–27
  3. Sun J, Li X. Carrier frequency offset synchronization algorithm for short burst communication system[C]//Proceedings of 2016 IEEE 13th International Conference on Signal Processing, 2016: 6–10
  4. Li Dongsheng, Gao Yang, Yong Aixia. Jamming resource allocation via improved discrete cuckoo search algorithm[J]. Journal of Electronics & Information Technology, 2016, 38(4): 899–905 (in Chinese)
  5. Liu Yian, Ni Tianquan, Zhang Xiuhui, et al. Application of simulated annealing algorithm in optimizing allocation of radar jamming resources[J]. Systems Engineering and Electronics, 2009, 31(8): 1914–1917 (in Chinese)
  6. Yuan Jianguo, Nan Shuchong, Zhang Fang, et al. Adaptive resource allocation for multi-user OFDM based on bee colony algorithm[J]. Journal of Jilin University, 2019, 49(2): 624–630 (in Chinese)
  7. Luong N C, Hoang D T, Gong S, et al. Applications of deep reinforcement learning in communications and networking: a survey[J]. IEEE Communications Surveys & Tutorials, 2019, 21(4): 3133–3174
  8. Wang S, Liu H, Gomes P H, et al. Deep reinforcement learning for dynamic multichannel access in wireless networks[J]. IEEE Trans on Cognitive Communications and Networking, 2018, 4(2): 257–265
  9. Xu Z, Wang Y, Tang J, et al. A deep reinforcement learning based framework for power-efficient resource allocation in cloud RANs[C]//2017 IEEE International Conference on Communications, 2017: 1–6
  10. Liao Xiaomin, Yan Shaohu, Shi Jia, et al. Deep reinforcement learning based resource allocation algorithms in cellular networks[J]. Journal on Communications, 2019, 40(2): 11–18 (in Chinese)
  11. Meng F, Chen P, Wu L, et al. Power allocation in multi-user cellular networks: deep reinforcement learning approaches[J]. IEEE Trans on Wireless Communications, 2020, 19(10): 6255–6267
  12. Zhao D, Qin H, Song B, et al. A graph convolutional network-based deep reinforcement learning approach for resource allocation in a cognitive radio network[J]. Sensors, 2020, 20(18): 5216–5239
  13. Kaur A, Kumar K. Energy-efficient resource allocation in cognitive radio networks under cooperative multi-agent model-free reinforcement learning schemes[J]. IEEE Trans on Network and Service Management, 2020, 17(3): 1337–1348
  14. Zhang H, Yang N, Long K, et al. Power control based on deep reinforcement learning for spectrum sharing[J]. IEEE Trans on Wireless Communications, 2020, 19(6): 4209–4219
  15. Xu Y, Yang C, Hua M, et al. Deep deterministic policy gradient (DDPG)-based resource allocation scheme for NOMA vehicular communications[J]. IEEE Access, 2020, 8: 18797–18807
  16. Zhao N, Liang Y, Niyato D, et al. Deep reinforcement learning for user association and resource allocation in heterogeneous cellular networks[J]. IEEE Trans on Wireless Communications, 2019, 18(11): 5141–5152
  17. Amuru S, Tekin C, van der Schaar M, et al. Jamming bandits: a novel learning method for optimal jamming[J]. IEEE Trans on Wireless Communications, 2016, 15(4): 2792–2808
  18. Luo Z, Zhang S. Dynamic spectrum management: complexity and duality[J]. IEEE Journal of Selected Topics in Signal Processing, 2008, 2(1): 57–73
  19. Haarnoja T, Tang H, Abbeel P, et al. Reinforcement learning with deep energy-based policies[C]//Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 2017: 1352–1361
  20. Haarnoja T, Zhou A, Abbeel P, et al. Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor[C]//Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 2018: 1861–1870
  21. Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529–533
  22. Fujimoto S, van Hoof H, Meger D. Addressing function approximation error in actor-critic methods[C]//Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 2018: 1587–1596
  23. Lillicrap T, Hunt J, Pritzel A, et al. Continuous control with deep reinforcement learning[C]//Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015: 2361–2369
  24. Kingma D P, Salimans T, Welling M. Variational dropout and the local reparameterization trick[C]//Advances in Neural Information Processing Systems, Montreal, Canada, 2015: 2575–2583
