Open Access
JNWPU
Volume 39, Number 3, June 2021
Page(s) 641 - 649
DOI https://doi.org/10.1051/jnwpu/20213930641
Published online 09 August 2021
  1. Vardhan S. Information jamming in electronic warfare: operational requirements and techniques[C]//International Conference on Electronics, 2015: 49–54
  2. Xiao L, Chen T, Liu J, et al. Anti-jamming transmission Stackelberg game with observation errors[J]. IEEE Communications Letters, 2015, 19(6): 949–952
  3. Xiao L, Liu J, Li Q, et al. User-centric view of jamming games in cognitive radio networks[J]. IEEE Trans on Information Forensics & Security, 2015, 10(12): 2578–2590
  4. Tang X, Ren P, Du Q, et al. Securing wireless transmission against reactive jamming: a Stackelberg game framework[C]//IEEE Global Communications Conference, 2016: 1–6
  5. Jia L, Xu Y, Sun Y, et al. A game-theoretic learning approach for anti-jamming dynamic spectrum access in dense wireless networks[J]. IEEE Trans on Vehicular Technology, 2019, 68(2): 1646–1656
  6. Yang D, Xue G, Zhang J, et al. Coping with a smart jammer in wireless networks: a Stackelberg game approach[J]. IEEE Trans on Wireless Communications, 2013, 12(8): 4038–4047
  7. Jia L, Yao F, Sun Y, et al. A hierarchical learning solution for anti-jamming Stackelberg game with discrete power strategies[J]. IEEE Wireless Communications Letters, 2017, 6(6): 818–821
  8. Nasir Y S, Guo D. Multi-agent deep reinforcement learning for dynamic power allocation in wireless networks[J]. IEEE Journal on Selected Areas in Communications, 2019, 37(10): 2239–2250
  9. Liu X, Xu Y, Jia L, et al. Anti-jamming communications using spectrum waterfall: a deep reinforcement learning approach[J]. IEEE Communications Letters, 2018, 22(5): 998–1001
  10. Liu S, Xu Y, Chen X, et al. Pattern-aware intelligent anti-jamming communication: a sequential deep reinforcement learning approach[J]. IEEE Access, 2019, 7: 169204–169216
  11. Yao F, Jia L. A collaborative multi-agent reinforcement learning anti-jamming algorithm in wireless networks[J]. IEEE Wireless Communications Letters, 2019, 8(4): 1024–1027
  12. Machuzak S, Jayaweera S K. Reinforcement learning based anti-jamming with wideband autonomous cognitive radios[C]//IEEE/CIC International Conference on Communications, 2016: 1–5
  13. Aref M A, Jayaweera S K, Machuzak S. Multi-agent reinforcement learning based cognitive anti-jamming[C]//2017 IEEE Wireless Communications and Networking Conference (WCNC), 2017: 1–6
  14. Lopez-Benitez M, Casadevall F. Improved energy detection spectrum sensing for cognitive radio[J]. IET Communications, 2012, 6(8): 785–796
  15. Zhang L, Tan J, Liang Y C, et al. Deep reinforcement learning-based modulation and coding scheme selection in cognitive heterogeneous networks[J]. IEEE Trans on Wireless Communications, 2019, 18(6): 3281–3294
  16. He Y, Zhang Z, Yu F R, et al. Deep-reinforcement-learning-based optimization for cache-enabled opportunistic interference alignment wireless networks[J]. IEEE Trans on Vehicular Technology, 2017, 66(11): 10433–10445
  17. Xi L, Yu T, Yang B, et al. A novel multi-agent decentralized win or learn fast policy hill-climbing with eligibility trace algorithm for smart generation control of interconnected complex power grids[J]. Energy Conversion and Management, 2015, 103: 82–93
  18. Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518: 529–533
  19. Jin C, Allen-Zhu Z, Bubeck S, et al. Is Q-learning provably efficient?[C]//32nd International Conference on Neural Information Processing Systems, 2018: 4868–4878