JNWPU
Volume 40, Number 5, October 2022
Page(s) 970 - 979
DOI https://doi.org/10.1051/jnwpu/20224050970
Published online 28 November 2022
  1. LYU Qian, TAO Peng, WU Hong, et al. Autonomous flight and obstacle avoidance of MAV in indoor environment[J]. Software Guide, 2021, 20(2): 114–118 (in Chinese)
  2. LAI Wugang, LIU Zongwen, JI Kaifei, et al. System design of autonomous patrol UAV based on lidar[J]. Microcontrollers and Embedded System Applications, 2020, 20(5): 4 (in Chinese)
  3. LI Bohao, LUO Yonghan, PENG Keqin. Design of indoor mobile robot based on laser SLAM[J]. Digital World, 2020, 12: 16–18 (in Chinese)
  4. YANG Wei, ZHU Wenqiu, ZHANG Changlong. UAV rapid and autonomous obstacle avoidance based on RGB-D camera[J]. Journal of Hunan University of Technology, 2015, 29(6): 6 (in Chinese)
  5. GIUSTI A, GUZZI J, DAN C C, et al. A machine learning approach to visual perception of forest trails for mobile robots[J]. IEEE Robotics & Automation Letters, 2016, 1(2): 661–667
  6. HADSELL R, ERKAN A, SERMANET P, et al. Deep belief net learning in a long-range vision system for autonomous off-road driving[C]//Proceedings of the 2008 IEEE Conference on Intelligent Robots and Systems, New York, 2008: 628–633
  7. XING Guansheng, ZHANG Kaiwen, DU Chunyan. Mobile robot obstacle avoidance algorithm based on deep reinforcement learning[C]//The 30th China Process Control Conference, 2019 (in Chinese)
  8. MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529
  9. KULKARNI T D, NARASIMHAN K R, SAEEDI A, et al. Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation[J/OL]. (2016-05-31)[2021-12-02]. https://arxiv.org/pdf/1604.06057.pdf
  10. SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484–489
  11. SADEGHI F, LEVINE S. CAD2RL: real single-image flight without a single real image[J/OL]. (2016-06-08)[2021-12-02]. https://arxiv.org/pdf/1611.04201v4.pdf
  12. SINGLA A, PADAKANDLA S, BHATNAGAR S. Memory-based deep reinforcement learning for obstacle avoidance in UAV with limited environment knowledge[J]. IEEE Trans on Intelligent Transportation Systems, 2019, 99: 1–12
  13. HOWARD R A. Dynamic programming and Markov processes[J]. Mathematical Gazette, 1960, 3(358): 120
  14. BERTSEKAS D P. Dynamic programming and optimal control[M]. Belmont, MA: Athena Scientific, 1995: 20–25
  15. SCHMIDHUBER J. Multi-column deep neural networks for image classification[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition, New York, 2012: 3642–3649
  16. LONG D, ZHAN R, MAO Y. Recurrent neural networks with finite memory length[J]. IEEE Access, 2019, 7: 3642–3649
  17. CHUNG J, GULCEHRE C, CHO K H, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[J/OL]. (2014-12-11)[2021-12-02]. https://arxiv.org/pdf/1412.3555.pdf
  18. SRDJAN J, VICTOR G, ANDREW B. Airway anatomy of airsim high-fidelity simulator[J]. Anesthesiology, 2013, 118(1): 229–230
