Open Access
Volume 41, Number 6, December 2023
Page(s) 1190 - 1197
Published online 26 February 2024
  1. LI Kequan, CHEN Yan, LIU Jiachen, et al. Survey of deep learning-based object detection algorithms[J]. Computer Engineering, 2022, 48(7): 1–12 (in Chinese)
  2. DONG Wenxuan, LIANG Hongtao, LIU Guozhu, et al. Review of deep convolution applied to target detection algorithms[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(5): 1025–1042 (in Chinese)
  3. VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//31st International Conference on Neural Information Processing Systems, New York, 2017: 6000–6010
  4. CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[C]//2020 European Conference on Computer Vision, Cham, 2020: 213–229
  5. ZHOU Quan, NI Yinghao, MO Yuwei, et al. FMA-DETR: a transformer object detection method without encoder[J/OL]. (2023-10-16)[2023-11-30] (in Chinese)
  6. LIAO Junshuang, TAN Qinghong. DETR with multi-granularity spatial attention and spatial prior supervision[J/OL]. (2023-09-26)[2023-11-30] (in Chinese)
  7. YAO Z, AI J, LI B, et al. Efficient DETR: improving end-to-end object detector with dense prior[J/OL]. (2021-08-03)[2023-01-09]
  8. DUAN K, BAI S, XIE L, et al. CenterNet: keypoint triplets for object detection[C]//2019 IEEE/CVF International Conference on Computer Vision, Piscataway, 2019: 6568–6577
  9. ZHU X, SU W, LU L, et al. Deformable DETR: deformable transformers for end-to-end object detection[C]//International Conference on Learning Representations, 2021
  10. LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: common objects in context[C]//13th European Conference on Computer Vision, Cham, 2014: 740–755
  11. REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Trans on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149
  12. ZHU Zhangli, RAO Yuan, WU Yuan, et al. Research progress of attention mechanism in deep learning[J]. Journal of Chinese Information Processing, 2019, 33(6): 1–11 (in Chinese)
  13. GAO P, ZHENG M, WANG X, et al. Fast convergence of DETR with spatially modulated co-attention[C]//2021 IEEE/CVF International Conference on Computer Vision, Piscataway, 2021: 3601–3610
  14. LIU Qingwen. Construction of vectorized HD map based on transformer[D]. Shenyang: Liaoning University, 2023 (in Chinese)
  15. LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Trans on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318–327
  16. ZHOU D, FANG J, SONG X, et al. IoU loss for 2D/3D object detection[C]//2019 International Conference on 3D Vision, Piscataway, 2019: 85–94
  17. GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition, Piscataway, 2012: 3354–3361