Open Access
Volume 39, Number 3, June 2021
Page(s) 510 - 520
Published online 09 August 2021
  1. Huang P, Liao G, Yang Z, et al. A fast SAR imaging method for ground moving target using a second-order WVD transform[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(4): 1940–1956
  2. Wang Z, Liu M, Ai G, et al. Focusing of bistatic SAR with curved trajectory based on extended azimuth nonlinear chirp scaling[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 99: 1–20
  3. Chen Q, Yu A, Sun Z, et al. A multi-mode space-borne SAR simulator based on SBRAS[C]//Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 2012: 4567–4570
  4. Komiske P T, Metodiev E M, Schwartz M D. Deep learning in color: towards automated quark/gluon jet discrimination[J]. Journal of High Energy Physics, 2017(1): 110. doi: 10.1007/JHEP01(2017)110
  5. Xu X, Li W, Ran Q, et al. Multisource remote sensing data classification based on convolutional neural network[J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 99: 1–13
  6. Kratzert F, Klotz D, Brenner C, et al. Rainfall-runoff modelling using long short-term memory (LSTM) networks[J]. Hydrology and Earth System Sciences, 2018, 22(11): 6005–6022
  7. Awais M, Long X, Yin B, et al. A hybrid DCNN-SVM model for classifying neonatal sleep and wake states based on facial expression in video[J]. IEEE Journal of Biomedical and Health Informatics, 2021, 25(5): 1441–1449
  8. Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Advances in Neural Information Processing Systems, 2012, 25(2): 1–9
  9. Hennessy J L, Patterson D A. Computer architecture: a quantitative approach[M]. 6th Edition. Cambridge: Morgan Kaufmann Publishers Inc, 2018
  10. Desoli G. 14.1 A 2.9 TOPS/W deep convolutional neural network SoC in FD-SOI 28 nm for intelligent embedded systems[C]//2017 IEEE International Solid-State Circuits Conference, San Francisco, CA, 2017: 238–239
  11. Lou Y, Clark D, Marks P, et al. Onboard radar processor development for rapid response to natural hazards[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2016, 9(6): 2770–2776. doi: 10.1109/JSTARS.2016.2558505
  12. Moons B, Verhelst M. A 0.3–2.6 TOPS/W precision-scalable processor for real-time large-scale ConvNets[C]//2016 IEEE Symposium on VLSI Circuits, Honolulu, HI, 2016: 1–2
  13. Moons B, Uytterhoeven R, Dehaene W, et al. 14.5 Envision: a 0.26-to-10 TOPS/W subword-parallel dynamic-voltage-accuracy-frequency-scalable convolutional neural network processor in 28 nm FDSOI[C]//2017 IEEE International Solid-State Circuits Conference, 2017
  14. Chen Y, Chen T, Xu Z, et al. DianNao family: energy-efficient hardware accelerators for machine learning[J]. Communications of the ACM, 2016, 59(11): 105–112. doi: 10.1145/2996864
  15. Shin D, Lee J, Lee A, et al. 14.2 DNPU: an 8.1 TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks[C]//2017 IEEE International Solid-State Circuits Conference, 2017
  16. Yin S. A high energy efficient reconfigurable hybrid neural network processor for deep learning applications[J]. IEEE Journal of Solid-State Circuits, 2018, 53(4): 968–982. doi: 10.1109/JSSC.2017.2778281
  17. Yang C, Li B, Chen L, et al. A spaceborne synthetic aperture radar partial fixed-point imaging system using a field-programmable gate array-application-specific integrated circuit hybrid heterogeneous parallel acceleration technique[J]. Sensors, 2017, 17(7): 1493–1516. doi: 10.3390/s17071493
  18. Li L, Zhang S, Wu J. Design of deep learning VLIW processor for image recognition[J]. Journal of Northwestern Polytechnical University, 2020, 38(1): 216–224. doi: 10.3969/j.issn.1000-2758.2020.01.027 (in Chinese)
