JNWPU, Volume 41, Number 4, August 2023
Page(s): 820-830
DOI: https://doi.org/10.1051/jnwpu/20234140820
Published online: 08 December 2023
Remote sensing target detection algorithm based on perceptual extension and anchor frame best-fit matching
1. School of Ordnance Science and Technology, Xi'an Technological University, Xi'an 710021, China
2. School of Electronic Information Engineering, Xi'an Technological University, Xi'an 710021, China
3. Development Planning Service, Xi'an Technological University, Xi'an 710021, China
Received: 26 July 2022
Aiming at the small size, complex backgrounds, and crowded distribution of targets in remote sensing images, a remote sensing image target detection algorithm (HQ-S2ANet) based on perceptual extension and best-fit anchor matching is proposed, using the rotated target detection method S2ANet as the baseline network. Firstly, a cooperative attention (SEA) module is built to capture the relationships among feature pixels while extending the model's perception area, modeling the relationship between each target and the global context. Secondly, to cope with complex backgrounds, the feature pyramid network (FPN) fusion process is improved by alternately stacking perceptual extension convolution modules with regular convolutions during downsampling, forming a perceptual extension feature pyramid module (HQFPN) that preserves low-level positional detail while extending the perception range and enhancing the model's ability to capture information. Finally, to handle crowded target distributions, a high-quality anchor matching method (MaxIoUAssigner_HQ) controls the assignment of ground-truth boxes to anchors with a constant factor, maintaining recall while preventing low-quality anchor matches. Experimental results show that, on the DOTA dataset, HQ-S2ANet improves mean average precision (mAP) by 3.1% and average recall by 1.6% compared with S2ANet, while the parameter count increases by only 2.61M; the proposed algorithm effectively enhances target detection in remote sensing images.
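The abstract describes MaxIoUAssigner_HQ only at a high level: ground-truth boxes are assigned to anchors by maximum IoU, with a constant factor gating the assignment so that low-quality matches are suppressed without sacrificing recall. The exact rule is not given here, so the sketch below is an illustrative reconstruction, not the paper's implementation: it uses axis-aligned boxes (the paper works with rotated boxes), and the names `assign_hq`, `pos_thr`, `neg_thr`, and the factor `k` are assumptions. The hypothesized role of `k` is to gate the usual "give each ground truth its best anchor" rescue step, which in a plain max-IoU assigner can force very poor matches in crowded scenes.

```python
import numpy as np

def iou_matrix(anchors, gts):
    """Pairwise IoU between axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = np.maximum(anchors[:, None, 0], gts[None, :, 0])
    y1 = np.maximum(anchors[:, None, 1], gts[None, :, 1])
    x2 = np.minimum(anchors[:, None, 2], gts[None, :, 2])
    y2 = np.minimum(anchors[:, None, 3], gts[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
    area_g = (gts[:, 2] - gts[:, 0]) * (gts[:, 3] - gts[:, 1])
    return inter / (area_a[:, None] + area_g[None, :] - inter)

def assign_hq(anchors, gts, pos_thr=0.5, neg_thr=0.4, k=0.8):
    """Max-IoU assignment with a constant factor k gating the
    'best anchor per ground truth' rescue step.
    Per-anchor result: -1 = ignore, 0 = background, j+1 = matched to gts[j]."""
    ious = iou_matrix(anchors, gts)          # (num_anchors, num_gts)
    max_iou = ious.max(axis=1)
    argmax_gt = ious.argmax(axis=1)
    assigned = np.full(len(anchors), -1, dtype=int)
    assigned[max_iou < neg_thr] = 0          # clear negatives
    pos = max_iou >= pos_thr
    assigned[pos] = argmax_gt[pos] + 1       # confident positives
    # Rescue step: each ground truth claims its best anchor, but only
    # when that match clears k * pos_thr -- blocking the low-quality
    # forced matches a plain max-IoU assigner would produce.
    for j in range(len(gts)):
        best = int(ious[:, j].argmax())
        if ious[best, j] >= k * pos_thr:
            assigned[best] = j + 1
    return assigned
```

Under this reading, `k = 1` disables the rescue step entirely (only anchors above `pos_thr` are positive, maximizing match quality at the cost of recall), while small `k` recovers the standard behavior where every ground truth gets an anchor regardless of overlap quality.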
Key words: remote sensing image / feature fusion / anchor matching / rotation detection
© 2023 Journal of Northwestern Polytechnical University. All rights reserved.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.