QBox: Partial Transfer Learning With Active Querying for Object Detection
IEEE Trans Neural Netw Learn Syst. 2021 Sep 27;PP. doi: 10.1109/TNNLS.2021.3111621. Online ahead of print.
Object detection requires a large amount of data annotated with bounding boxes to train the model. However, in many applications it is difficult, if not impossible, to acquire a large number of labeled examples for the target task, due to privacy issues or a lack of reliable annotators. On the other hand, thanks to high-quality image search engines such as Flickr and Google, it is relatively easy to obtain resource-rich unlabeled datasets whose categories are a superset of those of the target data. In this paper, to improve the target model with cost-effective supervision from the source data, we propose a partial transfer learning approach, QBox, which actively queries the labels of bounding boxes from the source images. Specifically, we design two criteria, namely informativeness and transferability, to measure the potential utility of a bounding box for improving the target model. Based on these criteria, QBox actively queries the labels of the most useful boxes in the source domain, and thus requires fewer training examples, reducing the labeling cost. Moreover, the proposed query strategy allows annotators to label only a specified region instead of the whole image, which greatly reduces the difficulty of labeling. Extensive experiments are performed on various partial transfer benchmarks and a real COVID-19 detection task. The results validate that QBox improves detection accuracy at lower labeling cost compared with state-of-the-art query strategies for object detection.
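The abstract names two selection criteria but gives no formulas. As a rough illustration of the overall querying loop only, the sketch below scores candidate source boxes by a weighted combination of an informativeness term and a transferability term and selects the top-scoring boxes for annotation. The concrete choices here (entropy of class confidences for informativeness, cosine similarity to the nearest target-domain feature for transferability, the weight `alpha`, and all function names) are assumptions for illustration, not the paper's actual definitions.

```python
import numpy as np

def informativeness(confidences):
    """Sketched as the entropy of a box's class-confidence scores:
    the less certain the target detector is, the more informative
    a label for this box would be. (Assumed criterion, not the paper's.)"""
    p = np.clip(np.asarray(confidences, dtype=float), 1e-12, None)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def transferability(source_feat, target_feats):
    """Sketched as cosine similarity between a source box's feature and
    its nearest target-domain feature: boxes resembling the target data
    should transfer better. (Assumed criterion, not the paper's.)"""
    s = np.asarray(source_feat, dtype=float)
    T = np.asarray(target_feats, dtype=float)
    sims = T @ s / (np.linalg.norm(T, axis=1) * np.linalg.norm(s) + 1e-12)
    return float(sims.max())

def select_boxes(candidates, target_feats, budget, alpha=0.5):
    """Rank unlabeled source boxes by a weighted sum of the two criteria
    and return the indices of the top `budget` boxes to send to annotators."""
    scores = [alpha * informativeness(c["confidences"])
              + (1 - alpha) * transferability(c["feature"], target_feats)
              for c in candidates]
    return sorted(range(len(candidates)), key=lambda i: -scores[i])[:budget]
```

Because each query targets a single box rather than a whole image, the annotator only needs to label the selected region, which is the source of the labeling-cost savings described above.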