Graph-Induced Contrastive Learning for Intra-Camera Supervised Person Re-Identification

Bibliographic Details
Main Authors: Menglin Wang, Baisheng Lai, Jianqiang Huang, Xiaojin Gong, Xian-Sheng Hua
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9337911/
Description
Summary: Intra-camera supervision (ICS) for person re-identification (Re-ID) assumes that identity labels are annotated independently within each camera view and that no inter-camera identity associations are labeled. It is a recently proposed setting that reduces the annotation burden while aiming to maintain desirable Re-ID performance. However, the lack of inter-camera labels makes the ICS Re-ID problem much more challenging than its fully supervised counterpart. By investigating the characteristics of ICS, this article proposes a graph-induced contrastive learning (GCL) approach to address this issue. More specifically, we first formulate the cross-camera ID association task as a graph partitioning problem subject to ICS-specific constraints and design a greedy agglomeration algorithm to solve it. Then, we propose a graph-induced contrastive loss that unifies intra- and inter-camera learning in a single contrastive learning framework for training a Re-ID model. The cross-camera ID association step and the contrastive model learning step are iterated alternately, progressively yielding a highly discriminative Re-ID model. Extensive experiments on three large-scale datasets show that our approach outperforms all previous ICS works. In particular, it gains 15.7% Rank-1 and 14.3% mAP improvements on the challenging MSMT17 dataset. Moreover, our approach even performs comparably to state-of-the-art fully supervised methods on all three datasets.
ISSN: 2169-3536
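
The greedy agglomeration step described in the summary lends itself to a short illustration. The following is a minimal Python sketch, not the authors' released code: it assumes each per-camera identity class is represented by an L2-normalized feature centroid, uses average-linkage similarity as the merge criterion, and encodes the ICS-specific constraint that two classes annotated in the same camera can never be the same person, so clusters may only merge when their camera sets are disjoint. All names (greedy_agglomerate, sim_thresh) are illustrative.

import numpy as np

def greedy_agglomerate(feats, cams, sim_thresh=0.5):
    """feats: (N, d) array of per-camera class centroids, L2-normalized.
    cams: length-N list of camera indices, one per class.
    Greedily merges the most similar admissible cluster pair until no
    pair exceeds sim_thresh; returns a cluster label per class."""
    n = len(feats)
    clusters = [{i} for i in range(n)]        # one cluster per class at start
    cam_sets = [{cams[i]} for i in range(n)]  # cameras covered by each cluster
    sims = feats @ feats.T                    # cosine similarity matrix

    while True:
        best, pair = sim_thresh, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                if cam_sets[a] & cam_sets[b]:  # ICS constraint: disjoint cameras
                    continue
                # average-linkage similarity between the two clusters
                s = np.mean([sims[i, j] for i in clusters[a] for j in clusters[b]])
                if s > best:
                    best, pair = s, (a, b)
        if pair is None:                       # no admissible merge remains
            break
        a, b = pair
        clusters[a] |= clusters[b]
        cam_sets[a] |= cam_sets[b]
        del clusters[b], cam_sets[b]

    labels = np.empty(n, dtype=int)
    for cid, members in enumerate(clusters):
        for i in members:
            labels[i] = cid
    return labels

This naive O(N^3) scan is only meant to make the constraint explicit; a practical implementation over thousands of classes would maintain a priority queue of admissible pairs.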
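
The graph-induced contrastive loss can likewise be sketched as an InfoNCE-style objective over a class-level memory bank, with the graph partition deciding which memory entries count as positives. This is a hedged reconstruction rather than the paper's exact formulation; memory, cluster_of, and tau are assumed names, and in practice the memory entries would be updated (e.g., with momentum) during training.

import torch
import torch.nn.functional as F

def graph_induced_contrastive_loss(feat, class_id, memory, cluster_of, tau=0.07):
    """feat: (d,) L2-normalized feature of one sample.
    class_id: index of the sample's per-camera class.
    memory: (C, d) L2-normalized memory bank, one entry per class.
    cluster_of: (C,) long tensor of graph-induced cluster ids."""
    logits = memory @ feat / tau              # similarity to every class entry
    log_prob = F.log_softmax(logits, dim=0)
    pos = cluster_of == cluster_of[class_id]  # classes linked by the partition
    return -log_prob[pos].mean()              # average over all positives

Before any cross-camera associations are found, each cluster holds a single per-camera class and the positives reduce to intra-camera supervision alone; as the partition grows across cameras, the same loss pulls associated classes together, which is one way to read the summary's claim that intra- and inter-camera learning are unified in a single contrastive framework.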