Combined visual and spatial-temporal information for appearance change person re-identification

Document Type

Article

Publication Title

Cogent Engineering

Abstract

Person re-identification (ReID) seeks to identify the same individual across different cameras by matching their corresponding images. Current ReID datasets are limited in size and diversity, especially with respect to clothing changes, making traditional techniques vulnerable to appearance variations. Moreover, current approaches rely heavily on appearance features for discrimination, which is unreliable when a person's appearance changes. We hypothesize that ReID accuracy can be enhanced by training the ReID model on a large volume of diversified data and by combining multiple features for discrimination. We use the image channel shuffling data augmentation method to produce a large volume of diversified training data. We also propose a two-stream visual and spatio-temporal method that learns features suited to appearance-change scenarios: the appearance features obtained from the visual stream are combined with spatio-temporal information to discriminate between two people. The proposed approach is evaluated for robustness on both short-term and long-term datasets. The presented two-stream approach outperforms earlier methods, achieving Rank-1 accuracies of 98.6% on Market1501, 95.52% on DukeMTMC-reID, 76.21% on LTCC, and 91.76% on VC-Clothes.
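
The channel-shuffling augmentation mentioned above can be illustrated with a minimal sketch: permuting the RGB channels of a training image changes apparent clothing colours while preserving shape and texture. This is an assumption-based illustration (the function name and NumPy-based implementation are ours, not from the paper):

```python
import itertools
import numpy as np

def channel_shuffle_augment(image):
    """Return all six RGB-channel permutations of an HxWx3 image.

    Each permutation reorders the colour channels, yielding a new
    training sample whose colours differ but whose shape and texture
    are unchanged. Illustrative only; the paper's exact augmentation
    pipeline may differ.
    """
    perms = itertools.permutations(range(3))
    return [image[:, :, list(p)] for p in perms]

# Example: a single-pixel "image" makes the permutations easy to see.
img = np.array([[[10, 20, 30]]], dtype=np.uint8)
augmented = channel_shuffle_augment(img)
```

In practice such permuted copies would be added to the training set alongside the originals, increasing colour diversity without collecting new images.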

DOI

10.1080/23311916.2023.2197695

Publication Date

1-1-2023

