This AI Paper Raises a Rarely Studied Privacy Risk of the Training Data of Person Re-Identification

Person re-identification (Re-ID) is an image retrieval task that identifies a specific person across different images or video sequences. Leaking information from a Re-ID training set can cause serious societal and ethical risks, so addressing the privacy risks associated with the training data of Re-ID models is important. Membership inference (MI) attacks can reveal whether a particular individual was present in the dataset used to train a Re-ID model, which in turn can expose sensitive information about that individual's whereabouts, movements, and activities. The main challenge of mounting MI attacks on the Re-ID task is that traditional MI attack methods, which rely on logits or loss values, are not applicable, because Re-ID follows a different training and inference paradigm.
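To make the contrast concrete, here is a minimal sketch of the classic loss-threshold MI attack used against classifiers, which is exactly the signal the Re-ID setting takes away; the model interface and the calibrated `threshold` are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn.functional as F

def loss_threshold_mi_attack(model, image, label, threshold):
    """Classic loss-based membership inference for a classifier:
    samples the model was trained on tend to have lower loss.
    `threshold` is a hypothetical calibration value that would be
    tuned via shadow models or held-out data in practice."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))              # (1, num_classes)
        loss = F.cross_entropy(logits, label.unsqueeze(0))
    return loss.item() < threshold                      # True => predicted member
```

A Re-ID model exposes neither the logits nor the per-sample loss this attack depends on, which is what motivates a similarity-based alternative.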

State-of-the-art Re-ID methods extract visual features from each pedestrian image and then perform recognition by retrieving images based on the relative similarity between image pairs. The logits and loss values commonly used for MI attacks on classifiers are therefore unavailable in the Re-ID setting. Moreover, Re-ID is a more challenging fine-grained recognition task, which leads to a more complex and less discriminative feature distribution for MI attacks to exploit.
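A minimal sketch of this retrieval-by-similarity paradigm, assuming features have already been extracted by a Re-ID backbone, shows why an attacker only ever observes features and distances:

```python
import torch

def reid_retrieve(query_feat, gallery_feats, top_k=5):
    """Rank gallery images by Euclidean distance to the query feature.
    A deployed Re-ID model exposes only these features and distances,
    never classification logits or a training loss."""
    dists = torch.cdist(query_feat.unsqueeze(0), gallery_feats).squeeze(0)
    return torch.argsort(dists)[:top_k]   # indices of the closest gallery matches
```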

Recently, a Chinese research team published a paper presenting a novel MI attack method, called the similarity distribution-based MI attack (SDMI attack), designed specifically for the Re-ID task. The proposed method uses the inter-sample similarity distribution between different images to infer whether a target image belongs to the training set. It uses a set of anchor images, selected by an attention-based neural module, to represent the similarity distribution conditioned on the target image. The target image's membership is then inferred from its similarity to the anchors within the reference set using a neural network. The contributions of this work are:



1) raising awareness of the privacy risk posed by the training set in the Re-ID task;

2) proposing the first MI attack algorithm for person re-identification; and

3) demonstrating that the proposed method outperforms existing MI attack approaches on Re-ID models.

Concretely, the SDMI attack is performed in two stages (a sketch of both follows below):

1) Obtaining the similarity distribution: Given a target image, the method obtains a feature vector that represents the conditional distribution of the similarity between the target image and other images in the data distribution. This is done by sampling a set of anchor images from the Re-ID data distribution and computing the Euclidean distance between the target image and each anchor image.

2) Membership inference: The membership of the target image is inferred from the similarity distribution using a novel neural network structure: the similarity distribution vector is fed into a neural network that predicts the membership of the target sample.
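The sketch below shows how the two stages fit together. The paper's attention-based anchor-selection module is abstracted away (anchors are assumed to be already chosen), and the MLP layer sizes are illustrative assumptions rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn

# Stage 1: the similarity distribution vector -- Euclidean distances
# between the target image's feature and each anchor feature.
def similarity_distribution(target_feat, anchor_feats):
    # target_feat: (d,), anchor_feats: (num_anchors, d)
    return torch.norm(anchor_feats - target_feat, dim=1)  # (num_anchors,)

# Stage 2: a small network that maps the similarity vector to a
# membership probability.
class MembershipInferenceNet(nn.Module):
    def __init__(self, num_anchors, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_anchors, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, sim_vec):
        return torch.sigmoid(self.net(sim_vec))  # estimated P(member)
```

In use, the attacker would extract the target's feature with the victim Re-ID model, compute the distance vector against the anchors, and feed it to the trained inference network; the attack never needs logits or loss values from the victim model.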

To evaluate the performance of the new approach, an experimental study was carried out against several baselines on two datasets (Market-1501 and DukeMTMC) using three Re-ID models with different backbones (ResNet50, MobileNetV2, and Xception). The authors use the attack success rate as the evaluation metric and show that their proposed method outperforms existing methods. They also conducted an ablation study to measure the influence of different components and hyperparameters on performance. In addition, they show that the technique can be applied to other tasks, such as classification, and report results for those settings as well.
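Assuming the attack success rate here is the standard accuracy of membership predictions over an evaluation set containing both members and non-members (the usual convention for MI attacks, not spelled out in this summary), it can be computed as:

```python
import torch

def attack_success_rate(pred_member, is_member):
    """Fraction of correct membership predictions, counting members
    and non-members alike; both inputs are boolean tensors."""
    return (pred_member == is_member).float().mean().item()
```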

In summary, this article presents a novel MI attack method designed specifically for the Re-ID task, a privacy-sensitive image retrieval task. The SDMI attack uses the inter-sample similarity distribution between different images to infer the membership of a target image in the training set. The authors claim the method outperforms existing MI attack algorithms on standard Re-ID models, and their work raises awareness about the privacy risk of Re-ID training sets. Their experiments on two datasets and three Re-ID models show that the new approach achieves a higher attack success rate than existing methods.


Check out the Paper. All credit for this research goes to the researchers on this project.


Mahmoud is a PhD researcher in machine learning. He also holds a bachelor's degree in physical science and a master's degree in telecommunications and networking systems. His current research interests include computer vision, stock market prediction, and deep learning. He has produced several scientific articles on person re-identification and on the robustness and stability of deep networks.

