Researchers propose a new “FaceMAE” framework, where privacy and face recognition performance are considered simultaneously

Source: https://arxiv.org/pdf/2205.11090v1.pdf
This Article is written as a summary by Marktechpost Staff based on the Research Paper 'FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders'. All Credit For This Research Goes To The Researchers of This Project. Check out the paper and GitHub.

Facial recognition has made significant and steady progress in recognition accuracy, and it is now widely used in daily activities such as online payment and identity verification. This improvement rests on two major components: more advanced face recognition algorithms and large-scale public face databases.

In recent years, the collection and distribution of large-scale face datasets have raised growing concerns about leaking the identities of enrolled subjects and the attributes of training samples. Building large-scale face datasets that preserve privacy yet remain useful for downstream tasks is therefore an important and challenging problem for the face recognition community.

Face photographs have typically been subjected to distortions such as blurring, noise, and masking to increase privacy. These basic distortions, however, damage the semantics of a face image while offering only limited protection, leading to unsatisfactory recognition results. Recent research instead attempts to synthesize identity-ambiguous, de-identified, or attribute-removed faces with modified generative adversarial networks (GANs) to create a privacy-preserving face dataset.
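
For intuition only, here is a minimal sketch of two such naive distortions (additive Gaussian noise and random block masking) applied to an image array; the function names and parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.1):
    """Additive Gaussian noise on an image with values in [0, 1]."""
    noisy = img + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def mask_random_blocks(img, block=16, ratio=0.5):
    """Zero out a random subset of non-overlapping square blocks."""
    out = img.copy()
    h, w = img.shape[:2]
    blocks = [(i, j) for i in range(0, h, block) for j in range(0, w, block)]
    n_mask = int(len(blocks) * ratio)
    chosen = np.random.choice(len(blocks), n_mask, replace=False)
    for k in chosen:
        i, j = blocks[k]
        out[i:i + block, j:j + block] = 0.0
    return out

# Toy example: a random 112x112 RGB "face" crop.
face = np.random.rand(112, 112, 3)
distorted = mask_random_blocks(add_gaussian_noise(face, sigma=0.2), block=16, ratio=0.5)
```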

Although these GAN-based approaches can generate realistic faces, their usefulness in deep model training is not assured. The domain gap between the generated and original faces is large, resulting in poor face recognition performance for GAN-based systems. Another disadvantage of GAN-based approaches is that generators that have been trained on a single dataset cannot be used on unknown identities. Additionally, all raw face photos from the target dataset must be used to train the generative models, which poses additional privacy concerns.

In a recent study, InsightFace researchers proposed FaceMAE, a new framework that considers face privacy and recognition performance at the same time. FaceMAE operates in two stages: training and deployment. In the training stage, masked autoencoders (MAEs) are used to reconstruct a new dataset from randomly masked face images. The goal of FaceMAE is to generate faces that remain useful for face recognition training.
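
As a rough illustration of the training-stage input, the snippet below sketches MAE-style random patch masking at a 75% masking ratio (the setting reported for CASIA-WebFace later in this article). The ViT encoder and decoder that reconstruct the hidden patches are omitted, and all names here are hypothetical rather than taken from the authors' code.

```python
import torch

def patchify(imgs, patch=16):
    """Split (N, 3, H, W) images into (N, L, patch*patch*3) flattened patches."""
    n, c, h, w = imgs.shape
    gh, gw = h // patch, w // patch
    x = imgs.reshape(n, c, gh, patch, gw, patch)
    x = x.permute(0, 2, 4, 3, 5, 1).reshape(n, gh * gw, patch * patch * c)
    return x

def random_mask(patches, mask_ratio=0.75):
    """Keep a random subset of patches; the rest are hidden from the encoder."""
    n, length, dim = patches.shape
    n_keep = int(length * (1 - mask_ratio))
    noise = torch.rand(n, length)              # random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)  # lowest scores are kept
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, dim))
    mask = torch.ones(n, length)
    mask.scatter_(1, ids_keep, 0.0)            # 1 = masked, 0 = visible
    return visible, mask

faces = torch.rand(4, 3, 112, 112)          # toy batch of face crops
patches = patchify(faces)                    # (4, 49, 768)
visible, mask = random_mask(patches, 0.75)   # only ~25% of patches stay visible
```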

Source: https://arxiv.org/pdf/2205.11090v1.pdf

The researchers used an Instance Relationship Matching (IRM) module to reduce the gap between the relationship graphs of the original and reconstructed faces. Rather than inheriting the pixel-wise MSE loss of the standard MAE, the team introduced a separate optimization objective. After the training stage, the trained FaceMAE is used to produce a reconstructed dataset, on which the face recognition backbone is then trained.
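
The paper's exact IRM formulation may differ, but the general idea can be sketched as matching the pairwise relation graphs of original and reconstructed face embeddings rather than comparing pixels, as in this hypothetical example (pairwise cosine similarity is assumed as the relation measure):

```python
import torch
import torch.nn.functional as F

def relation_graph(features):
    """Pairwise cosine-similarity matrix over a batch of face embeddings."""
    z = F.normalize(features, dim=1)
    return z @ z.t()

def relation_matching_loss(orig_feats, recon_feats):
    """Penalize the discrepancy between the relation graphs of original and
    reconstructed faces, instead of a pixel-wise reconstruction loss."""
    return F.mse_loss(relation_graph(recon_feats), relation_graph(orig_feats))

# Toy usage: embeddings of a batch of original faces and their reconstructions.
orig = torch.randn(8, 512)
recon = torch.randn(8, 512)
loss = relation_matching_loss(orig, recon)
```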

According to the experiments, FaceMAE significantly outperforms state-of-the-art synthetic face dataset generation approaches in recognition accuracy on several large-scale face datasets. When trained on images reconstructed from 75%-masked faces of CASIA-WebFace, FaceMAE reduces the error rate of the runner-up approach by at least 50%. The method was found to produce face images that are more useful for training while preserving privacy.

Conclusion

In a recent study, InsightFace researchers tackled the pressing and difficult problem of privacy-preserving face recognition. FaceMAE was proposed as a method for creating synthetic samples that minimize privacy leakage while maintaining recognition performance. The Instance Relationship Matching module was designed so that the generated samples can be used to train deep models effectively. Experiments show that the proposed MAE-based method reduces recognition error by at least 50% compared with the runner-up method on a popular face dataset, and that using FaceMAE lowers the risk of privacy leakage by approximately 20%.
