DeepMind Researchers Propose Fair Normalizing Flows (FNF): A Rigorous Approach For Learning Fair Representations

This Article Is Based On The Research Paper 'Fair Normalizing Flows'. All Credit For This Research Goes To The Researchers Of This Paper.


As machine learning is increasingly deployed in settings where its decisions can harm people, fair representation learning has emerged as one of the most promising techniques for encoding data into new, unbiased representations that retain high utility.

A fair representation encodes data without regard to sensitive attributes such as gender or race. Because training data carries human-introduced bias, these biases also show up in the word vector representations used by language models. One goal of learning fair representations is to reduce such bias, for instance by decreasing the semantic distance between terms that the bias has pushed apart.
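To make the idea of decreasing semantic distance concrete, here is a minimal sketch, not the method from the paper, that removes an assumed gender direction from a pair of hypothetical toy word vectors and checks how their cosine similarity changes. The vectors, the bias direction, and the word choices are all illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy embeddings: a shared "profession" component plus an
# opposite-signed gender component that pushes the two words apart.
gender_dir = np.array([1.0, 0.0, 0.0])
profession = np.array([0.0, 1.0, 1.0])
doctor = profession + 1.5 * gender_dir
nurse = profession - 1.5 * gender_dir

def remove_direction(v, d):
    d = d / np.linalg.norm(d)
    return v - (v @ d) * d  # project onto the subspace orthogonal to d

print("cosine before:", cosine(doctor, nurse))                        # ~ -0.06
print("cosine after: ", cosine(remove_direction(doctor, gender_dir),
                               remove_direction(nurse, gender_dir)))  # = 1.0
```

In this toy setup the two vectors coincide once the bias direction is removed, i.e., their semantic distance shrinks to zero.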

The goal of fair representation learning is to guarantee that the representations remain useful for a variety of prediction tasks while the sensitive attributes of the original data cannot be recovered from them.

The most widely used method for learning fair representations is adversarial training, which pits an encoder that maps the data into a fair representation against an adversary that tries to recover the sensitive attributes from that representation.
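As a rough illustration of this adversarial setup, the sketch below alternates between training a small adversary to recover the sensitive attribute from the representation and training the encoder and task head so that the label stays predictable while the adversary's task gets harder. This is a generic sketch rather than the code of any specific paper; the network sizes, synthetic batch, loss weighting, and learning rates are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_in, d_rep = 16, 8

encoder = nn.Sequential(nn.Linear(d_in, d_rep), nn.ReLU(), nn.Linear(d_rep, d_rep))
task_head = nn.Linear(d_rep, 2)   # predicts the downstream label y
adversary = nn.Linear(d_rep, 2)   # tries to recover the sensitive attribute a

opt_enc = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Synthetic batch (illustrative): features x, labels y, sensitive attribute a.
x = torch.randn(256, d_in)
a = torch.randint(0, 2, (256,))
y = (x[:, 0] + 0.5 * a > 0).long()   # label correlated with x and, mildly, with a

for step in range(200):
    # 1) Train the adversary on the current (detached) representation.
    z = encoder(x).detach()
    opt_adv.zero_grad()
    ce(adversary(z), a).backward()
    opt_adv.step()

    # 2) Train encoder + task head: keep y predictable, make a unpredictable.
    z = encoder(x)
    loss = ce(task_head(z), y) - ce(adversary(z), a)  # min-max via negated adversary loss
    opt_enc.zero_grad()
    loss.backward()
    opt_enc.step()
```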

However, recent research has shown that these methods do not produce truly fair representations: stronger adversaries can still recover the sensitive attributes. This would allow malicious or uninformed users of the released representations to discriminate. The issue has recently gained prominence as regulators draft guidelines on the ethical use of AI, which indicate that a company unable to ensure non-discrimination could be held liable for the data it produces.

Researchers at DeepMind recently proposed Fair Normalizing Flows (FNF), a new method for learning fair representations with guarantees, to address these challenges. Instead of the standard feed-forward neural networks used as encoders in other approaches, FNF models the encoder as a normalizing flow.

Source: https://openreview.net/pdf?id=BrFIKuxrZE

While training on raw inputs yields high-utility classifiers, it offers no protection against an adversary predicting a sensitive attribute from the input features. The FNF architecture therefore consists of two flow-based encoders, one per sensitive group, and the training procedure learns their parameters so that an adversary cannot tell the resulting representations apart.
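The simplified sketch below shows why bijective encoders help. With one invertible affine "flow" per group and known group input densities, the densities of the encoded representations are exact via the change-of-variables formula, so the statistical (total variation) distance between the two encoded distributions can be estimated and penalized directly during training. The Gaussian input densities, the toy labels, the single affine layer, and the loss weighting are assumptions for illustration; the paper's actual objective and bounds are more involved.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 4

# Assumed (known) input densities for the two groups: shifted Gaussians.
p0 = torch.distributions.MultivariateNormal(torch.zeros(d), torch.eye(d))
p1 = torch.distributions.MultivariateNormal(0.8 * torch.ones(d), torch.eye(d))

class AffineFlow(nn.Module):
    """A one-layer elementwise invertible map z = x * exp(log_s) + t."""
    def __init__(self, d):
        super().__init__()
        self.log_s = nn.Parameter(torch.zeros(d))
        self.t = nn.Parameter(torch.zeros(d))

    def forward(self, x):   # x -> z
        return x * self.log_s.exp() + self.t

    def inverse(self, z):   # z -> x, plus log|det dx/dz|
        return (z - self.t) * (-self.log_s).exp(), -self.log_s.sum()

f0, f1 = AffineFlow(d), AffineFlow(d)   # one encoder per sensitive group
clf = nn.Linear(d, 2)                   # shared downstream classifier
opt = torch.optim.Adam(
    [*f0.parameters(), *f1.parameters(), *clf.parameters()], lr=1e-2)
ce = nn.CrossEntropyLoss()

def log_q(z, flow, base):
    # Exact density of the encoded variable via the change-of-variables formula.
    x, logdet = flow.inverse(z)
    return base.log_prob(x) + logdet

for step in range(300):
    x0, x1 = p0.sample((256,)), p1.sample((256,))
    y0, y1 = (x0[:, 0] > 0).long(), (x1[:, 0] > 0).long()   # toy labels
    z0, z1 = f0(x0), f1(x1)

    # Monte Carlo estimate of the statistical (total variation) distance
    # between the two encoded distributions: TV = E_{z~q0}[relu(1 - q1(z)/q0(z))].
    log_ratio = (log_q(z0, f1, p1) - log_q(z0, f0, p0)).clamp(max=20)  # numerical safety
    tv = torch.relu(1 - log_ratio.exp()).mean()

    utility = ce(clf(z0), y0) + ce(clf(z1), y1)   # keep the label predictable
    loss = utility + 5.0 * tv                     # fairness/utility trade-off (weight assumed)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Since the best possible adversary distinguishing the two groups achieves accuracy (1 + TV)/2, driving this statistical distance down directly caps the accuracy of any adversary, which is the kind of guarantee FNF aims for.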

The team’s extensive experiments show that FNF successfully lowers the statistical distance between the representations of the sensitive groups while retaining high accuracy. They find that enforcing fairness only marginally reduces accuracy on some datasets, whereas it reduces it significantly on others; when the label and the sensitive attribute are strongly correlated, fairness and high accuracy could not be achieved at the same time. Overall, FNF proved to be a reliable enforcer of fairness.

Conclusion

DeepMind researchers recently published a paper introducing Fair Normalizing Flows (FNF), a new method for learning representations that guarantee no adversary can predict the sensitive attributes, at the cost of only a small loss in accuracy. According to the experimental evaluation on several datasets, FNF effectively enforces fairness without sacrificing much utility.

Source: https://www.deepmind.com/publications/fair-normalizing-flows

Paper: https://openreview.net/pdf?id=BrFIKuxrZE

Github: https://github.com/eth-sri/fnf

