Researchers Introduce ‘AugMax’: An Open-Sourced Data Augmentation Framework To Unify The Two Aspects Of Diversity And Hardness

Source: https://arxiv.org/pdf/2110.13771.pdf

Data augmentation

Data augmentation is a machine learning technique that helps reduce overfitting. It increases the amount of training data by adding slightly modified copies of existing samples, or synthetic samples derived from them, and in doing so acts as an effective regularizer for deep neural networks (DNNs).

Categories of Data Augmentation

Category 1: When standard data augmentation methods are not enough, more aggressive combinations of multiple augmentations can help. Methods in this category compose a series of random transformations to diversify the training data; AugMix is one example, as sketched below.
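The following minimal Python sketch (using the Pillow imaging library) illustrates this category: a handful of label-preserving operators are sampled at random and chained together. The operator list, magnitudes, and chain depth are illustrative choices, not the exact configuration used by AugMix or AugMax.

```python
# A minimal sketch of Category 1 (diversity-oriented) augmentation:
# randomly compose several label-preserving transforms on one image.
import random
from PIL import Image, ImageEnhance, ImageOps

AUG_OPS = [
    lambda img: ImageOps.autocontrast(img),
    lambda img: ImageOps.equalize(img),
    lambda img: img.rotate(random.uniform(-15, 15)),
    lambda img: ImageEnhance.Color(img).enhance(random.uniform(0.5, 1.5)),
    lambda img: ImageEnhance.Sharpness(img).enhance(random.uniform(0.5, 1.5)),
]

def random_augment_chain(img: Image.Image, depth: int = 3) -> Image.Image:
    """Apply a random chain of `depth` operators to diversify one image."""
    for op in random.choices(AUG_OPS, k=depth):
        img = op(img)
    return img
```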

Category 2: Methods in this category make the training data harder by sampling worst-case augmentations, i.e., inputs chosen specifically to push the model toward misclassification. Adversarial perturbations are one such example. Training on these worst-case samples can improve generalization and robustness.
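For Category 2, the PyTorch sketch below shows a single-step FGSM-style adversarial perturbation, one common way to generate worst-case inputs. The perturbation budget `epsilon` is an illustrative value, not a setting taken from the AugMax paper.

```python
# A minimal sketch of Category 2 (hardness-oriented) augmentation:
# nudge each image in the direction that maximizes the training loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    # Worst-case (loss-increasing) step, clamped to the valid pixel range.
    return (images + epsilon * grad.sign()).clamp(0, 1).detach()
```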

Proposed ‘AugMax’ Framework

Researchers from the University of Texas at Austin, NVIDIA, Arizona State University, and the California Institute of Technology propose AugMax, a framework that combines the two categories of data augmentation above to improve robustness. The work argues that diversity and hardness are both necessary to achieve robustness.

AugMax is a novel augmentation framework that unifies diversity and hardness: it randomly samples a set of augmentation operators and then searches for the worst-case strategy to mix them.

As a stronger form of data augmentation, AugMax produces a more heterogeneous input distribution, which makes model training harder. To address this, the research team developed DuBIN, a new normalization strategy that disentangles the instance-wise feature heterogeneity arising from AugMax.

The paper shows that the combination of the two, AugMax-DuBIN, achieves state-of-the-art robustness against common corruptions.

Source: https://arxiv.org/pdf/2110.13771.pdf

Structure

AugMax is built on top of the AugMix framework, which mixes multiple data augmentation operators in a layered pipeline. It differs from AugMix in how the mixture is formed: AugMix samples both the augmentation operators and the mixing weights at random, whereas AugMax keeps the random operator selection but optimizes the mixing weights adversarially.

As a result, AugMax generates much harder, more adversarial samples than AugMix while still retaining a good amount of diversity.
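The PyTorch sketch below illustrates that difference under stated assumptions: the operators are still drawn at random, but the mixing weights and the clean/augmented mixing ratio are then optimized by gradient ascent on the training loss. The number of operators, optimization steps, step size, and the softmax/sigmoid parameterization are illustrative choices, not the paper's exact recipe; the official repository contains the reference implementation.

```python
# Sketch of AugMax-style augmentation: random operators, adversarial mixing.
import random
import torch
import torch.nn.functional as F

def augmax_example(model, x, y, aug_ops, k=3, steps=5, lr=0.1):
    """x: image batch in [0, 1]; aug_ops: callables mapping a batch tensor
    to an augmented batch tensor of the same shape."""
    ops = random.sample(aug_ops, k)
    views = torch.stack([op(x) for op in ops], dim=0)   # (k, B, C, H, W)

    # Learnable mixing parameters: operator weights w and clean/augmented ratio m.
    w_logits = torch.zeros(k, requires_grad=True)
    m_logit = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w_logits, m_logit], lr=lr)

    for _ in range(steps):
        w = torch.softmax(w_logits, dim=0).view(k, 1, 1, 1, 1)
        m = torch.sigmoid(m_logit)
        x_mix = (1 - m) * x + m * (w * views).sum(dim=0)
        loss = F.cross_entropy(model(x_mix), y)
        opt.zero_grad()
        (-loss).backward()   # gradient ascent on the loss: worst-case mixing
        opt.step()
    # (Any gradients accumulated in the model here would be cleared before
    # the real training step.)

    with torch.no_grad():
        w = torch.softmax(w_logits, dim=0).view(k, 1, 1, 1, 1)
        m = torch.sigmoid(m_logit)
        return ((1 - m) * x + m * (w * views).sum(dim=0)).clamp(0, 1)
```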

AugMax-DuBIN Framework

AugMax is a stronger form of data augmentation, especially because of its adversarial sample generation, and this is what led the research group to propose Dual-Batch-and-Instance Normalization (DuBIN). AugMax-DuBIN training also uses clean images, which improves model robustness against other standard distribution shifts, and it achieves state-of-the-art results on benchmarks of natural corruptions.
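A rough sketch of the DuBIN idea is given below: part of the channels go through Instance Normalization, while the rest go through a dual Batch Normalization that keeps separate statistics for clean and AugMax-augmented inputs. The channel split ratio and the routing flag are assumptions made for illustration; consult the official repository for the actual layer.

```python
# Sketch of a DuBIN-style normalization layer (illustrative, not the
# reference implementation from the AugMax repository).
import torch
import torch.nn as nn

class DuBIN(nn.Module):
    def __init__(self, num_channels: int, in_ratio: float = 0.5):
        super().__init__()
        self.in_channels = int(num_channels * in_ratio)
        bn_channels = num_channels - self.in_channels
        self.instance_norm = nn.InstanceNorm2d(self.in_channels, affine=True)
        self.bn_clean = nn.BatchNorm2d(bn_channels)  # statistics for clean inputs
        self.bn_aug = nn.BatchNorm2d(bn_channels)    # statistics for AugMax inputs

    def forward(self, x: torch.Tensor, is_augmented: bool) -> torch.Tensor:
        x_in, x_bn = torch.split(
            x, [self.in_channels, x.size(1) - self.in_channels], dim=1
        )
        bn = self.bn_aug if is_augmented else self.bn_clean
        return torch.cat([self.instance_norm(x_in), bn(x_bn)], dim=1)
```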

For more details, please read the research paper. The links to the paper and the GitHub repository are given below.

Paper: https://arxiv.org/pdf/2110.13771.pdf

GitHub: https://github.com/VITA-Group/AugMax

