Comprehensive Overview and Dedicated Benchmark for Deep Learning from Small Datasets in the Context of Image Classification

Many recent advances in computer vision, and in machine learning more broadly, have been achieved by large-scale pre-training on massive datasets. However, reaching high recognition performance this way is tied to the laborious process of collecting and labeling large quantities of samples. Application scenarios in which the usual pre-training on large web-sourced image datasets is ineffective due to a strong domain shift (e.g., document style classification), or even impossible due to different data modalities (e.g., multi-channel spectral data from satellites), therefore depend on methods for learning directly from the limited amount of data available.

The research area of deep learning from small data/datasets, or data-efficient deep learning, aims at developing neural classifiers that maximize performance when the dataset is restricted (usually tens or hundreds of samples per category) and pre-trained models are unavailable (because of a lack of community efforts in the given domain or a strong domain shift). In its assumptions, deep learning from small datasets thus differs from related research areas such as transfer and few-shot learning. Most works published so far address image classification, probably because it remains among the most established tasks for artificial neural networks.

Small-data learning problems are challenging because data scarcity exacerbates a well-known weakness of modern classifiers: they tend to memorize spurious correlations among training features instead of learning a general function for the task at hand. The community's interest in learning from small datasets has grown rapidly (Figure 3 from the paper), given the importance of investigating approaches to train reliable classifiers from scratch in low-data regimes.

However, a key missing piece of the current literature is an objective comparison of proposed methods, owing to the lack of a common benchmark. Despite recent efforts to organize challenges, evaluation setups are often limited to natural-image data, which is rarely representative of real small-data scenarios. Additionally, most existing works compare their proposed methods against a few insufficiently tuned baselines, partly because of the fragmentation of the relevant literature.

The paper proposes i) the first overview of recent literature concerning image classification with small datasets and ii) a dedicated benchmark consisting of five datasets from different domains (natural images, fine-grained classification, medical imagery, satellite images, and handwritten documents).

The paper then re-evaluates ten state-of-the-art methods following a strict experimental pipeline that includes hyper-parameter optimization on held-out validation data.
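The protocol above can be sketched in a few lines: each method searches over its hyper-parameters, model selection uses only held-out validation data, and the test set is touched once at the end. This is an illustrative sketch, not the paper's implementation; the search space, the `train_and_evaluate` stand-in, and its toy scoring rule are all hypothetical.

```python
from itertools import product

# Hypothetical search space over two common hyper-parameters.
SEARCH_SPACE = {
    "lr": [1e-1, 1e-2, 1e-3],
    "weight_decay": [1e-2, 1e-3, 1e-4],
}

def train_and_evaluate(config):
    """Stand-in for training a classifier and returning its VALIDATION accuracy.

    A real implementation would train on the training split and score on the
    held-out validation split; this toy score just pretends that a moderate
    learning rate and strong weight decay work best.
    """
    return 1.0 - abs(config["lr"] - 1e-2) - abs(config["weight_decay"] - 1e-2)

def grid_search():
    """Exhaustive search selecting the best configuration on validation data only."""
    keys = list(SEARCH_SPACE)
    best_config, best_score = None, float("-inf")
    for values in product(*(SEARCH_SPACE[k] for k in keys)):
        config = dict(zip(keys, values))
        score = train_and_evaluate(config)  # validation score, never the test set
        if score > best_score:
            best_config, best_score = config, score
    return best_config  # the selected model is then evaluated on test data once

best = grid_search()
print(best)
```

In the benchmark's spirit, the same tuning budget is granted to the baseline and to every specialized method, so that differences in results reflect the methods rather than the tuning effort.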

The study yields two key takeaways, summarized in Figure 1 from the paper:

  • With proper hyper-parameter optimization, the categorical cross-entropy loss becomes a strong baseline that only one of the ten specialized methods evaluated manages to outperform.
  • Recently published approaches show no clear performance progress over their predecessors. Default hyper-parameters from the image classification literature are inadequate in data-deficient scenarios and should be replaced with properly tuned configurations (e.g., more aggressive regularization for baseline classifiers).
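One concrete example of such regularization is label smoothing, which softens the one-hot target distribution and discourages over-confident memorization on tiny training sets. The sketch below is illustrative only; the smoothing value is an assumption, not a tuned setting from the paper.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def smoothed_cross_entropy(logits, target, smoothing=0.1):
    """Cross-entropy against a label distribution softened toward uniform.

    The true class receives probability (1 - smoothing) + smoothing / n,
    every other class receives smoothing / n. With smoothing=0.0 this
    reduces to the standard categorical cross-entropy.
    """
    probs = softmax(logits)
    n = len(logits)
    loss = 0.0
    for k, p in enumerate(probs):
        q = (1.0 - smoothing) * (1.0 if k == target else 0.0) + smoothing / n
        loss -= q * math.log(p)
    return loss

# A 3-class example: confident, correct logits.
print(round(smoothed_cross_entropy([2.0, 0.5, -1.0], target=0), 4))
```

Combined with other standard knobs such as weight decay, this kind of tuned regularization is what makes the plain cross-entropy baseline hard to beat in the benchmark.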

Furthermore, the paper compares the models in terms of computational requirements, i.e., training speed and GPU memory usage. On these metrics too, the standard cross-entropy classifier compares favorably against most existing methods. Other reported insights concern the impact of domain shift on transfer learning and a high-resolution fine-tuning step that significantly boosts recognition performance on fine-grained small-scale datasets.

The dataset splits, implementations of all compared methods, and code for reproducing and potentially extending the experiments are publicly available under: https://github.com/lorenzobrigato/gem

This article is a research summary written by the author of the paper 'Image Classification With Small Datasets: Overview and Benchmark'. All credit for this research goes to the researchers on this project.



Lorenzo Brigato is a Postdoctoral Researcher at the ARTORG center, a research institution affiliated with the University of Bern, and is currently involved in the application of AI to health and nutrition. He holds a Ph.D. degree in Computer Science from the Sapienza University of Rome, Italy. His Ph.D. thesis focused on image classification problems with sample- and label-deficient data distributions.

