Microsoft AI Releases ‘DeepSpeed Compression’: A Python-based Composable Library for Extreme Compression and Zero-Cost Quantization to Make Deep Learning Model Size Smaller and Inference Speed Faster

Research in deep learning and AI is being revolutionized by large-scale models, which have enabled significant advances in numerous areas, including multilingual translation, creative text generation, and language understanding. Despite their impressive capabilities, however, the models’ vast size imposes latency and cost constraints that make deploying applications on top of them difficult. The DeepSpeed team at Microsoft AI has been investigating system optimization and model compression advances to address these deployment challenges. The DeepSpeed inference system was previously released by the researchers as part of Microsoft’s AI at Scale initiative. This system uses a variety of optimizations, such as highly optimized CUDA kernels and inference-adapted parallelism, to increase the speed of model inference. These improvements aim to increase the inference system’s efficiency while leaving model accuracy and other factors, such as model size and computational load, unchanged. In short, the amount of work stays the same, but processing speed and capacity increase.

Newly developed compression algorithms, in turn, hold much promise for reducing model size and inference computation. By computing DNN models in a compressed format, these approaches reduce the amount of work required for inference with little to no loss in accuracy. System optimizations and model compression are complementary and can be combined to reduce inference latency and cost multiplicatively. This motivation to combine the best of both worlds led to the development of DeepSpeed Compression: a composable library that combines cutting-edge compression techniques with highly effective system optimizations to shrink DL models and speed up inference while keeping compression costs significantly lower.
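As a rough back-of-the-envelope illustration of that multiplicative effect, consider combining a hypothetical speedup from optimized kernels with a hypothetical speedup from running the model in a compressed, lower-precision format. The numbers below are made up purely for exposition and are not reported DeepSpeed results:

```python
# Hypothetical illustration of how system and compression speedups compose.
# These figures are invented for exposition; they are not DeepSpeed measurements.
kernel_speedup = 2.0        # e.g., from highly optimized inference kernels
compression_speedup = 4.0   # e.g., from executing a compressed, low-precision model

combined_speedup = kernel_speedup * compression_speedup
baseline_latency_ms = 100.0

print(f"Combined speedup: {combined_speedup:.0f}x")                  # 8x
print(f"Latency: {baseline_latency_ms / combined_speedup:.1f} ms")   # 12.5 ms
```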

Despite various attempts to reduce model size and inference computation, applying existing compression techniques to large-scale models still faces several practical difficulties. The main problem is the complicated pipeline required to reach a high compression ratio. Several solutions have been proposed to combat the optimization complexity and accuracy degradation of compressing large models, but best practices for high compression ratios, such as aggressive quantization and layer reduction, have not been thoroughly studied. Large models can be compressed with current techniques, but the training cost is substantial. Another problem is the absence of specialized system optimizations for compressed models: exploiting the advantages of a compressed model often requires system optimizations tailored to it, and the best inference latency reductions usually come from such customized support, whereas existing solutions frequently concentrate on lowering theoretical compute overhead. Current methodologies also constrain composability between different compression algorithms and system optimizations. DeepSpeed Compression overcomes these issues by adopting an end-to-end strategy that increases the computational efficiency of compressed models through a highly tuned inference engine. The library also includes many state-of-the-art compression techniques that can be combined with system optimizations to provide the best of both worlds, while enabling an efficient and simple pipeline for DL model inference. With DeepSpeed Compression, model size can be reduced by 32x with nearly no accuracy loss, and by 50x while retaining 97 percent of the accuracy. The two main methods used are layer reduction and extreme quantization.
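To make the two methods concrete, the sketch below shows the basic idea behind each in plain PyTorch: extreme (1-bit) weight quantization, where each weight row is replaced by its sign times a per-row scale, and layer reduction, where only a subset of encoder layers is kept. This is a simplified illustration of the concepts, not DeepSpeed Compression’s actual implementation or API; the function names, the per-row scaling choice, and the kept-layer indices are assumptions.

```python
import torch
import torch.nn as nn

def binarize_weight(w: torch.Tensor) -> torch.Tensor:
    """Extreme (1-bit) quantization sketch: keep only the sign of each weight,
    rescaled by the per-row mean absolute value so magnitudes stay roughly right."""
    scale = w.abs().mean(dim=1, keepdim=True)      # one scale per output row
    return torch.sign(w) * scale

def reduce_layers(layers: nn.ModuleList, keep_indices) -> nn.ModuleList:
    """Layer reduction sketch: keep only the listed encoder layers
    (e.g., every other layer of a 12-layer model)."""
    return nn.ModuleList(layers[i] for i in keep_indices)

# Toy usage on random modules
linear = nn.Linear(768, 768)
with torch.no_grad():
    linear.weight.copy_(binarize_weight(linear.weight))

encoder = nn.ModuleList(nn.Linear(768, 768) for _ in range(12))
student_layers = reduce_layers(encoder, keep_indices=[1, 3, 5, 7, 9, 11])
print(len(student_layers))  # 6 layers remain
```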

The scarcity of training resources makes large-scale transformer models typically challenging to quantize. Hence, the researchers also proposed ZeroQuant, which quantizes large-scale models at minimal fine-tuning cost. Its first component is a hardware-friendly, fine-grained quantization scheme that lets researchers quantize weights and activations into low-bit values while still enabling fast inference. The second component, a layer-by-layer knowledge distillation pipeline, fine-tunes the quantized model to close the accuracy gap caused by low-precision quantization. Although only recently released, DeepSpeed Compression has already been used to successfully optimize several sizable open-source models and Microsoft production workloads. It significantly reduces latency and cost and is broadly applicable to various NLP and computer vision tasks. The core DeepSpeed Compression components from Microsoft AI have recently been made available for public use. These include the compression composer, which supports several compression techniques for NLP and computer vision models, such as lightweight layer reduction, pretraining and task-specific knowledge distillation, head pruning, row pruning, and channel pruning. The team also intends to extend the library with further capabilities, such as specialized kernels for compressed models and an optimization module that can automatically select the most effective compression schemes.
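The sketch below illustrates the kind of fine-grained quantization ZeroQuant describes: group-wise symmetric quantization of weights (each group of weights gets its own scale) and per-token dynamic quantization of activations. It is a minimal, self-contained approximation of those ideas, not ZeroQuant or DeepSpeed Compression code; the function names, group size, and bit width shown are assumptions.

```python
import torch

def groupwise_quantize_weights(w: torch.Tensor, num_groups: int = 64, bits: int = 8):
    """Split the flattened weight into groups and quantize each group
    symmetrically to int8 with its own scale (finer-grained than per-tensor)."""
    qmax = 2 ** (bits - 1) - 1
    groups = w.reshape(num_groups, -1)                       # assumes divisibility
    scales = groups.abs().amax(dim=1, keepdim=True) / qmax   # one scale per group
    q = torch.clamp(torch.round(groups / scales), -qmax - 1, qmax).to(torch.int8)
    return q.reshape(w.shape), scales

def tokenwise_quantize_activations(x: torch.Tensor, bits: int = 8):
    """Quantize activations with a dynamic scale per token (per row),
    which tracks the wide range of activation values across tokens."""
    qmax = 2 ** (bits - 1) - 1
    scales = x.abs().amax(dim=-1, keepdim=True) / qmax
    q = torch.clamp(torch.round(x / scales), -qmax - 1, qmax).to(torch.int8)
    return q, scales

# Toy usage
w = torch.randn(768, 768)
x = torch.randn(4, 128, 768)          # (batch, tokens, hidden)
qw, w_scales = groupwise_quantize_weights(w)
qx, x_scales = tokenwise_quantize_activations(x)
print(qw.dtype, qx.dtype)             # torch.int8 torch.int8
```

The intuition behind this granularity is that keeping scales at the group and token level, rather than per tensor, limits quantization error while staying compatible with efficient integer inference kernels, which is consistent with the hardware-friendly design described above.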

This article is written as a research summary by Marktechpost Staff based on the research paper 'DeepSpeed Compression: A composable library for extreme compression and zero-cost quantization'. All credit for this research goes to the researchers on this project. Check out the blog, GitHub, and website.
