DeepSpeed

03 May, 2024

What is DeepSpeed?

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.


DeepSpeed Features

Extreme Speed and Scale for DL Training and Inference

DeepSpeed enables some of the world’s most powerful language models, such as MT-530B and BLOOM. It is an easy-to-use deep learning optimization software suite that powers unprecedented scale and speed for both training and inference. With DeepSpeed you can:

  • Train and run inference on dense or sparse models with billions or trillions of parameters

  • Achieve excellent system throughput and efficiently scale to thousands of GPUs

  • Train and run inference on resource-constrained GPU systems

  • Achieve unprecedentedly low latency and high throughput for inference

  • Achieve extreme compression for unparalleled reductions in inference latency and model size, at low cost


DeepSpeed’s four innovation pillars


DeepSpeed-Training

DeepSpeed offers a confluence of system innovations that have made large-scale DL training effective and efficient, greatly improved ease of use, and redefined the DL training landscape in terms of the scale that is possible. Innovations such as ZeRO, 3D-Parallelism, DeepSpeed-MoE, and ZeRO-Infinity fall under the training pillar. Learn more: DeepSpeed-Training
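As an illustrative sketch, ZeRO is typically enabled through the JSON config passed to `deepspeed.initialize`. The keys below are real DeepSpeed config options, but the specific values (batch size, stage) are example choices, not recommendations:

```python
import json

# Illustrative DeepSpeed training config enabling ZeRO stage 2
# (partitions optimizer states and gradients across data-parallel ranks).
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                 # 1: optimizer states; 2: + gradients; 3: + parameters
        "overlap_comm": True,       # overlap gradient reduction with the backward pass
        "contiguous_gradients": True,
    },
}

# The dict round-trips as the JSON file usually passed via --deepspeed_config.
print(json.dumps(ds_config, indent=2))
```

Raising the stage trades a little communication for progressively larger memory savings, which is what lets model size scale with the number of GPUs.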

DeepSpeed-Inference

DeepSpeed brings together innovations in parallelism technology such as tensor, pipeline, expert and ZeRO-parallelism, and combines them with high performance custom inference kernels, communication optimizations and heterogeneous memory technologies to enable inference at an unprecedented scale, while achieving unparalleled latency, throughput and cost reduction. This systematic composition of system technologies for inference falls under the inference pillar. Learn more: DeepSpeed-Inference
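As a sketch of how these pieces are exposed, the options below mirror common keyword arguments to `deepspeed.init_inference`; the values are example choices, and the dtype is given as a string only to keep this snippet torch-free (a real call typically passes `torch.float16`):

```python
# Illustrative settings for DeepSpeed inference (values are examples).
inference_kwargs = {
    "tensor_parallel": {"tp_size": 2},   # shard weights across 2 GPUs (tensor parallelism)
    "dtype": "fp16",                     # half precision for lower latency and memory
    "replace_with_kernel_inject": True,  # swap in DeepSpeed's fused inference kernels
}

print(inference_kwargs)
```

With a model in hand, `deepspeed.init_inference(model, **inference_kwargs)` would return a wrapped engine whose forward pass runs the optimized kernels.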

DeepSpeed-Compression

To further increase inference efficiency, DeepSpeed offers easy-to-use and flexible-to-compose compression techniques for researchers and practitioners to compress their models while delivering faster speed, smaller model size, and significantly reduced compression cost. State-of-the-art compression innovations such as ZeroQuant and XTC are also included under the compression pillar. Learn more: DeepSpeed-Compression
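To give a feel for what compression buys, here is a toy, pure-Python illustration of the per-tensor int8 weight-quantization idea behind ZeroQuant-style methods. This is not DeepSpeed’s API; `quantize_int8` and `dequantize` are made-up helper names:

```python
# Toy per-tensor symmetric int8 quantization: map fp32 weights to int8 with a
# single scale factor, shrinking storage roughly 4x at a small accuracy cost.

def quantize_int8(weights):
    # Scale so the largest-magnitude weight maps to +/-127; fall back to 1.0
    # for an all-zero tensor to avoid dividing by zero.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.5, -1.27, 0.03]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)   # approximate reconstruction of the original weights
```

Real schemes like ZeroQuant refine this with finer-grained (per-group) scales and kernels that compute directly on the quantized values.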

DeepSpeed4Science

In line with Microsoft’s mission to solve humanity’s most pressing challenges, the DeepSpeed team at Microsoft has launched a new initiative called DeepSpeed4Science, aiming to build unique capabilities through AI system technology innovations to help domain experts unlock today’s biggest science mysteries. Learn more: DeepSpeed4Science website and tutorials


DeepSpeed Software Suite

DeepSpeed Library

The DeepSpeed library (this repository) implements and packages the innovations and technologies of the DeepSpeed Training, Inference, and Compression pillars into a single easy-to-use, open-sourced repository. It allows for easy composition of a multitude of features within a single training, inference, or compression pipeline. The DeepSpeed Library is heavily adopted by the DL community and has been used to enable some of the most powerful models (see DeepSpeed Adoption).

Model Implementations for Inference (MII)

Model Implementations for Inference (MII) is an open-sourced repository for making low-latency and high-throughput inference accessible to all data scientists by alleviating the need to apply complex system optimization techniques themselves. Out of the box, MII offers support for thousands of widely used DL models, optimized using DeepSpeed-Inference, that can be deployed with a few lines of code while achieving significant latency reduction compared to their vanilla open-sourced versions.

DeepSpeed on Azure

DeepSpeed users are diverse and have access to different environments. We recommend trying DeepSpeed on Azure, as it is the simplest way to get started. The recommended method to try DeepSpeed on Azure is through AzureML recipes. The job submission and data preparation scripts have been made available here. For more details on how to use DeepSpeed on Azure, please follow the Azure tutorial.


DeepSpeed Adoption

DeepSpeed is an important part of Microsoft’s new AI at Scale initiative to enable next-generation AI capabilities at scale; you can find more information here.

DeepSpeed has been used to train many different large-scale models; below is a list of several examples that we are aware of (if you’d like to include your model, please submit a PR):

DeepSpeed has been integrated with several different popular open-source DL frameworks such as:

Documentation

  • Transformers with DeepSpeed

  • Accelerate with DeepSpeed

  • Lightning with DeepSpeed

  • MosaicML with DeepSpeed

  • Determined with DeepSpeed

  • MMEngine with DeepSpeed


Build Pipeline Status

  • NVIDIA: nv-torch110-p40, nv-torch110-v100, nv-torch-latest-v100, nv-h100, nv-inference, nv-nightly

  • AMD: amd-mi100, amd-mi200

  • CPU: nv-torch-latest-cpu

  • PyTorch Nightly: nv-torch-nightly-v100

  • Integrations: nv-transformers-v100, nv-lightning-v100, nv-accelerate-v100, nv-megatron, nv-mii, nv-ds-chat, nv-sd

  • Misc: Formatting, pages-build-deployment, Documentation Status, python

Install DeepSpeed

The quickest way to get started with DeepSpeed is via pip; this will install the latest release of DeepSpeed, which is not tied to specific PyTorch or CUDA versions. DeepSpeed includes several C++/CUDA extensions that we commonly refer to as our ‘ops’. By default, all of these extensions/ops will be built just-in-time (JIT) using [torch’s JIT C++ extension loader that relies on ninja](https://pytorch.org/docs/stable/cpp_extension.html) to build and dynamically link them at runtime.

Requirements

  • PyTorch must be installed before installing DeepSpeed.

  • For full feature support we recommend PyTorch >= 1.9, and ideally the latest stable release.

  • A CUDA or ROCm compiler, such as nvcc or hipcc, is required to compile the C++/CUDA/HIP extensions.

  • The specific GPUs we develop and test against are listed below. This doesn’t mean your GPU won’t work if it isn’t in this category; it’s just that DeepSpeed is most well tested on the following:

      • NVIDIA: Pascal, Volta, Ampere, and Hopper architectures

      • AMD: MI100 and MI200
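As a quick sanity check of the compiler requirement, a small stdlib snippet can confirm that nvcc or hipcc is on your PATH (illustrative and not part of DeepSpeed; `find_gpu_compiler` is a made-up helper, and `ds_report` remains the authoritative check):

```python
import shutil

def find_gpu_compiler():
    """Return the first CUDA/ROCm compiler found on PATH, or None."""
    for compiler in ("nvcc", "hipcc"):
        if shutil.which(compiler):
            return compiler
    return None

print(find_gpu_compiler())  # e.g. "nvcc", or None if neither compiler is installed
```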

PyPI

We regularly push releases to PyPI and encourage users to install from there in most cases.

```shell
pip install deepspeed
```

After installation, you can validate your install and see which extensions/ops your machine is compatible with via the DeepSpeed environment report.

```shell
ds_report
```

If you would like to pre-install any of the DeepSpeed extensions/ops (instead of JIT compiling) or install pre-compiled ops via PyPI, please see our [advanced installation instructions](https://www.deepspeed.ai/tutorials/advanced-install/).

Windows

DeepSpeed has partial support for Windows; currently only inference mode is supported. On Windows you can build a wheel with the following steps:

  1. Install PyTorch, such as PyTorch 1.8 + CUDA 11.1.

  2. Install Visual C++ build tools, such as the VS2019 C++ x64/x86 build tools.

  3. Launch a cmd console with Administrator privileges (required to create the needed symlink folders).

  4. Run `python setup.py bdist_wheel` to build the wheel in the `dist` folder.