DeepSpeed vs gpt-neox

Compare DeepSpeed and gpt-neox to see how the two projects differ.

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (by microsoft)
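In practice, DeepSpeed is used by wrapping an existing PyTorch model with deepspeed.initialize and a config dict (or JSON file). The following is a minimal sketch under illustrative assumptions: MyModel, the batch size, and the optimizer settings are toy values chosen here, not taken from this page, and the script is meant to be started with the deepspeed CLI launcher so distributed workers are set up for you.

import torch
import deepspeed

class MyModel(torch.nn.Module):
    # Hypothetical toy model used only to illustrate the API.
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(128, 10)

    def forward(self, x):
        return self.linear(x)

# Illustrative config: ZeRO stage 2 shards optimizer state and gradients
# across data-parallel workers.
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},
}

model = MyModel()
# deepspeed.initialize returns a distributed engine that owns the optimizer,
# gradient accumulation, and (if enabled) mixed precision.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

inputs = torch.randn(8, 128).to(engine.device)
labels = torch.randint(0, 10, (8,)).to(engine.device)
loss = torch.nn.functional.cross_entropy(engine(inputs), labels)
engine.backward(loss)  # handles loss scaling and gradient reduction across ranks
engine.step()          # optimizer step plus gradient zeroing

A script like this would normally be launched with the deepspeed command-line launcher (for example, deepspeed train.py) so that each GPU gets its own process and rank.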

gpt-neox

An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library. (by EleutherAI)
                DeepSpeed            gpt-neox
Mentions        41                   49
Stars           25,088               5,470
Growth          61.0%                16.6%
Activity        9.6                  6.7
Latest commit   2 days ago           5 days ago
Language        Python               Python
License         Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

DeepSpeed

Posts with mentions or reviews of DeepSpeed. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-11.

gpt-neox

Posts with mentions or reviews of gpt-neox. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-21.

What are some alternatives?

When comparing DeepSpeed and gpt-neox you can also consider the following projects:

ColossalAI - Making large AI models cheaper, faster and more accessible

fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

fairscale - PyTorch extensions for high performance and large scale training.

gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.

TensorRT - NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.

Megatron-LM - Ongoing research training transformer models at scale

mesh-transformer-jax - Model parallel transformers in JAX and Haiku

llama - Inference code for LLaMA models

server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.

YaLM-100B - Pretrained language model with 100B parameters

open-ai - OpenAI PHP SDK: the most downloaded, forked, and community-supported PHP SDK for OpenAI GPT-3 and DALL-E, usable from Laravel, Symfony, Yii, CakePHP, or any other PHP framework. It also supports ChatGPT-style streaming.