DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • DeepSpeed

    DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

  • DeepSpeedExamples

    Example models using DeepSpeed

    Also see the example repo README: https://github.com/microsoft/DeepSpeedExamples/tree/master/a...

    > With just one click, you can train, generate and serve a 1.3 billion parameter ChatGPT model within 1.36 hours on a single consumer-grade NVIDIA A6000 GPU with 48GB memory. On a single DGX node with 8 NVIDIA A100-40G GPUs, DeepSpeed-Chat enables training for a 13 billion parameter ChatGPT model in 13.6 hours. On multi-GPU multi-node systems (cloud scenarios), i.e., 8 DGX nodes with 8 NVIDIA A100 GPUs/node, DeepSpeed-Chat can train a 66 billion parameter ChatGPT model in under 9 hours. Finally, it enables 15X faster training over the existing RLHF systems.

    > The following are some of the open-source examples that are powered by DeepSpeed: Databricks Dolly, LMFlow, CarperAI-TRLX, Huggingface-PEFT

    (disclaimer: MSFT/GH employee, not affiliated with this project)
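
    To orient readers, the "one click" flow quoted above is driven by a train.py helper in the linked example repo, which (per its README) chains a three-step InstructGPT-style RLHF pipeline: supervised fine-tuning, reward-model fine-tuning, and PPO. The sketch below shows only the core DeepSpeed training API that such scripts build on; the toy model, config values, and loss are illustrative assumptions, not the project's actual settings.

        # Minimal DeepSpeed training sketch (assumed toy setup, not DeepSpeed-Chat's config).
        # Run via the DeepSpeed launcher, e.g.: deepspeed sketch.py
        import torch
        import deepspeed

        model = torch.nn.Linear(1024, 1024)  # toy stand-in for an actor/reward model

        ds_config = {
            "train_batch_size": 8,
            "fp16": {"enabled": True},
            "zero_optimization": {"stage": 2},  # ZeRO-2: shard optimizer states and gradients
            "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
        }

        # deepspeed.initialize wraps the model in a training engine that handles ZeRO
        # partitioning, mixed precision, and distributed data parallelism.
        engine, optimizer, _, _ = deepspeed.initialize(
            model=model,
            model_parameters=model.parameters(),
            config=ds_config,
        )

        for step in range(10):
            batch = torch.randn(8, 1024, device=engine.device, dtype=torch.half)
            loss = engine(batch).float().pow(2).mean()  # dummy loss for the sketch
            engine.backward(loss)  # engine-managed backward pass (loss scaling, etc.)
            engine.step()          # optimizer step plus gradient zeroing

    The exact one-click commands, model sizes, and benchmark configurations are documented in the linked README rather than reproduced here.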

Related posts