DeepSpeed Investigation: What I Learned

This page summarizes the projects mentioned and recommended in the original post on dev.to.

  • deepspeed_testing

    Repo for testing out the awesome DeepSpeed library. Used https://huggingface.co/transformers/master/main_classes/trainer.html#deepspeed for guidance.

  • To test out DeepSpeed, I used the awesome HuggingFace transformers library, which supports DeepSpeed on its non-stable branch (support is coming to the stable branch in 4.6 🤓). I followed the instructions on the HuggingFace website for getting started with DeepSpeed and HuggingFace. If you want to follow along at home, I created a GitHub repository with the Dockerfile (I'm addicted to Docker and will probably write a blog post about it too :)) and the test script I used to run my experiments. I tried training the different versions of the T5 model, ranging from a smallish ~60 million parameters to a humongous 3 billion parameters. And here are my results:
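    For a sense of what that setup looks like, here is a minimal sketch of a DeepSpeed JSON config of the kind the HuggingFace Trainer integration accepts. The key names follow DeepSpeed's config schema (ZeRO stage 2 plus fp16); the specific values and the `ds_config.json` filename are illustrative assumptions, not the exact settings from the original experiments.

    ```python
    import json

    # Illustrative DeepSpeed config: ZeRO stage-2 optimizer sharding with fp16.
    # Values here are placeholders, not the original post's settings.
    ds_config = {
        "train_micro_batch_size_per_gpu": 4,
        "gradient_accumulation_steps": 1,
        "fp16": {"enabled": True},
        "zero_optimization": {"stage": 2},
    }

    # The HuggingFace Trainer takes the config as a JSON file
    # (or a dict) via the `deepspeed` training argument.
    with open("ds_config.json", "w") as f:
        json.dump(ds_config, f, indent=2)
    ```

    With a file like this, the script would pass `deepspeed="ds_config.json"` to `TrainingArguments` and be launched with the `deepspeed` CLI instead of plain `python` — see the HuggingFace Trainer docs linked above for the exact incantation.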

  • text-to-text-transfer-transformer

    Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.
