DeepSpeed vs fairseq
| | DeepSpeed | fairseq |
|---|---|---|
| Mentions | 41 | 80 |
| Stars | 25,088 | 25,547 |
| Growth | 61.0% | 16.0% |
| Activity | 9.6 | 9.0 |
| Latest commit | 2 days ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepSpeed
-
Using --deepspeed requires lots of manual tweaking
Filed a discussion item on the deepspeed project: https://github.com/microsoft/DeepSpeed/discussions/3531
Solution: I don't know; this is where I am stuck. https://github.com/microsoft/DeepSpeed/issues/1037 suggests that I just need to 'apt install libaio-dev', but I've done that and it doesn't help.
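For context, the libaio-dev dependency only matters for DeepSpeed's async_io op (used for NVMe offload in ZeRO-Infinity), as far as I can tell, so a configuration that sticks to plain ZeRO avoids that code path entirely. Below is a minimal, hedged sketch of driving DeepSpeed through `deepspeed.initialize` with such a config; the tiny `torch.nn.Linear` model, the config values, and the assumption that the script is started with the `deepspeed` launcher are mine, not the original poster's setup.

```python
import torch
import deepspeed

# Hedged sketch (not the original poster's setup): a tiny model trained through
# deepspeed.initialize with a plain ZeRO-2 config. This path does not use the
# async_io op, which is the piece that typically wants libaio-dev (NVMe offload).
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 1,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},
}

model = torch.nn.Linear(512, 512)  # placeholder model

engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

for _ in range(10):
    # random placeholder batch; fp16 engine expects half-precision inputs
    x = torch.randn(4, 512, device=engine.device, dtype=torch.half)
    loss = engine(x).float().pow(2).mean()
    engine.backward(loss)
    engine.step()
```

Launched as `deepspeed sketch.py` (a hypothetical filename), this exercises ZeRO stage 2 without ever touching the async I/O extension.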
-
Whether the ML computation engineering expertise will be valuable is the question.
There could be a spectrum of this expertise; for instance, https://github.com/NVIDIA/FasterTransformer and https://github.com/microsoft/DeepSpeed.
- FLiPN-FLaNK Stack Weekly for 17 April 2023
- DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
- DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
-
12-Apr-2023 AI Summary
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
- Microsoft DeepSpeed
-
Apple: Transformer architecture optimized for Apple Silicon
I'm following this closely, together with other efforts like GPTQ Quantization and Microsoft's DeepSpeed, all of which are bringing down the hardware requirements of these advanced AI models.
-
Facebook LLAMA is being openly distributed via torrents
- https://github.com/microsoft/DeepSpeed
Anything that could bring this to a 10GB 3080 or 24GB 3090 without 60s/it per token?
fairseq
-
AI — weekly megathread!
Meta released a new open-source model, Massively Multilingual Speech (MMS) that can do both speech-to-text and text-to-speech in 1,107 languages and can also recognize 4,000+ spoken languages. Existing speech recognition models only cover approximately 100 languages out of the 7,000+ known spoken languages. [Details | Research Paper | GitHub].
-
Meta AI announces Massive Multilingual Speech code, models for 1000+ languages
Code: https://github.com/facebookresearch/fairseq/tree/main/examples/mms
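The MMS checkpoints are also published on the Hugging Face Hub, which is often the quickest way to try them outside the fairseq codebase. Below is a hedged sketch, assuming the `facebook/mms-1b-all` checkpoint and the transformers MMS integration (Wav2Vec2ForCTC with per-language adapters); the silent dummy waveform stands in for real 16 kHz mono audio.

```python
import torch
from transformers import Wav2Vec2ForCTC, AutoProcessor

# Hedged sketch of MMS speech-to-text via the Hugging Face port of the MMS
# checkpoints; the model id and adapter API are assumptions based on the
# transformers MMS integration, not code from the fairseq repo itself.
model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Switch the tokenizer vocabulary and the model's language adapter to English.
processor.tokenizer.set_target_lang("eng")
model.load_adapter("eng")

# `waveform` stands in for real 16 kHz mono audio loaded with e.g. torchaudio.
waveform = torch.zeros(16000)
inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))
```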
-
Why is GPT-3 15.77x more expensive for certain languages?
https://github.com/facebookresearch/fairseq/blob/nllb/LICENS...
The model is CC. Because models aren't software.
-
Using oneAPI AI Toolkits from Intel and Accenture Part 2
The high-level overview of this implementation is as follows. The conversion from speech to text is handled by Fairseq, an open-source sequence-to-sequence framework from Facebook. Sequence-to-sequence modeling, initially conceived at Google, is a type of machine learning commonly used for summarization, text translation and similar tasks.
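As a concrete illustration of Fairseq as a sequence-to-sequence framework, here is a hedged sketch using one of the pretrained WMT'19 translation models that fairseq exposes through torch.hub (per the project README); it assumes fairseq plus the moses/fastBPE tokenizer dependencies are installed.

```python
import torch

# Hedged sketch: fairseq publishes pretrained seq2seq translation models through
# torch.hub (per the fairseq README); requires fairseq, sacremoses and fastBPE.
en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de.single_model",
    tokenizer="moses",
    bpe="fastbpe",
)
en2de.eval()

print(en2de.translate("Machine learning is fun!"))
```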
- New extension: Prompt Translator
-
How worried are you about AI taking over music?
Fairseq 1.1k contributors
-
[P] BART denoising language modeling in JAX/Flax
Due to the high demand for a BART pretraining implementation, I created a pretraining script for BART in JAX/Flax and got approval to merge it into huggingface/transformers. I will archive this repo once it is merged.
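For readers unfamiliar with the objective being pretrained here: BART's main noising scheme is text infilling, where spans with Poisson-distributed lengths are each replaced by a single mask token. The sketch below is a simplified plain-NumPy illustration of that idea, not the poster's JAX/Flax script (it skips details such as zero-length insertions).

```python
import numpy as np

# Hedged, simplified illustration of BART-style text infilling: random spans with
# Poisson(3) lengths are each collapsed into a single mask token. Parameter names
# and defaults are illustrative, not taken from any particular implementation.
def text_infilling(tokens, mask_token_id, mask_ratio=0.3, poisson_lambda=3.0, seed=0):
    rng = np.random.default_rng(seed)
    out, i, masked = [], 0, 0
    budget = int(round(len(tokens) * mask_ratio))  # rough number of tokens to mask
    while i < len(tokens):
        if masked < budget and rng.random() < mask_ratio:
            span = max(int(rng.poisson(poisson_lambda)), 1)  # skip 0-length insertions
            out.append(mask_token_id)  # the whole span becomes one mask token
            i += span
            masked += span
        else:
            out.append(tokens[i])
            i += 1
    return out

print(text_infilling(list(range(20)), mask_token_id=-1))
```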
-
[D] Hey Reddit! We're a bunch of research scientists and software engineers and we just open sourced a new state-of-the-art AI model that can translate between 200 different languages. We're excited to hear your thoughts so we're hosting an AMA on 07/21/2022 @ 9:00AM PT. Ask Us Anything!
all 202 languages covered by NLLB are already available (models: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/modeling, FLORES and all of the other datasets we created: https://github.com/facebookresearch/flores), including Zulu. You can also try our Zulu translation in the Content Translation tool live on Wikipedia! For the "coming soon" part here, I guess you are talking about the demo? New languages rolling out and will be live in the coming weeks. [angela]
We have a bunch! The model and data are available here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/modeling, LASER3 here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/laser_distillation, training data here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/data, FLORES and our other human translated datasets here: https://github.com/facebookresearch/flores, and an entire modular pipeline for data cleaning here: https://github.com/facebookresearch/stopes. It's also available on HuggingFace! [angela]
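Since the AMA notes the NLLB-200 models are also on Hugging Face, here is a hedged sketch of translating with the hosted distilled checkpoint; the `facebook/nllb-200-distilled-600M` model id and the FLORES-200 language codes (`eng_Latn`, `zul_Latn`) are my assumptions, chosen because Zulu comes up in the thread.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hedged sketch of running NLLB-200 through its Hugging Face mirror; the checkpoint
# name and language codes are assumptions, not taken from the AMA itself.
model_id = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Machine translation for two hundred languages.", return_tensors="pt")
generated = model.generate(
    **inputs,
    # Force the decoder to start in the target language (Zulu).
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("zul_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```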
What are some alternatives?
ColossalAI - Making large AI models cheaper, faster and more accessible
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
fairscale - PyTorch extensions for high performance and large scale training.
TensorRT - NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
Megatron-LM - Ongoing research training transformer models at scale
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
text-to-text-transfer-transformer - Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
llama - Inference code for LLaMA models
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
espnet - End-to-End Speech Processing Toolkit