|               | DeepSpeed          | fairseq     |
|---------------|--------------------|-------------|
| Latest commit | 2 days ago         | 3 days ago  |
| License       | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Using --deepspeed requires lots of manual tweaking
3 projects | reddit.com/r/Oobabooga | 11 May 2023
Filed a discussion item on the deepspeed project: https://github.com/microsoft/DeepSpeed/discussions/3531
Solution: I don't know; this is where I am stuck. https://github.com/microsoft/DeepSpeed/issues/1037 suggests that I just need to 'apt install libaio-dev', but I've done that and it doesn't help.
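As a first diagnostic for the libaio problem above, a quick standard-library check can at least confirm whether the shared library is discoverable at all (a sketch only; DeepSpeed's own `ds_report` command remains the authoritative check for the `async_io` op):

```python
# Sketch: check whether libaio (the runtime library behind the suggested
# `apt install libaio-dev`) is discoverable on this system. DeepSpeed's
# async_io op cannot build without it; note that having the library present
# is necessary but, as the issue above shows, not always sufficient.
import ctypes.util

libaio = ctypes.util.find_library("aio")
if libaio is None:
    print("libaio not found; DeepSpeed's async_io op will be unavailable")
else:
    print(f"libaio found as {libaio}; re-run ds_report to verify the op builds")
```

If the library is found but the op still fails to build, the mismatch is usually between the headers the dev package installed and the toolchain DeepSpeed's JIT builder invokes.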
Whether ML computation engineering expertise will be valuable is the question.
2 projects | reddit.com/r/LanguageTechnology | 21 Apr 2023
There could be some spectrum of this expertise. For instance, https://github.com/NVIDIA/FasterTransformer, https://github.com/microsoft/DeepSpeed
FLiPN-FLaNK Stack Weekly for 17 April 2023
12 projects | dev.to | 17 Apr 2023
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
2 projects | news.ycombinator.com | 12 Apr 2023
12-Apr-2023 AI Summary
2 projects | reddit.com/r/u_sann540 | 11 Apr 2023
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
2 projects | news.ycombinator.com | 11 Apr 2023
Apple: Transformer architecture optimized for Apple Silicon
2 projects | reddit.com/r/apple | 23 Mar 2023
I'm following this closely, together with other efforts like GPTQ Quantization and Microsoft's DeepSpeed, all of which are bringing down the hardware requirements of these advanced AI models.
Facebook LLAMA is being openly distributed via torrents
15 projects | news.ycombinator.com | 3 Mar 2023
Anything that could bring this to a 10GB 3080 or 24GB 3090 without 60s/it per token?
AI — weekly megathread!
2 projects | reddit.com/r/artificial | 26 May 2023
Meta released a new open-source model, Massively Multilingual Speech (MMS), that can do both speech-to-text and text-to-speech in 1,107 languages and can also recognize 4,000+ spoken languages. Existing speech recognition models only cover approximately 100 of the 7,000+ known spoken languages. [Details | Research Paper | GitHub]
Meta AI announces Massive Multilingual Speech code, models for 1000+ languages
2 projects | reddit.com/r/LocalLLaMA | 22 May 2023
Code: https://github.com/facebookresearch/fairseq/tree/main/examples/mms
Why is GPT-3 15.77x more expensive for certain languages?
2 projects | news.ycombinator.com | 10 Apr 2023
The model is CC-licensed, because models aren't software.
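The cost gap asked about above is largely a tokenizer effect: GPT-3 bills per token, and its byte-level BPE vocabulary is dominated by English text, so non-Latin scripts fragment into many more tokens. A rough standard-library proxy, comparing UTF-8 byte counts (the example sentences here are illustrative, not from the article), shows the asymmetry:

```python
# Rough proxy: byte-level BPE tokenizers start from UTF-8 bytes, so scripts
# that need more bytes per character tend to produce more tokens per sentence
# (an actual tokenizer would be needed for exact counts).
texts = {
    "English": "Hello, how are you today?",
    "Hindi": "नमस्ते, आज आप कैसे हैं?",  # Devanagari: 3 bytes per character
}
byte_counts = {lang: len(s.encode("utf-8")) for lang, s in texts.items()}
ratio = byte_counts["Hindi"] / byte_counts["English"]

for lang, n in byte_counts.items():
    print(f"{lang}: {n} bytes")
print(f"byte ratio: {ratio:.2f}")
```

Byte count alone understates the gap, since rare byte sequences also merge into fewer multi-byte tokens, but it captures the direction of the effect.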
Using oneAPI AI Toolkits from Intel and Accenture Part 2
3 projects | dev.to | 1 Apr 2023
The high-level overview of this implementation is as follows. The conversion from speech to text is achieved using a sequence-to-sequence framework called Fairseq. Sequence-to-sequence modeling is a type of machine learning commonly used for summarization, text translation, and similar tasks; the approach was initially conceived at Google. Fairseq is an open-source sequence-to-sequence framework from Facebook.
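The encode-then-decode loop that sequence-to-sequence frameworks implement can be sketched in miniature. Everything below is a hypothetical toy (the lookup table, `encode`, and `decode_step` are stand-ins); a real framework like Fairseq replaces them with learned neural encoder and decoder networks:

```python
# Toy sketch of the encoder-decoder (sequence-to-sequence) loop.
WORD_MAP = {"guten": "good", "morgen": "morning"}  # toy German->English lexicon

def encode(src_tokens):
    # A real encoder produces hidden states; this toy just keeps the tokens.
    return tuple(src_tokens)

def decode_step(state, position):
    # A real decoder scores an entire vocabulary conditioned on prior output;
    # this toy emits one mapped word per source position, then end-of-sequence.
    if position >= len(state):
        return "<eos>"
    return WORD_MAP.get(state[position], "<unk>")

def translate(src_tokens, max_len=10):
    state = encode(src_tokens)
    out = []
    for i in range(max_len):
        token = decode_step(state, i)
        if token == "<eos>":
            break
        out.append(token)
    return out

print(translate(["guten", "morgen"]))  # -> ['good', 'morning']
```

The key structural point the sketch preserves is that the output is generated step by step until an end-of-sequence marker, so output length need not match input length, which is what makes the same machinery work for translation and summarization alike.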
New extension: Prompt Translator
2 projects | reddit.com/r/StableDiffusion | 11 Feb 2023
How worried are you about AI taking over music?
13 projects | reddit.com/r/WeAreTheMusicMakers | 3 Feb 2023
[P] BART denoising language modeling in JAX/Flax
3 projects | reddit.com/r/MachineLearning | 1 Aug 2022
Due to high demand for a BART pretraining implementation, I created a pretraining script for BART in JAX/Flax. It got approval to be merged into huggingface/transformers. I will archive this repo once it is merged.
[D] Hey Reddit! We're a bunch of research scientists and software engineers and we just open sourced a new state-of-the-art AI model that can translate between 200 different languages. We're excited to hear your thoughts so we're hosting an AMA on 07/21/2022 @ 9:00AM PT. Ask Us Anything!
10 projects | reddit.com/r/MachineLearning | 21 Jul 2022
all 202 languages covered by NLLB are already available (models: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/modeling, FLORES and all of the other datasets we created: https://github.com/facebookresearch/flores), including Zulu. You can also try our Zulu translation in the Content Translation tool live on Wikipedia! For the "coming soon" part here, I guess you are talking about the demo? New languages are rolling out and will be live in the coming weeks. [angela]
We have a bunch! The model and data are available here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/modeling, LASER3 here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/laser_distillation, training data here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/data, FLORES and our other human-translated datasets here: https://github.com/facebookresearch/flores, and an entire modular pipeline for data cleaning here: https://github.com/facebookresearch/stopes. It's also available on HuggingFace! [angela]
What are some alternatives?
ColossalAI - Making large AI models cheaper, faster and more accessible
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
fairscale - PyTorch extensions for high performance and large scale training.
TensorRT - NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
Megatron-LM - Ongoing research training transformer models at scale
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
text-to-text-transfer-transformer - Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
llama - Inference code for LLaMA models
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
espnet - End-to-End Speech Processing Toolkit