DeepSpeed vs gpt-neox
| | DeepSpeed | gpt-neox |
|---|---|---|
| Mentions | 41 | 49 |
| Stars | 25,088 | 5,470 |
| Growth | 61.0% | 16.6% |
| Activity | 9.6 | 6.7 |
| Latest commit | 2 days ago | 5 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepSpeed
- Using --deepspeed requires lots of manual tweaking
Filed a discussion item on the DeepSpeed project: https://github.com/microsoft/DeepSpeed/discussions/3531
Solution: I don't know; this is where I am stuck. https://github.com/microsoft/DeepSpeed/issues/1037 suggests that I just need to `apt install libaio-dev`, but I've done that and it doesn't help. (A config sketch that avoids the async_io dependency follows after this list.)
- Whether the ML computation engineering expertise will be valuable is the question.
There could be some spectrum of this expertise. For instance: https://github.com/NVIDIA/FasterTransformer, https://github.com/microsoft/DeepSpeed
- FLiPN-FLaNK Stack Weekly for 17 April 2023
- DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
- 12-Apr-2023 AI Summary
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
- Microsoft DeepSpeed
- Apple: Transformer architecture optimized for Apple Silicon
I'm following this closely, together with other efforts like GPTQ quantization and Microsoft's DeepSpeed, all of which are bringing down the hardware requirements of these advanced AI models.
- Facebook LLAMA is being openly distributed via torrents
https://github.com/microsoft/DeepSpeed
Anything that could bring this to a 10GB 3080 or 24GB 3090 without 60s/it per token? (See the quantization memory math after this list.)
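A note on the libaio question above: as far as I can tell, DeepSpeed's async_io extension (the one that wants `libaio-dev`) is only needed for NVMe offload, so a config that skips offloading never touches it. A minimal sketch, assuming a toy PyTorch model; the config values are illustrative, not taken from the thread:

```python
# Minimal sketch: ZeRO stage-2 training config with no NVMe offload,
# so the async_io op (and hence libaio) is never built or loaded.
# Toy model and hyperparameters are illustrative assumptions.
import torch
import deepspeed

model = torch.nn.Linear(512, 512)

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # no offload_param / offload_optimizer
}

# Run under the DeepSpeed launcher, e.g.: deepspeed this_script.py
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Running `ds_report` also prints whether each op, async_io included, is considered compatible on the current machine.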
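On the 10GB 3080 / 24GB 3090 question: weight-only quantization (e.g. GPTQ, mentioned above) is the usual lever. Back-of-envelope math, assuming the 13B LLaMA variant and counting weights only (activations and the KV cache add more on top):

```python
# Rough weight-memory math for a 13B-parameter model at various precisions.
params = 13e9
for bits in (16, 8, 4):
    gib = params * bits / 8 / 2**30
    print(f"{bits}-bit: ~{gib:.1f} GiB")
# 16-bit: ~24.2 GiB (marginal even on a 24 GB 3090)
# 8-bit:  ~12.1 GiB
# 4-bit:  ~6.1 GiB (fits a 10 GB 3080, weights-wise)
```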
gpt-neox
- Read this post if you have general questions
GPT-Neo: GPT-Neo is a free and open-source language model developed by EleutherAI. It is a powerful model that can be used for a variety of tasks, including text generation and question answering. Here is the GitHub
- What's the current state of actually free and open source LLMs?
Doesn't GPT-NeoX-20B require 40GB+ of VRAM? From their GitHub repo the slim weights are 39GB, and I think one of the devs has previously mentioned aiming for 48GB for inference. (The back-of-envelope math after this list bears this out.)
- Any real competitor to GPT-3 which is open source and downloadable?
EleutherAI's GPT-Neo and GPT-NeoX: EleutherAI is an independent research organization that aims to promote open research in artificial intelligence. They have released GPT-Neo, an open-source language model based on the GPT architecture, and are developing GPT-NeoX, a highly scalable GPT-like model. You can find more information in their GitHub repositories:
GPT-Neo: https://github.com/EleutherAI/gpt-neo
GPT-NeoX: https://github.com/EleutherAI/gpt-neox
- Behold, ChatGPT from the year 2025
- Will there ever be a "Stable Diffusion chat AI" that we can run at home, like one can do with Stable Diffusion? A "roll-your-own at home ChatGPT"?
GitHub - EleutherAI/gpt-neox: An implementation of model-parallel autoregressive transformers on GPUs, based on the DeepSpeed library. (A loading sketch follows after this list.)
- First Open Source Alternative to ChatGPT Has Arrived
For context, they're now working on a GPU-driven version:
- Will we see a “stable diffusion” version of ChatGPT?
Here is an example of one general-purpose open-source LLM, probably the best you can get: https://github.com/EleutherAI/gpt-neox To manage your expectations, it is nowhere near as good as ChatGPT. If you are interested in programming only: https://github.com/salesforce/CodeGen
- GPT-3 can create both sides of an Interactive Fiction transcript
- [D] Is a GPT-J successor in the works?
There have been a few different open-source GPT-3-style large language models since GPT-J: ~175B Bloom from Hugging Face, ~100B YaLM from Yandex, and ~20B GPT-NeoX. None of them match GPT-3 performance, but since they're open source (for commercial use too) they're worth checking out. I'm not sure if Stability has plans to train a GPT-3-size model, though.
- Show HN: Use GPT-NeoX to generate quotes
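On the 40GB+ VRAM question above: the ~39GB slim-weights figure is consistent with simple fp16 arithmetic. A quick check, counting weights only (inference also needs activations and the KV cache, which is presumably where the ~48GB figure comes from):

```python
# fp16 weight size for GPT-NeoX-20B: ~20e9 params x 2 bytes each.
params = 20e9
bytes_per_param = 2  # fp16
print(f"~{params * bytes_per_param / 2**30:.1f} GiB")  # ~37.3 GiB, i.e. roughly 40 GB
```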
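And for the "roll-your-own at home ChatGPT" thread: besides the gpt-neox training repo itself, the released 20B checkpoint can be loaded through Hugging Face Transformers. A minimal generation sketch, assuming the `transformers` library is installed and there is enough memory for the ~40GB of fp16 weights; the model id `EleutherAI/gpt-neox-20b` is the published one, the prompt is made up:

```python
# Minimal sketch: text generation with the released GPT-NeoX-20B checkpoint.
# Assumes `pip install transformers accelerate torch` and ~40 GB of memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",
    torch_dtype=torch.float16,  # halve the memory vs fp32
    device_map="auto",          # spread across available GPUs/CPU (needs accelerate)
)

inputs = tok("GPT-NeoX-20B is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```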
What are some alternatives?
ColossalAI - Making large AI models cheaper, faster and more accessible
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
fairscale - PyTorch extensions for high performance and large scale training.
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
TensorRT - NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
Megatron-LM - Ongoing research training transformer models at scale
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
llama - Inference code for LLaMA models
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
YaLM-100B - Pretrained language model with 100B parameters
open-ai - OpenAI PHP SDK: the most downloaded, forked, and community-supported PHP SDK (works with Laravel, Symfony, Yii, CakePHP, or any PHP framework) for OpenAI GPT-3 and DALL-E. It also supports ChatGPT-like streaming (ChatGPT is supported).