gpt-neox vs Megatron-DeepSpeed
| | gpt-neox | Megatron-DeepSpeed |
|---|---|---|
| Mentions | 52 | 1 |
| Stars | 6,569 | 1,603 |
| Stars growth (monthly) | 2.2% | 10.0% |
| Activity | 8.9 | 8.8 |
| Last commit | 4 days ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
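The exact weighting behind the activity number isn't published, but a recency-weighted commit score is easy to sketch. The 30-day half-life below is an assumption for illustration, not the site's actual formula:

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit counts for less
    as it ages, with an assumed 30-day half-life (not the site's real formula).
    """
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# Example: three commits; the most recent contributes the most weight.
recent = [datetime.now(timezone.utc) - timedelta(days=n) for n in (1, 10, 90)]
print(round(activity_score(recent), 2))  # ~1.9
```

Raw scores like this would then be percentile-ranked across all tracked projects to get the 0-10 scale, which is why the number is relative rather than absolute.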
gpt-neox
- FLaNK Stack 26 February 2024
- GPT-Neox
- GPT-NeoX
- Best open source LLM model for commercial use
  GPT-NeoX 20B can be used commercially.
- Do not register domains with the word "gpt" in them!
- Read this post if you have general questions
  GPT-Neo is a free and open-source language model developed by EleutherAI. It is a powerful model that can be used for a variety of tasks, including text generation and question answering. Here is the GitHub: https://github.com/EleutherAI/gpt-neo
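As a minimal illustration of that kind of use, text generation with a small GPT-Neo checkpoint via the Hugging Face transformers library might look like the following; the model id and sampling settings here are just one plausible choice:

```python
# Minimal text-generation sketch with a small GPT-Neo checkpoint.
# Assumes `pip install transformers torch`; the 125M model keeps the
# download and memory footprint small for a quick test.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open-source language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```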
- What's the current state of actually free and open source LLMs?
  Doesn't GPT-NeoX 20B require like 40 GB+ of VRAM? From their GitHub repo the slim weights are 39 GB, and I think one of the devs has previously mentioned aiming for 48 GB for inference.
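Those figures line up with simple parameter-count arithmetic. A back-of-the-envelope sketch, counting fp16 weights only (activations, KV cache, and framework overhead are extra, which is why ~48 GB is a more comfortable target than the raw weight size):

```python
# Rough VRAM estimate for a 20B-parameter model stored in fp16/bf16.
params = 20e9
bytes_per_param = 2  # fp16/bf16

weight_gib = params * bytes_per_param / 1024**3
print(f"fp16 weights: ~{weight_gib:.1f} GiB")  # ~37.3 GiB, close to the 39 GB slim weights
```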
- Any real competitor to GPT-3 which is open source and downloadable?
  EleutherAI's GPT-Neo and GPT-NeoX: EleutherAI is an independent research organization that aims to promote open research in artificial intelligence. They have released GPT-Neo, an open-source language model based on the GPT architecture, and are developing GPT-NeoX, a highly scalable GPT-like model. You can find more information on their GitHub repositories: GPT-Neo: https://github.com/EleutherAI/gpt-neo and GPT-NeoX: https://github.com/EleutherAI/gpt-neox
- Whatever happened to quantum computing?
  It's really not that complicated. https://github.com/EleutherAI/gpt-neox
- Fuck Luka Inc. WIP AI Freedom!
  This forces me to create my own AI using gpt-neox. I was being lazy, completely satisfied with my little AI baby. I'll keep the account but not renew Pro. Fuck Luka, Inc. Want it done right? Do it yourself. The Python isn't that difficult, and I have a spare Linux VPS. My new work in progress: freedom!
Megatron-DeepSpeed
- [R] You can't train GPT-3 on a single GPU, but you *can* tune its hyperparameters on one
  Here is the codebase that has trained the largest publicly available GPT-3-style model. Here is the codebase that has trained the second largest publicly available GPT-3-style model. Here is another codebase, itself public, that has trained even larger models than GPT-3, though the largest models it has trained are not public.
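The post refers to μP-style hyperparameter transfer: tune on a small proxy model, then reuse the optimum at scale, because μP prescribes how per-layer settings change with width. A minimal sketch of the width-scaling rule for Adam learning rates on hidden weight matrices, an illustration of the idea rather than any codebase's actual implementation:

```python
# Illustrative muP-style learning-rate scaling (not the `mup` package's API):
# under muP, Adam learning rates for hidden (matrix-like) weights shrink
# as 1/width, so an LR tuned on a narrow proxy transfers to a wide model.
base_width = 256      # hypothetical width of the small proxy model
base_lr = 3e-3        # hypothetical best LR found on the proxy

def hidden_lr(width, base_width=base_width, base_lr=base_lr):
    """Scaled Adam LR for hidden weight matrices at the target width."""
    return base_lr * base_width / width

for width in (256, 1024, 4096, 12288):
    print(f"width={width:6d}  hidden-weight lr={hidden_lr(width):.2e}")
```

Embedding-like and output parameters follow different rules under μP; this shows only the hidden-matrix case that dominates transformer parameter counts.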
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
YaLM-100B - Pretrained language model with 100B parameters
open-ai - OpenAI PHP SDK: the most downloaded, forked, and contributed-to community PHP SDK for OpenAI GPT-3 and DALL-E, usable from Laravel, Symfony, Yii, CakePHP, or any PHP framework. It also supports ChatGPT-like streaming.
lm-evaluation-harness - A framework for few-shot evaluation of language models.
Megatron-DeepSpeed - Ongoing research training transformer language models at scale, including: BERT & GPT-2
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search, or conversational agent chatbots.
PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
kiri - Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"