PaLM-rlhf-pytorch vs petals

| | PaLM-rlhf-pytorch | petals |
|---|---|---|
| Mentions | 25 | 98 |
| Stars | 7,593 | 8,684 |
| Growth | - | 1.5% |
| Activity | 4.6 | 8.3 |
| Last commit | 4 months ago | 5 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PaLM-rlhf-pytorch
- How should I get an in-depth mathematical understanding of generative AI?
ChatGPT isn't open-sourced, so we don't know what the actual implementation is. I think you can read Open Assistant's source code for application design. If that is too much, try Open Chat Toolkit's source code for developer tools. If you need a very bare implementation, you should go for lucidrains/PaLM-rlhf-pytorch.
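For context on how bare that implementation is: the repo's first stage is just ordinary language-model pretraining. A minimal sketch, assuming the `PaLM` class, its `num_tokens`/`dim`/`depth` arguments, and the `return_loss` flag work as shown in the repo's README:

```python
import torch
from palm_rlhf_pytorch import PaLM  # import path assumed from the repo's README

# Toy configuration; a real run would use a far larger model and real text data.
palm = PaLM(
    num_tokens = 20000,  # vocabulary size
    dim = 512,           # model width
    depth = 12           # number of transformer blocks
)

# Random token ids standing in for a pretraining batch.
seq = torch.randint(0, 20000, (1, 2048))

# Next-token cross-entropy loss and a single backward pass.
loss = palm(seq, return_loss = True)
loss.backward()
```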
- [P] Open-source PaLM models trained at 8k context length
AFAIK, it is not. They are using the open-source re-implementation of Phil Wang (aka lucidrains), which is available here: https://github.com/lucidrains/PaLM-rlhf-pytorch
- Should AI language models be free software?
Not sure what you mean by putting source code in double quotes, but I don't think the source code is petabytes of text. The GPT-2 implementation is a few hundred lines of Python (in HuggingFace). PaLM + RLHF - Pytorch (basically ChatGPT but with PaLM) is less than 1,000 lines.
- Would a decentralized open-source platform of ChatGPT work?
- Exciting new shit.
- Top 10 Best Open Source GitHub repos for Developers 2023
GitHub Link: https://github.com/lucidrains/PaLM-rlhf-pytorch
- Gather up great coders and make a better Character.Ai
Well... not necessarily. Actually, if you want to be extra thrifty, you could even go without an ML expert. Just use an open-source model, like LaMDA or PaLM. After that, use ChatGPT to build you a basic front end (which would still be better than CAI lol).
- Open-Source competitor to OpenAI?
and PaLM with RLHF from Phil Wang (open model, needs to be trained): https://github.com/lucidrains/PaLM-rlhf-pytorch
- Microsoft in talks to acquire a 49% stake in ChatGPT owner OpenAI
The closest you can get is probably Google's Flan-T5 [1].
It is not the size of the model or the text it was trained on that makes ChatGPT so performant. It is the additional human-assisted training that makes it respond well to instructions. Open-source versions of that are just starting to see the light of day [2].
[1] https://huggingface.co/google/flan-t5-xxl
[2] https://github.com/lucidrains/PaLM-rlhf-pytorch
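The "human-assisted training" mentioned above is the reward-modelling plus RLHF stage that [2] implements. A hedged sketch of those two steps, with the class names, keyword arguments, and toy tensors all taken as assumptions from that repo's README rather than a verified recipe:

```python
import torch
from palm_rlhf_pytorch import PaLM, RewardModel, RLHFTrainer  # names assumed from the repo's README

palm = PaLM(num_tokens = 20000, dim = 512, depth = 12)

# Step 1: fit a reward model on human preference labels (mock data here).
reward_model = RewardModel(palm, num_binned_output = 5)   # e.g. ratings from 1 to 5
seq = torch.randint(0, 20000, (1, 1024))
prompt_mask = torch.zeros(1, 1024).bool()                 # marks which tokens are prompt vs. response
labels = torch.randint(0, 5, (1,))                        # human rating for this completion
loss = reward_model(seq, prompt_mask = prompt_mask, labels = labels)
loss.backward()

# Step 2: fine-tune the language model against the reward model with RL (PPO-style).
prompts = torch.randint(0, 20000, (100, 256))             # token ids of prompts to optimize on
trainer = RLHFTrainer(palm = palm, reward_model = reward_model, prompt_token_ids = prompts)
trainer.train(num_episodes = 100)
```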
- Will we have a free version of ChatGPT (GPT-3) similar to Stable Diffusion?
petals
- Mistral Large
So how long until we can do an open-source Mistral Large?
We could possibly make a start on Petals or some other open-source distributed training cluster?
[0] https://petals.dev/
- Distributed Inference and Fine-Tuning of Large Language Models over the Internet
You can check out their project at https://github.com/bigscience-workshop/petals
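To make "distributed inference over the Internet" concrete, here is a rough client-side sketch, assuming Petals' `AutoDistributedModelForCausalLM` wrapper and a publicly hosted swarm model such as `petals-team/StableBeluga2` (both taken from the project's README, so treat them as assumptions):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM  # class name assumed from the Petals README

model_name = "petals-team/StableBeluga2"  # a model assumed to be served on the public swarm

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Only the embeddings and a few blocks run locally; the remaining transformer
# blocks are executed by volunteer servers across the Internet.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

The same client API reportedly supports parameter-efficient fine-tuning, which is what the paper title above refers to.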
- Make no mistake—AI is owned by Big Tech
- Would you donate computation and storage to help build an open source LLM?
- Run 70B LLM Inference on a Single 4GB GPU with This New Technique
There is already an implementation along the same lines, using a torrent-style architecture.
https://petals.dev/
- Run LLMs in BitTorrent style
Check it out at Petals.dev.
- Is distributed computing dying, or just fading into the background?
- Ask HN: Are there any projects currently exploring distributed AI training?
https://github.com/bigscience-workshop/petals
- Mistral 7B, The Complete Guide to the Best 7B Model
https://github.com/bigscience-workshop/petals
Inference only: https://lite.koboldai.net/
- Run LLMs at home, BitTorrent‑style
What are some alternatives?
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
llama - Inference code for Llama models
alpaca-lora - Instruct-tune LLaMA on consumer hardware
ggml - Tensor library for machine learning
trlx - A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.