ColossalAI vs PaLM-rlhf-pytorch

| | ColossalAI | PaLM-rlhf-pytorch |
|---|---|---|
| Mentions | 42 | 25 |
| Stars | 39,061 | 7,747 |
| Growth | 0.2% | 0.2% |
| Activity | 9.7 | 6.6 |
| Latest commit | 1 day ago | 8 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ColossalAI
- FLaNK AI - April 22, 2024
- Making large AI models cheaper, faster and more accessible
- ColossalChat: An Open-Source Solution for Cloning ChatGPT with an RLHF Pipeline
> open-source a complete RLHF pipeline ... based on the LLaMA pre-trained model
I've gotten to the point where, when I see "open source AI", I know it means "well, except for $some_other_dependencies".
Anyway: https://scribe.rip/@yangyou_berkeley/colossalchat-an-open-so... and https://github.com/hpcaitech/ColossalAI#readme (Apache 2) can at least save you some medium.com heartache. (A minimal sketch of what such an RLHF pipeline involves follows below.)
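For readers wondering what a "complete RLHF pipeline" actually contains, here is a minimal sketch of the three standard stages (supervised fine-tuning, reward-model training, RL fine-tuning) in plain PyTorch. Everything here is a toy stand-in written for illustration; none of it is ColossalAI's code, and the RL step uses a crude REINFORCE update where real pipelines use PPO.

```python
# Toy three-stage RLHF skeleton (illustrative only; not ColossalAI's API).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 100, 32

class TinyLM(nn.Module):
    """Stand-in for the pretrained LLaMA-style policy model."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):  # (batch, seq) -> (batch, seq, vocab) logits
        return self.head(self.emb(tokens))

policy = TinyLM()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stage 1: supervised fine-tuning on prompt -> response demonstrations.
demo = torch.randint(0, VOCAB, (4, 16))            # fake demonstration tokens
logits = policy(demo[:, :-1])
sft_loss = F.cross_entropy(logits.reshape(-1, VOCAB), demo[:, 1:].reshape(-1))
sft_loss.backward(); opt.step(); opt.zero_grad()

# Stage 2: reward model trained on human preference pairs (chosen vs. rejected).
reward_model = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Flatten(), nn.Linear(DIM * 16, 1))
chosen = torch.randint(0, VOCAB, (4, 16))
rejected = torch.randint(0, VOCAB, (4, 16))
rm_loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
rm_loss.backward()

# Stage 3: RL step -- score sampled responses with the reward model and push
# up the log-probability of high-reward samples (a crude REINFORCE update
# standing in for the clipped PPO update a real pipeline would use).
sample = torch.randint(0, VOCAB, (4, 16))          # pretend the policy sampled these
reward = reward_model(sample).squeeze(-1).detach()
logp = F.log_softmax(policy(sample[:, :-1]), dim=-1)
token_logp = logp.gather(-1, sample[:, 1:].unsqueeze(-1)).squeeze(-1).sum(-1)
rl_loss = -(reward * token_logp).mean()
rl_loss.backward(); opt.step()
```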
- Meet ColossalChat: An Open-Source AI Solution For Cloning ChatGPT With A Complete RLHF Pipeline
Quick Read: https://www.marktechpost.com/2023/04/01/meet-colossalchat-an-open-source-ai-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline/ Github: https://github.com/hpcaitech/ColossalAI Examples: https://chat.colossalai.org/
- A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data
One of the current methods for training competing models is to have ChatGPT literally create prompt -> completion datasets. That's what was used for https://github.com/hpcaitech/ColossalAI: a model based on the LLaMA weights released by Facebook, then fine-tuned on ChatGPT 3.5 prompts and completions. So yes, there is a good chance that Google is literally using ChatGPT in the training loop. (A sketch of that data-generation loop follows below.)
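As a concrete picture of that distillation loop, here is a hedged sketch: query the OpenAI chat API for completions to a handful of seed prompts and dump prompt -> completion pairs to JSONL for a later supervised fine-tuning run. The model name, seed prompts, and file layout are illustrative assumptions, not what the ColossalAI team actually used.

```python
# Hypothetical sketch of the "ChatGPT as data generator" distillation loop.
# Assumes the openai>=1.0 Python client; the fine-tuning step is only outlined.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_prompts = [
    "Explain gradient descent to a high-school student.",
    "Write a haiku about distributed training.",
]

pairs = []
for prompt in seed_prompts:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    pairs.append({"prompt": prompt, "completion": resp.choices[0].message.content})

# Dump to JSONL; a supervised fine-tuning script (e.g. over LLaMA weights)
# would then minimize cross-entropy on the completion tokens.
with open("distilled.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```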
- Colossal-AI: open-source RLHF pipeline based on LLaMA pre-trained model
- ColossalChat
- ColossalChat: An Open-Source Solution for Cloning ChatGPT with RLHF Pipeline
Here's the GitHub repo from the article:
https://github.com/hpcaitech/ColossalAI
- Open source solution replicates ChatGPT training process
The article talks about their RLHF implementation briefly; there are more details here: https://github.com/hpcaitech/ColossalAI/blob/a619a190df71ea3... (a sketch of the PPO objective at the heart of such trainers follows below).
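For context on what such an implementation centers on: most RLHF trainers optimize the clipped PPO surrogate from Schulman et al. (2017). The sketch below is a generic version of that objective, not code taken from the linked ColossalAI file.

```python
# Generic clipped PPO policy loss (illustrative; not ColossalAI's code).
import torch

def ppo_policy_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective from the PPO paper.

    logp_new / logp_old: log-probs of the sampled tokens under the current
    and the sampling (old) policy; advantages: estimated advantages, usually
    reward-model scores minus a value baseline, with a KL penalty folded in.
    """
    ratio = (logp_new - logp_old).exp()              # importance ratio pi/pi_old
    unclipped = ratio * advantages
    clipped = ratio.clamp(1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()     # maximize the surrogate

# Toy check that the loss is differentiable w.r.t. the new policy's log-probs.
logp_new = torch.randn(8, requires_grad=True)
loss = ppo_policy_loss(logp_new, torch.randn(8), torch.randn(8))
loss.backward()
```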
- How can I make my own ChatGPT?
Here’s the project on GitHub: https://github.com/hpcaitech/ColossalAI
PaLM-rlhf-pytorch
- How should I get an in-depth mathematical understanding of generative AI?
ChatGPT isn't open-sourced, so we don't know what the actual implementation is. I think you can read Open Assistant's source code for application design. If that is too much, try Open Chat Toolkit's source code for developer tools. If you need a very bare implementation, you should go for lucidrains/PaLM-rlhf-pytorch (approximate usage shown below).
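For the "very bare implementation" route, lucidrains/PaLM-rlhf-pytorch exposes roughly the shape below. This is reconstructed from memory of its README, so treat the exact class and argument names as assumptions and check the repo before copying.

```python
# Approximate usage of lucidrains/PaLM-rlhf-pytorch, from memory of its
# README -- verify class/argument names against the repo before relying on them.
import torch
from palm_rlhf_pytorch import PaLM, RewardModel, RLHFTrainer

# 1. Pretrain a (small) PaLM on raw token sequences.
palm = PaLM(num_tokens=20000, dim=512, depth=12)
seq = torch.randint(0, 20000, (1, 1024))
loss = palm(seq, return_loss=True)
loss.backward()

# 2. Train a reward model on human feedback.
reward_model = RewardModel(palm, num_binned_output=5)
prompt_mask = torch.zeros(1, 1024).bool()   # marks which tokens are the prompt
labels = torch.randint(0, 5, (1,))
loss = reward_model(seq, prompt_mask=prompt_mask, labels=labels)
loss.backward()

# 3. RLHF: optimize the language model against the reward model.
prompts = torch.randint(0, 20000, (256, 512))
trainer = RLHFTrainer(palm=palm, reward_model=reward_model, prompt_token_ids=prompts)
trainer.train(num_episodes=100)
```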
- [P] Open-source PaLM models trained at 8k context length
AFAIK, it is not. They are using the open-source re-implementation by Phil Wang (aka lucidrains), which is available here: https://github.com/lucidrains/PaLM-rlhf-pytorch
- Should AI language models be free software?
Not sure what you mean by putting "source code" in double quotes, but I don't think the source code is petabytes of text. The GPT-2 implementation is a few hundred lines of Python (in HuggingFace). PaLM + RLHF - PyTorch (basically ChatGPT but with PaLM) is less than 1,000 lines. (A sketch of how compact a GPT-style block is follows below.)
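To make the "few hundred lines" claim concrete: the core of a GPT-2-style model really is small. A minimal causal decoder block in PyTorch (my own sketch, not the HuggingFace code) fits in about twenty lines; a full model is just an embedding, a stack of these blocks, and a linear head over the vocabulary.

```python
# Minimal GPT-style decoder block (illustrative sketch, not HuggingFace's GPT-2).
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                      # x: (batch, seq, dim)
        h = self.ln1(x)
        # Causal mask: each position may only attend to itself and earlier positions.
        t = x.size(1)
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), diagonal=1)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + attn_out
        return x + self.mlp(self.ln2(x))
```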
- Would a decentralized open-source platform of ChatGPT work?
- Exciting new shit.
- Top 10 Best Open Source GitHub repos for Developers 2023
GitHub Link: https://github.com/lucidrains/PaLM-rlhf-pytorch
- Gather up great coders and make a better Character.Ai
Well... not necessarily. Actually, if you want to be extra thrifty, you could even go without an ML expert. Just use an open-source model, like LaMDA or PaLM. After that, use ChatGPT to build you a basic front end (which would still be better than CAI, lol).
- Open-Source competitor to OpenAI?
and PaLM with RLHF from Phil Wang (open model, needs to be trained): https://github.com/lucidrains/PaLM-rlhf-pytorch
- Microsoft in talks to acquire a 49% stake in ChatGPT owner OpenAI
The closest you can get is probably Google's Flan-T5 [1].
It is not the size of the model or the text it was trained on that makes ChatGPT so performant; it is the additional human-assisted training that makes it respond well to instructions. Open-source versions of that are just starting to see the light of day [2]. (A few lines of standard usage for [1] follow below.)
[1] https://huggingface.co/google/flan-t5-xxl
[2] https://github.com/lucidrains/PaLM-rlhf-pytorch
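Trying the instruction-tuned model from [1] takes a few lines with the standard HuggingFace transformers API. The snippet below swaps in the smaller flan-t5-base checkpoint as an assumption so it runs on modest hardware; substitute google/flan-t5-xxl if you have the memory for it.

```python
# Standard HF transformers usage for an instruction-tuned Flan-T5 model.
# flan-t5-base is used here so it runs on a laptop; swap in google/flan-t5-xxl
# from [1] if you have tens of GB of memory to spare.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

inputs = tokenizer("Translate to German: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```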
- Will we have a free version of ChatGPT (GPT-3) similar to Stable Diffusion?
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Megatron-LM - Ongoing research training transformer models at scale
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
DeepFaceLive - Real-time face swap for PC streaming or video calls
text-generation-webui - A Gradio web UI for Large Language Models with support for multiple inference backends.
ivy - Convert Machine Learning Code Between Frameworks
trlx - A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
Rath - Next-generation automated exploratory data analysis and visualization platform.
fairscale - PyTorch extensions for high performance and large scale training.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.