trlx
A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (by CarperAI)
llm-law-hackathon
POC project from the LLM x Law Hackathon by Stanford #3 (by dope-projects)
| | trlx | llm-law-hackathon |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 4,367 | 0 |
| Growth | 1.0% | - |
| Activity | 7.9 | 6.2 |
| Latest commit | 5 months ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trlx
Posts with mentions or reviews of trlx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-02.
- Recapping the AI, Machine Learning and Data Science Meetup — May 2, 2024
  Transformer Reinforcement Learning X on GitHub
- Why did Stability not copy Midjourney's RLHF process? And what's the future of Stable Diffusion?
  For example, our Carper AI lab developed and released TRLX, a leading RLHF framework used by some of the biggest companies in the world: https://github.com/CarperAI/trlx
- [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003
  If you check out the trlx repo, it has several examples, including one showing how they trained SFT and PPO on the HH dataset. Alpaca is basically that, but with LLaMA. https://github.com/CarperAI/trlx/blob/main/examples/hh/sft_hh.py
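The PPO stage mentioned above optimizes the policy with PPO's clipped surrogate objective. The sketch below is a self-contained toy illustration of that objective in plain Python, with made-up numbers; it is not trlx's actual implementation.

```python
# Toy illustration of PPO's clipped surrogate objective, the update rule
# used during RLHF fine-tuning. Plain Python, no dependencies; the input
# numbers are invented for illustration only.

def ppo_clipped_objective(ratios, advantages, eps=0.2):
    """Mean clipped surrogate objective (the quantity PPO maximizes)."""
    total = 0.0
    for r, a in zip(ratios, advantages):
        clipped = max(1 - eps, min(r, 1 + eps))  # clip ratio to [1-eps, 1+eps]
        total += min(r * a, clipped * a)         # pessimistic (lower) bound
    return total / len(ratios)

# Hypothetical per-token probability ratios pi_new/pi_old and advantages
ratios = [1.1, 0.8, 1.5]
advantages = [1.0, -0.5, 2.0]
print(round(ppo_clipped_objective(ratios, advantages), 4))  # → 1.0333
```

The clipping keeps the new policy from moving too far from the old one in a single update, which is what makes PPO stable enough for fine-tuning large language models against a learned reward.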
- Sam Altman: OpenAI’s GPT-4 will launch only when they can do it safely & responsibly. “In general we are going to release technology much more slowly than people would like. We're going to sit on it for much longer…”. Also confirmed video model in the works.
  We’ll release our first trained model with Stability AI soon. If you want to start tinkering with RLHF now, we’re also helping develop TRLX: https://github.com/CarperAI/trlx — the open source library for reinforcement learning with transformers.
- [P] RLHF Learning to Summarize: Implementation by CarperAI with trlX
  trlX library here: https://github.com/CarperAI/trlx
- Will we ever see an open-source alternative to ChatGPT?
llm-law-hackathon
Posts with mentions or reviews of llm-law-hackathon. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-02.
- Recapping the AI, Machine Learning and Data Science Meetup — May 2, 2024
  Legal Vector Search application! on GitHub
What are some alternatives?
When comparing trlx and llm-law-hackathon, you can also consider the following projects:
alpaca-lora - Instruct-tune LLaMA on consumer hardware
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
trl - Train transformer language models with reinforcement learning.
RL4LMs - A modular RL library to fine-tune language models to human preferences
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
summarize-from-feedback - Code for "Learning to summarize from human feedback"
gigagan-pytorch - Implementation of GigaGAN, new SOTA GAN out of Adobe. Culmination of nearly a decade of research into GANs