| | trlx | summarize-from-feedback |
|---|---|---|
| Mentions | 6 | 4 |
| Stars | 4,332 | 949 |
| Growth | 1.1% | 1.3% |
| Activity | 7.9 | 2.8 |
| Latest commit | 4 months ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trlx
-
Recapping the AI, Machine Learning and Data Science Meetup — May 2, 2024
Transformer Reinforcement Learning X on GitHub
-
Why did Stability not copy Midjourney's RLHF process? And what's the future of Stable Diffusion?
We developed and released TRLX, a leading RLHF framework, from our Carper AI lab; it is used by some of the biggest companies in the world: https://github.com/CarperAI/trlx
-
[R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003
If you check out the trlx repo, they have some examples, including one showing how they trained SFT and PPO on the HH dataset. So it’s basically that, but with LLaMA. https://github.com/CarperAI/trlx/blob/main/examples/hh/sft_hh.py
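The SFT stage in that example boils down to maximizing the likelihood of human demonstrations. A minimal, library-free sketch of the token-level cross-entropy objective (toy model distribution and made-up tokens, not the trlx API) might look like:

```python
import math

# Toy next-token distributions from a hypothetical model: for each position,
# a dict mapping candidate tokens to probabilities. In real SFT these come
# from the language model's softmax output.
model_probs = [
    {"Hello": 0.7, "Hi": 0.3},
    {"world": 0.6, "there": 0.4},
]

# Human demonstration the model should imitate (e.g. an HH-style reply).
target_tokens = ["Hello", "world"]

def sft_loss(probs, targets):
    """Average negative log-likelihood of the target tokens --
    the cross-entropy objective minimized during supervised fine-tuning."""
    nll = 0.0
    for dist, tok in zip(probs, targets):
        nll -= math.log(dist[tok])
    return nll / len(targets)

print(round(sft_loss(model_probs, target_tokens), 4))
```

Fine-tuning drives this loss toward zero, i.e. probability 1 on every demonstrated token; the PPO stage then optimizes a learned reward instead of imitation.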
-
Sam Altman: OpenAI’s GPT-4 will launch only when they can do it safely & responsibly. “In general we are going to release technology much more slowly than people would like. We're going to sit on it for much longer…”. Also confirmed video model in the works.
We’ll release our first trained model with Stability AI soon. If you want to start tinkering with RLHF now, we’re also helping develop TRLX: https://github.com/CarperAI/trlx — the open source library for reinforcement learning with transformers.
-
[P] RLHF Learning to Summarize: Implementation by CarperAI with trlX
trlX library here: https://github.com/CarperAI/trlx
-
Will we ever see an open source alternative to ChatGPT?
summarize-from-feedback
-
Learning to Summarize from Human Feedback
Note that they released code, models and raw data here: https://github.com/openai/summarize-from-feedback
-
Need a Sanity Check on World vs. Spatial MoE Models
Generating well-written human text answering specific prompts is very costly, as it often requires hiring part-time staff (rather than being able to rely on product users or crowdsourcing). Thankfully, the scale of data used in training the reward model for most applications of RLHF (~50k labeled preference samples) is not as expensive. However, it is still a higher cost than academic labs would likely be able to afford. Currently, there only exists one large-scale dataset for RLHF on a general language model (from Anthropic) and a couple of smaller-scale task-specific datasets (such as summarization data from OpenAI). The second challenge of data for RLHF is that human annotators can often disagree, adding a substantial potential variance to the training data without ground truth.
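The ~50k labeled preference samples mentioned above are typically pairs (chosen, rejected); the reward model is trained with a pairwise Bradley-Terry-style loss that pushes the chosen completion's score above the rejected one. A minimal sketch with hypothetical reward-model scores:

```python
import math

def pairwise_loss(r_chosen, r_rejected):
    """Negative log-sigmoid of the score margin: the standard
    preference loss used to train RLHF reward models."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Hypothetical scores for one labeled preference pair.
print(round(pairwise_loss(2.0, 0.5), 4))  # correct ranking -> small loss
print(round(pairwise_loss(0.5, 2.0), 4))  # wrong ranking -> large loss
```

Annotator disagreement shows up here directly: if two labelers flip the pair, the same margin is penalized in both directions, which is the variance-without-ground-truth problem the paragraph describes.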
-
[P] RLHF Learning to Summarize: Implementation by CarperAI with trlX
Found relevant code at https://github.com/openai/summarize-from-feedback + all code implementations here
-
The Great Software Stagnation
> Software 2.0 is happening right now. GPT-3 and Tesla FSD are examples of this.
I agree with this. As an anecdote, I've spent the past decade explaining to clients that things like natural language question answering and abstractive summarization are impossible, and now we have OpenAI and others dropping pretrained models like https://github.com/openai/summarize-from-feedback that turn all those assumptions on their head. There are caveats, of course, but I've gone from a deep learning skeptic (I started my career with "traditional" ML and NLP) to believing that these sorts of techniques are truly revolutionary and that we are still only scratching the surface of what's possible with them.
What are some alternatives?
alpaca-lora - Instruct-tune LLaMA on consumer hardware
programming-languages-genealogical-tree - Programming languages genealogical tree
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
PurefunctionPipelineDataflow - My Blog: The Math-based Grand Unified Programming Theory: The Pure Function Pipeline Data Flow with principle-based Warehouse/Workshop Model
trl - Train transformer language models with reinforcement learning.
verona - Research programming language for concurrent ownership
RL4LMs - A modular RL library to fine-tune language models to human preferences
dolt - Dolt – Git for Data
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
curation-corpus - Code for obtaining the Curation Corpus abstractive text summarisation dataset
gigagan-pytorch - Implementation of GigaGAN, new SOTA GAN out of Adobe. Culmination of nearly a decade of research into GANs