| | trlx | stylegan-t |
|---|---|---|
| Mentions | 6 | 5 |
| Stars | 4,332 | 1,124 |
| Star growth (month over month) | 1.1% | 0.2% |
| Activity | 7.9 | 1.1 |
| Last commit | 4 months ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
## trlx
- Recapping the AI, Machine Learning and Data Science Meetup — May 2, 2024
  Transformer Reinforcement Learning X on GitHub
- Why did Stability not copy Midjourney's RLHF process? And what's the future of Stable Diffusion?
  We drove and released TRLX, a leading RLHF framework from our Carper AI lab that is used by some of the biggest companies in the world: https://github.com/CarperAI/trlx
- [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003
  If you check out the trlx repo, they have some examples, including one showing how they trained SFT and PPO on the HH dataset. So it's basically that, but with LLaMA. https://github.com/CarperAI/trlx/blob/main/examples/hh/sft_hh.py
- Sam Altman: OpenAI's GPT-4 will launch only when they can do it safely & responsibly. "In general we are going to release technology much more slowly than people would like. We're going to sit on it for much longer…". Also confirmed video model in the works.
  We'll release our first trained model with Stability AI soon. If you want to start tinkering with RLHF now, we're also helping develop TRLX, the open source library for reinforcement learning with transformers: https://github.com/CarperAI/trlx
- [P] RLHF Learning to Summarize: Implementation by CarperAI with trlX
  trlX library here: https://github.com/CarperAI/trlx
- Will we ever see an open source alternative to ChatGPT?
## stylegan-t
- Why did Stability not copy Midjourney's RLHF process? And what's the future of Stable Diffusion?
  My hope these days is that newer (not actually new, but you get the point) techniques like StyleGAN and GigaGAN may give the open source generative AI community a fresh boost going forward. We'll see how well those projects can be optimized for consumer-grade hardware.
- NVIDIA's New AI: Wow, 30X Faster Than Stable Diffusion! … but could we do this kind of refining in SD, see comment
  It says "coming soon": https://github.com/autonomousvision/stylegan-t
- Nvidia's New StyleGAN-T Is 30X Faster Than Stable Diffusion
  Release aimed for the "end of March": https://github.com/autonomousvision/stylegan-t/issues/3
- Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis | StyleGAN-T
- StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis
## What are some alternatives?
alpaca-lora - Instruct-tune LLaMA on consumer hardware
gigagan-pytorch - Implementation of GigaGAN, new SOTA GAN out of Adobe. Culmination of nearly a decade of research into GANs
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
trl - Train transformer language models with reinforcement learning.
RL4LMs - A modular RL library to fine-tune language models to human preferences
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
summarize-from-feedback - Code for "Learning to summarize from human feedback"