lm-human-preferences
ChatRWKV
| | lm-human-preferences | ChatRWKV |
|---|---|---|
| Mentions | 8 | 28 |
| Stars | 1,106 | 9,276 |
| Growth | 5.3% | - |
| Activity | 2.7 | 8.3 |
| Latest commit | 9 months ago | 9 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
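The site does not publish its exact activity formula, but the idea of weighting recent commits more heavily can be sketched with a simple exponential decay by commit age. The function name, the half-life parameter, and the sample ages below are all illustrative assumptions, not the tracker's real method:

```python
# Hypothetical sketch of a recency-weighted activity score.
# Each commit contributes a weight that halves every `half_life` weeks,
# so recent commits count for more than old ones.

def activity_score(commit_ages_weeks, half_life=26):
    """Sum of per-commit weights, decaying with commit age in weeks."""
    return sum(0.5 ** (age / half_life) for age in commit_ages_weeks)

recent = activity_score([1, 2, 3, 4])           # four fresh commits
stale = activity_score([100, 110, 120, 130])    # four year-old commits
print(recent > stale)  # same commit count, but the recent project scores higher
```

Any monotonically decaying weight (linear, exponential, windowed) produces the same qualitative ranking; the decay rate only controls how fast a dormant project's score fades.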
lm-human-preferences
- Ask HN: Open-source GPT-3 alternatives
- OpenAI's continued success: and how they came to create the most advanced AI of 2023, ChatGPT.
- Sam Altman on the best and worst case scenario for AI - "...the good case is just so unbelievably good that you sound like a really crazy person to start talking about it."
Lest you think that sounds like too galaxy-brained a possibility, it has already happened at OpenAI (scroll down to "Bugs can optimize for bad behavior"), just with a model that was very far from being capable enough to be dangerous.
- Value head in GPT2
Found relevant code at https://github.com/openai/lm-human-preferences + all code implementations here
- Should we stick to the devil we know?
That's why, when they're serious, they use RL for fine-tuning from human preferences (it would be hilarious if this attempt to solve the terrible bias you take to be evidence of AGI threat ends up creating a Woke Singleton itself, btw); it's a powerful general approach, and I see no sign of it being applied here.
- Dall-E 2
The kind of measures they are taking, like simply deleting wholesale anything problematic, don't really have a '-1'.
But amusingly, exactly that did happen in one of their GPT experiments! https://openai.com/blog/fine-tuning-gpt-2/
- Discussion Thread
- [D] Applications for using reinforcement learning to fine-tune GPT-2
Code for https://arxiv.org/abs/1909.08593 found: https://github.com/openai/lm-human-preferences
ChatRWKV
- People who've used RWKV, whats your wishlist for it?
- How the RWKV language model works
- Questions about memory, tree-of-thought, planning
Most LLMs actually do a decent job out of the box if you ask them for step-by-step instructions. Tree of Thought is one way to improve the results; Reflexion is another that can be used separately or in addition. The downside is that most models will quickly run into their token limit (around 2k for most). However, the new SuperHOT models can handle up to 8k, and then there are the RWKV-Raven models: they are RNNs rather than transformers like all the other LLMs, and can theoretically handle infinite context lengths (but they lose "focus" after a while).
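The memory claim in that comment comes down to a basic architectural difference, which can be shown with a toy sketch (this is not the real RWKV or transformer code, just an illustration of the memory scaling): a transformer's attention cache grows with every past token, while an RNN folds each token into a fixed-size state.

```python
# Toy illustration of context-memory scaling, not an actual model.

def transformer_step(kv_cache, token):
    """A transformer keeps every past token in its attention cache: O(n) memory."""
    kv_cache.append(token)
    return kv_cache

def rnn_step(state, token):
    """An RNN folds each token into a fixed-size state (here, a running sum): O(1) memory."""
    return state + token

tokens = list(range(10_000))

cache = []
for t in tokens:
    cache = transformer_step(cache, t)

state = 0
for t in tokens:
    state = rnn_step(state, t)

print(len(cache))  # grows with the input: 10000
print(state)       # one fixed-size value, however long the input
```

This is why an RNN-style model like RWKV has no hard context limit in principle, and also why "focus" can fade: everything it remembers must be squeezed into that fixed-size state.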
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
RWKV models inference: https://github.com/BlinkDL/ChatRWKV (fast CUDA).
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I think I heard the RWKV models are very fast, don't need much RAM, and can have huge context windows, so maybe their 14B can work for me. I wasn't sure how ready for use they were, but looking more into it, projects like rwkv.cpp and ChatRWKV and a whole lot of other community projects are mentioned on their GitHub.
- I created a simple implementation of the RWKV language model (RWKV competes with the dominant Transformers-based approach which is the "T" in GPT)
- [P] Raven 7B & 14B 🐦 (RWKV finetuned on Alpaca+CodeAlpaca+Guanaco) and Gradio Demo for Raven 7B
You can use ChatRWKV v2 (https://github.com/BlinkDL/ChatRWKV) to run Raven🐦 (compatible with vanilla RWKV):
- What's the current state of actually free and open source LLMs?
I feel compelled to summon /u/bo_peng here and to mention his work on RWKV. (See https://github.com/BlinkDL/ChatRWKV and related repos.)
- Try Google's Bard
- [D] Totally Open Alternatives to ChatGPT
Please test https://github.com/BlinkDL/ChatRWKV, which is a good chatbot despite being trained only on the Pile :)
What are some alternatives?
trl - Train transformer language models with reinforcement learning.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
dalle-mini - DALL·E Mini - Generate images from a text prompt
SillyTavern - LLM Frontend for Power Users.
tensorrtx - Implementation of popular deep learning networks with TensorRT network definition API
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model
gpt4all - gpt4all: run open-source LLMs anywhere
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
KoboldAI