ChatRWKV vs laion.ai

| | ChatRWKV | laion.ai |
|---|---|---|
| Mentions | 28 | 25 |
| Stars | 9,282 | 104 |
| Growth | - | 4.8% |
| Activity | 8.3 | 8.6 |
| Latest commit | 11 days ago | 10 days ago |
| Language | Python | HTML |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Mentions of ChatRWKV
- People who've used RWKV, what's your wishlist for it?
- How the RWKV language model works
- Questions about memory, tree-of-thought, planning
Most LLMs actually do a decent job out of the box if you ask them for step-by-step instructions. Tree-of-thought is one way to improve the results; Reflexion is another that can be used separately or in combination. The downside is that most models quickly run into their token limit (around 2k for most). However, the new SuperHOT models can handle up to 8k, and then there are the RWKV-Raven models: they are RNNs rather than transformers like all the other LLMs, so they can theoretically handle infinite context lengths (but they lose "focus" after a while).
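A toy sketch (not code from any of these projects) of why the RNN formulation avoids a hard context limit: a transformer has to keep keys and values for every past token it attends over, while an RWKV-style RNN folds the whole history into a fixed-size state. All sizes and the decay constant below are made up for illustration:

```python
# Toy contrast: transformer-style KV cache vs. RNN-style fixed state.
# Nothing here is real model code; it only illustrates memory growth.
import numpy as np

d = 8                                # toy hidden size

kv_cache = []                        # transformer: grows O(T) with context length
def transformer_step(x):
    kv_cache.append((x, x))          # stash this token's (key, value)
    # attention would scan the whole cache: O(T) work and memory per token
    return sum(v for _, v in kv_cache) / len(kv_cache)

state = np.zeros(d)                  # RNN: one fixed-size state, O(1) in context length
def rnn_step(x, decay=0.9):
    global state
    state = decay * state + x        # constant work and memory per token
    return state

for _ in range(1_000):               # context keeps growing...
    x = np.random.randn(d)
    transformer_step(x)              # ...cache now holds 1,000 entries
    rnn_step(x)                      # ...state is still just d floats
```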
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
RWKV models inference: https://github.com/BlinkDL/ChatRWKV (fast CUDA).
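For context, loading such a checkpoint through the `rwkv` pip package that ChatRWKV documents looks roughly like the sketch below; treat the paths and the strategy string as placeholders rather than a verified recipe:

```python
# Rough sketch based on the rwkv pip package's documented interface;
# exact paths and strategy strings may differ in your setup.
from rwkv.model import RWKV
from rwkv.utils import PIPELINE

# Point at a downloaded checkpoint (the package expects the path without ".pth").
# "cuda fp16" selects the fast CUDA path; "cpu fp32" works without a GPU.
model = RWKV(
    model='RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192',
    strategy='cuda fp16',
)
pipeline = PIPELINE(model, '20B_tokenizer.json')  # tokenizer file from the repo
```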
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I think I heard the RWKV models are very fast, don't need much RAM, and can have huge context windows, so maybe their 14B can work for me. I wasn't sure how ready for use they were, but looking into it more, stuff like rwkv.cpp and ChatRWKV and a whole lot of other community projects are mentioned on their GitHub.
- I created a simple implementation of the RWKV language model (RWKV competes with the dominant Transformers-based approach, which is the "T" in GPT)
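The heart of such an implementation is the RWKV-4 "WKV" recurrence that stands in for attention. Below is a numerically naive per-channel sketch; real implementations shift the exponents to avoid overflow, and the decay w, current-token bonus u, and toy inputs here are made-up illustration values:

```python
# Numerically naive sketch of the RWKV-4 "WKV" recurrence that replaces
# attention, shown for a single channel. Real implementations shift the
# exponents for stability; w, u, and the inputs below are illustrative only.
import numpy as np

def wkv(k, v, w=0.5, u=0.1):
    """k, v: shape-(T,) arrays for one channel; returns the WKV output per step."""
    a, b = 0.0, 0.0                      # running exp-weighted sums: numerator, denominator
    out = []
    for kt, vt in zip(k, v):
        e = np.exp(u + kt)               # the current token gets the u bonus
        out.append((a + e * vt) / (b + e))
        a = np.exp(-w) * a + np.exp(kt) * vt   # decay the past, absorb the present
        b = np.exp(-w) * b + np.exp(kt)
    return np.array(out)

print(wkv(np.random.randn(16) * 0.1, np.random.randn(16)))
```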
- [P] Raven 7B & 14B 🐦(RWKV finetuned on Alpaca+CodeAlpaca+Guanaco) and Gradio Demo for Raven 7B
You can use ChatRWKV v2 (https://github.com/BlinkDL/ChatRWKV) to run Raven🐦 (compatible with vanilla RWKV):
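Continuing the loading sketch above, generating a chat turn from a Raven checkpoint might look like this; the Bob/Alice turn format follows ChatRWKV's chat examples, and the sampling parameters are arbitrary illustrative values:

```python
# Sketch continuing the loading example above; all sampling values are arbitrary.
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(
    model='RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192',
    strategy='cuda fp16',
)
pipeline = PIPELINE(model, '20B_tokenizer.json')

# Raven chat prompts alternate user/bot turns; ChatRWKV's examples use Bob/Alice.
prompt = "Bob: What is special about RWKV compared to a transformer?\n\nAlice:"
args = PIPELINE_ARGS(temperature=1.0, top_p=0.7)
print(pipeline.generate(prompt, token_count=200, args=args))
```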
- What's the current state of actually free and open source LLMs?
I feel compelled to summon /u/bo_peng here and to mention his work on RWKV. (See https://github.com/BlinkDL/ChatRWKV and related repos.)
- Try Google's Bard
- [D] Totally Open Alternatives to ChatGPT
Please test https://github.com/BlinkDL/ChatRWKV, which is a good chatbot despite being trained only on the Pile :)
Mentions of laion.ai
- How Open is Generative AI? Part 2
LAION (Large-scale Artificial Intelligence Open Network), a German non-profit established in 2020, is dedicated to advancing open-source models and datasets (primarily under Apache 2 and MIT licenses) to foster open research and the evolution of benevolent AI. Their datasets, encompassing both images and text, have been pivotal in the training of renowned text-to-image models like Stable Diffusion.
- How artists are sabotaging AI to take revenge on image generators
> there is going to be a "pre-GPT" internet training set from 2022
Well, yeah, there are several here, and I think all the major image generators are using some combination of them as their starting points: https://laion.ai/
> As AI increases as an overall % of all online posts and activity it will death spiral on model quality.
Nope, it will just mean that it becomes more expensive to source additional training data on top of the massive trove of existing data that is "clean" (free of intentional poisoning). Much of that data isn't perfectly captioned, and human work on improving captions can increase its utility in model training, as can more advanced models with more advanced text encoders, etc.
If poisoning were widespread, it wouldn't impact "big model" quality much -- they aren't grabbing new random data from the internet for continuous training. It might drive up the expense of community fine-tuning, which often does depend on sourcing representative imagery for target styles or concepts from, among other places, the internet.
- [D] Why is most Open Source AI happening outside the USA?
Also don't forget https://laion.ai/ from Germany. They focus more on datasets, but still.
- OpenAI is too cheap to beat
I think the weird thing about this is that it's completely true right now but in X months it may be totally outdated advice.
For example, efforts like OpenMOE https://github.com/XueFuzhao/OpenMoE or similar will probably eventually lead to very competitive performance and cost-effectiveness for open source models. At least in terms of competing with GPT-3.5 for many applications.
Also see https://laion.ai/
I also believe that within say 1-3 years there will be a different type of training approach that does not require such large datasets or manual human feedback.
- MJ images sources?
Billions. MJ's initial training dataset was from LAION: https://laion.ai/ . Not sure which version, and I am pretty sure additional data has been added since MJ v1, but MJ doesn't release anything more exact. However my guess is: more billions, lol.
- AI tools apps in one place sorted by category
Missing LAION and OpenAssistant: https://laion.ai/
- GPT detectors are biased against non-native English writers
- Model Suggestions
As far as I know, the LLaMA weights are not allowed for commercial use, but if you are willing to do full training and change all of its weights, it would probably be fine. There was a discussion on this topic on forums and no one was sure; you can research it. Also, you can take a look at laion.ai and Dolly from Databricks; they are open source and allowed for commercial use, if they meet your needs.
- HuggingChat, the first open source alternative to ChatGPT
- Hugging Face releases its own version of ChatGPT
That's OpenAssistant's / LAION's AI model; Hugging Face provided the infrastructure.
What are some alternatives?
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
llm-foundry - LLM training code for Databricks foundation models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama - Inference code for Llama models
SillyTavern - LLM Frontend for Power Users.
stable-diffusion-webui - Stable Diffusion web UI
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
gpt4all - gpt4all: run open-source LLMs anywhere
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
KoboldAI
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.