ChatRWKV
ChatRWKV is like ChatGPT, but it is powered by the RWKV (100% RNN) language model and is open source. (by BlinkDL)
pygmalion.cpp
C/C++ implementation of PygmalionAI/pygmalion-6b (by AlpinDale)
| | ChatRWKV | pygmalion.cpp |
|---|---|---|
| Mentions | 28 | 6 |
| Stars | 9,282 | 57 |
| Growth | - | - |
| Activity | 8.3 | 10.0 |
| Last commit | 11 days ago | about 1 year ago |
| Language | Python | C |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ChatRWKV
Posts with mentions or reviews of ChatRWKV. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.
- People who've used RWKV, whats your wishlist for it?
- How the RWKV language model works
- Questions about memory, tree-of-thought, planning
Most LLMs actually do a decent job out of the box if you ask them for step-by-step instructions. Tree of thought is one way to improve the results; Reflexion is another that can be used separately or in addition. The downside is that most models quickly run into their token limit (around 2k for most). However, the newer SuperHOT models can handle up to 8k, and then there are the RWKV-Raven models: they are RNNs rather than transformers like all the other LLMs and can theoretically handle infinite context lengths, although they lose "focus" after a while (a toy sketch of this constant-state idea appears after this list).
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
RWKV models inference: https://github.com/BlinkDL/ChatRWKV (fast CUDA).
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I think I heard the RWKV models are very fast, don't need much RAM, and can have huge context windows, so maybe their 14b can work for me. I wasn't sure how ready for use they were, but looking more into it, stuff like rwkv.cpp and ChatRWKV and a whole lot of other community projects are mentioned on their GitHub.
- I created a simple implementation of the RWKV language model (RWKV competes with the dominant Transformers-based approach which is the "T" in GPT)
- [P] Raven 7B & 14B 🐦(RWKV finetuned on Alpaca+CodeAlpaca+Guanaco) and Gradio Demo for Raven 7B
You can use ChatRWKV v2 (https://github.com/BlinkDL/ChatRWKV) to run Raven🐦 (compatible with vanilla RWKV); a hedged loading sketch appears after this list.
- What's the current state of actually free and open source LLMs?
I feel compelled to summon /u/bo_peng here and to mention his work on RWKV. (See https://github.com/BlinkDL/ChatRWKV and related repos.)
- Try Google's Bard
- [D] Totally Open Alternatives to ChatGPT
Please test https://github.com/BlinkDL/ChatRWKV, which is a good chatbot despite being trained only on the Pile :)
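A side note on the "infinite context" claim in the memory/tree-of-thought comment above: it rests on RWKV being a recurrent model, so it carries a fixed-size state from token to token instead of attending over an ever-growing context window. The toy sketch below is plain NumPy, not RWKV's actual time-mix/channel-mix math, and every size and weight in it is made up; it only illustrates why memory stays constant no matter how many tokens have been read.

```python
# Toy illustration only: a generic RNN-style recurrence, NOT RWKV's real
# WKV formulas. Hidden size and weights are arbitrary placeholders.
import numpy as np

d = 16                                       # hidden size (arbitrary)
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(d, d))    # input projection
W_rec = rng.normal(scale=0.1, size=(d, d))   # recurrent projection

state = np.zeros(d)                          # fixed-size recurrent state
for _ in range(100_000):                     # arbitrarily long token stream
    x = rng.normal(size=d)                   # stand-in for a token embedding
    state = np.tanh(W_in @ x + W_rec @ state)

# After 100k "tokens" the state is still just `d` floats, whereas a
# transformer would be holding keys/values for every past token.
print(state.shape)                           # (16,)
```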
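The Raven and "fast CUDA" mentions above both point at ChatRWKV for inference. As rough context, here is a hedged sketch of what loading and sampling a Raven checkpoint typically looks like with the `rwkv` pip package that ChatRWKV v2 builds on; the model path, tokenizer file, strategy string, and sampling settings are placeholders, so check the ChatRWKV README for the current interface.

```python
# Hedged sketch, not an official ChatRWKV recipe: assumes `pip install rwkv`,
# a downloaded Raven checkpoint, and the 20B tokenizer JSON from the ChatRWKV
# repo. All paths and settings below are placeholders.
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(
    model="RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192",
    strategy="cuda fp16",   # the "fast CUDA" path; use "cpu fp32" without a GPU
)
pipeline = PIPELINE(model, "20B_tokenizer.json")

prompt = "Bob: What is the capital of France?\n\nAlice:"
args = PIPELINE_ARGS(temperature=1.0, top_p=0.85)
print(pipeline.generate(prompt, token_count=100, args=args))
```

The strategy string is what the "fast CUDA" remark refers to: it decides which layers sit on the GPU and at what precision, and it can also split a larger checkpoint between GPU and CPU.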
pygmalion.cpp
Posts with mentions or reviews of pygmalion.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-07.
- How/Where to download Pygmalion language models?
Yes, it's called pygmalion.cpp. Not sure if you can run it on vast, but you might be able to run it on your PC. Link: https://github.com/AlpinDale/pygmalion.cpp
- How to use Pygmalion 6b on CPU?
- well.....now what
Alternatives:
  - Pygmalion.cpp, for phones with at least 8GB of RAM: https://github.com/AlpinDale/pygmalion.cpp
  - Local KoboldAI (can generate an API key for use with Tavern) or Kobold Horde: https://github.com/KoboldAI/KoboldAI-Client / https://lite.koboldai.net/
  - Local Oobabooga (supports 4- and 8-bit quantization; can generate an API key for use with Tavern): https://github.com/oobabooga/text-generation-webui
- So, is there any way accessing Pyg from mobile?
Pygmalion.cpp: https://github.com/AlpinDale/pygmalion.cpp
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
GPT-J/JT models (legacy f16 formats here, as well as 4-bit quantized ones like this, and Pygmalion; see pyg.cpp)
- Any possibility to make Pygmalion 6B run in 4bit?
https://github.com/AlpinDale/pygmalion.cpp. It can run on an Android e-toaster, but what is love.
What are some alternatives?
When comparing ChatRWKV and pygmalion.cpp, you can also consider the following projects:
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
GPTQ-for-LLaMa - 4 bits quantization of LLMs using GPTQ
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
gpt4all - gpt4all: run open-source LLMs anywhere
KoboldAI
alpaca-lora - Instruct-tune LLaMA on consumer hardware