smolrsrwkv
A relatively basic implementation of RWKV in Rust, written by someone with very little math and ML knowledge. Supports 32-, 8-, and 4-bit evaluation. It can also directly load PyTorch RWKV models. (by KerfuffleV2)
ChatRWKV
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source. (by BlinkDL)
| | smolrsrwkv | ChatRWKV |
|---|---|---|
| Mentions | 6 | 28 |
| Stars | 91 | 9,282 |
| Growth | - | - |
| Activity | 5.6 | 8.3 |
| Last commit | 8 months ago | 4 days ago |
| Language | Rust | Python |
| License | MIT License | Apache License 2.0 |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
smolrsrwkv
Posts with mentions or reviews of smolrsrwkv.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-06.
- Introducing repugnant-pickle, a crate for scraping Python Pickle files in a basic way. Notably, it can deal with (some) PyTorch model files.
For an example of actually using it, you can look at my little project for running inference on RWKV models: https://github.com/KerfuffleV2/smolrsrwkv It uses repugnant-pickle to enable loading PyTorch models directly with no conversion requirement or Python dependencies.
- Is GPT-4 still just a language model trying to predict text?
If you want some proof, I wrote my own application that can run inference on RWKV models (a competing approach to the Transformer architecture that GPT and most current LLMs use): https://github.com/KerfuffleV2/smolrsrwkv
- I created a simple implementation of the RWKV language model (RWKV competes with the dominant Transformers-based approach which is the "T" in GPT)
It can now quantize to 8-bit for 4x memory savings: https://github.com/KerfuffleV2/smolrsrwkv/tree/experiment-quantize
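The 4x figure follows from storing each weight as a single byte instead of a 4-byte f32. As a rough illustration only (this is not smolrsrwkv's actual code; the names and the per-row absmax scheme are assumptions), a minimal 8-bit quantize/dequantize pass in Rust might look like this:

```rust
// Hypothetical sketch of per-row 8-bit absmax quantization.
// Each f32 weight becomes one u8 plus a shared per-row scale,
// which is where the roughly 4x memory saving comes from.

/// A row of weights quantized to u8, plus the scale needed to
/// recover approximate f32 values.
struct QuantizedRow {
    scale: f32,      // absmax of the row divided by 127
    values: Vec<u8>, // quantized weights, centered at 128
}

fn quantize_row(row: &[f32]) -> QuantizedRow {
    // Use the largest absolute value so the full byte range is used.
    let absmax = row.iter().fold(0.0f32, |m, &x| m.max(x.abs()));
    let scale = if absmax > 0.0 { absmax / 127.0 } else { 1.0 };
    let values = row
        .iter()
        .map(|&x| ((x / scale).round() as i32 + 128).clamp(0, 255) as u8)
        .collect();
    QuantizedRow { scale, values }
}

fn dequantize_row(q: &QuantizedRow) -> Vec<f32> {
    q.values
        .iter()
        .map(|&v| (v as i32 - 128) as f32 * q.scale)
        .collect()
}

fn main() {
    let row = vec![0.5f32, -1.25, 3.0, 0.0, -0.75];
    let q = quantize_row(&row);
    println!("original:  {:?}", row);
    println!("recovered: {:?}", dequantize_row(&q));
}
```

The recovered values are only approximations of the originals; the trade-off is precision for memory, which is why quantized evaluation is usually offered alongside full 32-bit mode.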
- ChatGPT saved this dog's life...
Here's an implementation of RWKV I wrote that can run inference on models: https://github.com/KerfuffleV2/smolrsrwkv
- LLMs are not that different from us -- A delve into our own conscious process
Now, I'm not an expert, but I do know a little more than the average person. I actually just got done implementing a simple one based on the RWKV approach rather than transformers: https://github.com/KerfuffleV2/smolrsrwkv
ChatRWKV
Posts with mentions or reviews of ChatRWKV.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.
- People who've used RWKV, what's your wishlist for it?
- How the RWKV language model works
- Questions about memory, tree-of-thought, planning
Most LLMs actually do a decent job out of the box if you ask them for step-by-step instructions. Tree of Thought is one way to improve the results; Reflexion is another that can be used separately or in addition. The downside is that most models will quickly run into their token limit (around 2k for most). However, the new SuperHot models can handle up to 8k, and then there are the RWKV-Raven models: they are RNNs rather than transformers like all the other LLMs and can theoretically handle infinite context lengths (but they lose "focus" after a while).
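The "theoretically infinite context" claim in the post above comes from RWKV being an RNN: each token updates a fixed-size state rather than growing an attention window. Below is a toy sketch of that idea only (not RWKV's actual state update; the struct, dimensions, and blend factors are invented for illustration):

```rust
// Hypothetical illustration of why an RNN-style model has no hard context
// limit: memory use stays constant no matter how many tokens have been
// processed, because each token only updates a fixed-size state.

const STATE_DIM: usize = 4; // toy size; real models use thousands of values

/// Fixed-size recurrent state, updated once per token.
struct RnnState {
    values: [f32; STATE_DIM],
}

impl RnnState {
    fn new() -> Self {
        RnnState { values: [0.0; STATE_DIM] }
    }

    /// Toy state update: blend the old state with an embedding of the new
    /// token. A real RWKV step mixes channels with learned weights instead.
    fn step(&mut self, token_embedding: &[f32; STATE_DIM]) {
        for (s, &x) in self.values.iter_mut().zip(token_embedding.iter()) {
            *s = 0.9 * *s + 0.1 * x;
        }
    }
}

fn main() {
    let mut state = RnnState::new();
    // Process an arbitrarily long token stream; memory stays O(STATE_DIM).
    for i in 0..100_000u32 {
        let embedding = [i as f32 % 7.0, 1.0, -1.0, 0.5];
        state.step(&embedding);
    }
    println!("final state: {:?}", state.values);
}
```

A transformer, by contrast, must keep and attend over every previous token in its window, which is where fixed limits like 2k or 8k tokens come from; the practical caveat is the "losing focus" effect mentioned above, since everything older must fit into that fixed-size state.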
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
RWKV model inference: https://github.com/BlinkDL/ChatRWKV (fast CUDA).
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I think I heard the RWKV models are very fast, don't need much RAM, and can handle huge context lengths, so maybe their 14B can work for me. I wasn't sure how ready for use they were, but looking more into it, stuff like rwkv.cpp and ChatRWKV and a whole lot of other community projects are mentioned on their GitHub.
- I created a simple implementation of the RWKV language model (RWKV competes with the dominant Transformers-based approach which is the "T" in GPT)
- [P] Raven 7B & 14B 🐦(RWKV finetuned on Alpaca+CodeAlpaca+Guanaco) and Gradio Demo for Raven 7B
You can use ChatRWKV v2 (https://github.com/BlinkDL/ChatRWKV) to run Raven🐦 (compatible with vanilla RWKV):
- What's the current state of actually free and open source LLMs?
I feel compelled to summon /u/bo_peng here and to mention his work on RWKV. (See https://github.com/BlinkDL/ChatRWKV and related repos.)
- Try Google's Bard
- [D] Totally Open Alternatives to ChatGPT
Please test https://github.com/BlinkDL/ChatRWKV, which is a good chatbot despite only being trained on the Pile :)