rwkv.cpp vs inference

| | rwkv.cpp | inference |
|---|---|---|
| Mentions | 13 | 2 |
| Stars | 1,541 | 8,446 |
| Growth | 0.3% | 1.7% |
| Activity | 7.7 | 9.7 |
| Last commit | 5 months ago | 4 days ago |
| Language | C++ | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rwkv.cpp
- RWKV.cpp is now being deployed with the latest Windows 11 system
I didn’t know what RWKV.cpp was, so hopefully this helps others: https://github.com/RWKV/rwkv.cpp
> INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
- Eagle 7B: Soaring past Transformers
There's https://github.com/saharNooby/rwkv.cpp, which is related-ish[0] to ggml/llama.cpp
[0]: https://github.com/ggerganov/llama.cpp/issues/846
- People who've used RWKV, what's your wishlist for it?
- The Eleuther AI Mafia
Quantisation thankfully is applicable to RWKV as much as transformers. Most notably in our RWKV.cpp community project: https://github.com/saharNooby/rwkv.cpp
Tooling/ecosystem is something I am actively working on, as there is still a gap to Transformers-level tooling. But I'm glad that there is a noticeable difference!
And yes, experiments are important to ensure improvements in the architecture. Even if "linear transformers" replace "Transformers", alternatives should always be explored, to learn from the trade-offs for the benefit of the ecosystem.
(This was lightly covered in the podcast, where I share my opinion that we should have more research into text-based diffusion networks.)
- Tiny models for contextually coherent conversations?
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
Q8_0 models: only for https://github.com/saharNooby/rwkv.cpp (fast CPU).
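The Q8_0 format mentioned here is, in ggml-style projects, a block quantization scheme: weights are split into small blocks, each stored as int8 values plus a single per-block scale. A deliberately simplified Python sketch of that idea (not rwkv.cpp's actual implementation, and the block layout details are assumptions):

```python
import numpy as np

def quantize_q8_0(weights: np.ndarray, block_size: int = 32):
    """Toy Q8_0-style block quantization: int8 values plus one scale per block."""
    assert weights.size % block_size == 0
    blocks = weights.reshape(-1, block_size)
    # Scale each block so its largest magnitude maps to 127.
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    quants = np.clip(np.round(blocks / scales), -127, 127).astype(np.int8)
    return quants, scales.astype(np.float32)

def dequantize_q8_0(quants: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from int8 values and block scales."""
    return (quants.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(64).astype(np.float32)
q, s = quantize_q8_0(w)
w_hat = dequantize_q8_0(q, s)
print(np.max(np.abs(w - w_hat)))  # per-weight reconstruction error stays small
```

The payoff is storage and memory bandwidth: here 64 float32 weights (256 bytes) shrink to 64 int8 values plus 2 float32 scales (72 bytes), which is why these formats make CPU inference practical.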
- [R] RWKV: Reinventing RNNs for the Transformer Era
- 4096 Context length (and beyond)
There's https://github.com/saharNooby/rwkv.cpp which seems to work, and might be compatible with text-generation-webui.
- The Coming of Local LLMs
Also worth checking out https://github.com/saharNooby/rwkv.cpp which is based on Georgi's library and offers support for the RWKV family of models which are Apache-2.0 licensed.
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I think I heard the RWKV models are very fast, don't need much RAM, and can have huge context windows, so maybe their 14B can work for me. I wasn't sure how ready for use they were, but looking more into it, projects like rwkv.cpp and ChatRWKV and a whole lot of other community projects are mentioned on their GitHub.
inference
- GreptimeAI + Xinference - Efficient Deployment and Monitoring of Your LLM Applications
Xorbits Inference (Xinference) is an open-source platform to streamline the operation and integration of a wide array of AI models. With Xinference, you’re empowered to run inference using any open-source LLMs, embedding models, and multimodal models either in the cloud or on your own premises, and create robust AI-driven applications. It provides a RESTful API compatible with OpenAI API, Python SDK, CLI, and WebUI. Furthermore, it integrates third-party developer tools like LangChain, LlamaIndex, and Dify, facilitating model integration and development.
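Because the server exposes an OpenAI-compatible REST API, any plain HTTP client can talk to it with a standard chat-completions request. A hedged sketch using only the standard library — the host, port, and model name are placeholders, and the endpoint path is assumed from OpenAI-API compatibility:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /v1/chat/completions POST request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local endpoint; adjust to wherever your server is listening.
req = build_chat_request("http://localhost:9997", "my-local-llm", "Hello!")
# urllib.request.urlopen(req) would send it; omitted here since it needs a running server.
```

The same request shape works against any OpenAI-compatible backend, which is the practical benefit of that compatibility claim: existing clients and tooling need only a base-URL change.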
- 🤖 AI Podcast - Voice Conversations🎙 with Local LLMs on M2 Max
Code: https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast.py
What are some alternatives?
RWKV-LM - RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no kv-cache), fast training, infinite ctx_len, and free sentence embedding.
aihandler - A simple engine to help run diffusers and transformers models
mpt-30B-inference - Run inference on MPT-30B using CPU
ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
truss - The simplest way to serve AI/ML models in production
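The "linear time, constant space (no kv-cache)" claim in the RWKV-LM description above comes from RWKV being an RNN at inference time: each token updates a fixed-size state rather than appending to an ever-growing key/value cache. A deliberately simplified numpy sketch of that shape of computation (a generic recurrence, not the actual RWKV time-mix equations):

```python
import numpy as np

def rnn_step(state: np.ndarray, x: np.ndarray,
             w_state: np.ndarray, w_in: np.ndarray) -> np.ndarray:
    """One recurrent step: the new state depends only on the previous state
    and the current input, so memory use is constant in sequence length."""
    return np.tanh(state @ w_state + x @ w_in)

rng = np.random.default_rng(0)
d = 8                                # hidden/state size is fixed
w_state = rng.normal(size=(d, d)) * 0.1
w_in = rng.normal(size=(d, d)) * 0.1

state = np.zeros(d)
for t in range(1000):                # 1000 tokens, still one d-sized state
    x = rng.normal(size=d)
    state = rnn_step(state, x, w_state, w_in)

print(state.shape)  # state size is independent of how many tokens were seen
```

Contrast this with transformer attention, where generating token t requires keys and values for all t previous tokens, so memory and per-token compute grow with context length.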