rwkv.cpp vs pygmalion.cpp

| | rwkv.cpp | pygmalion.cpp |
|---|---|---|
| Mentions | 12 | 6 |
| Stars | 1,111 | 57 |
| Growth | 2.6% | - |
| Activity | 6.8 | 10.0 |
| Latest commit | 29 days ago | about 1 year ago |
| Language | C++ | C |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
rwkv.cpp
- Eagle 7B: Soaring past Transformers
There's https://github.com/saharNooby/rwkv.cpp, which is related-ish[0] to ggml/llama.cpp.
[0]: https://github.com/ggerganov/llama.cpp/issues/846
- People who've used RWKV, what's your wishlist for it?
- The Eleuther AI Mafia
Quantisation thankfully is applicable to RWKV as much as to transformers, most notably in our RWKV.cpp community project: https://github.com/saharNooby/rwkv.cpp
Tooling/ecosystem is something I am actively working on, as there is still a gap to transformers-level tooling. But I'm glad that there is a noticeable difference!
And yes, experiments are important to ensure improvements in the architecture. Even if "Linear Transformers" replace "Transformers", alternatives should always be explored, to learn from such trade-offs to the benefit of the ecosystem.
(This was lightly covered in the podcast, where I share my opinion that we should have more research into text-based diffusion networks.)
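The quantization support mentioned above is exposed directly in rwkv.cpp's C API. A minimal sketch of the quantization call, assuming the rwkv_quantize_model_file signature from the repo's rwkv.h at the time of writing (the file paths here are hypothetical):

```c
// Minimal sketch: quantizing an rwkv.cpp ggml model file to Q5_1 via the
// C API from rwkv.h. Function name and signature follow the repo's header
// at the time of writing; verify against your checkout before relying on it.
#include <stdio.h>
#include "rwkv.h"

int main(void) {
    // Input must be an FP16/FP32 ggml file produced by the repo's
    // convert_pytorch_to_ggml.py script (paths are hypothetical examples).
    const char * in  = "rwkv-4-raven-7b-f16.bin";
    const char * out = "rwkv-4-raven-7b-Q5_1.bin";

    if (!rwkv_quantize_model_file(in, out, "Q5_1")) {
        fprintf(stderr, "quantization failed\n");
        return 1;
    }
    printf("wrote %s\n", out);
    return 0;
}
```

The repo also ships helper scripts (convert_pytorch_to_ggml.py and quantize.py) that wrap the same conversion and quantization steps from Python.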
- Tiny models for contextually coherent conversations?
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
Q8_0 models: only for https://github.com/saharNooby/rwkv.cpp (fast CPU).
- [R] RWKV: Reinventing RNNs for the Transformer Era
- 4096 Context length (and beyond)
There's https://github.com/saharNooby/rwkv.cpp which seems to work, and might be compatible with text-generation-webui.
- The Coming of Local LLMs
Also worth checking out https://github.com/saharNooby/rwkv.cpp, which is based on Georgi's library and offers support for the RWKV family of models, which are Apache-2.0 licensed.
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I've heard the RWKV models are very fast, don't need much RAM, and can have huge context windows, so maybe their 14B can work for me. I wasn't sure how ready for use they were, but looking into it more, rwkv.cpp, ChatRWKV, and a whole lot of other community projects are mentioned on their GitHub.
- rwkv.cpp: FP16 & INT4 inference on CPU for RWKV language model (r/MachineLearning)
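A recurring theme in these mentions is that RWKV inference needs little RAM and has no hard context limit. The reason is architectural: an RWKV model carries a small fixed-size state between tokens instead of an ever-growing KV cache. A minimal sketch of that evaluation loop, assuming the C API published in rwkv.cpp's rwkv.h around this time (some functions were later renamed, e.g. to rwkv_get_state_len; the model path and token IDs are hypothetical):

```c
// Sketch of why RWKV inference memory stays flat: instead of a growing
// KV cache, a fixed-size state vector is carried from token to token.
// API names follow rwkv.cpp's rwkv.h at the time of writing; treat this
// as illustrative rather than authoritative.
#include <stdio.h>
#include <stdlib.h>
#include "rwkv.h"

int main(void) {
    // Hypothetical model path; 4 CPU threads.
    struct rwkv_context * ctx = rwkv_init_from_file("model-Q8_0.bin", 4);
    if (!ctx) return 1;

    const size_t n_state  = rwkv_get_state_buffer_element_count(ctx);
    const size_t n_logits = rwkv_get_logits_buffer_element_count(ctx);
    float * state  = malloc(n_state  * sizeof(float));
    float * logits = malloc(n_logits * sizeof(float));

    // Hypothetical pre-tokenized prompt; rwkv.cpp itself does not tokenize.
    const uint32_t prompt[] = { 510, 3158, 19458 };

    // First call passes NULL to start from a blank state; afterwards the
    // same fixed-size buffer is fed back in, no matter how long the
    // sequence grows -- that is the whole "infinite context" trick.
    for (size_t i = 0; i < sizeof(prompt) / sizeof(prompt[0]); i++) {
        rwkv_eval(ctx, prompt[i], i == 0 ? NULL : state, state, logits);
    }

    printf("logits for next token are ready (%zu entries)\n", n_logits);

    free(state);
    free(logits);
    rwkv_free(ctx);
    return 0;
}
```

Because the state buffer is allocated once and reused, memory usage is the same at token 10 and token 100,000; the effective context is limited by what the state can remember, not by a buffer size.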
pygmalion.cpp
- How/Where to download Pygmalion language models?
Yes, it's called pygmalion.cpp; not sure if you can run it on Vast, but you might be able to run it on your PC. Link: https://github.com/AlpinDale/pygmalion.cpp
- How to use Pygmalion 6B on CPU?
- well.....now what
Alternatives:
- Pygmalion.cpp, for phones with at least 8GB of RAM: https://github.com/AlpinDale/pygmalion.cpp
- Local KoboldAI (can generate an API key for use with Tavern) or the Kobold Horde: https://github.com/KoboldAI/KoboldAI-Client / https://lite.koboldai.net/
- Local Oobabooga (supports 4-bit and 8-bit quantization; can generate an API key for use with Tavern): https://github.com/oobabooga/text-generation-webui
- So, is there any way to access Pyg from mobile?
Pygmalion.cpp: https://github.com/AlpinDale/pygmalion.cpp
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
GPT-J/JT models (legacy f16 formats here, as well as 4-bit quantized ones like this and Pygmalion; see pyg.cpp)
- Any possibility to make Pygmalion 6B run in 4-bit?
https://github.com/AlpinDale/pygmalion.cpp - it can run on an Android e-toaster, but what is love
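Several of these threads revolve around 4-bit quantization, which is what lets a 6B model fit on phone-class hardware. A self-contained sketch of the idea behind ggml-style Q4 block quantization, which pygmalion.cpp inherits from ggml (illustrative only; the real on-disk format differs in detail, e.g. it stores the scale as fp16):

```c
// Illustrative sketch of 4-bit block quantization in the spirit of
// ggml's Q4 formats: every block of 32 weights shares one float scale,
// and each weight is stored as a 4-bit value. NOT the exact on-disk
// layout, just the core idea.
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define QK 32  // block size used by the ggml Q4 formats

typedef struct {
    float   d;           // per-block scale
    uint8_t qs[QK / 2];  // 32 x 4-bit quants, two per byte
} block_q4;

static void quantize_block(const float * x, block_q4 * b) {
    float amax = 0.0f;  // largest magnitude in the block
    for (int i = 0; i < QK; i++) {
        if (fabsf(x[i]) > amax) amax = fabsf(x[i]);
    }
    b->d = amax / 7.0f;  // map [-amax, amax] onto integers in [-7, 7]
    const float id = b->d != 0.0f ? 1.0f / b->d : 0.0f;
    for (int i = 0; i < QK; i += 2) {
        // offset by 8 so the stored 4-bit value is unsigned (0..15)
        const uint8_t q0 = (uint8_t)(roundf(x[i]     * id) + 8);
        const uint8_t q1 = (uint8_t)(roundf(x[i + 1] * id) + 8);
        b->qs[i / 2] = q0 | (q1 << 4);
    }
}

static void dequantize_block(const block_q4 * b, float * y) {
    for (int i = 0; i < QK; i += 2) {
        y[i]     = ((int)(b->qs[i / 2] & 0x0F) - 8) * b->d;
        y[i + 1] = ((int)(b->qs[i / 2] >> 4)   - 8) * b->d;
    }
}

int main(void) {
    float x[QK], y[QK];
    for (int i = 0; i < QK; i++) x[i] = sinf((float)i);  // toy weights

    block_q4 b;
    quantize_block(x, &b);
    dequantize_block(&b, y);

    // 4 bytes per fp32 weight shrinks to 0.5 bytes, plus a small
    // per-block scale overhead.
    printf("x[3]=%.4f  reconstructed=%.4f\n", x[3], y[3]);
    return 0;
}
```

At 4 bits per weight plus one shared scale per 32 weights, a 6B-parameter model drops to roughly a quarter of its fp16 size, which is what brings it within reach of a device with 8GB of RAM.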
What are some alternatives?
llama.cpp - LLM inference in C/C++
ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
GPTQ-for-LLaMa - 4 bits quantization of LLMs using GPTQ
mpt-30B-inference - Run inference on MPT-30B using CPU
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
verbaflow - Neural Language Model for Go
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
KoboldAI