RWKV-LM-LoRA vs RWKV-CUDA

| | RWKV-LM-LoRA | RWKV-CUDA |
|---|---|---|
| Mentions | 4 | 3 |
| Stars | 403 | 195 |
| Growth | - | - |
| Activity | 5.6 | 8.1 |
| Last Commit | 11 months ago | 28 days ago |
| Language | Python | Cuda |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Mentions of RWKV-LM-LoRA
- People who've used RWKV, what's your wishlist for it?
- Scaling Transformers to 1B Tokens
> RWKV
The current versions of RWKV slowly go insane on sequences that are too long, because the recurrent state gradually diverges once generation runs past the context length used in training. They are experimenting with ways to avoid this, though: https://github.com/Blealtan/RWKV-LM-LoRA/tree/dev-infctx (a rough sketch of the recurrence involved appears after this list).
- [R] RWKV 14B ctx8192 is a zero-shot instruction-follower without finetuning, 23 token/s on 3090 after latest optimization (16G VRAM is enough, and you can stream layers to save more VRAM)
Someone in RWKV Discord tried it using LoRA (https://github.com/Blealtan/RWKV-LM-LoRA) and the result is quite nice. Join RWKV Discord for latest updates :)
- [P] RWKV 14B is a strong chatbot despite only trained on Pile (16G VRAM for 14B ctx4096 INT8, more optimizations incoming)
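To make the state-divergence comment above concrete, here is a minimal sketch of the kind of per-channel recurrence RWKV carries across tokens when run in RNN mode. It is simplified from the RWKV-4 WKV formulation, drops the numerical-stability tricks, and uses toy made-up parameters rather than learned ones; it is not code from either repo. The point is only that the same small state is threaded through every token, and training never shows the model states from beyond its ctx_len.

```python
import numpy as np

# Toy sketch of an RWKV-4-style WKV recurrence in RNN mode (simplified, no
# numerical-stability tricks; w and u are made-up values, not learned ones).
# Each channel keeps a numerator/denominator state (a, b) that is carried
# across *all* tokens, far past whatever ctx_len was used during training.
C = 4                        # channels (toy size)
w = np.full(C, 0.1)          # per-channel decay rate
u = np.zeros(C)              # per-channel bonus for the current token

def wkv_step(k, v, a, b):
    # The output mixes the accumulated state with the current token...
    out = (a + np.exp(u + k) * v) / (b + np.exp(u + k))
    # ...then the state is decayed and the current token is folded in.
    a = np.exp(-w) * a + np.exp(k) * v
    b = np.exp(-w) * b + np.exp(k)
    return out, a, b

a, b = np.zeros(C), np.zeros(C)
rng = np.random.default_rng(0)
for t in range(10_000):      # run far longer than any training context
    k, v = rng.normal(size=C), rng.normal(size=C)
    out, a, b = wkv_step(k, v, a, b)
```

The dev-infctx branch linked above is, as the name suggests, an attempt to train with effectively unbounded context so that long-run states stay in territory the model has actually seen.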
Mentions of RWKV-CUDA
- People who've used RWKV, what's your wishlist for it?
- Accelerate PyTorch with Taichi: Data Preprocessing & High-performance ML Operator Customization
This repo is an interesting example of customizing an ML operator in CUDA. The author built an RWKV language model around a custom operator that behaves like a one-dimensional depthwise convolution. The model itself does not involve a large amount of computation, but it still runs slowly because PyTorch has no native support for the operator. So the author implemented it in CUDA and applied optimization techniques such as loop fusion and shared memory, achieving roughly 20x the performance of the PyTorch version (a rough Python reference of the operator appears after this list).
- [R] RWKV-v2-RNN: A parallelizable RNN with transformer-level LM performance, and without using attention
It's using my custom CUDA kernel (https://github.com/BlinkDL/RWKV-CUDA) to speed up training, so it's GPU-only for now. On the other hand, you don't need CUDA for inference, and it is very fast even on CPUs.
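To make the operator in the Taichi post above concrete, here is a rough Python/PyTorch reference of a depthwise (per-channel) 1D causal convolution over the time axis, where every channel owns its own full-length filter. The shapes, names, and indexing are illustrative assumptions rather than the repo's actual kernel interface; the repo implements this kind of computation as a fused CUDA kernel instead.

```python
import torch

# Rough reference of a depthwise 1D causal convolution over time, the kind of
# operator described in the post above. Shapes and naming are illustrative
# assumptions, not RWKV-CUDA's actual kernel interface.
#   w: (C, T)    per-channel filter
#   k: (B, C, T) input sequence
# returns: (B, C, T)
def depthwise_causal_conv1d(w: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    B, C, T = k.shape
    out = torch.zeros_like(k)
    for t in range(T):              # each output position...
        for s in range(t + 1):      # ...sums over its causal window
            out[:, :, t] += w[:, t - s] * k[:, :, s]
    return out

B, C, T = 2, 8, 64
out = depthwise_causal_conv1d(torch.randn(C, T), torch.randn(B, C, T))
```

Written this way the work is O(B·C·T²) spread across many tiny eager-mode operations, which is exactly the kind of workload where a single fused CUDA kernel, staging tiles of w and k in shared memory, pays off by the kind of margin the post reports.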
What are some alternatives?
SpikeGPT - Implementation of "SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks"
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
token-shift-gpt - Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
RWKV-v2-RNN-Pile - RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
faster-rwkv
AI-Writer - AI writes novels: generates xuanhuan fantasy, romance web fiction, and more. A Chinese pretrained generative model using my RWKV model, similar to GPT-2. AI writing. RWKV for Chinese novel generation.
web-rwkv - Implementation of the RWKV language model in pure WebGPU/Rust.