RWKV-CUDA vs token-shift-gpt

| | RWKV-CUDA | token-shift-gpt |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 193 | 47 |
| Growth | - | - |
| Activity | 8.5 | 0.0 |
| Last commit | 2 months ago | over 2 years ago |
| Language | CUDA | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RWKV-CUDA mentions
- People who've used RWKV, what's your wishlist for it?
- Accelerate PyTorch with Taichi: Data Preprocessing & High-performance ML Operator Customization
This repo comes up as an interesting example of customizing an ML operator in CUDA. The author built an RWKV language model around a custom operator that is essentially a one-dimensional depthwise convolution. The model itself does not involve much computation, yet it runs slowly because PyTorch has no native support for the operator. So the author implemented it in CUDA and applied optimization techniques such as loop fusion and shared memory, achieving roughly 20x the performance of the PyTorch baseline (a sketch of such a kernel follows these mentions).
- [R] RWKV-v2-RNN: A parallelizable RNN with transformer-level LM performance, and without using attention
It's using my custom CUDA kernel (https://github.com/BlinkDL/RWKV-CUDA) to speed up training, so it is GPU-only for now. On the other hand, you don't need CUDA for inference, and it is very fast even on CPUs.
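The operator described above is, at its core, a causal per-channel (depthwise) 1D convolution. Below is a minimal CUDA sketch of that idea, not the repo's actual kernel: the filter length `K`, the `[C, T]` layout, and all names are assumptions made for illustration. It stages a tile of one channel's row in shared memory so each input element is fetched from global memory once per block, and fully unrolls the small filter loop.

```cuda
// Hypothetical sketch, not the repo's actual kernel: a causal 1D
// depthwise convolution y[c][t] = sum_k w[c][k] * x[c][t-k], with
// x treated as zero for t < 0. Filter length K, the [C, T] layout,
// and all names here are assumptions.
#include <cuda_runtime.h>

#define K 3        // filter length (assumed)
#define TILE 256   // output elements (and threads) per block

__global__ void depthwise_conv1d(const float* x, const float* w,
                                 float* y, int T) {
    // Stage one tile of the channel's row, plus a (K-1)-wide left
    // halo, in shared memory: one global read per element per block.
    __shared__ float tile[TILE + K - 1];
    int c  = blockIdx.y;            // one channel per grid row
    int t0 = blockIdx.x * TILE;     // first output index of this tile
    int t  = t0 + threadIdx.x;

    for (int i = threadIdx.x; i < TILE + K - 1; i += blockDim.x) {
        int src = t0 + i - (K - 1);
        tile[i] = (src >= 0 && src < T) ? x[c * T + src] : 0.0f;
    }
    __syncthreads();

    if (t < T) {
        float acc = 0.0f;
        #pragma unroll              // small fixed K: unroll fully
        for (int k = 0; k < K; ++k)
            acc += w[c * K + k] * tile[threadIdx.x + (K - 1) - k];
        y[c * T + t] = acc;
    }
}
```

A launch for a [C, T] input would use a `dim3 grid((T + TILE - 1) / TILE, C)` with TILE threads per block. The actual kernels in RWKV-CUDA are more elaborate, but shared-memory tiling and loop fusion/unrolling are the kinds of optimization the post describes.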
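On the inference side, the reason an RNN formulation stays fast on CPU is that each token updates a fixed-size state rather than attending over all past tokens. Purely as a hypothetical illustration of that shape (the real RWKV-v2 update rules live in the repos linked above), a simplified exponential-decay recurrence looks like this; it is plain host code, valid in a .cu file:

```cuda
// Hypothetical, simplified recurrence in the spirit of an RWKV-style
// state update: s <- exp(-w) * s + exp(k) * v, per channel. One O(C)
// pass per token and no attention over history, which is why CPU
// inference stays cheap. Not the actual RWKV-v2 formulas.
#include <cmath>

void rnn_step(float* s, const float* k, const float* v,
              const float* w, int C) {
    for (int c = 0; c < C; ++c)
        s[c] = expf(-w[c]) * s[c] + expf(k[c]) * v[c];
}
```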
token-shift-gpt mentions
- [R] RWKV-v2-RNN: A parallelizable RNN with transformer-level LM performance, and without using attention
indeed :) took this to the extreme with https://github.com/lucidrains/token-shift-gpt
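The token-shift trick referenced here is a very small operation: a slice of each position's channels is taken from the previous position, so every layer mixes in some of the past for free. A minimal CUDA sketch, with an assumed [T, C] row-major layout and an assumed half-and-half channel split (none of this is taken from the token-shift-gpt source):

```cuda
// Hypothetical token-shift kernel: channels [0, C/2) at position t
// are read from position t-1 (zero at t = 0); the remaining channels
// pass through unchanged. Layout and split point are assumptions.
#include <cuda_runtime.h>

__global__ void token_shift(const float* x, float* y, int T, int C) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= T * C) return;
    int t = idx / C, c = idx % C;
    if (c < C / 2)
        y[idx] = (t > 0) ? x[idx - C] : 0.0f;  // previous token's value
    else
        y[idx] = x[idx];                       // untouched half
}
```

In a framework this is usually just a pad-and-slice along the time axis, so a custom kernel is rarely needed; the sketch only makes the data movement explicit.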
What are some alternatives?
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
RWKV-v2-RNN-Pile - RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
AI-Writer - AI that writes novels, generating fantasy and romance web fiction and more. A Chinese pretrained generative model, similar to GPT-2, built on my RWKV model. RWKV for Chinese novel generation.
web-rwkv - Implementation of the RWKV language model in pure WebGPU/Rust.
pytorch-lightning - Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes.
SmallInitEmb - LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence
blog_code
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python