token-shift-gpt vs AI-Writer

| | token-shift-gpt | AI-Writer |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 47 | 2,749 |
| Growth | - | - |
| Activity | 0.0 | 3.4 |
| Last commit | over 2 years ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
token-shift-gpt
[R] RWKV-v2-RNN : A parallelizable RNN with transformer-level LM performance, and without using attention
indeed :) took this to the extreme with https://github.com/lucidrains/token-shift-gpt
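For context on the quote above: token shifting mixes part of each token's feature vector with features from earlier positions in the sequence, and token-shift-gpt builds a model around that trick in place of attention. The following is a minimal PyTorch sketch of the idea, not the repository's actual implementation; the function name, chunking scheme, and shift counts are illustrative.

```python
import torch
import torch.nn.functional as F

def token_shift(x, shifts=1):
    """Illustrative token shift: x has shape (batch, seq_len, dim).

    The feature dimension is split into (shifts + 1) chunks; the i-th chunk
    is shifted back by i positions along the sequence (zero-padded at the
    start), so each position also sees features from earlier tokens.
    """
    chunks = x.chunk(shifts + 1, dim=-1)
    out = [chunks[0]]  # first chunk stays unshifted
    for i, chunk in enumerate(chunks[1:], start=1):
        # pad i zeros at the front of the time axis, then drop the last i steps
        out.append(F.pad(chunk, (0, 0, i, 0))[:, :-i, :])
    return torch.cat(out, dim=-1)

x = torch.randn(2, 16, 512)
y = token_shift(x, shifts=3)  # same shape; 3/4 of the features now come from earlier positions
```

In token-shift-gpt the shifted features are then fed through feed-forward layers rather than attention; the sketch above shows only the shift itself.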
AI-Writer
[R] RWKV-v2-RNN : A parallelizable RNN with transformer-level LM performance, and without using attention
I need more FLOPS lol. On the other hand, quite a few users have fine-tuned the Chinese novel model (https://github.com/BlinkDL/AI-Writer).
What are some alternatives?
RWKV-CUDA - The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
RWKV-v2-RNN-Pile - RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
SmallInitEmb - LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (see the sketch after this list)
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
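The SmallInitEmb entry above is a one-line description of a concrete trick: initialize the token embedding with very small values and apply LayerNorm right after it, so the embedding matrix can reorganize quickly early in training. Below is a minimal sketch of that idea; the module name and the 1e-4 init scale are assumptions for illustration, not taken from the repository.

```python
import torch
import torch.nn as nn

class SmallInitEmbedding(nn.Module):
    """Illustrative LayerNorm(SmallInit(Embedding)) module."""

    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        # "small init": keep the initial embedding close to zero so early
        # gradient updates dominate its direction (1e-4 is an assumed scale)
        nn.init.uniform_(self.emb.weight, a=-1e-4, b=1e-4)
        self.norm = nn.LayerNorm(dim)

    def forward(self, token_ids):
        # LayerNorm(SmallInit(Embedding))
        return self.norm(self.emb(token_ids))

emb = SmallInitEmbedding(vocab_size=50_000, dim=512)
tokens = torch.randint(0, 50_000, (2, 16))
h = emb(tokens)  # (2, 16, 512) normalized embeddings
```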