AI-Writer
AI writes novels: generates fantasy (xuanhuan), romance web fiction, and more. A Chinese pretrained generative model using my RWKV model, similar to GPT-2. AI writing. RWKV for Chinese novel generation. (by BlinkDL)
RWKV-v2-RNN-Pile
RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. (by BlinkDL)
| | AI-Writer | RWKV-v2-RNN-Pile |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 3,106 | 67 |
| Growth | 3.6% | - |
| Activity | 3.4 | 0.0 |
| Last commit | over 1 year ago | over 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AI-Writer
Posts with mentions or reviews of AI-Writer.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-05-10.
- [R] RWKV-v2-RNN : A parallelizable RNN with transformer-level LM performance, and without using attention
I need more FLOPS lol. On the other hand, quite some users have fine-tuned the Chinese novel model (https://github.com/BlinkDL/AI-Writer).
RWKV-v2-RNN-Pile
Posts with mentions or reviews of RWKV-v2-RNN-Pile.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-07-15.
- [R] RWKV-3: Scaling RNN to 1.5B and Reach Transformer LM Performance (without using attention)
See https://github.com/BlinkDL/RWKV-v2-RNN-Pile for the ppl vs ctxlen curve :)
- [D] Why are transformers still being used?
- [R] RWKV-2 430M release (a parallelizable RNN with transformer-level LM performance, and without using attention)
- [R] RWKV-v2-RNN : A parallelizable RNN with transformer-level LM performance, and without using attention
Read the inference code in https://github.com/BlinkDL/RWKV-v2-RNN-Pile first :)
What are some alternatives?
When comparing AI-Writer and RWKV-v2-RNN-Pile, you can also consider the following projects:
RWKV-CUDA - The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )
flash-attention - Fast and memory-efficient exact attention
token-shift-gpt - Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing
RWKV-LM - RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no kv-cache), fast training, infinite ctx_len, and free sentence embedding. (A sketch of the constant-space idea follows this list.)
SmallInitEmb - LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence
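The "constant space (no kv-cache)" claim in the RWKV-LM entry above comes from running the model in its RNN form: each channel carries a small recurrent state instead of a growing attention cache. Below is a minimal NumPy sketch of a simplified RWKV-v2-style time-mixing step to illustrate that idea. It is not the official implementation; the function name `rwkv_time_mix_step` is made up for illustration, and numerical-stability tricks and the channel-mixing block are omitted.

```python
# Minimal sketch (assumption: simplified RWKV-v2-style recurrence, not the
# official code). The point: per-token cost and state size are O(channels),
# independent of how long the context is, unlike a transformer's kv-cache.
import numpy as np

def rwkv_time_mix_step(r, k, v, w, u, num, den):
    """One token step of simplified time mixing.
    r, k, v : receptance / key / value vectors for the current token
    w       : per-channel decay (>= 0), applied as exp(-w) each step
    u       : per-channel "bonus" applied only to the current token
    num, den: recurrent state (running weighted sums of values / weights)
    Returns (output, new_num, new_den).
    """
    wkv = (num + np.exp(u + k) * v) / (den + np.exp(u + k))
    out = 1.0 / (1.0 + np.exp(-r)) * wkv           # sigmoid(r) gates the output
    new_num = np.exp(-w) * num + np.exp(k) * v     # decay old state, add this token
    new_den = np.exp(-w) * den + np.exp(k)
    return out, new_num, new_den

# Usage: state stays two vectors of size C no matter how many tokens we feed.
C = 8
num, den = np.zeros(C), np.zeros(C)
w, u = np.full(C, 0.5), np.zeros(C)
for _ in range(1000):                              # arbitrarily long context
    r, k, v = (np.random.randn(C) for _ in range(3))
    out, num, den = rwkv_time_mix_step(r, k, v, w, u, num, den)
```

The same recurrence can be unrolled over the whole sequence at training time, which is why the README describes the model as trainable "like a GPT transformer (parallelizable)" while inference runs as a plain RNN.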