RWKV-v2-RNN-Pile vs quality

| | RWKV-v2-RNN-Pile | quality |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 65 | 100 |
| Growth | - | 4.0% |
| Activity | 0.0 | 5.8 |
| Last commit | over 1 year ago | 3 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RWKV-v2-RNN-Pile
- [R] RWKV-3: Scaling RNN to 1.5B and Reach Transformer LM Performance (without using attention)
  See https://github.com/BlinkDL/RWKV-v2-RNN-Pile for the ppl vs ctxlen curve :)
- [D] Why are transformers still being used?
- [R] RWKV-2 430M release (a parallelizable RNN with transformer-level LM performance, and without using attention)
- [R] RWKV-v2-RNN: A parallelizable RNN with transformer-level LM performance, and without using attention
  Read the inference code in https://github.com/BlinkDL/RWKV-v2-RNN-Pile first :)
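The posts above describe RWKV-v2 as a parallelizable RNN that reaches transformer-level LM performance without attention, and point to the repo's inference code. As a rough illustration of why inference is RNN-like (constant state per token, no growing attention cache), here is a minimal sketch of an attention-free, exponentially decaying time-mixing step in the RWKV style. The names k, v, r (key, value, receptance) and the per-channel decay w follow common RWKV terminology, but this is a simplified toy, not the repo's actual code (which adds a current-token bonus term and a numerical-stability trick).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rwkv_style_step(state, k_t, v_t, r_t, w):
    """One recurrent token-mixing step, RWKV-style (simplified sketch).

    state: pair (a, b) of running sums -- a accumulates exp(k) * v,
           b accumulates exp(k); their ratio is a weighted average of
           past values, with older tokens decayed by exp(-w) per step.
    k_t, v_t, r_t: key, value, receptance vectors for the current token.
    w: positive per-channel decay rate.
    All arrays have shape (d,) for hidden size d.
    """
    a, b = state
    # include the current token, then read out the decayed weighted average
    a_cur = a + np.exp(k_t) * v_t
    b_cur = b + np.exp(k_t)
    wkv = a_cur / b_cur
    out = sigmoid(r_t) * wkv                      # receptance gates the output
    # decay the state before the next token arrives
    new_state = (np.exp(-w) * a_cur, np.exp(-w) * b_cur)
    return out, new_state

# usage: memory per token stays constant, regardless of context length
d = 8
state = (np.zeros(d), np.zeros(d))
for _ in range(5):                                # pretend these are successive tokens
    k, v, r = np.random.randn(d), np.random.randn(d), np.random.randn(d)
    out, state = rwkv_style_step(state, k, v, r, w=np.full(d, 0.5))
```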
quality
- [D] Why are transformers still being used?
  Totally agree. Almost all standard tasks used to measure NLP performance use very short inputs, so NLP researchers have no real incentive to work on that. On the other hand, tasks involving longer texts often concentrate on aspects that do not need a deeper understanding of the text (a bag-of-words representation often does quite well in this context). But I think interest in more complex tasks on longer texts is growing. See for example this dataset published 6 months ago: https://github.com/nyu-mll/quality
- "QuALITY: Question Answering with Long Input Texts, Yes!", Pang et al 2021 (6.7k >2k-long reading comprehension tests, constructed for high validity & difficulty; defeats Longformer & DeBERTa-retrieval)
What are some alternatives?
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
flash-attention - Fast and memory-efficient exact attention
RWKV-CUDA - The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )
token-shift-gpt - Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing (see the sketch after this list)
AI-Writer - AI novel writing: generates Chinese fantasy (xuanhuan) and romance web fiction, etc. A Chinese pretrained generative model based on my RWKV model, similar to GPT-2. RWKV for Chinese novel generation.
SmallInitEmb - LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence
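Both RWKV-LM and token-shift-gpt above lean on the token-shift idea: part of each position's feature vector is taken from the previous position, which gives cheap local mixing along the sequence without attention. Below is a minimal sketch of that operation, assuming a simple fixed two-way channel split; the real implementations typically use a learned per-channel mixing ratio or several shift distances, so treat this as an illustration of the concept rather than either repo's code.

```python
import numpy as np

def token_shift(x, split=0.5):
    """Mix each position's features with the previous position's (sketch).

    x: (seq_len, d) array of token embeddings.
    The first `split` fraction of channels keeps the current token's
    features; the rest are replaced by the previous token's features
    (zero-padded at position 0).
    """
    seq_len, d = x.shape
    prev = np.zeros_like(x)
    prev[1:] = x[:-1]                  # each row sees the row before it
    cut = int(d * split)
    return np.concatenate([x[:, :cut], prev[:, cut:]], axis=-1)

# usage
x = np.random.randn(6, 4)
mixed = token_shift(x)                 # position 0 keeps zeros in the shifted half
```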