SpikeGPT vs RWKV-LM-LoRA
| | SpikeGPT | RWKV-LM-LoRA |
|---|---|---|
| Mentions | 7 | 4 |
| Stars | 695 | 403 |
| Growth | - | - |
| Activity | 7.2 | 5.6 |
| Latest Commit | 4 months ago | 10 months ago |
| Language | Python | Python |
| License | BSD 2-clause "Simplified" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SpikeGPT mentions
- Why doesn't somebody adapt this with PygmalionAI?
  It seems to be trainable with a dataset. GitHub - ridgerchu/SpikeGPT: Implementation of "SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks"
- [D] The Complete Guide to Spiking Neural Networks
  The relationship is that SpikeGPT is essentially an implementation of RWKV with spiking neural networks (SNNs).
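For readers unfamiliar with the spiking part: the building block that replaces ordinary dense activations is a leaky integrate-and-fire (LIF) neuron, which accumulates input into a membrane potential and emits binary spikes when it crosses a threshold. Below is a minimal sketch of one LIF step; it is illustrative only (SpikeGPT builds on SpikingJelly's neuron implementations, and the `v_threshold`/`v_decay` values here are hypothetical):

```python
import torch

def lif_step(x: torch.Tensor, mem: torch.Tensor,
             v_threshold: float = 1.0, v_decay: float = 0.5):
    """One leaky integrate-and-fire (LIF) step.

    mem is the membrane potential carried across timesteps; the returned
    spike tensor is binary, which is what stands in for dense activations
    inside the RWKV-style blocks.
    """
    mem = v_decay * mem + x                # leaky integration of the input
    spike = (mem >= v_threshold).float()   # fire where the threshold is crossed
    mem = mem - spike * v_threshold        # soft reset where a spike fired
    return spike, mem

# Toy usage: drive one neuron population with random input for a few steps.
mem = torch.zeros(8)
for t in range(4):
    spike, mem = lif_step(torch.rand(8), mem)
```

Note that the hard threshold is non-differentiable, so SNN training in practice substitutes a surrogate gradient for the step function's derivative.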
- [P] New toolchain to train robust spiking NNs for mixed-signal Neuromorphic chips
  Have you already seen this? https://github.com/ridgerchu/SpikeGPT
- [R] RWKV 14B ctx8192 is a zero-shot instruction-follower without finetuning, 23 token/s on 3090 after latest optimization (16G VRAM is enough, and you can stream layers to save more VRAM)
  Soon :) working on it. Meanwhile, take a look at https://github.com/ridgerchu/SpikeGPT, which is an SNN version of RWKV, so there is some explanation in the paper.
- SpikeGPT: 230M-parameter Spiking Neural Network trained to be a language model
  Found relevant code at https://github.com/ridgerchu/SpikeGPT
RWKV-LM-LoRA mentions
- People who've used RWKV, what's your wishlist for it?
- Scaling Transformers to 1B Tokens
  > RWKV
  The current versions of RWKV slowly go insane when exposed to sequences that are too long, because the recurrent state slowly drifts out of distribution once you go past the context length used in training. They are experimenting with ways to avoid this, though: https://github.com/Blealtan/RWKV-LM-LoRA/tree/dev-infctx
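A toy illustration of that drift (this is not RWKV's actual WKV kernel; the `decay` value and tensor sizes are made up): an exponentially decayed running state keeps moving toward its steady state well past the training context, so layers that consume the state see statistics they were never trained on.

```python
import torch

# Exponentially decayed running numerator/denominator, updated once per
# token, in the spirit of a linear-attention recurrence. During a ctx-1024
# training run the model only ever sees the state reached by step ~1024;
# at inference the state keeps drifting beyond that.
torch.manual_seed(0)
decay = 0.999                  # hypothetical per-channel decay close to 1
num = torch.zeros(64)          # running weighted sum of values
den = torch.zeros(64)          # running weight mass

for t in range(1, 16385):
    k = torch.randn(64).exp()  # positive "key" weight for this token
    v = torch.randn(64)        # value for this token
    num = decay * num + k * v
    den = decay * den + k
    if t in (1024, 4096, 16384):
        print(f"step {t:5d}  mean(den) = {den.mean().item():.1f}")
# mean(den) keeps growing well past step 1024, i.e. past anything a
# ctx-1024 training run would have exposed the model to.
```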
- [R] RWKV 14B ctx8192 is a zero-shot instruction-follower without finetuning, 23 token/s on 3090 after latest optimization (16G VRAM is enough, and you can stream layers to save more VRAM)
  Someone in the RWKV Discord tried it using LoRA (https://github.com/Blealtan/RWKV-LM-LoRA) and the result is quite nice. Join the RWKV Discord for the latest updates :)
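For context, here is a minimal sketch of the LoRA idea that RWKV-LM-LoRA applies to RWKV's projection matrices: keep the pretrained weight frozen and learn a low-rank residual on top. The class name, `rank`/`alpha` defaults, and initialization are illustrative, not the repo's actual API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # lora_b is zero-initialised, so at step 0 this is exactly the
        # pretrained layer; training only moves the low-rank residual.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

# Usage sketch: wrap one projection and fine-tune only the adapter weights.
layer = LoRALinear(nn.Linear(1024, 1024), rank=8)
trainable = [p for p in layer.parameters() if p.requires_grad]  # lora_a, lora_b
```

Because only the small adapter matrices receive gradients and optimizer state, fine-tuning a large RWKV checkpoint this way needs far less VRAM than full fine-tuning.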
- [P] RWKV 14B is a strong chatbot despite being trained only on the Pile (16G VRAM for 14B ctx4096 INT8, more optimizations incoming)
What are some alternatives?
ChatRWKV - ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source.
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNN and transformer: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embedding.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
norse - Deep learning with spiking neural networks (SNNs) in PyTorch.
faster-rwkv
RWKV-CUDA - The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )
web-rwkv - Implementation of the RWKV language model in pure WebGPU/Rust.
RWKV-infctx-trainer - RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond!