RWKV-LM vs text-generation-webui

| | RWKV-LM | text-generation-webui |
|---|---|---|
| Mentions | 85 | 884 |
| Stars | 13,421 | 42,951 |
| Growth | 2.7% | 2.7% |
| Activity | 8.9 | 9.7 |
| Latest commit | 7 days ago | 10 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RWKV-LM
-
Ask HN: Is anybody building an alternative transformer?
You can see all the development directly from them: https://github.com/BlinkDL/RWKV-LM
Version 7 was released last week, and every release brings significant improvements.
-
Do LLMs need a context window?
https://github.com/BlinkDL/RWKV-LM#rwkv-discord-httpsdiscord... lists a number of implementations of various versions of RWKV.
https://github.com/BlinkDL/RWKV-LM#rwkv-parallelizable-rnn-w... :
> RWKV: Parallelizable RNN with Transformer-level LLM Performance (pronounced as "RwaKuv", from 4 major params: R W K V)
> RWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the "GPT" mode to quickly compute the hidden state for the "RNN" mode.
> So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding (using the final hidden state).
> Our latest version is RWKV-6.
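The "hidden state at position t is all you need for position t+1" property is what makes RNN-mode inference cheap. A toy sketch of that recurrence (all shapes and weights here are made up for illustration; the real RWKV blocks, time-mix and channel-mix, are far more structured):

```python
import numpy as np

# Toy sketch of RNN-mode inference: each token is consumed using only the
# previous hidden state, so per-token cost and memory stay constant no
# matter how long the sequence gets.

def rnn_step(state, token_emb, W):
    # state carries everything needed from positions 0..t-1
    new_state = np.tanh(W["state"] @ state + W["input"] @ token_emb)
    logits = W["out"] @ new_state
    return new_state, logits

rng = np.random.default_rng(0)
d = 8  # hypothetical hidden size
W = {k: rng.standard_normal((d, d)) * 0.1 for k in ("state", "input", "out")}
state = np.zeros(d)
for _ in range(5):                     # stream tokens one at a time
    token_emb = rng.standard_normal(d)
    state, logits = rnn_step(state, token_emb, W)
print(state.shape, logits.shape)
```

The final `state` is also what the quoted README means by a "free sentence embedding": it is a fixed-size summary of everything read so far.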
- People who've used RWKV, whats your wishlist for it?
- Paving the way to efficient architectures: StripedHyena-7B
-
Understanding Deep Learning
That is not true. There are RNNs with transformer/LLM-like performance. See https://github.com/BlinkDL/RWKV-LM.
-
Q-Transformer: Scalable Reinforcement Learning via Autoregressive Q-Functions
This is what RWKV (https://github.com/BlinkDL/RWKV-LM) was made for, and what it will be good at.
Wow. Pretty darn cool! <3 :'))))
-
Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
Thanks for the support! Two weeks ago, I'd have said longer contexts on small on-device LLMs were at least a year away, but developments from last week seem to indicate that they're well within reach. Once the low-hanging product features are done, I think it's a worthy problem to spend a couple of weeks, or perhaps even months, on. Speaking of context lengths, recurrent models like RWKV technically have infinite context lengths, but in practice the context slowly fades away after a few thousand tokens.
-
"If you see a startup claiming to possess top-secret results leading to human level AI, they're lying or delusional. Don't believe them!" - Yann LeCun, on the conspiracy theories of "X company has reached AGI in secret"
This is the reason there are only a few AI labs, and they show little of the theoretical and scientific understanding you believe is required. Go check their code; there's nothing there. Even the transformer, with its attention heads and other architectural elements, turns out not to do anything special, and it is less efficient than RNNs (see https://github.com/BlinkDL/RWKV-LM).
-
The Secret Sauce behind 100K context window in LLMs: all tricks in one place
I've been pondering the same thing, as simply extending the context window in a straightforward manner would lead to a significant increase in computational resources. I've had the opportunity to experiment with Anthropic's 100K model, and it's evident that they're employing some clever techniques to make it work, albeit with some imperfections. One interesting observation is that their prompt guide recommends placing instructions after the reference text when inputting lengthy text bodies. I noticed that the model often disregarded the instructions if they were placed beforehand. It's clear that the model doesn't allocate the same level of "attention" to all parts of the input across the entire context window.
Moreover, the inability to cache transformers makes the use of large context windows quite costly, as all previous messages must be sent with each call. In this context, the RWKV-LM project on GitHub (https://github.com/BlinkDL/RWKV-LM) might offer a solution. They claim to achieve performance comparable to transformers using an RNN, which could potentially handle a 100-page document and cache it, thereby eliminating the need to process the entire document with each subsequent query. However, I suspect RWKV might fall short in handling complex tasks that require maintaining multiple variables in memory, such as mathematical computations, but it should suffice for many scenarios.
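A back-of-the-envelope sketch of that caching argument, with made-up token counts:

```python
# Hypothetical numbers illustrating why stateless transformer APIs get
# expensive with long documents: the full context is re-sent (and
# re-processed) on every call, while an RNN like RWKV could in principle
# read the document once and cache its fixed-size hidden state.

doc_tokens = 50_000      # a ~100-page document (rough assumption)
query_tokens = 200       # per follow-up question
num_queries = 10

transformer_cost = num_queries * (doc_tokens + query_tokens)
rnn_cost = doc_tokens + num_queries * query_tokens  # document read once

print(transformer_cost, rnn_cost)  # 502000 vs 52000 tokens processed
```

Under these assumptions the RNN-style setup processes roughly a tenth of the tokens, and the gap widens with every additional query.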
On a related note, I believe Anthropic's Claude is somewhat underappreciated. In some instances, it outperforms GPT4, and I'd rank it somewhere between GPT4 and Bard overall.
-
Meta's plan to offer free commercial AI models puts pressure on Google, OpenAI
> The only reason open-source LLMs have a heartbeat is they’re standing on Meta’s weights.
Not necessarily.
RWKV, for example, is a different architecture that wasn't based on Facebook's weights whatsoever. I don't know where BlinkDL (the author) got the training data, but they seem to have done everything mostly independently otherwise.
https://github.com/BlinkDL/RWKV-LM
disclaimer: I've been doing a lot of work lately on an implementation of CPU inference for this model, so I'm obviously somewhat biased since this is the model I have the most experience in.
text-generation-webui
-
1,156 Questions Censored by DeepSeek
total time = 392339.02 ms / 2221 tokens
And my exact command was:
llama-server --model DeepSeek-R1-UD-Q2_K_XL-00001-of-00005.gguf --temp 0.6 -c 9000 --min-p 0.1 --top-k 0 --top-p 1 --timeout 3600 --slot-save-path ~/llama_kv_path --port 8117 -ctk q8_0
(IIRC the slot-save-path argument does nothing here and is superfluous, but I have been pasting a similar command around and have been too lazy to remove it.) -ctk q8_0 quantizes the K cache to 8-bit, which reduces memory use a bit for context.
I think my 256 GB of RAM is right at the limit of spilling a bit into swap, so I'm pushing the limits :)
To explain for anyone not aware of llama-server: it exposes a (somewhat) OpenAI-compatible API, and you can then use it with any software that speaks that protocol. llama-server itself also has a UI, but I haven't used it.
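For example, a minimal client sketch against that API (the port matches the `--port 8117` flag in the command above; only building the request here, since sending it needs the server running):

```python
import json, urllib.request

# Build a request for llama-server's OpenAI-style chat endpoint.
def build_request(prompt, port=8117, temperature=0.6):
    payload = {"messages": [{"role": "user", "content": prompt}],
               "temperature": temperature}
    return urllib.request.Request(
        f"http://127.0.0.1:{port}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Hello")
# Actually sending it requires the server to be up:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
print(req.full_url)
```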
I had some SSH tunnels set up to use the server interface with https://github.com/oobabooga/text-generation-webui where I hacked an "OpenAI" client to it (that UI doesn't have it natively). The only reason I use the oobabooga UI is out of habit so I don't recommend this setup to others.
-
DeepSeek-R1 with Dynamic 1.58-bit Quantization
Can't this kind of repetition be dealt with at the decoder level, like for any models? (see DRY decoder for instance: https://github.com/oobabooga/text-generation-webui/pull/5677)
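The DRY sampler linked above is fairly involved (it penalizes repeated *sequences*), but the hook point is the same as for the classic repetition penalty; a minimal sketch of that simpler, decoder-level mechanism:

```python
import numpy as np

# Classic repetition penalty: down-weight the logits of tokens that have
# already been generated, before sampling the next token.
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    logits = logits.copy()
    for tok in set(generated_ids):
        if logits[tok] > 0:
            logits[tok] /= penalty   # shrink positive logits
        else:
            logits[tok] *= penalty   # push negative logits further down
    return logits

logits = np.array([2.0, 1.0, -0.5, 0.3])
out = apply_repetition_penalty(logits, generated_ids=[0, 2])
print(out)  # tokens 0 and 2 are now less likely; 1 and 3 untouched
```

This runs entirely outside the model weights, which is why such fixes can be applied to any model at the decoding stage.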
-
I Run LLMs Locally
Still nothing better than oobabooga (https://github.com/oobabooga/text-generation-webui) in terms of a maximalist/"Pro"/"Prosumer" LLM UI/UX, à la Blender, Photoshop, Final Cut Pro, etc.
Embarrassing, and any VCs reading this can contact me to talk about how to fix that. lm-studio is today the closest competition (but not close enough), and Adobe or Microsoft could do it if they fired the current folks who prevent this from happening.
If you're not using Oobabooga, you're likely not playing with the settings on models, and if you're not playing with your models' settings, you're hardly even scratching the surface of their total capabilities.
-
Yi-Coder: A Small but Mighty LLM for Code
I understand your situation. It sounds super simple to me now, but I remember having to spend at least a week trying to grasp the concepts and figure out what prerequisite knowledge I would need, somewhere on the continuum between just using ChatGPT and learning the relevant vector math. Fortunately, it's much closer to the ChatGPT side. I don't like ollama per se (because I can't reuse its models, since it compresses them into its own format), but it's still a very good place to start. Any interface that lets you download models as GGUF from Hugging Face will do just fine. Don't be turned off by the roleplaying/waifu-sounding frontend names; they are all fine. This is what I mostly prefer: https://github.com/oobabooga/text-generation-webui
- XTC: An LLM sampler that boosts creativity, breaks writing clichés
-
Codestral Mamba
Why do people recommend this instead of the much better oobabooga text-gen-webui?
https://github.com/oobabooga/text-generation-webui
It's like you hate settings, features, and access to many backends!
-
Why I made TabbyAPI
The issue is running the model. Exl2 is part of the ExllamaV2 library, but to run a model, a user needs an API server. The only option out there was text-generation-webui (TGW), a program that bundled every loader out there into a Gradio webui. Gradio is a common "building-block" UI framework for Python development and is often used for AI applications. This setup was good for a while, until it wasn't.
-
Take control! Run ChatGPT and Github Copilot yourself!
What I described here is the most effective workflow I've found for me. There are multiple ways to run open-source models locally worth mentioning, like Oobabooga WebUI or LM Studio; however, I didn't find them as seamless or as good a fit for my workflow.
-
Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
Some of the tools offer a path to doing tool use (fetching URLs and doing things with them) or RAG (searching your documents). I think Oobabooga https://github.com/oobabooga/text-generation-webui offers the latter through plugins.
Our tool, https://github.com/transformerlab/transformerlab-app also supports the latter (document search) using local llms.
-
Ask HN: How to get started with local language models?
You can use webui https://github.com/oobabooga/text-generation-webui
Once you get a version up and running, make a copy before you update it, as updates have broken my working version several times and caused headaches.
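That backup-before-update habit can be scripted; a minimal sketch (the directory name is hypothetical, adjust to wherever you cloned the repo):

```python
import datetime, pathlib, shutil

# Copy the whole install directory to a dated backup before pulling
# updates, so a broken update is easy to roll back.
src = pathlib.Path("text-generation-webui")  # hypothetical install path
stamp = datetime.date.today().isoformat()
backup = src.with_name(f"{src.name}-backup-{stamp}")
if src.exists() and not backup.exists():
    shutil.copytree(src, backup)
print(backup.name)
```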
a decent explanation of parameters outside of reading archive papers: https://github.com/oobabooga/text-generation-webui/wiki/03-%...
an AI news website:
What are some alternatives?
flash-attention - Fast and memory-efficient exact attention
ollama - Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, and other large language models.
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
koboldcpp - Run GGUF models easily with a KoboldAI UI. One File. Zero Install.
SillyTavern - LLM Frontend for Power Users.