RWKV-LM vs stable-diffusion-webui

| | RWKV-LM | stable-diffusion-webui |
|---|---|---|
| Mentions | 84 | 2,808 |
| Stars | 11,657 | 129,975 |
| Growth | - | - |
| Activity | 8.8 | 9.9 |
| Last commit | 8 days ago | 8 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
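The site doesn't publish its exact scoring formula, but the description above implies something like a recency-weighted commit count. Here is a minimal sketch of that idea; the half-life constant and the function name are assumptions for illustration, not the tracker's actual code:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Hypothetical recency-weighted commit count: a commit from today
    adds ~1.0, one from half_life_days ago adds 0.5, and so on."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 2.0 ** (-age_days / half_life_days)
    return score
```

Projects would then be ranked by such a score, with an activity of 9.0 corresponding to the 90th percentile.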
RWKV-LM
- Do LLMs need a context window?
https://github.com/BlinkDL/RWKV-LM#rwkv-discord-httpsdiscord... lists a number of implementations of various versions of RWKV.
https://github.com/BlinkDL/RWKV-LM#rwkv-parallelizable-rnn-w... :
> RWKV: Parallelizable RNN with Transformer-level LLM Performance (pronounced as "RwaKuv", from 4 major params: R W K V)
> RWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the "GPT" mode to quickly compute the hidden state for the "RNN" mode.
> So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding (using the final hidden state).
> "Our latest version is RWKV-6,*
- People who've used RWKV, what's your wishlist for it?
- Paving the way to efficient architectures: StripedHyena-7B
- Understanding Deep Learning
That is not true. There are RNNs with transformer/LLM-like performance. See https://github.com/BlinkDL/RWKV-LM.
- Q-Transformer: Scalable Reinforcement Learning via Autoregressive Q-Functions
This is what RWKV (https://github.com/BlinkDL/RWKV-LM) was made for, and what it will be good at.
Wow. Pretty darn cool! <3 :'))))
- Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
Thanks for the support! Two weeks ago, I'd have said longer contexts on small on-device LLMs were at least a year away, but developments from last week suggest they're well within reach. Once the low-hanging product features are done, I think it's a problem worth spending a couple of weeks, or perhaps even months, on. Speaking of context lengths, recurrent models like RWKV technically have an infinite context length, but in practice the context slowly fades after a few thousand tokens.
- "If you see a startup claiming to possess top-secret results leading to human level AI, they're lying or delusional. Don't believe them!" - Yann LeCun, on the conspiracy theories of "X company has reached AGI in secret"
This is the reason there are only a few AI labs, and they show little of the theoretical and scientific understanding you believe is required. Go check their code; there's nothing there. Even the transformer, with its heads and other architectural elements, turns out not to do anything special, and it is less efficient than RNNs (see https://github.com/BlinkDL/RWKV-LM).
- The Secret Sauce behind 100K context window in LLMs: all tricks in one place
I've been pondering the same thing, since simply extending the context window in a straightforward manner would lead to a significant increase in computational cost. I've had the opportunity to experiment with Anthropic's 100k model, and it's evident that they're employing some clever techniques to make it work, albeit with some imperfections. One interesting observation: their prompt guide recommends placing instructions after the reference text when inputting lengthy text bodies, and I noticed the model often disregarded instructions placed beforehand. Clearly the model doesn't allocate the same level of "attention" to all parts of the input across the entire context window.
Moreover, the inability to cache transformer state between calls makes large context windows quite costly, as all previous messages must be re-sent with each call. Here, the RWKV-LM project on GitHub (https://github.com/BlinkDL/RWKV-LM) might offer a solution: they claim transformer-comparable performance from an RNN, which could ingest a 100-page document once and cache the resulting hidden state, eliminating the need to re-process the entire document with each subsequent query. I suspect RWKV might still fall short on complex tasks that require keeping multiple variables in memory, such as mathematical computations, but it should suffice for many scenarios.
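To sketch that caching workflow, here is a minimal example with a stub standing in for the network; the `forward(token, state)` shape mirrors how recurrent LMs like RWKV are typically driven, but the names are illustrative, not RWKV-LM's actual API:

```python
import pickle

class TinyRecurrentLM:
    """Stub recurrent LM: one float of state instead of real tensors."""
    def forward(self, token, state):
        new_state = (state if state is not None else 0.0) * 0.99 + token
        return None, new_state  # (logits, state); logits unused here

def ingest(model, tokens, state=None):
    # Feed tokens one at a time; only the fixed-size state is kept.
    for t in tokens:
        _, state = model.forward(t, state)
    return state

model = TinyRecurrentLM()
document_tokens = list(range(1000))  # pretend: the 100-page document
question_tokens = [7.0, 8.0, 9.0]    # pretend: a follow-up question

# Pay for the long document once and cache the resulting state...
doc_state = ingest(model, document_tokens)
with open("doc_state.pkl", "wb") as f:
    pickle.dump(doc_state, f)

# ...then every later query resumes from the cache, instead of
# re-sending the whole document the way a stateless transformer
# API requires.
with open("doc_state.pkl", "rb") as f:
    cached = pickle.load(f)
answer_state = ingest(model, question_tokens, cached)
```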
On a related note, I believe Anthropic's Claude is somewhat underappreciated. In some instances it outperforms GPT-4, and I'd rank it somewhere between GPT-4 and Bard overall.
- Meta's plan to offer free commercial AI models puts pressure on Google, OpenAI
> The only reason open-source LLMs have a heartbeat is they’re standing on Meta’s weights.
Not necessarily.
RWKV, for example, is a different architecture that isn't based on Facebook's weights at all. I don't know where BlinkDL (the author) got the training data, but otherwise they seem to have done everything largely independently.
https://github.com/BlinkDL/RWKV-LM
disclaimer: I've been doing a lot of work lately on an implementation of CPU inference for this model, so I'm obviously somewhat biased, since this is the model I have the most experience with.
- Eliezer Yudkowsky - open letter on AI
I think the main concern is that the resources poured into refining and improving LLMs can then be reused by projects that do go the extra mile and create things that are more than just LLMs. For example, RWKV is similar to an LLM but updates its recurrent state after every processed token, letting it remember things longer-term without the use of 'context tokens'.
stable-diffusion-webui
- Show HN: I made an app to use local AI as daily driver
* LLaVA model: I'll add more documentation. You're right that LLaVA can't generate images. I don't have immediate plans for image generation, but check out these projects for local image generation:
- https://diffusionbee.com/
- https://github.com/comfyanonymous/ComfyUI
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
- AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I would love to have a native Stable Diffusion experience; my RX 580 takes 30s to generate a single image. But it does work after following https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...
I got this up and running on my Windows machine in short order, and I don't even know what Stable Diffusion is.
But again, it would be nice to have first class support to locally participate in the fun.
- Ask HN: What is the state of the art in AI photo enhancement?
In Auto1111, that just uses Image.blend. :)
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob...
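For reference, PIL's Image.blend is a plain per-pixel linear interpolation, out = im1 * (1 - alpha) + im2 * alpha; a minimal example, with placeholder filenames:

```python
from PIL import Image

base = Image.open("original.png").convert("RGB")
enhanced = Image.open("enhanced.png").convert("RGB").resize(base.size)

# Both images must share mode and size; alpha=0.5 averages the two.
result = Image.blend(base, enhanced, alpha=0.5)
result.save("blended.png")
```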
- How To Increase Performance Time on MacOS
- Can anyone suggest an AI model that can help me enhance a poorly drawn logo?
I used SDXL in automatic1111 webui for both images. Now that I think about it, the procedure I described was how I made this one, but the one that looks like an illustration was done in two steps. I used the canny ControlNet as I said for the outer part of the logo to preserve the shape of the fonts, but I had to turn it off for the boot to give SDXL leeway to add detail and make it look more like a boot.
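The comment describes doing this inside the webui, but roughly the same two-step idea can be sketched with the diffusers library. The model IDs below are the public SDXL base and canny ControlNet checkpoints; the filenames, prompt, and conditioning scale are illustrative:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Step 1: a canny edge map of the logo pins SDXL to the original
# shapes (e.g., the lettering).
logo = cv2.cvtColor(cv2.imread("logo.png"), cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(logo, 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Step 2: a low conditioning scale (or a second pass with the
# ControlNet turned off entirely) gives SDXL leeway to add detail.
out = pipe(
    "a leather boot logo, detailed illustration",
    image=canny_image,
    controlnet_conditioning_scale=0.5,
).images[0]
out.save("logo_restyled.png")
```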
- Seeking out an experienced and empathetic coding buddy.
That said, please do learn coding, and don't get discouraged when somebody says to learn PyTorch or recommends a Jupyter notebook with no further information on how to translate the skill into images. I would highly recommend some short-term goals. Get your feet wet by taking apart the UIs. The Comfy API documentation is here and the A1111 API documentation is here. There is a difference in completeness; welcome to programming. Writing nodes or plugins is also a good way to jump into this world. Custom wildcard logic might be very attractive to you if you aren't the type that wants to deal with a nested file structure to simulate logic.
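As a concrete starting point for the API route: when the A1111 webui is launched with the --api flag, it exposes REST endpoints such as /sdapi/v1/txt2img, and a basic call is only a few lines (the prompt and output path are placeholders):

```python
import base64
import requests

payload = {
    "prompt": "a watercolor fox, forest background",
    "steps": 20,
    "width": 512,
    "height": 512,
}
# Default local address; the response carries base64-encoded images.
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
with open("out.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```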
- can't get it working with an AMD gpu
- SD extension that allows for setting override
Possibly Unprompted? https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8094
- Need to write an application to use Stable Diffusion on my desktop PC - which resource should I learn to use?
- 4090 Speed Decrease on each Generation/Iteration
version: v1.6.1 • python: 3.10.13 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: 6e8d4871f8
What are some alternatives?
llama - Inference code for Llama models
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
alpaca-lora - Instruct-tune LLaMA on consumer hardware
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
flash-attention - Fast and memory-efficient exact attention
SHARK - SHARK - High Performance Machine Learning Distribution
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
gpt4all - gpt4all: run open-source LLMs anywhere
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
RWKV-CUDA - The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )
safetensors - Simple, safe way to store and distribute tensors