mamba

| | mamba | stable_diffusion.openvino |
|---|---|---|
| Mentions | 15 | 47 |
| Stars | 9,506 | 1,525 |
| Growth | 15.3% | - |
| Activity | 8.1 | 0.8 |
| Latest commit | 9 days ago | 7 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
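The activity metric above weights recent commits more heavily than older ones. The exact formula the tracker uses is not published; a minimal sketch, assuming a simple exponential-decay weighting by commit age, might look like this:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy activity-style score: each commit contributes a weight of
    0.5 ** (age / half_life), so a commit from today counts fully and
    one from a half-life ago counts half as much. The half-life value
    is an arbitrary assumption for illustration."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)
```

Under this sketch, a project with a burst of recent commits scores higher than one with the same number of commits spread far in the past.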
mamba
-
Based: Simple linear attention language models
> how can the recall grow unbounded with no tradeoff
This? https://github.com/state-spaces/mamba/issues/175
-
Mamba: The Easy Way
If you want to learn this stuff as a computer engineer, you can read the code here [0]. I find the math quite helpful.
[0]: https://github.com/state-spaces/mamba
- FLaNK Stack 05 Feb 2024
- Introduction to State Space Models (SSM)
-
Fortran inference code for the Mamba state space language model
This model was discussed recently: https://news.ycombinator.com/item?id=38522428 It's a new kind of ML model architecture that can be used instead of a transformer in LLMs.
See also the original repo from the paper: https://github.com/state-spaces/mamba
-
Mamba outperforms transformers "everywhere we tried"
[2] - https://github.com/state-spaces/mamba
Out of curiosity, does anyone feel as though there's any benefit to linking to reddit when we can link to whatever the link is? I for one do not click the link and read discussion on reddit - if I wanted that sort of discussion, I would browse there, not HN.
- GitHub – State-Spaces/Mamba
-
Generate valid JSON with Mamba models
The library is compatible with any auto-regressive model, not just transformers. To prove the point, we integrated Mamba, a new state-space model architecture, into the library. Try it out!
-
[D] Thoughts on Mamba?
I ran Karpathy's NanoGPT with self-attention replaced by Mamba on his TinyShakespeare dataset, and within 5 minutes it started spitting out the following:
-
Mamba-Chat: A Chat LLM based on State Space Models
You might have come across the Mamba paper in the last few days; it was the first attempt at scaling state space models up to 2.8B parameters to work on language data.
stable_diffusion.openvino
- FLaNK Stack 05 Feb 2024
-
Installing A1111 Stable Diffusion Error
It might be the --xformers flag; try getting rid of that, since you're not using CUDA you wouldn't be able to run it with xformers. You could also try --use-cpu all. You can also check out https://github.com/bes-dev/stable_diffusion.openvino; it's probably your best option if you're using a CPU. If your PC graphics are an Intel UHD 620, then you don't have a dedicated GPU, and optimized CPU inference would be the best way to run it.
- 4 Reasons to Switch to Intel Arc GPUs
-
why is SD not actually using the GPU?
SD can be run on a CPU without a GPU. I know for certain it can be done with OpenVINO; in fact, on some i7s it will run at around 3 seconds per iteration. There was a reddit SD thread a while back saying it can be done with Automatic1111. Also, some recent threads on problems with AMD GPUs suggest Automatic1111 is using the CPU rather than the intended GPU. (Fortunately, I have a GPU, so I don't have to deal with it myself!)
-
Slow Performance on RX 6800 XT; Am I Doing Something Wrong or is ROCm Just this Slow?
I'm not actually entirely convinced that it's even using the GPU. Radeontop shows 0% utilization while the images are generating. Additionally, the listed iteration speed should be impossibly slow for any GPU; it says 26.58s/it, which is slower than just running on a CPU.
-
How can i fix it?
iGPUs are, in short, not supported. There's this repo that may or may not help you, but even if it did, I wouldn't expect much.
-
Stable Diffusion Web UI for Intel Arc
You can also run it natively on Windows with OpenVINO; there is a barebones webui for it in one of the forks. It requires setting cpu to gpu in one of the files. https://github.com/bes-dev/stable_diffusion.openvino
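For anyone trying the CPU route the comments above point to, the repo is basically a clone-and-run affair. A sketch of the typical invocation (flag names from memory of the README; check the repo before relying on them):

```shell
# Clone the OpenVINO port and install its dependencies.
git clone https://github.com/bes-dev/stable_diffusion.openvino
cd stable_diffusion.openvino
pip install -r requirements.txt

# Run inference on CPU; --prompt is the text prompt, and the
# generated image is written to the working directory.
python demo.py --prompt "red sports car on a mountain road"
```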
-
Intel Arc A770 is underperforming in Tom's Hardware Review
In https://github.com/bes-dev/stable_diffusion.openvino/blob/master/stable_diffusion_engine.py
-
So a new benchmark was done for Stable Diffusion on GPU's
" We ended up using three different Stable Diffusion projects for our testing, mostly because no single package worked on every GPU. For Nvidia, we opted for Automatic 1111's webui version(opens in new tab). AMD GPUs were tested using Nod.ai's Shark version(opens in new tab), while for Intel's Arc GPUs we used Stable Diffusion OpenVINO(opens in new tab). "
- Anyone here using Mac?
What are some alternatives?
miniforge - A conda-forge distribution.
stable-diffusion
pip - The Python package installer
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
llm.f90 - LLM inference in Fortran
stable-diffusion
conda - A system-level, binary package and environment manager running on all major operating systems and platforms.
stable-diffusion-rocm
mamba-chat - Mamba-Chat: A chat LLM based on the state-space model architecture 🐍
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
spack - A flexible package manager that supports multiple versions, configurations, platforms, and compilers.
stable-diffusion - A latent text-to-image diffusion model