llama
SHARK
| | llama | SHARK |
|---|---|---|
| Mentions | 3 | 84 |
| Stars | 35 | 1,382 |
| Growth | - | 4.1% |
| Activity | 1.6 | 9.4 |
| Latest commit | about 1 year ago | 4 days ago |
| Language | Python | |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama
-
Alpaca: An Instruct-Tuned LLaMA 7B. Responses on par with text-davinci-003. Demo up
> All the magic of "7B LLaMA running on a potato" seems to involve lowering precision down to f16 and then further quantizing to int4.
LLaMA weights are f16 to start with, so no lowering is necessary to get there.
You can stream weights from RAM to the GPU pretty efficiently. If you have >=32 GB of RAM and >=2 GB of VRAM, my code here should work for you: https://github.com/gmorenz/llama/tree/gpu_offload
There's probably a cleaner version of it somewhere else. Really you should only need >=16 GB of RAM, but the (Meta-provided) code that loads the initial weights unnecessarily makes two copies of the weights in RAM at the same time.
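A minimal sketch of that RAM-to-GPU streaming idea, assuming a PyTorch-style model whose transformer blocks live in host RAM (this is not the actual gmorenz/llama code, just the shape of the technique):

```python
import torch

@torch.inference_mode()
def forward_with_offload(layers, hidden, device="cuda"):
    """Run a stack of transformer blocks while keeping only one block's
    weights in VRAM at a time; the rest stay in host RAM."""
    hidden = hidden.to(device)
    for layer in layers:
        layer.to(device)        # copy this block's weights into VRAM
        hidden = layer(hidden)  # compute on the GPU
        layer.to("cpu")         # move the block back so VRAM is free for the next one
    return hidden
```

Pinned host memory and overlapping the copy for block i+1 with the compute for block i are the usual next steps, but the basic loop is already enough to run a model much larger than VRAM.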
-
LLaMA-7B in Pure C++ with full Apple Silicon support
My code for this is very much not high quality, but I have a CPU + GPU + SSD combination: https://github.com/gmorenz/llama/tree/ssd
Usage instructions in the commit message: https://github.com/facebookresearch/llama/commit/5be06e56056...
At least with my hardware this runs at "[size of model]/[speed of SSD reads]" seconds per token, which (up to some possible further memory reduction so you can run larger batches at once on the same GPU) is as good as it gets when you need to read the whole model from disk for each token.
At 125 GB and a 2 GB/s read (largest model, what I get from my SSD), that's about 60 seconds per token (roughly 1,440 tokens per day), which isn't exactly practical. That is really the issue here: if you need to stream the model from an SSD because you don't have enough RAM, it is just a fundamentally slow process.
You could probably optimize quite a bit for batch throughput if you're ok with the latency though.
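A quick back-of-the-envelope check of those figures, using only the numbers from the comment above (nothing here is measured):

```python
model_size_gb = 125        # largest LLaMA checkpoint, per the comment above
ssd_read_gb_s = 2          # sequential SSD read speed, per the comment above

seconds_per_token = model_size_gb / ssd_read_gb_s     # ~62.5 s per token
tokens_per_day = 24 * 60 * 60 / seconds_per_token     # ~1,400 tokens per day

print(f"{seconds_per_token:.1f} s/token, ~{tokens_per_day:.0f} tokens/day")
```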
-
Llama-CPU: Fork of Facebook's LLaMA model to run on CPU
I don't know about this fork specifically, but in general yes absolutely.
Even without enough RAM, you can stream model weights from disk and run at [size of model / disk read speed] seconds per token.
I'm doing that on a small GPU with this code, but it should be easy to get it working with the CPU doing the compute instead (and at least with my disk/CPU, I'm not even sure it would run any slower; disk reads would probably still be the bottleneck):
https://github.com/gmorenz/llama/tree/ssd
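A minimal sketch of the disk-streaming idea (the per-layer file layout and the function name here are hypothetical, not how gmorenz/llama actually stores or loads weights):

```python
import numpy as np
import torch

# Hypothetical layout: one raw float16 file per weight matrix.
def linear_from_disk(x, path, out_features, in_features):
    """Apply one linear layer whose weights are read from disk on demand.
    Only this shard is held in RAM during the matmul and it is dropped
    afterwards, so peak memory stays near one layer, not the whole model."""
    w = np.memmap(path, dtype=np.float16, mode="r",
                  shape=(out_features, in_features))
    w_t = torch.from_numpy(np.array(w)).float()  # copy the shard into RAM, compute in fp32 on CPU
    return x @ w_t.T                             # x: (batch, in_features), float32
```

With this scheme the time per token is dominated by the disk reads, which is exactly the [size of model / disk read speed] figure quoted above.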
SHARK
- Llama 2 on ONNX runs locally
-
[D] Confusion over AMD GPU Ai benchmarking
https://github.com/AUTOMATIC1111/stable-diffusion-webui and https://github.com/nod-ai/SHARK are the repos for the open-source tools mentioned. u/CeFurkan has really nice tutorial videos on YouTube for Stable Diffusion. Automatic1111 is the most popular open-source Stable Diffusion UI and currently has the biggest open-source plug-in ecosystem.
Nvidia's compute driver is separate from the normal driver and is called CUDA; AMD's compute driver is called ROCm. Most Windows programs, such as games, use APIs like DirectX, Vulkan, Metal, or WebGPU rather than CUDA. Most ML code was originally written to run on scientific computing systems, which were Linux. Today the traditional Windows GPU APIs are trying to get better at GPU ML support. AMD has no official Windows ML code support and is hoping that other developers figure it out for them; AMD made its ML driver open source, but there is no support for consumer graphics cards. Nvidia's ML driver is proprietary, but support is guaranteed across all cards, including consumer ones.
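A rough way to see which of those compute stacks a given PyTorch install can actually use (a sketch; torch-directml is a separate optional package, and ROCm builds of PyTorch reuse the CUDA API):

```python
import torch

def describe_gpu_backend():
    """Report which GPU compute stack this PyTorch build can see."""
    if torch.cuda.is_available():
        # ROCm builds of PyTorch reuse the CUDA API; torch.version.hip is set there.
        stack = "ROCm (AMD)" if torch.version.hip else "CUDA (NVIDIA)"
        return f"{stack}: {torch.cuda.get_device_name(0)}"
    try:
        import torch_directml        # optional package for DirectML on Windows
        torch_directml.device()      # default DirectML device (DirectX 12 GPUs)
        return "DirectML (DirectX 12 GPUs on Windows)"
    except Exception:
        return "CPU only: no GPU compute backend visible to PyTorch"

print(describe_gpu_backend())
```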
-
AMD GPU not utilised
I got it working using SHARK with an AMD RX 480 on Windows 10.
-
New to SD - Slow working
Here's the link for SHARK; it's faster (uses Vulkan) than Automatic1111 with DirectML but has fewer functions: https://github.com/nod-ai/SHARK
-
7900 XTX Stable Diffusion Shark Nod Ai performance on Windows 10. Seem to have gotten a bump with the latest prerelease drivers 23.10.01.41
I would recommend trying out Nod AI's SHARK (that is the link for the most recent 786 .exe release) and seeing how it works for you. From what others have written, it does 512x512 pics at around 3 it/s, which I know isn't mind-blowing, but it's good enough to do a pic in about 30 seconds.
-
New here
Problem solved, I got it to work: I simply put Nod.ai's SHARK exe in my Stable Diffusion folder and launched it instead of webui-user -> Release nod.ai SHARK 20230623.786 · nod-ai/SHARK (github.com)
-
I built the easiest-to-use desktop application for running Stable Diffusion on your PC - and it's free for all of you
How does it compare with Shark SD (I am not affiliated with it in any way)? (https://github.com/nod-ai/SHARK)
-
after changing GPU from RX 470 4gb to RTX 3060 12GB, I decided to make a few cozy houses, and these are a few of them
You should, if you want to run SD on your card: https://github.com/nod-ai/SHARK
-
20 minute load time per image on high end pc?
Forgive me for not reading your whole comment. I suspect your version of the SD web UI doesn't recognize the AMD GPU, so you're using the CPU. AMD GPUs only work with a few web UIs. Try Nod.ai's SHARK variant.
- AMD support for Microsoft® DirectML optimization of Stable Diffusion
What are some alternatives?
llama.cpp - LLM inference in C/C++
stable-diffusion-webui - Stable Diffusion web UI
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
stable-diffusion-webui-directml - Stable Diffusion web UI
llama-mps - Experimental fork of Facebooks LLaMa model which runs it with GPU acceleration on Apple Silicon M1/M2
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
AMD-Stable-Diffusion-ONNX-FP16 - Example code and documentation on how to get FP16 models running with ONNX on AMD GPUs [Moved to: https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16]
llama - Inference code for Llama models
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.