| | stable-diffusion | tvm |
|---|---|---|
| Mentions | 20 | 16 |
| Stars | 338 | 11,216 |
| Growth | - | 1.6% |
| Activity | 0.0 | 9.9 |
| Last commit | over 1 year ago | about 10 hours ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- High-performance image generation using Stable Diffusion in KerasCV
-
Charl-e: “Stable Diffusion on your Mac in 1 click”
SD on an Intel Mac with Vega graphics runs pretty well though; I think it ran at something like ~3-5 iterations/s for me, which is decent. I ran either https://github.com/magnusviri/stable-diffusion or https://github.com/lstein/stable-diffusion, both of which have MPS support.
-
Stable Diffusion PR optimizes VRAM, generate 576x1280 images with 6 GB VRAM
https://github.com/magnusviri/stable-diffusion/commit/d0b168...
Copying this change fixed seeds on M1 for me.
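For context, the VRAM win in that PR reportedly came from splitting the attention computation into chunks. Today's Hugging Face diffusers exposes the same trade-off as enable_attention_slicing; a minimal sketch (an analogy to the linked PR, not the PR itself, and assuming the diffusers port rather than this fork; the model id is illustrative):
```python
# A sketch using Hugging Face diffusers (an analogy, not the PR above).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model id
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_attention_slicing()  # compute attention in slices: slower, much lower peak VRAM
image = pipe("a mountain lake at dawn", height=576, width=1280).images[0]
image.save("out.png")
```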
-
Intel Mac User, How do I start?
You should be able to run it on the CPU. Maybe try this version. If MPS is supported on your Mac, you can check this out.
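A quick way to find out which backend your machine supports: PyTorch exposes an MPS availability check. A minimal sketch, assuming a recent PyTorch build:
```python
# A sketch: pick MPS on Metal-capable Macs, else fall back to CPU.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")   # Metal-accelerated GPU path
else:
    device = torch.device("cpu")   # works everywhere, just slower
print(f"Running on {device}")
```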
-
[P] Run Stable Diffusion on your M1 Mac’s GPU
A group of open-source hackers forked Stable Diffusion on GitHub and optimized the model to run on Apple's M1 chip, enabling images to be generated in ~15 seconds (512x512 pixels, 50 diffusion steps).
-
Run Stable Diffusion on Your M1 Mac’s GPU
Magnusviri [0], the original author of the SD M1 repo credited in this article, has merged his fork into the lstein Stable Diffusion repo [1], and as of a few hours ago you can run the lstein fork on M1.
This adds a ton of functionality: a GUI, upscaling and face restoration, weighted subprompts, etc.
This has been a big undertaking over the last few days, and I highly recommend checking it out.
[0] https://github.com/magnusviri/stable-diffusion
[1] https://github.com/lstein/stable-diffusion
-
How are Mac people using Windows for A.I. stuff?
You can run it on an M1. Using a MacBook Pro with an M1 Max and 32 GB, I get a 512x512 image in about 50 seconds. Use this branch: https://github.com/magnusviri/stable-diffusion/tree/apple-mps-support
-
ResolvePackageNotFound
I had this error too, and I tried a ton of things to get cudatoolkit to install, without any luck. This fork has an environment-mac.yml file that actually got it working on my M1 Max: https://github.com/magnusviri/stable-diffusion/tree/apple-silicon-mps-support
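The root cause is that cudatoolkit has no macOS/arm64 build, so a Mac environment file has to leave it out and rely on PyTorch's CPU/MPS wheels instead. A hypothetical sketch of the shape such an environment-mac.yml takes (package names and pins are illustrative, not the fork's actual file):
```yaml
# Illustrative only - not the actual environment-mac.yml from the fork.
name: ldm
channels:
  - pytorch
  - conda-forge
dependencies:
  # note: no cudatoolkit here - it has no macOS/arm64 build
  - python=3.10
  - pytorch            # the CPU/MPS build is selected automatically on macOS
  - torchvision
  - numpy
  - pip
```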
-
If I set a seed value and re-run using the exact same settings, should I get the same image back each time?
But when I run it (locally, using the Mac M1 port), it creates a different image every time.
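In principle, yes: the same seed plus identical settings should reproduce the image on the same hardware and software stack. Early M1 ports sometimes failed to apply the seed on the MPS device, which is what the seed-fixing commit mentioned above addressed. A minimal sketch of explicit seeding, assuming the Hugging Face diffusers port rather than this fork:
```python
# A sketch using Hugging Face diffusers (an assumption; the fork's CLI differs).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps" if torch.backends.mps.is_available() else "cpu")

# Seed a generator explicitly; re-running with the same seed and settings
# should reproduce the image on the same hardware/software stack.
gen = torch.Generator(device="cpu").manual_seed(42)
image = pipe("a red bicycle", num_inference_steps=50, generator=gen).images[0]
image.save("seed42.png")
```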
tvm
-
Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU
Yes. Web-llm is a wrapper of tvmjs: https://github.com/apache/tvm
Just wrappers all the way down
-
Making AMD GPUs competitive for LLM inference
Yes, this is coming! Myself and others at OctoML and in the TVM community are actively working on multi-GPU support in the compiler and runtime. Here are some of the merged and active PRs on the multi-GPU (multi-device) roadmap:
Support in TVM’s graph IR (Relax) - https://github.com/apache/tvm/pull/15447
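For a sense of how TVM addresses devices at all: each GPU is a (device type, device id) pair, and the Relax work above is about placing parts of one model across several such pairs. A minimal sketch, assuming a CUDA-enabled TVM build:
```python
# A sketch: TVM addresses each GPU as a (device_type, device_id) pair.
# The multi-device Relax work teaches the compiler/runtime to place
# different parts of a model on different such devices.
import tvm

for i in range(4):  # probe the first few GPU slots
    dev = tvm.device("cuda", i)
    print(f"cuda:{i} present: {dev.exist}")
```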
-
VSL; Vlang's Scientific Library
Would it make sense to add backend support for OpenXLA, Apache TVM, Jittor, or something similar, to get GPU, TPU, and other accelerator support for free?
- Apache TVM
-
MLC LLM - "MLC LLM is a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications, plus a productive framework for everyone to further optimize model performance for their own use cases."
I have tried the iPhone app. It's fast. They're using Apache TVM, which should allow better use of native accelerators on different devices, like using Metal on Apple and Vulkan or CUDA or whatever elsewhere, instead of just running the thing on the CPU like llama.cpp.
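Backend choice in TVM boils down to a target string at compile time. A minimal sketch that tries to build one toy kernel for each backend (assumes a TVM build with the classic tensor-expression API; which targets succeed depends on how TVM was compiled):
```python
# A sketch: the same tensor program compiled for different backends.
# Each target only works if TVM was built with that backend enabled.
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

for target in ["llvm", "metal", "vulkan", "cuda"]:
    try:
        s = te.create_schedule(B.op)
        if target != "llvm":
            # GPU targets need the loop bound to block/thread axes.
            bx, tx = s[B].split(B.op.axis[0], factor=64)
            s[B].bind(bx, te.thread_axis("blockIdx.x"))
            s[B].bind(tx, te.thread_axis("threadIdx.x"))
        tvm.build(s, [A, B], target=target)
        print(f"{target}: built OK")
    except Exception as err:
        print(f"{target}: unavailable ({err.__class__.__name__})")
```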
-
ONNX Runtime merges WebGPU back end
I was going to answer the same: I find the approach of machine-learning compilers that compile models directly to host and device code better than having to bring along a huge runtime. There are exciting projects in this area, like TVM Unity [1], IREE [2], and torch.export [3].
[1] https://github.com/apache/tvm/tree/unity
[2] https://github.com/iree-org/iree
[3] https://pytorch.org/get-started/pytorch-2.0/#inference-and-e...
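Of the three, torch.export is the quickest to show in a few lines. A minimal sketch, assuming PyTorch 2.1 or later (the module and shapes are illustrative):
```python
# A sketch of torch.export: trace a module into a portable ExportedProgram.
import torch
from torch.export import export

class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

example_input = (torch.randn(1, 4),)
exported = export(TinyMLP(), example_input)
print(exported.graph)                     # the captured FX graph
out = exported.module()(*example_input)   # run the exported program
```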
-
ESP32 TensorFlow Lite
Apache TVM home page: https://tvm.apache.org/
-
Decompiling x86 Deep Neural Network Executables
It's pretty clear it's referring to the output of Apache TVM and Meta's Glow.
-
Run Stable Diffusion on Your M1 Mac’s GPU
As mentioned in sibling comments, Torch is indeed the glue in this implementation. Other glues are TVM [0] and ONNX [1].
These just cover the neural net though, and there is lots of surrounding code and pre-/post-processing that isn't covered by these systems.
For models on Replicate, we use Docker, packaged with Cog, for this stuff [2]. Unfortunately Docker doesn't run natively on Mac, so if we want to use the Mac's GPU, we can't use Docker.
I wish there was a good container system for Mac. Even better if it were something that spanned both Mac and Linux. (Not as far-fetched as it seems... I used to work at Docker and spent a bit of time looking into this...)
[0] https://tvm.apache.org/
[1] https://onnx.ai/
[2] https://github.com/replicate/cog
-
How to get started with machine learning.
Or use TVM; the idea is to compile your model into code that you can load at runtime. Like onnxruntime, it only does DNN inference, so you still need your own domain-specific code around it.
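The compile-once, load-at-runtime split looks roughly like this. A minimal sketch assuming the classic TVM tensor-expression workflow, with a toy add-one kernel standing in for a real model:
```python
# A sketch of TVM's compile-once / load-at-runtime split.
import numpy as np
import tvm
from tvm import te

# --- build step (done ahead of time) ---
n = 1024
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
tvm.build(s, [A, B], target="llvm", name="addone").export_library("addone.so")

# --- runtime step (in your application) ---
loaded = tvm.runtime.load_module("addone.so")
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
loaded["addone"](a, b)   # run the precompiled kernel
np.testing.assert_allclose(b.numpy(), a.numpy() + 1.0)
```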
What are some alternatives?
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
rocm-build - build scripts for ROCm
nebuly - The user analytics platform for LLMs