diffusionbee-stable-diffusion-ui
| | swift-diffusion | diffusionbee-stable-diffusion-ui |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 413 | 1 |
| Growth | - | - |
| Activity | 8.4 | 3.9 |
| Latest commit | about 1 month ago | about 1 year ago |
| Language | Swift | JavaScript |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
swift-diffusion
-
Show HN: Run Stable Diffusion Directly on iPhone
I am going to put the model-related code we use in a public repo soon (it is very similar to https://github.com/liuliu/swift-diffusion but in NHWC format). ANE will be around 25s if it runs. DT's default only uses GPUs, and 35s is on GPU (yes, like you said, upscaling would take an extra 10s).
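The NHWC point above is a memory-layout distinction: PyTorch checkpoints are typically NCHW (channels-first), while Apple's GPU/ANE stack generally prefers NHWC (channels-last), so tensors get transposed during porting. A minimal sketch in NumPy (used here purely for illustration; swift-diffusion itself is Swift):

```python
import numpy as np

# A PyTorch-style tensor: NCHW = (batch, channels, height, width).
x_nchw = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)

# Convert to NHWC (channels-last) by permuting axes; this is the kind of
# transformation applied to weights/activations when porting to Apple hardware.
x_nhwc = x_nchw.transpose(0, 2, 3, 1)

assert x_nhwc.shape == (2, 4, 4, 3)
# Same element, different index order: (n, c, h, w) -> (n, h, w, c).
assert x_nchw[1, 2, 3, 0] == x_nhwc[1, 3, 0, 2]
```

The data is unchanged; only the index order differs, which is why a converter can do this once at weight-conversion time.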
-
Some notes on porting SD2 over to iPhone (or other platforms)
The text encoder uses a new vocabulary set; make sure you copy it from the open_clip repo: https://github.com/mlfoundations/open_clip (I also have these available at: https://github.com/liuliu/swift-diffusion/tree/liu/unet/examples/open_clip).
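"Copying the vocabulary" here means shipping the BPE merges file the tokenizer is built from. A hedged sketch of reading a CLIP-style gzipped merges file into merge ranks — the toy file contents and the exact header convention are assumptions for illustration, not the real open_clip vocabulary:

```python
import gzip
import os
import tempfile

# Toy stand-in for a CLIP-style BPE vocabulary file (the real one ships
# in the open_clip repo as a gzipped text file; contents here are made up).
toy = "#version: 0.2\nin g\nth e\nan d\n"
path = os.path.join(tempfile.mkdtemp(), "vocab.txt.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(toy)

def load_merges(path):
    """Map each BPE merge pair to its rank (lower rank = merged earlier)."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        lines = f.read().strip().split("\n")
    merges = [tuple(line.split()) for line in lines[1:]]  # skip header line
    return {pair: rank for rank, pair in enumerate(merges)}

ranks = load_merges(path)
assert ranks[("in", "g")] == 0
assert ranks[("th", "e")] == 1
```

The ranks are what the tokenizer consults when deciding which adjacent token pair to merge next, which is why the file must match the model's training vocabulary exactly.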
-
Draw Things, Stable Diffusion in your pocket, 100% offline and free
Should be able to, if there is a need. I am more interested in supporting hypernetwork training from the app directly. The conversion script itself is open source (https://github.com/liuliu/swift-diffusion/blob/main/examples/unet/main.swift), but it is not polished, and because Apple doesn't allow you to run Python on device, I cannot make it as easy as typing a URL and being done. I need to figure out what the UX looks like without me providing a networked service ...
-
Show HN: Draw Things, Stable Diffusion in your pocket, 100% offline
Hi, this is the first app in a while (probably 10 years) that I have submitted to the App Store. I built this app in 3 weeks, so there is a lot to be polished. The technology that enables this is discussed in depth in an accompanying blog post: https://liuliu.me/eyes/stretch-iphone-to-its-limit-a-2gib-mo...
Some parts of it (major parts, even) are also available at https://github.com/liuliu/swift-diffusion. I plan to port more stuff back to swift-diffusion and make a CLI tool out of it (it is a bit more work than the app because I need to consider CUDA compatibility there).
AMA!
diffusionbee-stable-diffusion-ui
-
Draw Things, Stable Diffusion in your pocket, 100% offline and free
Got the no-unpickling weight extractor working; you can see it here. Currently everything is in the two no_pickle_ files, but I'll probably push a version that puts them into convert_model.py and fake_torch.py, with an option passed to convert_model determining whether unpickling is used. I made another branch (visible from my GitHub profile) with a proper restricted unpickler, which the forthcoming push will merge into this.
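A "restricted unpickler" in the sense described above is usually built by subclassing `pickle.Unpickler` and overriding `find_class` so that only an explicit allow-list of globals can be resolved (this pattern comes from the Python `pickle` documentation; the allow-list below is illustrative, not the one used in the linked branch):

```python
import io
import pickle
from collections import OrderedDict

# Only these (module, name) globals may be resolved during unpickling.
# A real weight loader would permit only the few classes a checkpoint needs.
ALLOWED = {("collections", "OrderedDict")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

# An allow-listed payload round-trips fine:
data = pickle.dumps(OrderedDict(a=1))
restored = RestrictedUnpickler(io.BytesIO(data)).load()
assert restored == {"a": 1}

# Anything outside the allow-list is rejected:
blocked = False
try:
    RestrictedUnpickler(io.BytesIO(pickle.dumps(print))).load()
except pickle.UnpicklingError:
    blocked = True
assert blocked
```

This is why it is safer than plain `torch.load` on untrusted checkpoints: arbitrary globals (and thus arbitrary code via `__reduce__`) can no longer be instantiated during deserialization.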
What are some alternatives?
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
fickling - A Python pickling decompiler and static analyzer
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
open_clip - An open source implementation of CLIP.