swift-coreml-diffusers
| | swift-diffusion | swift-coreml-diffusers |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 413 | 2,367 |
| Growth | - | 0.7% |
| Activity | 8.4 | 6.1 |
| Last commit | about 1 month ago | 3 months ago |
| Language | Swift | Swift |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
swift-diffusion
-
Show HN: Run Stable Diffusion Directly on iPhone
I am going to put the model-related code we use in a public repo soon (it is very similar to https://github.com/liuliu/swift-diffusion but in NHWC format). ANE will be around 25s if it runs. DT's default only uses the GPU, and 35s is on GPU (yes, like you said, upscaling would take an extra 10s).
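The comment above mentions that the ported model uses NHWC layout rather than the NCHW layout common elsewhere. As an illustrative sketch only (not Draw Things' actual code; the function name is hypothetical), converting a flat NCHW buffer to NHWC is a pure index permutation:

```swift
// Convert a flat tensor from NCHW (batch, channel, height, width)
// to NHWC (batch, height, width, channel) layout.
func nchwToNHWC(_ input: [Float], n: Int, c: Int, h: Int, w: Int) -> [Float] {
    precondition(input.count == n * c * h * w, "shape does not match buffer size")
    var output = [Float](repeating: 0, count: input.count)
    for ni in 0..<n {
        for ci in 0..<c {
            for hi in 0..<h {
                for wi in 0..<w {
                    // Row-major offsets for each layout.
                    let src = ((ni * c + ci) * h + hi) * w + wi
                    let dst = ((ni * h + hi) * w + wi) * c + ci
                    output[dst] = input[src]
                }
            }
        }
    }
    return output
}

// A 1x2x2x2 example: channel 0 holds [0,1,2,3], channel 1 holds [4,5,6,7].
let nhwc = nchwToNHWC([0, 1, 2, 3, 4, 5, 6, 7], n: 1, c: 2, h: 2, w: 2)
// nhwc == [0, 4, 1, 5, 2, 6, 3, 7]: channels are now interleaved per pixel.
```

NHWC keeps all channels of one pixel contiguous, which is generally the layout Apple's GPU/ANE stacks prefer, whereas NCHW is the PyTorch default.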
-
Some notes on porting SD2 over to iPhone (or other platforms)
The text encoder uses a new vocabulary set; make sure you copy it from the open_clip repo: https://github.com/mlfoundations/open_clip (I also have these available at: https://github.com/liuliu/swift-diffusion/tree/liu/unet/examples/open_clip)
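The step above boils down to loading the tokenizer's vocabulary as a token-to-id map. A minimal sketch, assuming a plain-text file with one token per line (the real open_clip BPE vocabulary is gzipped and paired with a byte encoder, so this is illustrative; `loadVocabulary` is a hypothetical name):

```swift
// Build a token -> id map from a vocabulary listing, one token per line.
// The id is simply the zero-based line index.
func loadVocabulary(from text: String) -> [String: Int32] {
    var vocab: [String: Int32] = [:]
    for (index, line) in text.split(separator: "\n").enumerated() {
        vocab[String(line)] = Int32(index)
    }
    return vocab
}

// Tiny inline sample standing in for the real vocabulary file.
let sample = "<start_of_text>\n<end_of_text>\nphoto</w>"
let vocab = loadVocabulary(from: sample)
// vocab["photo</w>"] == 2
```

The point of the original note is that SD2's text encoder (OpenCLIP) and SD1's (OpenAI CLIP) do not share a vocabulary, so reusing the old file silently produces wrong token ids.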
-
Draw Things, Stable Diffusion in your pocket, 100% offline and free
Should be able to, if there is a need. I am more interested in supporting training hypernetworks from the app directly. The conversion script itself is open source (https://github.com/liuliu/swift-diffusion/blob/main/examples/unet/main.swift), but not polished, and because Apple doesn't allow you to run Python on device, I cannot make it as easy as typing a URL and being done. I need to figure out what the UX looks like without me providing a networked service ...
-
Show HN: Draw Things, Stable Diffusion in your pocket, 100% offline
Hi, this is the first app in a while (probably 10 years) that I have submitted to the App Store. I built this app in 3 weeks, so there is a lot to be polished. The technology that enables it is discussed in depth in an accompanying blog post: https://liuliu.me/eyes/stretch-iphone-to-its-limit-a-2gib-mo...
Some parts of it (major parts, even) are also available at https://github.com/liuliu/swift-diffusion. I plan to port more of it back to swift-diffusion and make a CLI tool out of it (that is a bit more work than the app because I need to consider CUDA compatibility there).
AMA!
swift-coreml-diffusers
-
Show HN: Run Stable Diffusion Directly on iPhone
- [2] https://github.com/huggingface/swift-coreml-diffusers
What are some alternatives?
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
fickling - A Python pickling decompiler and static analyzer
open_clip - An open source implementation of CLIP.