| | maple-diffusion | Snappy |
|---|---|---|
| Mentions | 7 | 5 |
| Stars | 781 | 6,004 |
| Growth | - | 0.7% |
| Activity | 10.0 | 5.2 |
| Latest commit | over 1 year ago | 25 days ago |
| Language | Swift | C++ |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
maple-diffusion
- World’s first on-device demonstration of Stable Diffusion on an Android phone
-
I made a Stable Diffusion for Anime app in your Pocket! Running 100% offline on your Apple Devices (iPhone, iPad, Mac)
Yup, I used MPSGraph with Swift to make the app, based on this open-source project, Maple Diffusion: https://github.com/madebyollin/maple-diffusion
-
Stretch iPhone to Its Limit: 2GiB Stable Diffusion Model Runs Locally on Device
Yeah, running the full decoder takes a while. Though, since the "latent" is just 4 channels and pretty close to representing RGB, you can take a linear combination of the latent channels and get a basic (grainy, low-res) preview image like this [0] without much trouble. I expect you could go further and train a shallow conv-only decoder to get nicer preview results, but I'm not sure if anyone's bothered yet.
[0] https://github.com/madebyollin/maple-diffusion
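The cheap-preview trick described above can be sketched with NumPy. The 4-to-3 coefficient matrix here is purely illustrative (in practice it would be fit by regressing decoded RGB against latents); the latent shape assumes SD's usual 1/8-resolution, 4-channel layout:

```python
import numpy as np

# Hypothetical 4xHxW latent (64x64 latent for a 512x512 image)
latent = np.random.randn(4, 64, 64).astype(np.float32)

# Illustrative 4->3 linear map from latent channels to RGB.
# These coefficients are assumptions, not the decoder's weights.
LATENT_TO_RGB = np.array([
    [ 0.298,  0.207,  0.208],
    [ 0.187,  0.286,  0.173],
    [-0.158,  0.189,  0.264],
    [-0.184, -0.271, -0.473],
], dtype=np.float32)

# Contract the channel axis: (4, H, W) x (4, 3) -> (H, W, 3)
preview = np.einsum("chw,cr->hwr", latent, LATENT_TO_RGB)

# Rescale and clamp to [0, 1] for display; the result is a grainy,
# 1/8-resolution preview, far cheaper than running the full decoder.
preview = np.clip(preview * 0.5 + 0.5, 0.0, 1.0)
```

Since this is a single matrix multiply per pixel, it can run after every denoising step with negligible cost.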
- maple-diffusion is a super fast native iOS and/or Apple Silicon Mac client
- Stable Diffusion Inference on iOS
- Stable Diffusion inference on iOS / macOS using MPSGraph
Snappy
-
Why I enjoy using the Nim programming language at Reddit.
Another example of Nim being really fast is the supersnappy library. This library benchmarks faster than Google’s C or C++ Snappy implementation.
-
Stretch iPhone to Its Limit: 2GiB Stable Diffusion Model Runs Locally on Device
It doesn't destroy performance for the simple reason that nowadays memory access has higher latency than pure compute. If you need to use compute to produce some data to be stored in memory, your overall throughput could very well be faster than without compression.
There has been a lot of innovation in fast compression in recent years. Traditional compression tools like gzip or xz are geared towards a higher compression ratio, but memory compression tends to favor speed. Check out these algorithms:
* lz4: https://lz4.github.io/lz4/
* Google's snappy: https://github.com/google/snappy
* Facebook's zstd in fast mode: http://facebook.github.io/zstd/#benchmarks
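The speed-versus-ratio tradeoff the comment describes can be seen even with the standard library's zlib by varying the compression level; snappy and lz4 simply sit much further toward the speed end of the same spectrum (zlib stands in here because Snappy has no stdlib Python binding):

```python
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 5000

# Level 1 trades ratio for speed; level 9 trades speed for ratio.
for level in (1, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    print(f"level {level}: {len(data)} -> {len(out)} bytes in {dt * 1000:.2f} ms")

# Round trip is lossless at every level.
restored = zlib.decompress(zlib.compress(data, 1))
```

When the compressor is fast enough, the CPU cycles spent compressing can cost less than the memory bandwidth saved, which is the point being made about throughput above.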
-
Compression with best ratio and fast decompression
Google released Snappy, which is extremely fast and robust (both at compression and decompression), but it's definitely not nearly as good (in terms of compression ratio). Google mostly uses it for real-time compression, for example of network messages - not for long-term storage.
-
How to store item info?
Just compress it! Of course, if you use ZIP, players will be able to open the archive and change whatever they want. But you can use a less popular compression algorithm that isn't supported by the default Windows File Explorer. Snappy, for example.
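A minimal sketch of the idea: serialize the item data and compress it before writing to disk. The item record is hypothetical, and zlib stands in for Snappy (which is a third-party package in Python, not stdlib); the compress/decompress round trip is the same shape either way:

```python
import json
import zlib

# Hypothetical item record for a game save file
item = {"id": 42, "name": "Iron Sword", "durability": 100}

# Serialize, then compress; the bytes on disk are now opaque
# to a casual editor poking at the save file.
blob = zlib.compress(json.dumps(item).encode("utf-8"))

# Loading reverses both steps losslessly.
restored = json.loads(zlib.decompress(blob).decode("utf-8"))
```

Note this is obfuscation, not security: a determined player can still recognize the format and decompress it.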
- What's the best way to compress strings?
What are some alternatives?
xnu - Apple's XNU kernel, part of the Darwin operating system used in macOS and iOS
zstd - Zstandard - Fast real-time compression algorithm
stable-diffusion-webui - Stable Diffusion web UI
LZ4 - Extremely Fast Compression algorithm
List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words - List of Dirty, Naughty, Obscene, and Otherwise Bad Words
brotli - Brotli compression format
ZLib - A massively spiffy yet delicately unobtrusive compression library.
LZMA - (Unofficial) Git mirror of LZMA SDK releases
zlib-ng - zlib replacement with optimizations for "next generation" systems.
tiny_jpeg.h - Single header lib for JPEG encoding. Public domain. C99. stb style.
smaz - Small strings compression library
doboz - Compression library with a high compression ratio and very fast decompression