| | Stable-Diffusion-NCNN | intel-extension-for-transformers |
|---|---|---|
| Mentions | 8 | 3 |
| Stars | 940 | 1,970 |
| Growth | - | 4.8% |
| Activity | 4.9 | 9.9 |
| Last commit | 11 months ago | 1 day ago |
| Language | C++ | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Stable-Diffusion-NCNN
- Stable Diffusion implemented in C++ with the ncnn framework, supporting both txt2img and img2img!
- Is there an open source app for generating AI images?
  It is possible, though a bit impractical: Stable-Diffusion-NCNN
- Stable Diffusion running on a Snapdragon mobile device. Processing on device, not cloud (phone's in Flight Mode).
  Remember, this project needs 8 GB of RAM to generate a 512x512 image on a phone: https://github.com/EdVince/Stable-Diffusion-NCNN
- Qualcomm demos fastest local AI image generation with Stable Diffusion on mobile
  "World's first on-device demonstration of Stable Diffusion on an Android phone" Not really.
- Qualcomm Demos AI Art Generator Stable Diffusion Running on an Android Phone Processor, First In The World
  Someone already did it (https://github.com/EdVince/Stable-Diffusion-NCNN), so it's not the first in the world.
- world's first on-device demonstration of STABLE DIFFUSION on an android phone
- Stable Diffusion port to .NET (C#)
  https://github.com/andreae293/Stable-Diffusion.NET-NCNN This project is a simple C# port of the existing C++ Stable Diffusion implementation, using the NcnnDotNet library. Visual Studio 2022 is required to open it.
intel-extension-for-transformers
- Intel Extension for Transformers
- What do you think about LLM inference on CPUs?
- 📢 Excited to announce that https://github.com/intel/intel-extension-for-transformers v1.1 has been released. Congrats team! 🔥 Supports efficient fine-tuning and inference on Xeon SPR and Habana Gaudi. 🎯 Enables 4-bit LLM inference on Xeon (better than llama.cpp) and improves lm-eval-harness support for multiple frameworks.
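The 4-bit inference called out in the announcement rests on a general technique: storing model weights as 4-bit integers plus a small per-group scale, then dequantizing on the fly. A minimal, framework-free sketch of symmetric 4-bit weight quantization follows (illustrative only; the function names are made up here and this is not intel-extension-for-transformers' actual implementation):

```python
# Illustrative sketch of symmetric per-group 4-bit weight quantization.
# Not the library's real code; all names here are hypothetical.

def quantize_4bit(weights, group_size=4):
    """Map floats to signed 4-bit ints, one scale per group of weights."""
    quantized, scales = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        # One scale per group so the largest weight maps near +/-7.
        scale = max(abs(w) for w in group) / 7 or 1.0
        scales.append(scale)
        # Signed 4-bit range is [-8, 7]; clamp after rounding.
        quantized.append([max(-8, min(7, round(w / scale))) for w in group])
    return quantized, scales

def dequantize_4bit(quantized, scales):
    """Recover approximate floats: integer value times its group's scale."""
    return [q * s for group, s in zip(quantized, scales) for q in group]

weights = [0.12, -0.53, 0.31, 0.07, -1.4, 0.9, 0.02, -0.66]
q, s = quantize_4bit(weights)
restored = dequantize_4bit(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each weight is stored in 4 bits instead of 32, an 8x reduction before accounting for the scales, which is what makes large-model inference on CPU memory budgets practical; the rounding error per group is bounded by half the group's scale.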
What are some alternatives?
TNN - TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, and draws on the extensibility and high performance of existing open-source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome; collaborate with us to make TNN a better framework.
diffusion-expert - Software for drawing with Stable Diffusion support
NcnnDotNet - ncnn wrapper written in C++ and C# for Windows, MacOS, Linux, iOS and Android
athena - An open-source implementation of a sequence-to-sequence based speech processing engine
Stable-Diffusion.NET-NCNN
lightseq - LightSeq: A High Performance Library for Sequence Processing and Generation
civitai - A repository of models, textual inversions, and more
xbyak_aarch64
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
FasterTransformer - Transformer related optimization, including BERT, GPT
wenet - Production First and Production Ready End-to-End Speech Recognition Toolkit