optimum-intel vs infery-examples

| | optimum-intel | infery-examples |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 328 | 50 |
| Growth | 8.8% | - |
| Activity | 9.6 | 0.0 |
| Latest commit | 5 days ago | 5 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
optimum-intel
- Q: If Stable Diffusion "stores" images in lossy compression, as per the lawsuit's claim, how can you retrieve the original training images?
  A: No, I haven't. There's an article from Intel about doing it with some of their tools, though (code is here).
infery-examples
What are some alternatives?
- WhitenBlackBox - Towards Reverse-Engineering Black-Box Neural Networks, ICLR'18
- tensorflow-onnx - Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX
- hand-gesture-recognition-mediapipe - A sample program that recognizes hand signs and finger gestures with a simple MLP using the detected key points. Hand pose is estimated using MediaPipe.
- fastT5 - ⚡ Boost inference speed of T5 models by 5x and reduce the model size by 3x.
- x-stable-diffusion - Real-time inference for Stable Diffusion with 0.88 s latency. Covers AITemplate, nvFuser, TensorRT, and FlashAttention. Join our Discord community: https://discord.com/invite/TgHXuSJEk6
- nebuly - The user analytics platform for LLMs
- timm-flutter-pytorch-lite-blogpost - PyTorch at the Edge: Deploying Over 964 TIMM Models on Android with TorchScript and Flutter.
- EdgeSAM - Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM"
- keras_cv_stable_diffusion_to_tflite - Scripts for converting Keras CV Stable Diffusion to TFLite
- w2v2-how-to - How to use our public wav2vec2 dimensional emotion model
- TNN - A uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens the support and performance optimization for mobile devices, and also draws on the extensibility and high performance of existing open-source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome to collaborate with us and make TNN a better framework.