stable-diffusion-tensorflow vs tensorflow

| | stable-diffusion-tensorflow | tensorflow |
|---|---|---|
| Mentions | 18 | 223 |
| Stars | 1,569 | 182,575 |
| Growth | - | 0.5% |
| Activity | 0.0 | 10.0 |
| Latest commit | 9 months ago | 5 days ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-tensorflow
- Keras model SD or similar I can train from scratch?
-
Anyone attempted to convert stablediffusion tensorflow to tf lite?
Was curious if someone has attempted the conversion. I tried here https://github.com/divamgupta/stable-diffusion-tensorflow/issues/58 but I'm hitting an input-shapes error. This is my first time trying the conversion; I'd love to run it on an Edge TPU.
-
Stable Diffusion Tensorflow to TF Lite
Checking here if someone has tried to convert the TensorFlow diffusion model into TF Lite: https://github.com/divamgupta/stable-diffusion-tensorflow/issues/58
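Not from the linked issue, but as a minimal sketch: input-shape errors during TF Lite conversion are usually worked around by pinning fully static shapes via an `input_signature` and converting from a concrete function. The function and shapes below are illustrative stand-ins, not the repo's actual model:

```python
import tensorflow as tf

# Stand-in for one of the SD sub-models; the fix being demonstrated is
# giving the converter a concrete function with fully static input shapes.
@tf.function(input_signature=[tf.TensorSpec([1, 64, 64, 4], tf.float32)])
def model_fn(x):
    return tf.nn.avg_pool2d(x, ksize=2, strides=2, padding="VALID")

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model_fn.get_concrete_function()]
)
tflite_model = converter.convert()  # bytes of the .tflite flatbuffer

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

For an Edge TPU you would additionally need full-integer quantization and a pass through the separate `edgetpu_compiler` tool.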
-
SD on intel arc?
Actually, I was just on GitHub submitting issues from my testing of Intel's PyTorch and TensorFlow extensions when I saw this. Someone has already ported SD to the TensorFlow framework, so you can probably start using Intel's Extension for TensorFlow with it immediately; according to this article, you can use Intel's extension within WSL under Windows as well. Unfortunately, the person whose issue I linked has been facing serious performance problems, with SD inference on an A770 taking many minutes longer than it should. So you might be better off waiting for Intel's Extension for TensorFlow version 1.2 or later, by which point Intel will hopefully have ironed out most of the major bugs in the software :)
-
Stable Diffusion with AMDGPU on WSL
tensorflow-stable-diffusion
-
Image2Image with AMD hardware?
```shell
# clone
git clone https://github.com/divamgupta/stable-diffusion-tensorflow.git
cd stable-diffusion-tensorflow

# create venv
python -m venv --prompt sdtf-windows-directml venv
venv\Scripts\activate

# verify venv is installed and activated
pip --version

# install deps
pip install -r requirements.txt
pip install tensorflow-directml-plugin

# you should see DML debug output and at least one GPU
python -c "import tensorflow as tf; print(tf.config.list_physical_devices())"

# run (show help)
python text2image.py --help
python text2image.py --prompt "a fluffy kitten"
```
-
I have no PC. Just DLed this for iOS
(Answers based on the open Stable Diffusion model.) If you have an M1 processor: https://github.com/divamgupta/diffusionbee-stable-diffusion-ui (I've tested it). Or this one, claimed to be faster, with TensorFlow: https://github.com/divamgupta/stable-diffusion-tensorflow
-
Keras Inpainting Colab
Added inpainting support to the original keras implementation: https://github.com/divamgupta/stable-diffusion-tensorflow Colab: https://colab.research.google.com/drive/1Bf-bNmAdtQhPcYNyC-guu0uTu9MYYfLu Github page: https://github.com/ShaunXZ/stable-diffusion-tensorflow
-
[N] Stable Diffusion reaches new record (with explanation + colab link)
I wonder if you mean 13 seconds per image, because this implementation reports ~10s per image with mixed precision.
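The mixed-precision speedup mentioned above comes from Keras's global policy, which computes in float16 while keeping variables in float32. A minimal sketch of enabling it (the tiny model is illustrative, not the SD implementation's):

```python
import tensorflow as tf

# Compute in float16 where safe, keep variables in float32 for stability.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

print(model.layers[0].compute_dtype)   # "float16"
print(model.layers[0].variable_dtype)  # "float32"
```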
-
High-performance image generation using Stable Diffusion in KerasCV
On an Intel MacBook Pro, CPU-only, the original PyTorch implementation[1] utilized only one core. A TensorFlow implementation[2] with oneDNN support utilized most of the cores and ran at ~11 sec/iteration. Another OpenVINO-based implementation[3] ran at ~6.0 sec/iteration.
[1] https://github.com/CompVis/stable-diffusion/
[2] https://github.com/divamgupta/stable-diffusion-tensorflow/
[3] https://github.com/bes-dev/stable_diffusion.openvino/
tensorflow
-
Side Quest Devblog #1: These Fakes are getting Deep
```python
# L2-normalize the encoding tensors
image_encoding = tf.math.l2_normalize(image_encoding, axis=1)
audio_encoding = tf.math.l2_normalize(audio_encoding, axis=1)

# Find euclidean distance between image_encoding and audio_encoding
# Essentially trying to detect if the face is saying the audio
# Will return nan without the 1e-12 offset due to
# https://github.com/tensorflow/tensorflow/issues/12071
d = tf.norm((image_encoding - audio_encoding) + 1e-12,
            ord='euclidean', axis=1, keepdims=True)

discriminator = keras.Model(inputs=[image_input, audio_input],
                            outputs=[d], name="discriminator")
```
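Why this works as a match score: after L2-normalization both encodings are unit vectors, and the Euclidean distance between unit vectors is a monotone function of their cosine similarity, d² = 2 − 2·cos θ. A stdlib-only sketch (vector values are illustrative, not from the devblog):

```python
import math

def l2_normalize(v, eps=1e-12):
    """Scale v to unit length; eps guards against division by zero."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / max(norm, eps) for x in v]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    # For unit vectors the dot product is exactly the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

img = l2_normalize([0.3, 1.2, -0.5])
aud = l2_normalize([0.2, 1.0, -0.4])

d = euclidean(img, aud)
c = cosine(img, aud)
print(abs(d * d - (2 - 2 * c)) < 1e-9)  # True: d^2 == 2 - 2*cos(theta)
```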
-
Google lays off its Python team
[3]: https://github.com/tensorflow/tensorflow/graphs/contributors
- TensorFlow-metal on Apple Mac is junk for training
-
🔥🚀 Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot 🤖💬
To get up to speed with TensorFlow, check their quickstart. Support TensorFlow on GitHub ⭐
- One .gitignore to rule them all
-
10 Github repositories to achieve Python mastery
Explore here.
-
GitHub and Developer Ecosystem Control
Part of the major userbase pull in GitHub revolves around hosting a considerable number of popular projects including Angular, React, Kubernetes, cpython, Ruby, tensorflow, and well even the software that powers this site Forem.
-
Non-determinism in GPT-4 is caused by Sparse MoE
Right but that's not an inherent GPU determinism issue. It's a software issue.
https://github.com/tensorflow/tensorflow/issues/3103#issueco... is correct that it's not necessary, it's a choice.
Your line of reasoning appears to be "GPUs are inherently non-deterministic don't be quick to judge someone's code" which as far as I can tell is dead wrong.
Admittedly, there are some cases and instructions where non-determinism is inherently necessary, but the author should think carefully before introducing it. There are many scenarios where it is irrelevant, but ultimately the issue we are discussing here isn't the GPU's fault.
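The point can be made concrete without a GPU: floating-point addition is not associative, so any reduction whose accumulation order varies between runs (e.g. atomics racing on a GPU) can produce different results. A stdlib-only illustration:

```python
# Floating-point addition is not associative.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False

# Same numbers, different accumulation order, different answer:
print(sum([1e16, 1.0, -1e16, 1.0]))   # 1.0 (the first 1.0 is absorbed)
print(sum([1e16, -1e16, 1.0, 1.0]))   # 2.0
```

This is exactly why a fixed summation order (a deterministic reduction tree) is a software choice, not a hardware limitation.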
-
Can someone explain how keras code gets into the Tensorflow package?
and things like `y = layers.ELU()(y)` work as expected. I wanted to see a list of the available layers, so I went to the TensorFlow GitHub repository and to the keras directory. There's a warning in that directory that says:
-
Is it even possible to design a ML model without using Python or MATLAB? Like using C++, C or Java?
Exactly what language do you think TensorFlow is written in? :)
What are some alternatives?
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
PaddlePaddle - PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (『飞桨』核心框架,深度学习&机器学习高性能单机、分布式训练和跨平台部署)
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
AITemplate - AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
keras-cv - Industry-strength Computer Vision workflows with Keras
LightGBM - A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
intel-extension-for-tensorflow - Intel® Extension for TensorFlow*
scikit-learn - scikit-learn: machine learning in Python
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
LightFM - A Python implementation of LightFM, a hybrid recommendation algorithm.