| | stable-diffusion | dalle-flow |
|---|---|---|
| Mentions | 6 | 31 |
| Stars | 67 | 2,825 |
| Growth | - | 0.1% |
| Activity | 10.0 | 2.3 |
| Latest commit | over 1 year ago | almost 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion
- Stable Inference (k-samplers, multicond, inpainting) and YASD Discord Bot now support Stable Diffusion 2
-
Morphing auto-captioned images between each other or "art2art", code in comments
I used my own SD repo, but it should easily be portable to others, and the captioning step can be automated. There is currently an issue open on the automatic1111 repo, if you found it neat.
-
[hlky’s/sd-webui] Announcing Sygil.dev & Project Nataili
fwiw I have my own implementation of the pipeline that is running over an n-many scalable gRPC implementation through jina flow, and I personally think that making the gRPC API first class with an implementation of CompVis stable-diffusion is a mistake.
-
Outpainting and inpainting (via clipseg) with the latest RunwayML 1.5 weights and VAE is out of beta and live on the LAION Discord Server
My stable-diffusion fork
-
There is a new model that brings SD inpainting/outpainting up to the level of DALL-E 2. Thanks to Runway for making it open for everyone.
Thanks for the mention! The branch you're probably looking for is here. The code is pretty simple if you want to play with it:
dalle-flow
-
How to Personalize Stable Diffusion for ALL the Things
Jina AI is really into generative AI. It started out with DALL·E Flow, swiftly followed by DiscoArt. And then… 🦗🦗🦗🦗. At least for a while…
-
image generation API similar to Dall-E or Dall-E 2
you can host your own https://github.com/jina-ai/dalle-flow
-
[hlky’s/sd-webui] Announcing Sygil.dev & Project Nataili
For example for all the multimodal stuff like clipseg and upscalers, I'm using isolated executors through jina flow: https://github.com/jina-ai/dalle-flow/tree/main/executors
-
Who needs prompt2prompt anyway? SD 1.5 inpainting model with clipseg prompt for "hair" and various prompts for different hair colors
clipseg is an image segmentation method used to find a mask for an image from a prompt. I implemented it as an executor for dalle-flow and added it to my bot yasd-discord-bot.
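To illustrate the idea, here is a minimal, dependency-free sketch of the final step of prompt-based masking: a clipseg-style model emits a per-pixel logit map for the prompt (e.g. "hair"), which is then turned into a binary inpainting mask by applying a sigmoid and thresholding. The `logits` grid and threshold value below are toy stand-ins, not output from the actual model.

```python
import math

def logits_to_mask(logits, threshold=0.5):
    """Convert a 2-D grid of segmentation logits into a 0/1 mask.

    Each logit is squashed through a sigmoid; pixels whose probability
    exceeds the threshold are marked as part of the prompted region.
    """
    mask = []
    for row in logits:
        mask.append([1 if 1 / (1 + math.exp(-v)) > threshold else 0
                     for v in row])
    return mask

# Toy 2x3 logit grid: positive values mark the prompted region.
mask = logits_to_mask([[2.0, -1.5, 0.1],
                       [-3.0, 4.0, 0.0]])
# → [[1, 0, 1], [0, 1, 0]]
```

The resulting 0/1 grid can be scaled up to the image resolution and handed to an inpainting pipeline as its mask input.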
-
Sequential token weighting invented by Birch-san@Github allows you to bypass the 77 token limit and use any amount of tokens you want, also allows you to sequentially alter an image
Merged into [dalle-flow](https://github.com/jina-ai/dalle-flow/pull/112) this morning and works on my Discord bot [yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot).
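For context, the 77-token limit comes from CLIP's text encoder, which accepts at most 77 positions (a BOS token, up to 75 prompt tokens, and an EOS token). Approaches like the one above work around it by splitting a long token list into windows, encoding each window separately, and then concatenating or weighting the resulting embeddings. The sketch below shows only the chunking step, with toy token ids; the BOS/EOS ids are CLIP's real special-token ids, but the rest is an illustrative assumption, not the merged implementation.

```python
# CLIP text-encoder special tokens and window size (77 = BOS + 75 + EOS).
BOS, EOS, WINDOW = 49406, 49407, 75

def chunk_tokens(token_ids):
    """Split a long list of token ids into BOS/EOS-wrapped chunks,
    each fitting within CLIP's 77-position limit."""
    chunks = []
    for i in range(0, len(token_ids), WINDOW):
        chunks.append([BOS] + token_ids[i:i + WINDOW] + [EOS])
    return chunks

# 100 toy token ids → two chunks: 75 + 25 tokens, each wrapped with BOS/EOS.
chunks = chunk_tokens(list(range(100)))
```

Each chunk would then be run through the text encoder on its own, and the per-chunk embeddings combined (e.g. concatenated along the sequence axis, optionally with per-chunk weights) before being passed to the diffusion model's cross-attention.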
-
I made a discord bot for artsy ML stuff - just finished integrating SD
https://github.com/jina-ai/dalle-flow with ports of some code from https://github.com/lstein/stable-diffusion plus some stuff specific to my uses (mostly more exposed settings and meta data on the outputs).
-
AI generated picture "Beatles at Disneyland"
dalle flow - a more advanced version of dall-e mini, running dall-e mega and a diffusion model (free colab), free
- Comparison of DALL-E, Midjourney, Stable Diffusion and more
-
Running Dall-e mini on Windows? (Or: Are there any equivalent text-to-image AI's I can run on a windows PC with a 2080 TI?)
Another option is https://github.com/jina-ai/dalle-flow, which combines DALL-E Mini with some other image processing models, and they have a pre-built Docker image that you could run locally. However, because it loads additional image processing models, you'll need about 21 GB of GPU RAM, which is more than a 2080 Ti has. You could always try to edit their Dockerfile and re-build it to remove the other models.
-
Run Your Own DALL·E Mini (Craiyon) Server on EC2
For the second half of this article, we’ll use meadowdata/meadowrun-dallemini-demo which contains a notebook for running multiple models as sequential batch jobs to generate images using Meadowrun. The combination of models is inspired by jina-ai/dalle-flow.
What are some alternatives?
AI-Horde - A crowdsourced distributed cluster for AI art and text generation
dalle-mini - DALL·E Mini - Generate images from a text prompt
stable-diffusion - Latent Text-to-Image Diffusion
jina - ☁️ Build multimodal AI applications with cloud-native stack
stable-diffusion-grpcserver - An implementation of a server for the Stability AI Stable Diffusion API
BasicSR - Open Source Image and Video Restoration Toolbox for Super-resolution, Denoise, Deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also support StyleGAN2, DFDNet.
git-it-electron - :computer: :mortar_board: Git-it is a (Mac, Win, Linux) Desktop App for Learning Git and GitHub
example-app-store - App store search example, using Jina as backend and Streamlit as frontend
yasd-discord-bot - Yet Another Stable Diffusion Discord Bot
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
stable-diffusion-webui - Stable Diffusion web UI
dalle2-in-python - Use DALL·E 2 in Python