g-diffuser-bot vs dalle-flow

| | g-diffuser-bot | dalle-flow |
|---|---|---|
| Mentions | 13 | 31 |
| Stars | 134 | 2,825 |
| Growth | - | 0.1% |
| Activity | 10.0 | 2.3 |
| Last commit | over 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts mentioning g-diffuser-bot
- Stable Diffusion Infinite Zoom, Zlikwid Lightning.
I custom-trained a model on my artwork using Dreambooth and then used it for infinite zoom with this: https://github.com/parlance-zz/g-diffuser-bot/tree/0d3a239cd97762f0646ca9137e543809d890daed
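
As a rough illustration of how an outpainting-based infinite zoom works in general, the sketch below uses the Hugging Face diffusers inpainting pipeline rather than g-diffuser-bot itself; the model name, prompt, frame count, and shrink factor are placeholders, and the linked project's compositing and noise handling differ.

```python
# Rough sketch of an outpainting-based "infinite zoom" loop (not the g-diffuser-bot code):
# repeatedly shrink the last frame, paste it into the centre of a new canvas,
# let an inpainting model fill in the border, then animate a zoom between frames.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed model; a custom Dreambooth checkpoint would go here
    torch_dtype=torch.float16,
).to("cuda")

size, shrink = 512, 384                        # canvas size and shrunken size of the previous frame
prompt = "surreal landscape, highly detailed"  # placeholder prompt

# First frame: inpaint an entirely masked (blank) canvas to get a starting image.
frames = [pipe(prompt=prompt,
               image=Image.new("RGB", (size, size)),
               mask_image=Image.new("L", (size, size), 255)).images[0]]

for _ in range(8):                             # number of zoom steps (placeholder)
    canvas = Image.new("RGB", (size, size))
    mask = Image.new("L", (size, size), 255)   # white = region to outpaint
    offset = (size - shrink) // 2
    canvas.paste(frames[-1].resize((shrink, shrink)), (offset, offset))
    mask.paste(Image.new("L", (shrink, shrink), 0), (offset, offset))  # black = keep the centre
    frames.append(pipe(prompt=prompt, image=canvas, mask_image=mask).images[0])
# Interpolating a continuous zoom between successive frames yields the final video.
```
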
- Smooth infinite zoom
My project is here: https://github.com/parlance-zz/g-diffuser-bot
- Out-painting Mk.3 Demo Gallery
This gallery of images was out-painted using the g-diffuser-bot (https://github.com/parlance-zz/g-diffuser-bot)
- List of SD Tutorials & Resources
- [hlky’s/sd-webui] Announcing Sygil.dev & Project Nataili
It's currently used for the Idea2Art UI and the g-diffuser bot.
- getimg.ai - I've made outpainting/inpainting editor publicly available
I'm testing it right now and the inpainting/outpainting is impressive. Are you using parlance's fourier-shaped noise out-painting method for latent diffusion models? https://github.com/parlance-zz/g-diffuser-bot
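
For intuition only: the "fourier-shaped noise" idea can be thought of as seeding the out-painted region with noise whose frequency spectrum matches the known part of the image instead of plain white noise. The toy NumPy function below illustrates that shaping step; it is a simplification for illustration, not the actual g-diffuser-bot implementation.

```python
# Toy illustration of spectrum-matched ("fourier-shaped") init noise, not the real
# g-diffuser-bot code: the masked region is seeded with noise whose frequency
# spectrum matches the known part of the image rather than with white noise.
import numpy as np

def shaped_noise(known: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return noise with the magnitude spectrum of `known` but random phase."""
    magnitude = np.abs(np.fft.fft2(known))
    phase = np.exp(2j * np.pi * rng.random(known.shape))
    noise = np.fft.ifft2(magnitude * phase).real
    # Normalise so it can stand in for unit-variance init noise.
    return (noise - noise.mean()) / (noise.std() + 1e-8)

rng = np.random.default_rng(0)
known = rng.standard_normal((64, 64))   # stand-in for one channel of the known region
init_noise = shaped_noise(known, rng)   # would seed the masked (out-painted) area
```
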
- Best Local Command-Line SD (non-optimized)?
- xformers coming to Automatic1111
Likewise, it is available in the g-diffuser Discord bot or interactive CLI here; g-diffuser is built on top of the gRPC server project: https://github.com/parlance-zz/g-diffuser-bot
- Huge out-painting in 1 step without erased colors or pre-selecting images
- G-Diffuser-Bot In-painting Demo Reel
Yes, the code is here now: https://github.com/parlance-zz/g-diffuser-bot/tree/g-diffuser-bot-diffuserslib-beta
Posts mentioning dalle-flow
- How to Personalize Stable Diffusion for ALL the Things
Jina AI is really into generative AI. It started out with DALL·E Flow, swiftly followed by DiscoArt. And then… 🦗🦗🦗🦗. At least for a while…
- image generation API similar to Dall-E or Dall-E 2
You can host your own: https://github.com/jina-ai/dalle-flow
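
For a sense of what self-hosting looks like from the client side, this is roughly how a DALL·E Flow server was queried with the jina-era docarray (0.x) API; the endpoint, port, parameters, and method names here are assumptions from that period, so check the project's README for current usage.

```python
# Minimal client-side sketch for a self-hosted DALL·E Flow server (docarray 0.x style).
from docarray import Document

server_url = "grpc://127.0.0.1:51005"    # assumed local gRPC endpoint
prompt = "an oil painting of a humanoid robot playing chess"

# The server answers with candidate images attached as matches on the Document.
doc = Document(text=prompt).post(server_url, parameters={"num_images": 2})
for i, match in enumerate(doc.matches):
    match.save_uri_to_file(f"result-{i}.png")   # persist each candidate image
```
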
- [hlky’s/sd-webui] Announcing Sygil.dev & Project Nataili
For example, for all the multimodal stuff like clipseg and upscalers, I'm using isolated executors through a Jina Flow: https://github.com/jina-ai/dalle-flow/tree/main/executors
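
The "isolated executors composed in a Jina Flow" pattern referred to above looks roughly like the sketch below (jina 3.x style); the executor class, Hub reference, and port are hypothetical placeholders, not the actual Sygil or dalle-flow executors.

```python
# Sketch of the "isolated executors in a Jina Flow" pattern (jina 3.x style).
# The executor class, Hub reference, and port are hypothetical placeholders.
from docarray import DocumentArray
from jina import Executor, Flow, requests


class UpscaleExecutor(Executor):
    """Hypothetical executor; in practice each model runs in its own executor/container."""

    @requests(on="/upscale")
    def upscale(self, docs: DocumentArray, **kwargs):
        for doc in docs:
            doc.tags["upscaled"] = True   # stand-in for the real model call


flow = (
    Flow(port=51000)                                               # placeholder gateway port
    .add(name="clipseg", uses="jinahub+docker://ClipSegExecutor")  # hypothetical Hub executor
    .add(name="upscaler", uses=UpscaleExecutor)
)

if __name__ == "__main__":
    with flow:
        flow.block()   # serve; each executor is isolated in its own process/container
```
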
- Who needs prompt2prompt anyway? SD 1.5 inpainting model with clipseg prompt for "hair" and various prompts for different hair colors
clipseg is an image segmentation method used to find a mask for an image from a prompt. I implemented it as an executor for dalle-flow and added it to my bot yasd-discord-bot.
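
To make the clipseg step concrete, here is a minimal sketch of the prompt-to-mask workflow using the Hugging Face transformers port of CLIPSeg rather than the dalle-flow executor; the file names and threshold are placeholders.

```python
# Sketch of the prompt-to-mask clipseg workflow via the transformers port
# (CIDAS/clipseg-rd64-refined), not the dalle-flow executor.
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("portrait.png").convert("RGB")   # placeholder input image
inputs = processor(text=["hair"], images=[image], return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                 # low-resolution relevance map

probs = torch.sigmoid(logits).squeeze()
mask = torch.nn.functional.interpolate(
    probs[None, None], size=image.size[::-1], mode="bilinear"
)[0, 0]
Image.fromarray(((mask > 0.3).numpy() * 255).astype("uint8")).save("hair_mask.png")
# The saved mask can then drive an SD 1.5 inpainting pipeline with a prompt such
# as "blonde hair" so only the masked region is changed.
```
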
- Sequential token weighting invented by Birch-san@Github allows you to bypass the 77 token limit and use any amount of tokens you want, also allows you to sequentially alter an image
Merged into [dalle-flow](https://github.com/jina-ai/dalle-flow/pull/112) this morning and works on my Discord bot [yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot).
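
The actual weighting scheme in that PR is more involved, but the basic trick for getting past CLIP's 77-token window can be sketched as encoding the prompt in chunks and concatenating the resulting embeddings; the model name below is the usual SD 1.x text encoder as an assumption, and this is illustrative, not the merged dalle-flow code.

```python
# Illustration of chunked prompt encoding: encode the prompt 75 tokens at a time
# (plus BOS/EOS) and concatenate the embeddings along the sequence dimension.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a very long prompt ..."        # imagine several hundred tokens here
ids = tokenizer(prompt, truncation=False, add_special_tokens=False).input_ids

chunks = [ids[i:i + 75] for i in range(0, len(ids), 75)]   # 75 + BOS/EOS = 77
embeddings = []
with torch.no_grad():
    for chunk in chunks:
        padded = [tokenizer.bos_token_id] + chunk + [tokenizer.eos_token_id]
        padded += [tokenizer.eos_token_id] * (77 - len(padded))   # pad up to 77
        embeddings.append(text_encoder(torch.tensor([padded])).last_hidden_state)

# Shape (1, 77 * n_chunks, 768): passed to the UNet in place of the usual 77-token encoding.
cond = torch.cat(embeddings, dim=1)
```
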
- I made a discord bot for artsy ML stuff - just finished integrating SD
https://github.com/jina-ai/dalle-flow with ports of some code from https://github.com/lstein/stable-diffusion, plus some stuff specific to my uses (mostly more exposed settings and metadata on the outputs).
- AI generated picture "Beatles at Disneyland"
DALL·E Flow - a more advanced version of DALL·E Mini, running DALL·E Mega and a diffusion model (free Colab); free.
- Comparison of DALL-E, Midjourney, Stable Diffusion and more
- Running Dall-e mini on Windows? (Or: Are there any equivalent text-to-image AI's I can run on a windows PC with a 2080 TI?)
Another option is https://github.com/jina-ai/dalle-flow, which combines DALL·E Mini with some other image processing models, and they have a pre-built Docker image that you could run locally. However, because it loads additional image processing models, you'll need about 21 GB of GPU RAM, which is more than a 2080 Ti has. You could always try to edit their Dockerfile and rebuild it to remove the other models.
- Run Your Own DALL·E Mini (Craiyon) Server on EC2
For the second half of this article, we’ll use meadowdata/meadowrun-dallemini-demo which contains a notebook for running multiple models as sequential batch jobs to generate images using Meadowrun. The combination of models is inspired by jina-ai/dalle-flow.
What are some alternatives?
Umi-AI-Embeds - Wildcards and Code for the Umi AI Engine
dalle-mini - DALL·E Mini - Generate images from a text prompt
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
jina - ☁️ Build multimodal AI applications with cloud-native stack
g-diffuser-lib - Discord bot and utilities for the diffusers library (stable-diffusion) [Moved to: https://github.com/parlance-zz/g-diffuser-bot]
BasicSR - Open Source Image and Video Restoration Toolbox for Super-resolution, Denoise, Deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also support StyleGAN2, DFDNet.
stable-diffusion-webui - Stable Diffusion web UI
example-app-store - App store search example, using Jina as backend and Streamlit as frontend
progrock-stable - Stable Diffusion with some Proggy Enhancements
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
a1111-sd-webui-tagcomplete - Booru style tag autocompletion for AUTOMATIC1111's Stable Diffusion web UI
dalle2-in-python - Use DALL·E 2 in Python