dalle-flow
stable-diffusion
| | dalle-flow | stable-diffusion |
|---|---|---|
| Mentions | 31 | 142 |
| Stars | 2,823 | 2,438 |
| Growth | 0.0% | - |
| Activity | 2.3 | 9.8 |
| Last commit | 12 months ago | over 1 year ago |
| Language | Python | Jupyter Notebook |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dalle-flow
-
How to Personalize Stable Diffusion for ALL the Things
Jina AI is really into generative AI. It started out with DALL·E Flow, swiftly followed by DiscoArt. And then… At least for a while…
-
image generation API similar to Dall-E or Dall-E 2
you can host your own https://github.com/jina-ai/dalle-flow
-
[hlky's/sd-webui] Announcing Sygil.dev & Project Nataili
For example for all the multimodal stuff like clipseg and upscalers, I'm using isolated executors through jina flow: https://github.com/jina-ai/dalle-flow/tree/main/executors
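The pattern described above — each multimodal model wrapped in its own isolated executor and chained into a flow — can be sketched in plain Python. This is a conceptual stand-in, not the actual Jina API; the `Upscaler` and `Captioner` classes are hypothetical placeholders for model-backed executors like those in the dalle-flow repo:

```python
# Conceptual sketch of a flow of isolated executors (pure Python,
# NOT the real Jina API): each executor wraps one model/step and
# only sees the documents handed to it.

class Executor:
    """Base class: one isolated processing step."""
    def process(self, docs):
        raise NotImplementedError

class Upscaler(Executor):
    # Hypothetical stand-in for an upscaling model.
    def process(self, docs):
        return [{**d, "width": d["width"] * 2, "height": d["height"] * 2}
                for d in docs]

class Captioner(Executor):
    # Hypothetical stand-in for a multimodal model such as clipseg.
    def process(self, docs):
        return [{**d, "caption": f"image {d['id']}"} for d in docs]

class Flow:
    """Chains executors; each step's output feeds the next."""
    def __init__(self, executors):
        self.executors = executors

    def post(self, docs):
        for ex in self.executors:
            docs = ex.process(docs)
        return docs

flow = Flow([Upscaler(), Captioner()])
result = flow.post([{"id": 1, "width": 512, "height": 512}])
```

In the real setup each executor runs in its own process (or container), which is what keeps a crash or a VRAM spike in one model from taking down the others.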
-
Who needs prompt2prompt anyway? SD 1.5 inpainting model with clipseg prompt for "hair" and various prompts for different hair colors
clipseg is an image segmentation method used to find a mask for an image from a prompt. I implemented it as an executor for dalle-flow and added it to my bot yasd-discord-bot.
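Conceptually, clipseg scores every pixel for how well it matches the text prompt (e.g. "hair"), and the inpainting mask is just a threshold over that score map. A stdlib-only sketch of the thresholding step — the score map here is made up, whereas in practice it comes from the clipseg model:

```python
# Toy sketch: turn a per-pixel relevance map (as clipseg would produce
# for a prompt like "hair") into a binary inpainting mask.

def scores_to_mask(scores, threshold=0.5):
    """Threshold a 2D score map into a 0/1 mask."""
    return [[1 if s >= threshold else 0 for s in row] for row in scores]

score_map = [
    [0.1, 0.8, 0.9],
    [0.2, 0.7, 0.4],
    [0.0, 0.3, 0.6],
]
mask = scores_to_mask(score_map, threshold=0.5)
```

The inpainting model then only repaints the pixels where the mask is 1, which is why swapping hair colors leaves the rest of the image untouched.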
-
Sequential token weighting invented by Birch-san@Github allows you to bypass the 77 token limit and use any amount of tokens you want, also allows you to sequentially alter an image
Merged into [dalle-flow](https://github.com/jina-ai/dalle-flow/pull/112) this morning and works on my Discord bot [yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot).
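The 77-token cap comes from CLIP's fixed context window; the usual workaround is to split a long prompt into consecutive 75-token windows (leaving room for the begin/end markers each window needs) and encode each window separately. A stdlib-only sketch of just the chunking step, assuming CLIP's standard start/end token ids — the PR linked above does more (per-chunk weighting and merging of embeddings):

```python
# Sketch: split a long token sequence into windows that each fit
# CLIP's 77-token context (75 content tokens + BOS + EOS).
# Token ids below are arbitrary integers standing in for real tokens.

BOS, EOS = 49406, 49407   # CLIP's start-of-text / end-of-text ids
WINDOW = 75               # content tokens per window

def chunk_tokens(token_ids):
    """Split token_ids into BOS + 75 + EOS windows of length 77."""
    chunks = []
    for i in range(0, len(token_ids), WINDOW):
        body = token_ids[i:i + WINDOW]
        # pad the final window with EOS so every chunk is length 77
        body = body + [EOS] * (WINDOW - len(body))
        chunks.append([BOS] + body + [EOS])
    return chunks

chunks = chunk_tokens(list(range(100)))  # a 100-"token" prompt
```

Each 77-token chunk is then run through the text encoder on its own, and the resulting embeddings are concatenated (or weighted) before conditioning the diffusion model.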
-
I made a discord bot for artsy ML stuff - just finished integrating SD
https://github.com/jina-ai/dalle-flow with ports of some code from https://github.com/lstein/stable-diffusion plus some stuff specific to my uses (mostly more exposed settings and meta data on the outputs).
-
AI generated picture "Beatles at Disneyland"
dalle flow - a more advanced version of dall-e mini, running dall-e mega and a diffusion model (free colab), free
- Comparison of DALL-E, Midjourney, Stable Diffusion and more
-
Running Dall-e mini on Windows? (Or: Are there any equivalent text-to-image AI's I can run on a windows PC with a 2080 TI?)
Another option is https://github.com/jina-ai/dalle-flow, which combines DALL-E Mini with some other image processing models, and they have a pre-built Docker image that you could run locally. However, because it loads additional image processing models, you'll need about 21 GB of GPU RAM, which is more than a 2080 TI has. You could always try to edit their Dockerfile and rebuild it to remove the other models.
-
Run Your Own DALL·E Mini (Craiyon) Server on EC2
For the second half of this article, we'll use meadowdata/meadowrun-dallemini-demo which contains a notebook for running multiple models as sequential batch jobs to generate images using Meadowrun. The combination of models is inspired by jina-ai/dalle-flow.
stable-diffusion
- [Stable Diffusion] Help needed with increasing the maximum file size on a local installation
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- It's time!
-
Anybody running SD on a Macbook Pro? What are you using and how did you install it?
Yes, you can install it with Python! https://github.com/lstein/stable-diffusion works with macOS, and you can control all the common parameters via their WebUI or CLI :)
-
How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I'm using lstein fork ("dream") and when I create an image from the terminal, it also writes back to the terminal like this:
- I Resurrected "Ugly Sonic" with Stable Diffusion Textual Inversion
-
AI Seamless Texture Generator Built-In to Blender
> Whenever I ask for something like "seamless tiling xxxxxx" it kinda sorta gets the idea, but the resulting texture doesn't quite tile right.
Getting seamless tiling requires more than just having "seamless tiling" in the prompt. It also depends on whether the fork you're using has that feature at all.
https://github.com/lstein/stable-diffusion has the feature, but you need to pass it outside the prompt. So if you use the `dream.py` prompt CLI, you can pass it `"Hats on the ground" --seamless` and it should be perfectly tileable.
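Seamless modes like this typically work by switching the model's convolutions to circular (wrap-around) padding, so a convolution at the right edge reads pixels from the left edge and the output tiles with no seam — in the lstein fork this is done by changing the padding mode on the torch conv layers, though that implementation detail is an assumption here. A stdlib sketch of circular padding on a small 2D grid:

```python
# Sketch of circular (wrap-around) padding, the trick behind seamless
# tiling: cells past one edge are read from the opposite edge, so the
# texture's left/right and top/bottom borders match up.

def circular_pad(grid, pad=1):
    """Pad a 2D grid by `pad` cells on every side, wrapping both axes."""
    h = len(grid)
    rows = [grid[(r - pad) % h] for r in range(h + 2 * pad)]
    w = len(grid[0])
    return [[row[(c - pad) % w] for c in range(w + 2 * pad)]
            for row in rows]

tile = [
    [1, 2],
    [3, 4],
]
padded = circular_pad(tile, pad=1)
# The padded border repeats the opposite edge of the tile, e.g. the
# top-left padding cell holds the tile's bottom-right value.
```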
-
Auto SD Workflow - Update 0.2.0 - "Collections", Password Protection, Brand new UI + more
From https://github.com/lstein/stable-diffusion
-
Stable Diffusion GUIs for Apple Silicon
Stable Diffusion Dream Script: This is the original site/script for supporting macOS. I found this soon after Stable Diffusion was publicly released and it was the site which inspired me to try out using Stable Diffusion on a mac. They have a web-based UI (as well as command-line scripts) and a lot of documentation on how to get things working.
-
Still can't believe this technology is real. My talentless 2 minute sketch on the left.
I'm pretty sure it works for M2 as well - basically the newer ARM-based Macs. The instructions to get it working are detailed! https://github.com/lstein/stable-diffusion
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
waifu-diffusion - stable diffusion finetuned on weeb stuff
jina - ☁️ Build multimodal AI applications with cloud-native stack
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
BasicSR - Open Source Image and Video Restoration Toolbox for Super-resolution, Denoise, Deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also support StyleGAN2, DFDNet.
stable-diffusion-webui - Stable Diffusion web UI
example-app-store - App store search example, using Jina as backend and Streamlit as frontend
diffusers-uncensored - Uncensored fork of diffusers
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
txt2imghd - A port of GOBIG for Stable Diffusion
dalle2-in-python - Use DALL·E 2 in Python
dream-textures - Stable Diffusion built-in to Blender