| | dalle-flow | imaginAIry |
|---|---|---|
| Mentions | 31 | 56 |
| Stars | 2,823 | 7,795 |
| Growth | 0.0% | - |
| Activity | 2.3 | 9.2 |
| Latest commit | 12 months ago | 17 days ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dalle-flow
- How to Personalize Stable Diffusion for ALL the Things
Jina AI is really into generative AI. It started out with DALL·E Flow, swiftly followed by DiscoArt. And then… 🦗🦗🦗🦗. At least for a while…
- image generation API similar to Dall-E or Dall-E 2
You can host your own: https://github.com/jina-ai/dalle-flow
- [hlky’s/sd-webui] Announcing Sygil.dev & Project Nataili
For example for all the multimodal stuff like clipseg and upscalers, I'm using isolated executors through jina flow: https://github.com/jina-ai/dalle-flow/tree/main/executors
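The pattern described above can be sketched in plain Python (this is deliberately *not* the Jina API; the stage names and their behavior are hypothetical placeholders): each model lives in its own "executor" with a single entry point, and a flow chains them, so stages stay independent and easy to swap out.

```python
# Plain-Python sketch of the isolated-executor pattern (not the Jina API).
# Each model stage is wrapped in its own executor class; a "flow" just
# chains their process() methods. The stages below are placeholders.

class ClipSegExecutor:
    def process(self, doc):
        # stand-in for a real clipseg segmentation call
        doc["mask"] = f"mask-for-{doc['prompt']}"
        return doc

class UpscalerExecutor:
    def process(self, doc):
        # stand-in for a real upscaler call
        doc["image"] = f"upscaled-{doc['image']}"
        return doc

def run_flow(doc, executors):
    """Pass a document through each executor stage in order."""
    for executor in executors:
        doc = executor.process(doc)
    return doc

doc = run_flow({"prompt": "hair", "image": "img.png"},
               [ClipSegExecutor(), UpscalerExecutor()])
print(doc["mask"], doc["image"])  # mask-for-hair upscaled-img.png
```

In the real project each executor additionally runs as its own process behind the Flow, which is what keeps a crash or OOM in one model from taking down the others.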
- Who needs prompt2prompt anyway? SD 1.5 inpainting model with clipseg prompt for "hair" and various prompts for different hair colors
clipseg is an image segmentation method used to find a mask for an image from a prompt. I implemented it as an executor for dalle-flow and added it to my bot yasd-discord-bot.
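As a rough illustration of how a segmentation model like clipseg yields an inpainting mask: the model produces a per-pixel score map for the text prompt, which is squashed through a sigmoid and thresholded into a binary mask. The logit values below are made up for demonstration.

```python
# Hedged sketch: turn a CLIPSeg-style per-pixel logit map into a binary
# 0/255 inpainting mask via sigmoid + threshold. Logit values are made up.
import math

def logits_to_mask(logits, threshold=0.5):
    """Convert a 2D list of raw segmentation logits to a binary 0/255 mask."""
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    return [[255 if sigmoid(v) > threshold else 0 for v in row]
            for row in logits]

logits = [[-4.0, 2.5],
          [ 3.1, -0.2]]
print(logits_to_mask(logits))  # [[0, 255], [255, 0]]
```

The resulting white region (255) is what the inpainting model is then allowed to repaint, e.g. with the "different hair colors" prompts from the post above.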
- Sequential token weighting invented by Birch-san@Github allows you to bypass the 77-token limit and use any number of tokens you want; it also allows you to sequentially alter an image
Merged into [dalle-flow](https://github.com/jina-ai/dalle-flow/pull/112) this morning and works on my Discord bot [yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot).
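A minimal sketch of the chunking idea behind bypassing the 77-token limit (this is not the merged implementation, just the general shape: split the token sequence into windows the encoder accepts, encode each window, and combine the per-window embeddings with user-supplied weights; `encode()` is a stand-in, not the real CLIP encoder):

```python
# Hedged sketch of chunked token weighting. A real implementation would
# call CLIP's text encoder per chunk; encode() here just returns a fake
# one-dimensional "embedding" (the chunk mean) so the math is visible.

MAX_TOKENS = 77  # CLIP's text-encoder context length

def chunk_tokens(tokens, size=MAX_TOKENS):
    """Split a token list into windows the encoder can handle."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def encode(chunk):
    # stand-in for a real text encoder: one fake embedding dimension
    return [float(sum(chunk)) / len(chunk)]

def weighted_embedding(tokens, weights):
    """Combine per-chunk embeddings with one weight per chunk."""
    chunks = chunk_tokens(tokens)
    assert len(weights) == len(chunks), "one weight per chunk"
    embeddings = [encode(c) for c in chunks]
    total = sum(weights)
    dim = len(embeddings[0])
    return [sum(w * e[i] for w, e in zip(weights, embeddings)) / total
            for i in range(dim)]

tokens = list(range(100))           # 100 tokens: chunks of 77 and 23
print(len(chunk_tokens(tokens)))    # 2
print(weighted_embedding(tokens, [1.0, 1.0]))  # [63.0]
```

Raising a chunk's weight pulls the combined embedding toward that part of the prompt, which is what lets you emphasize or de-emphasize sections of a long prompt.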
- I made a discord bot for artsy ML stuff - just finished integrating SD
It uses https://github.com/jina-ai/dalle-flow with ports of some code from https://github.com/lstein/stable-diffusion, plus some stuff specific to my uses (mostly more exposed settings and metadata on the outputs).
- AI generated picture "Beatles at Disneyland"
dalle flow - a more advanced version of dall-e mini, running dall-e mega and a diffusion model (free colab), free
- Comparison of DALL-E, Midjourney, Stable Diffusion and more
- Running Dall-e mini on Windows? (Or: Are there any equivalent text-to-image AI's I can run on a windows PC with a 2080 TI?)
Another option is https://github.com/jina-ai/dalle-flow, which combines DALL-E Mini with some other image processing models; they have a pre-built Docker image that you could run locally. However, because it loads additional image processing models, you'll need about 21 GB of GPU RAM, which is more than a 2080 TI has. You could always try to edit their Dockerfile and re-build it to remove the other models.
- Run Your Own DALL·E Mini (Craiyon) Server on EC2
For the second half of this article, we’ll use meadowdata/meadowrun-dallemini-demo which contains a notebook for running multiple models as sequential batch jobs to generate images using Meadowrun. The combination of models is inspired by jina-ai/dalle-flow.
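The "multiple models as sequential batch jobs" idea can be sketched like this (all three model functions are hypothetical placeholders standing in for a DALL·E Mini-style generator, a CLIP-style ranker, and an upscaler; this is the shape of the pipeline, not the article's actual notebook):

```python
# Sketch of a sequential batch pipeline: generate candidates with one
# model, rank them with a second, upscale the best with a third.
# Each function is a placeholder for a real model invocation.

def generate_candidates(prompt, n=4):
    # stand-in for a DALL·E Mini-style image generator
    return [f"{prompt}-img{i}" for i in range(n)]

def score(image):
    # stand-in: pretend a CLIP-style ranker scored each candidate
    return int(image[-1])

def upscale(image):
    # stand-in for a diffusion- or SwinIR-style upscaler
    return f"{image}@4x"

def run_batch(prompt, keep=2):
    candidates = generate_candidates(prompt)                   # job 1: generate
    best = sorted(candidates, key=score, reverse=True)[:keep]  # job 2: rank
    return [upscale(img) for img in best]                      # job 3: upscale

print(run_batch("beatles at disneyland"))
```

Running the stages strictly one after another is what lets a single GPU host the whole pipeline: only one model's weights need to be resident at a time.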
imaginAIry
- Show HN: Launch StableStudio local inference in one command
- imaginairy 12.0. diffusion upscaling, image shuffling, controlnet 1.1 for any SD 1.5 model
- I Used Stable Diffusion and Dreambooth to Create an Art Portrait of My Dog
Stable Diffusion works fine on a CPU - on an AMD Ryzen 5700, approx 90s per image (and I believe comparable or faster on my old i7-6700). If you want to kick off a batch in the background while you work on something else, that's plenty fast. (I use: https://github.com/brycedrennan/imaginAIry).
- ControlNet integrated with script-friendly imaginAIry
- What files would I edit to change features / make new features in Automatic 1111?
Yeah, also, rather than messing around with the AUTO1111 UI, maybe learn some Python and how to use this library: https://github.com/brycedrennan/imaginAIry
- Any way to install and use SD via command line instead of GUI?
- Master hacker used “AI via command prompt” to ask what “after death looks like”
this is a real thing https://github.com/brycedrennan/imaginAIry
- FLiP Stack Weekly 28 Jan 2023
- new in imaginAIry - animations!
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
CodeFormer - [NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
jina - ☁️ Build multimodal AI applications with cloud-native stack
stable-diffusion - Latent Text-to-Image Diffusion
BasicSR - Open Source Image and Video Restoration Toolbox for Super-resolution, Denoise, Deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also support StyleGAN2, DFDNet.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
example-app-store - App store search example, using Jina as backend and Streamlit as frontend
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
stable-diffusion-grpcserver - An implementation of a server for the Stability AI Stable Diffusion API
dalle2-in-python - Use DALL·E 2 in Python
stable-diffusion-webui - Stable Diffusion web UI