| | glid-3-xl | meadowrun |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 0 | 93 |
| Growth | - | - |
| Activity | 1.8 | 9.1 |
| Last Commit | almost 2 years ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
**glid-3-xl**

- Run Your Own DALL·E Mini (Craiyon) Server on EC2
We’re referencing our three models directly as git repos in model_requirements.txt (because they’re not available as packages on PyPI), but we had to make a few changes to make these repos work as pip packages. Pip looks for a setup.py file in the git repo to figure out which files from the repo need to be installed into the environment, as well as what that repo’s dependencies are. GLID-3-xl and latent-diffusion (another diffusion model that GLID-3-xl depends on) had setup.py files that needed tweaks to include all of the code needed to run the models. SwinIR didn’t have a setup.py file at all, so we added one. Finally, all of these setup.py files needed additional dependencies, which we added to model_requirements.txt instead.
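To make the mechanics above concrete, here is a sketch of what the two pieces look like. The repo URLs below point at the upstream projects and are illustrative only; the article's actual requirements file references the authors' tweaked forks:

```
# model_requirements.txt (illustrative) — pip can install straight from git repos
git+https://github.com/Jack000/glid-3-xl.git
git+https://github.com/CompVis/latent-diffusion.git
git+https://github.com/JingyunLiang/SwinIR.git
```

And a minimal setup.py of the kind pip needs to find in each repo (the name and version here are placeholders, not the real project metadata):

```python
# setup.py — minimal sketch; placeholder metadata, not SwinIR's actual values
from setuptools import setup, find_packages

setup(
    name="swinir",             # placeholder project name
    version="0.0.1",
    packages=find_packages(),  # include every Python package in the repo
)
```

Dependencies could also be declared via `install_requires` in setup.py, though the article chose to list them in model_requirements.txt instead.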
**meadowrun**

- Run Your Own DALL·E Mini (Craiyon) Server on EC2
If you’re anything like us, though, you’ll feel compelled to poke around the code and run the model yourself. We’ll do that in this article using Meadowrun, an open-source library that makes it easy to run Python code in the cloud; a recent release added a feature for requesting GPU machines, which is especially useful for ML models. We’ll also feed the images generated by DALL·E Mini into two additional image-processing models (GLID-3-xl and SwinIR) to improve the quality of our generated images. Along the way we’ll deal with the speed bumps that come up when running open-source ML models on EC2.
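The three-stage pipeline the article describes (DALL·E Mini generates candidates, GLID-3-xl refines them, SwinIR upscales them) is plain function composition. All three model functions below are hypothetical stand-ins that just tag their input; the real versions would load weights and run inference on a GPU:

```python
# Sketch of the article's image pipeline. dalle_mini, glid_3_xl, and swinir
# are hypothetical stand-ins for the real model-inference calls.

def dalle_mini(prompt, num_images=4):
    """Generate low-resolution candidate images from a text prompt."""
    return [f"dalle({prompt})#{i}" for i in range(num_images)]

def glid_3_xl(prompt, image):
    """Refine one candidate with latent diffusion, conditioned on the prompt."""
    return f"glid({prompt}, {image})"

def swinir(image):
    """Upscale/restore a refined image."""
    return f"swinir({image})"

def generate(prompt, num_images=4):
    """Run the full DALL·E Mini -> GLID-3-xl -> SwinIR chain."""
    candidates = dalle_mini(prompt, num_images)
    refined = [glid_3_xl(prompt, img) for img in candidates]
    return [swinir(img) for img in refined]
```

Each stage only needs the previous stage's images (plus the original prompt for GLID-3-xl), which is what makes the chain easy to run as separate jobs.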
- Why Starting Python on a Fresh EC2 Instance Takes Over a Minute
So it is more reasonable to cache the download locally for up to 4 hours. That saves us 5–10 seconds on every run.
What are some alternatives?
- latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
- dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
- glid-3-xl - 1.4B latent diffusion model fine tuning
- jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
- dalle-flow - 🌊 A Human-in-the-Loop workflow for creating HD images from text
- warehouse - The Python Package Index
- distribution - Placeholder repository to allow filing of general bugs/issues/etc against the Clear Linux OS for Intel Architecture linux distribution
- meadowrun-dallemini-demo - A demo of using Meadowrun to run DALL·E Mini, GLID3-XL, and SwinIR in an image generation pipeline
- dalle-playground - A playground to generate images from any text prompt using DALL-E Mini and based on OpenAI's DALL-E https://openai.com/blog/dall-e/