dalle-playground vs meadowrun

| | dalle-playground | meadowrun |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 6 | 93 |
| Growth | - | - |
| Activity | 0.0 | 9.1 |
| Latest commit | over 1 year ago | 10 months ago |
| Language | JavaScript | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dalle-playground
Run Your Own DALL·E Mini (Craiyon) Server on EC2
Next, we want the code in the https://github.com/hrichardlee/dalle-playground repo, and we want to construct a pip environment from the backend/requirements.txt file in that repo. We were almost able to use the saharmor/dalle-playground repo as-is, but we had to make one change: adding the jax[cuda] package to the requirements.txt file. If you haven't seen JAX before, it is a machine-learning library from Google, roughly equivalent to TensorFlow or PyTorch. It combines Autograd for automatic differentiation with XLA (accelerated linear algebra) for JIT-compiling NumPy-like code to run on Google's TPUs or, via Nvidia's CUDA API, on GPUs. The CUDA support requires explicitly selecting the [cuda] extra when installing the package.
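The change described above amounts to an extra entry in backend/requirements.txt. A minimal sketch of what that entry might look like, assuming the find-links index that JAX's install instructions have pointed at for CUDA wheels (the exact URL and any version pin are assumptions, not taken from the repo):

```text
# backend/requirements.txt (excerpt, sketch only)
# Tell pip where to find the CUDA-enabled jaxlib wheels
--find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
# Select the [cuda] extra so the GPU-enabled jaxlib is installed
jax[cuda]
```

Without the [cuda] extra, pip installs the CPU-only jaxlib build, and JAX will silently fall back to running on the CPU.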
meadowrun
Run Your Own DALL·E Mini (Craiyon) Server on EC2
If you’re anything like us, though, you’ll feel compelled to poke around the code and run the model yourself. We’ll do that in this article using Meadowrun, an open-source library that makes it easy to run Python code in the cloud; a recent release added a feature for requesting GPU machines, which is especially useful for ML models. We’ll also feed the images generated by DALL·E Mini into additional image-processing models (GLID-3-xl and SwinIR) to improve the quality of our generated images. Along the way we’ll deal with the speed bumps that come up when running open-source ML models on EC2.
Why Starting Python on a Fresh EC2 Instance Takes Over a Minute
So it is more reasonable to cache the download locally for up to 4 hours. That saves us 5–10 seconds on every run.
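The time-based caching the excerpt describes can be sketched in a few lines. This is not Meadowrun's actual implementation; the cache path, the helper name, and the URL handling are all assumptions for illustration:

```python
import os
import time
import urllib.request

CACHE_PATH = "/tmp/package_index_cache"  # hypothetical cache location
MAX_AGE_SECONDS = 4 * 60 * 60  # cache the download for up to 4 hours

def get_index(url):
    # If a cached copy exists and is younger than MAX_AGE_SECONDS,
    # return it and skip the 5-10 second download entirely.
    if (os.path.exists(CACHE_PATH)
            and time.time() - os.path.getmtime(CACHE_PATH) < MAX_AGE_SECONDS):
        with open(CACHE_PATH, "rb") as f:
            return f.read()
    # Otherwise download a fresh copy and write it to the cache.
    data = urllib.request.urlopen(url).read()
    with open(CACHE_PATH, "wb") as f:
        f.write(data)
    return data
```

The file's modification time doubles as the cache timestamp, so no extra bookkeeping file is needed; a stale cache is simply overwritten on the next call.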
What are some alternatives?
dalle-flow - 🌊 A Human-in-the-Loop workflow for creating HD images from text
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
glid-3-xl - 1.4B latent diffusion model fine tuning
SwinIR - SwinIR: Image Restoration Using Swin Transformer (official repository)
warehouse - The Python Package Index
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
distribution - Placeholder repository to allow filing of general bugs/issues/etc against the Clear Linux OS for Intel Architecture linux distribution
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models