autograd vs dalle-flow

| | autograd | dalle-flow |
| --- | --- | --- |
| Mentions | 6 | 31 |
| Stars | 6,797 | 2,823 |
| Growth | 0.7% | 0.1% |
| Activity | 6.0 | 2.3 |
| Last commit | 7 days ago | 12 months ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
autograd
-
JAX: NumPy on the CPU, GPU, and TPU, with great automatic differentiation
Actually, that's never been a constraint for JAX autodiff. JAX grew out of the original Autograd (https://github.com/hips/autograd), so differentiating through Python control flow always worked. It's jax.jit and jax.vmap which place constraints on control flow, requiring structured control flow combinators like lax.cond and lax.scan.
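The distinction is easy to demonstrate (a minimal sketch; `f` and `f_jittable` are illustrative names, not code from the thread):

```python
import jax
from jax import lax

def f(x):
    # Plain Python control flow on the input: fine for jax.grad,
    # because grad traces with concrete values.
    if x > 0:
        return x * x
    return -x

print(jax.grad(f)(3.0))  # 6.0

# Under jax.jit the tracer is abstract, so `if x > 0` would raise a
# tracer-to-bool error; the structured combinator lax.cond works under both:
def f_jittable(x):
    return lax.cond(x > 0, lambda v: v * v, lambda v: -v, x)

print(jax.jit(jax.grad(f_jittable))(3.0))  # 6.0
```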
-
Autodidax: Jax Core from Scratch (In Python)
I'm sure there's a lot of good material around, but here are some links that are conceptually very close to the linked Autodidax.
There's [Autodidact](https://github.com/mattjj/autodidact), a predecessor to Autodidax, which was a simplified implementation of [the original Autograd](https://github.com/hips/autograd). It focuses on reverse-mode autodiff, not building an open-ended transformation system like Autodidax. It's also pretty close to the content in [these lecture slides](https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slid...) and [this talk](http://videolectures.net/deeplearning2017_johnson_automatic_...). But the autodiff in Autodidax is more sophisticated and reflects clearer thinking. In particular, Autodidax shows how to implement forward- and reverse-modes using only one set of linearization rules (like in [this paper](https://arxiv.org/abs/2204.10923)).
Here's [an even smaller and more recent variant](https://gist.github.com/mattjj/52914908ac22d9ad57b76b685d19a...), a single ~100 line file for reverse-mode AD on top of NumPy, which was live-coded during a lecture. There's no explanatory material to go with it though.
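For a flavor of what such a minimal implementation looks like, here is an independent toy sketch (not the linked gist): record the operations during the forward pass, then sweep the graph in reverse topological order.

```python
class Var:
    """A scalar value that records the operations producing it."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """Reverse-mode sweep: accumulate d(out)/d(node) into node.grad."""
    out.grad = 1.0
    order, seen = [], set()
    def visit(v):                 # topological order via DFS
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            order.append(v)
    visit(out)
    for v in reversed(order):
        for p, local in v.parents:
            p.grad += v.grad * local

x, y = Var(3.0), Var(4.0)
z = x * y + x          # z = xy + x, so dz/dx = y + 1, dz/dy = x
backward(z)
print(x.grad, y.grad)  # 5.0 3.0
```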
-
Numba: A High Performance Python Compiler
XLA is "higher level" than what Numba produces.
You may be able to get the equivalent of jax via numba+numpy+autograd[1], but I haven't tried it before.
IMHO, jax is best thought of as a numerical computation library that happens to include autograd, vmapping, pmapping and provides a high level interface for XLA.
I have built a numerical optimisation library with it, and although a few things became verbose, it was a rather pleasant experience as the natural vmapping made everything a breeze, I didn't have to write the gradients for my testing functions, except for special cases that involved exponents and logs that needed a bit of delicate care.
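That workflow looks roughly like this (a hedged sketch; the Rosenbrock function stands in for one of the "testing functions" mentioned above):

```python
import jax
import jax.numpy as jnp

# A classic optimisation test function; its gradient comes for free via jax.grad
def rosenbrock(p):
    x, y = p
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

grad_fn = jax.grad(rosenbrock)
batched_grad = jax.vmap(grad_fn)  # evaluate the gradient over a batch of points

points = jnp.array([[1.0, 1.0],   # the global minimum: gradient (0, 0)
                    [0.0, 0.0]])
print(batched_grad(points))
```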
[1] https://github.com/HIPS/autograd
-
Run Your Own DALL·E Mini (Craiyon) Server on EC2
Next, we want the code in the https://github.com/hrichardlee/dalle-playground repo, and we want to construct a pip environment from the backend/requirements.txt file in that repo. We were almost able to use the saharmor/dalle-playground repo as-is, but we had to make one change to add the jax[cuda] package to the requirements.txt file. In case you haven't seen jax before, jax is a machine-learning library from Google, roughly equivalent to TensorFlow or PyTorch. It combines Autograd for automatic differentiation and XLA (accelerated linear algebra) for JIT-compiling numpy-like code for Google's TPUs or Nvidia's CUDA API for GPUs. The CUDA support requires explicitly selecting the [cuda] option when we install the package.
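The change amounts to a couple of lines in backend/requirements.txt (a hedged sketch; the exact extra name varies with the CUDA version on the instance, and pip needs Google's wheel index to resolve the CUDA builds):

```
# jax with CUDA support; CUDA wheels live on Google's own index
--find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
jax[cuda]
```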
-
Trade-Offs in Automatic Differentiation: TensorFlow, PyTorch, Jax, and Julia
> fun fact, the Jax folks at Google Brain did have a Python source code transform AD at one point but it was scrapped essentially because of these difficulties
I assume you mean autograd?
https://github.com/HIPS/autograd
-
JAX - COMPARING WITH THE BIG ONES
These four points lead to an enormous differentiation in the ecosystem: Keras, for example, was originally conceived to focus almost entirely on point (4), leaving the other tasks to a backend engine. Autograd, released in 2015, instead focused on the first two points, allowing users to write code using only "classic" Python and NumPy constructs while providing many options for point (2). Autograd's simplicity greatly influenced the development of the libraries that followed, but it was penalized by its clear lack of points (3) and (4), i.e. adequate techniques to speed up the code and sufficiently abstract modules for neural network development.
dalle-flow
-
How to Personalize Stable Diffusion for ALL the Things
Jina AI is really into generative AI. It started out with DALL·E Flow, swiftly followed by DiscoArt. And then… At least for a while…
-
image generation API similar to Dall-E or Dall-E 2
you can host your own https://github.com/jina-ai/dalle-flow
-
[hlky's/sd-webui] Announcing Sygil.dev & Project Nataili
For example for all the multimodal stuff like clipseg and upscalers, I'm using isolated executors through jina flow: https://github.com/jina-ai/dalle-flow/tree/main/executors
-
Who needs prompt2prompt anyway? SD 1.5 inpainting model with clipseg prompt for "hair" and various prompts for different hair colors
clipseg is an image segmentation method used to find a mask for an image from a prompt. I implemented it as an executor for dalle-flow and added it to my bot yasd-discord-bot.
-
Sequential token weighting invented by Birch-san@Github allows you to bypass the 77 token limit and use any amount of tokens you want, also allows you to sequentially alter an image
Merged into [dalle-flow](https://github.com/jina-ai/dalle-flow/pull/112) this morning and works on my Discord bot [yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot).
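The underlying idea, roughly (a toy sketch with a fake encoder; the actual PR also applies per-chunk weights): split the token sequence into 77-token windows, encode each window separately, and concatenate the resulting embeddings.

```python
import numpy as np

MAX_TOKENS = 77  # CLIP text encoder context length

def encode_long_prompt(token_ids, encode_fn):
    """Encode an arbitrarily long prompt by encoding 77-token chunks
    separately and concatenating the embeddings."""
    chunks = [token_ids[i:i + MAX_TOKENS]
              for i in range(0, len(token_ids), MAX_TOKENS)]
    return np.concatenate([encode_fn(c) for c in chunks], axis=0)

# toy "encoder": one 4-dim vector per token, standing in for the real model
fake_encode = lambda ids: np.ones((len(ids), 4))

emb = encode_long_prompt(list(range(100)), fake_encode)
print(emb.shape)  # (100, 4) -- well past the 77-token limit
```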
-
I made a discord bot for artsy ML stuff - just finished integrating SD
https://github.com/jina-ai/dalle-flow with ports of some code from https://github.com/lstein/stable-diffusion plus some stuff specific to my uses (mostly more exposed settings and meta data on the outputs).
-
AI generated picture "Beatles at Disneyland"
dalle flow - a more advanced version of dall-e mini, running dall-e mega and a diffusion model (free colab), free
- Comparison of DALL-E, Midjourney, Stable Diffusion and more
-
Running Dall-e mini on Windows? (Or: Are there any equivalent text-to-image AI's I can run on a windows PC with a 2080 TI?)
Another option is https://github.com/jina-ai/dalle-flow, which combines DALL-E Mini with some other image processing models, and they have a pre-built Docker image that you could run locally. However, because it loads additional image processing models, you'll need about 21 GB of GPU RAM, which is more than a 2080 TI has. You could always try to edit their Dockerfile and re-build it to remove the other models.
-
Run Your Own DALL·E Mini (Craiyon) Server on EC2
For the second half of this article, we'll use meadowdata/meadowrun-dallemini-demo which contains a notebook for running multiple models as sequential batch jobs to generate images using Meadowrun. The combination of models is inspired by jina-ai/dalle-flow.
What are some alternatives?
Enzyme - High-performance automatic differentiation of LLVM and MLIR.
dalle-mini - DALL·E Mini - Generate images from a text prompt
SwinIR - SwinIR: Image Restoration Using Swin Transformer (official repository)
jina - Build multimodal AI applications with cloud-native stack
jaxonnxruntime - A user-friendly tool chain that enables the seamless execution of ONNX models using JAX as the backend.
BasicSR - Open Source Image and Video Restoration Toolbox for Super-resolution, Denoise, Deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. Also support StyleGAN2, DFDNet.
autodidact - A pedagogical implementation of Autograd
example-app-store - App store search example, using Jina as backend and Streamlit as frontend
fbpic - Spectral, quasi-3D Particle-In-Cell code, for CPU and GPU
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
pure_numba_alias_sampling - Pure numba version of Alias sampling algorithm from L. Devroye's "Non-Uniform Random Variate Generation"
dalle2-in-python - Use DALL·E 2 in Python