| | ai-art-generator | pyttv |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 627 | 38 |
| Growth | - | - |
| Activity | 0.0 | 2.4 |
| Last Commit | about 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
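The growth column follows directly from the definition above; a minimal sketch (a hypothetical helper mirroring that definition, not the site's actual code):

```python
# Month-over-month star growth as a percentage of last month's count
# (hypothetical helper based on the metric definitions above).

def mom_growth(stars_prev_month: int, stars_now: int) -> float:
    """Month-over-month growth in stars, as a percent."""
    return (stars_now - stars_prev_month) / stars_prev_month * 100.0

# e.g. a project going from 600 to 627 stars in a month grew by 4.5%
growth = mom_growth(600, 627)  # 4.5
```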
ai-art-generator
Cheap setup to run SD?
I have a GitHub project that will help you set up large batches of prompts, too.
Local AI art generation tool updated for Stable Diffusion
Hey all, just a note that I've updated my AI art generator to work with Stable Diffusion (both txt2img and img2img)! If you have a decent GPU (8GB+ VRAM, though more is better), you should be able to use Stable Diffusion on your local computer.
Tesla M40 24GB GPU: very poor machine-learning performance?
I'm a software engineer, but a complete machine-learning noob (not exactly a Linux guru, either). I'm trying to use the GPU for VQGAN+CLIP image generation. Running on an RTX 3060, I get almost 4 iterations per second, so a 512x512 image takes about 2 minutes to create with default settings. Running on the Tesla M40, I get about 0.4 iterations per second (~22 minutes per 512x512 image at the same settings). A full order of magnitude slower! I'd read that older Tesla GPUs are some of the top value picks for ML applications, but with this level of performance that clearly isn't the case. I figure I must be going wrong somewhere.
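The arithmetic behind those timings is straightforward: VQGAN+CLIP runs a fixed number of optimization steps per image, so wall time scales inversely with the iteration rate. A quick sanity check (a hypothetical helper, assuming the commonly cited ~500-step default):

```python
# Wall-clock time per image from the GPU's iteration rate (hypothetical
# helper; assumes ~500 optimization steps, a common VQGAN+CLIP default).

def minutes_per_image(iters_per_second: float, total_iters: int = 500) -> float:
    """Minutes to complete `total_iters` optimization steps."""
    return total_iters / iters_per_second / 60.0

rtx_3060 = minutes_per_image(4.0)   # ~2.1 minutes, matching the ~2 min quoted
tesla_m40 = minutes_per_image(0.4)  # ~20.8 minutes, close to the ~22 min quoted
```

The 10x gap in iterations per second maps one-to-one onto a 10x gap in time per image, which is exactly the "full order of magnitude" observed.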
pyttv
Working on an audio-reactive tool for video generation, similar to deforum but more versatile. Here's an example of prompt audio-reactivity (the audio modulates the prompt attention)
This is the tool in question: https://github.com/sbaier1/pyttv
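The core idea of prompt audio-reactivity can be sketched in a few lines. This is an illustration, not pyttv's actual implementation: compute a per-frame loudness envelope from the audio and map it onto a prompt attention weight, e.g. the `(word:1.3)` emphasis syntax the web UI understands.

```python
# Illustrative sketch of audio-reactive prompt attention (not pyttv's code):
# per-frame RMS loudness, rescaled to an attention-weight range [lo, hi].
import numpy as np

def frame_weights(samples: np.ndarray, sample_rate: int, fps: int,
                  lo: float = 0.8, hi: float = 1.4) -> np.ndarray:
    """RMS loudness per video frame, rescaled to [lo, hi] attention weights."""
    hop = sample_rate // fps                       # audio samples per frame
    n_frames = len(samples) // hop
    rms = np.array([
        np.sqrt(np.mean(samples[i * hop:(i + 1) * hop] ** 2))
        for i in range(n_frames)
    ])
    norm = (rms - rms.min()) / (np.ptp(rms) or 1.0)  # normalize to 0..1
    return lo + norm * (hi - lo)

# 1 second of a swelling 440 Hz tone at 30 fps: weights rise toward `hi`
t = np.linspace(0, 1, 24000, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t) * t            # amplitude ramps up
w = frame_weights(audio, sample_rate=24000, fps=30)
prompt = f"a psychedelic painting, (swirling colors:{w[-1]:.2f})"
```

Feeding each frame's weight back into the prompt is what makes the attention "follow" the audio, with `lo`/`hi` bounding how hard the modulation pushes.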
Extension Installation
Is it this file, init.sh?
I made a modified version of the Schism music video where a generative model tries to turn every single frame into an Alex Grey painting
Made a music video/animation using Stable Diffusion
I used a tool I wrote for this (https://github.com/sbaier1/pyttv). It integrates with auto's web UI to generate the frames and supports different types of audio reactivity to guide the animation.
What are some alternatives?
vqgan-clip-generator - Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, documentation, and smooth video creation.
AI-Image-PromptGenerator - A flexible UI script to help create and expand on prompts for generative AI art models, such as Stable Diffusion and MidJourney. Get inspired, and create.
Animender - An AI that recommends anime based on personal history.
discoart - 🪩 Create Disco Diffusion artworks in one line
TensorFlow2.0_Notebooks - Implementation of a series of neural network architectures in TensorFlow 2.0
onnx-web - web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD
tensorflow-deep-learning - All course materials for the Zero to Mastery Deep Learning with TensorFlow course.
stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
Deep-Learning-With-TensorFlow - All the resources and hands-on exercises for you to get started with Deep Learning in TensorFlow
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
ReVersion - ReVersion: Diffusion-Based Relation Inversion from Images
sd-webui-inpaint-anything - Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.