vqgan-clip-app vs ai-art-generator

| | vqgan-clip-app | ai-art-generator |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 101 | 627 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | over 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vqgan-clip-app
How not to waste $1600?
If you want to try your hand at buggering up your whole system, try playing with AI image generation, since it uses every resource your computer has :D . There are a lot of front ends and installers for these, but I found the VQGANs from GitHub the easiest. The problem is that some require familiarity with the shell and Python, and in some cases you need to enable the Linux subsystem in Windows (is it called a subsystem? it is not exactly a VM). This one is the easiest to install of all the ones I tried. I liked the results of Pixray the most, but I wrecked it, so I use this one nowadays.
[P] Nvidia releases web app for GauGAN2, which generates landscape images via text description, inpainting, sketch, object type segmentation map, and style image
My attempt at centralizing models to be run locally looks like this: https://github.com/tnwei/vqgan-clip-app/. It currently supports VQGAN-CLIP models and CLIP-guided diffusion models.
- App for running VQGAN-CLIP and CLIP guided diffusion locally
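For a rough idea of how a local web UI like this can be put together, here is a minimal Streamlit sketch. It is illustrative only, not code from tnwei/vqgan-clip-app; `generate_image` is a hypothetical stand-in for the actual VQGAN-CLIP or guided-diffusion loop.

```python
# Minimal Streamlit front end for a local text-to-image generator.
# Illustrative sketch only: generate_image is a hypothetical placeholder
# for whatever VQGAN-CLIP / CLIP-guided diffusion loop the real app wraps.
import streamlit as st

def generate_image(prompt: str, steps: int):
    """Placeholder: run the generation loop and return a PIL image."""
    raise NotImplementedError

st.title("Local text-to-image demo")
prompt = st.text_input("Text prompt", "a watercolor painting of a lighthouse")
steps = st.slider("Iterations", min_value=100, max_value=1000, value=500, step=50)

if st.button("Generate"):
    with st.spinner("Generating..."):
        image = generate_image(prompt, steps)
    st.image(image, caption=prompt)
```

Saved as app.py, the sketch runs with `streamlit run app.py` and serves the interface in the browser.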
ai-art-generator
Cheap setup to run SD?
I have a GitHub project that will help you set up large batches of prompts too.
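The batching idea is simply to queue up many prompts and let them run unattended. A minimal sketch of that pattern, with a hypothetical `generate` helper and file names standing in for whatever tool is actually called:

```python
# Run a batch of prompts back to back and save each result.
# Hypothetical sketch: generate() stands in for the actual text-to-image call.
from pathlib import Path

def generate(prompt: str):
    """Placeholder for a text-to-image call (VQGAN-CLIP, Stable Diffusion, ...)."""
    raise NotImplementedError

prompts = Path("prompts.txt").read_text().splitlines()
outdir = Path("outputs")
outdir.mkdir(exist_ok=True)

for i, prompt in enumerate(prompts):
    image = generate(prompt)
    image.save(outdir / f"{i:04d}.png")
    print(f"[{i + 1}/{len(prompts)}] {prompt}")
```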
Local AI art generation tool updated for Stable Diffusion
Hey all, just a note that I've updated my AI art generator to work with Stable Diffusion (both txt2img and img2img)! If you have a decent GPU (8 GB+ VRAM, though more is better), you should be able to use Stable Diffusion on your local computer.
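The repo ships its own scripts for this, but purely as an illustration of what running Stable Diffusion locally looks like, here is a sketch using the Hugging Face diffusers library; the model ID and settings are assumptions, not what ai-art-generator itself uses.

```python
# Illustrative txt2img and img2img with Hugging Face diffusers
# (not the ai-art-generator code itself; model ID and settings are examples).
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
from PIL import Image

model_id = "runwayml/stable-diffusion-v1-5"  # example checkpoint
device = "cuda"

# txt2img: generate an image from a text prompt
txt2img = StableDiffusionPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to(device)
image = txt2img("a misty forest at dawn, oil painting", num_inference_steps=30).images[0]
image.save("txt2img.png")

# img2img: transform an existing image, guided by a new prompt
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to(device)
init = Image.open("txt2img.png").convert("RGB")
result = img2img(prompt="the same forest in autumn", image=init, strength=0.6).images[0]
result.save("img2img.png")
```

Running in float16 keeps memory use within reach of an 8 GB card for 512x512 output.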
Tesla M40 24GB GPU: very poor machine-learning performance?
I'm a software engineer, but a complete machine-learning noob (not exactly a Linux guru, either). I'm trying to use the GPU for VQGAN+CLIP image generation. Running on an RTX 3060, I get almost 4 iterations per second, so a 512x512 image takes about 2 minutes to create with default settings. Running on the Tesla M40, I get about 0.4 iterations per second (~22 minutes per 512x512 image at the same settings). A full order of magnitude slower! I'd read that older Tesla GPUs are some of the top value picks when it comes to ML applications, but with this level of performance that obviously isn't the case at all. I figure I must be going wrong somewhere.
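Those figures are self-consistent; a quick back-of-the-envelope check (assuming the default run is on the order of 500 iterations per image):

```python
# Sanity check of the throughput numbers reported in the post.
rtx3060_its = 4.0         # iterations/second on the RTX 3060
m40_its = 0.4             # iterations/second on the Tesla M40
rtx3060_seconds = 2 * 60  # ~2 minutes per 512x512 image

# ~2 min at ~4 it/s implies roughly 480-500 iterations per image
iterations = rtx3060_its * rtx3060_seconds
print(f"implied iterations per image: ~{iterations:.0f}")

# the same run at 0.4 it/s takes ~20 minutes, i.e. ~10x slower,
# which matches the ~22 minutes reported
m40_minutes = iterations / m40_its / 60
print(f"Tesla M40: ~{m40_minutes:.0f} min per image ({rtx3060_its / m40_its:.0f}x slower)")
```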
What are some alternatives?
VQGAN-CLIP-Video - Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.
vqgan-clip-generator - Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, documentation, and smooth video creation.
streamlit - Streamlit, a faster way to build and share data apps.
Animender - An AI that recommends anime based on personal history.
CLIP-Guided-Diffusion - Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.
TensorFlow2.0_Notebooks - Implementation of a series of Neural Network architectures in TensorFlow 2.0
jina-app-store-example - App store search example, using Jina as backend and Streamlit as frontend [Moved to: https://github.com/jina-ai/example-app-store]
tensorflow-deep-learning - All course materials for the Zero to Mastery Deep Learning with TensorFlow course.
Deep-Learning-With-TensorFlow - All the resources and hands-on exercises for you to get started with Deep Learning in TensorFlow
ReVersion - ReVersion: Diffusion-Based Relation Inversion from Images
TTS - 🐸💬 A deep learning toolkit for Text-to-Speech, battle-tested in research and production
pyttv - A tool for generating (music-)videos using generative models