pytorch_clip_guided_loss vs ai-art-generator
| | pytorch_clip_guided_loss | ai-art-generator |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 77 | 627 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | over 2 years ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytorch_clip_guided_loss
-
[P] ClipRCNN: Tiny text-guided zero-shot object detector
This approach is far from perfect, but it is very simple and works with just a few lines of code. You can find our implementation of ClipRCNN here: https://github.com/bes-dev/pytorch_clip_guided_loss/tree/master/examples/object_detection
-
The new library to make CLIP guided image generation simpler.
There are different ways to generate images from text descriptions, but one of the most powerful approaches to synthetic art is CLIP-guided image generation. We provide a new Python library that encapsulates the whole logic of the CLIP-guided loss in a single PyTorch primitive with a simple API. The library supports different CLIP models (such as the original CLIP models by OpenAI and the ruCLIP model by SberAI), multiple prompts (texts or images) as optimization targets, and automatic detection and translation of input texts. We also provide a tiny implementation of VQGAN-CLIP based on our library and the VQVAE by SberAI (in my opinion, the best publicly available version of VQGAN) for text-to-image generation. Our library is all you need to integrate text-powered losses into your image-synthesis pipelines by adding a few lines of code. You can find the library here (a PyPI package is available): https://github.com/bes-dev/pytorch_clip_guided_loss
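The core idea behind a CLIP-guided loss can be sketched as a single PyTorch module that holds prompt embeddings and scores an image against them with cosine similarity. This is a minimal conceptual sketch, not the library's actual API: the `ClipGuidedLoss` class, its method names, and the random linear "encoders" standing in for real CLIP text/image encoders are all illustrative assumptions.

```python
# Conceptual sketch of a CLIP-guided loss packaged as one PyTorch primitive.
# NOTE: class/method names are hypothetical, and the encoders below are
# random stand-ins for real CLIP encoders - not pytorch_clip_guided_loss's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClipGuidedLoss(nn.Module):
    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module):
        super().__init__()
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder
        self.prompt_embeddings = []  # one embedding per registered prompt

    @torch.no_grad()
    def add_prompt(self, text_features: torch.Tensor):
        # In the real setting the prompt would be tokenized text (or a target
        # image) passed through the matching CLIP encoder.
        emb = F.normalize(self.text_encoder(text_features), dim=-1)
        self.prompt_embeddings.append(emb)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        img_emb = F.normalize(self.image_encoder(image), dim=-1)
        # Average (1 - cosine similarity) over all registered prompts.
        losses = [1.0 - (img_emb * p).sum(dim=-1).mean()
                  for p in self.prompt_embeddings]
        return torch.stack(losses).mean()

# Stand-in "encoders": project image pixels / text features to a shared space.
img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 64))
txt_enc = nn.Linear(16, 64)

loss_fn = ClipGuidedLoss(img_enc, txt_enc)
loss_fn.add_prompt(torch.randn(1, 16))          # a "prompt" in feature form
image = torch.randn(1, 3, 8, 8, requires_grad=True)

loss = loss_fn(image)
loss.backward()  # gradients flow back to the image, so any generator
                 # (e.g. VQGAN latents) can be optimized against the prompts
```

Because the loss is just an `nn.Module`, it drops into any image-synthesis loop: compute the loss on the current image, backpropagate, and step the generator's latents.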
ai-art-generator
-
Cheap setup to run SD?
I have a GitHub project that will help you set up large batches of prompts, too.
-
Local AI art generation tool updated for Stable Diffusion
Hey all, just a note that I've updated my AI art generator to work with Stable Diffusion (both txt2img and img2img)! If you have a decent GPU (8 GB VRAM or more, though more is better), you should be able to run Stable Diffusion on your local computer.
-
Tesla M40 24GB GPU: very poor machine-learning performance?
I'm a software engineer, but a complete machine-learning noob (not exactly a Linux guru, either). I'm trying to use the GPU for VQGAN+CLIP image generation. Running on an RTX 3060, I get almost 4 iterations per second, so a 512x512 image takes about 2 minutes to create with default settings. Running on the Tesla M40, I get about 0.4 iterations per second (~22 minutes per 512x512 image at the same settings). A full order of magnitude slower! I'd read that older Tesla GPUs are among the top value picks for ML applications, but with this level of performance that clearly isn't the case. I figure I must be going wrong somewhere.
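The gap described in the post can be sanity-checked with quick arithmetic; the iteration count below is inferred from the quoted RTX 3060 numbers (roughly 4 it/s for about 2 minutes):

```python
# Sanity-check the speed numbers quoted in the post.
iters = 4.0 * 120            # ~4 it/s for ~2 min on the RTX 3060 -> ~480 iterations
m40_seconds = iters / 0.4    # the same iteration count at ~0.4 it/s on the M40
m40_minutes = m40_seconds / 60
print(round(m40_minutes))    # -> 20, close to the ~22 minutes observed
speedup = 4.0 / 0.4
print(speedup)               # -> 10.0, i.e. a full order of magnitude
```

So the quoted per-image times are consistent with the per-iteration rates, confirming the 10x gap is real and not a measurement artifact.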
What are some alternatives?
vqgan-clip-generator - Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, documentation, and smooth video creation.
Animender - An AI that recommends anime based on personal history.
TensorFlow2.0_Notebooks - Implementation of a series of neural network architectures in TensorFlow 2.0
tensorflow-deep-learning - All course materials for the Zero to Mastery Deep Learning with TensorFlow course.
Deep-Learning-With-TensorFlow - All the resources and hands-on exercises for you to get started with Deep Learning in TensorFlow
ReVersion - ReVersion: Diffusion-Based Relation Inversion from Images
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
vqgan-clip-app - Local image generation using VQGAN-CLIP or CLIP guided diffusion
pyttv - A tool for generating (music-)videos using generative models
S2ML-Art-Generator - Multiple notebooks which allow the use of various machine learning methods to generate or modify multimedia content [Moved to: https://github.com/justin-bennington/S2ML-Generators]