S2ML-Generators
ai-art-generator
| | S2ML-Generators | ai-art-generator |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 178 | 627 |
| Growth | - | - |
| Activity | 2.7 | 0.0 |
| Last commit | 7 months ago | about 1 year ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
S2ML-Generators
-
Sense of AI | Vqgan+Clip
At default settings: https://github.com/justin-bennington/S2ML-Generators
-
The Grand Mausoleum, From an AI
Not OP, but based on how this looks it is probably StyleGAN. They took an image and combined it with a text prompt and a GAN trained on an image dataset to shift the image toward a different style. So these aren't exactly AI-'generated' images; they're more like images modified by an AI. Here's a Colab notebook you can use to run a VQGAN+CLIP image generator: https://github.com/justin-bennington/S2ML-Generators/find/main
-
Asked an AI to draw "The Long Dark". The outcome is interesting.
I found a public Google Colab which seems to be the same implementation, but it's not easy to use: https://github.com/justin-bennington/S2ML-Generators
ai-art-generator
-
Cheap setup to run SD?
I have a GitHub project that will help you set up large batches of prompts too.
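Batching prompts for this kind of generator usually amounts to expanding a few lists into every combination. A minimal sketch of that idea (the subject/style lists and prompt template below are hypothetical examples, not taken from the linked project):

```python
from itertools import product

# Hypothetical example inputs; a real batch file would hold many more.
subjects = ["a lighthouse", "a forest cabin"]
styles = ["oil painting", "pixel art"]

# One prompt per (subject, style) pair, ready to feed to a generator queue.
prompts = [f"{subject}, {style}" for subject, style in product(subjects, styles)]

for p in prompts:
    print(p)
```

With 2 subjects and 2 styles this yields 4 prompts; the batch size grows multiplicatively with each added list.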
-
Local AI art generation tool updated for Stable Diffusion
Hey all, just a note that I've updated my AI art generator to work with Stable Diffusion (both txt2img and img2img)! If you have a decent GPU (8GB+ VRAM, though more is better), you should be able to run Stable Diffusion on your local computer.
-
Tesla M40 24GB GPU: very poor machine-learning performance?
I'm a software engineer, but a complete machine-learning noob (not exactly a Linux guru, either). I'm trying to use the GPU for VQGAN+CLIP image generation. Running on an RTX 3060, I get almost 4 iterations per second, so a 512x512 image takes about 2 minutes to create at default settings. Running on the Tesla M40, I get about 0.4 iterations per second (~22 minutes per 512x512 image at the same settings) — a full order of magnitude slower. I'd read that older Tesla GPUs are some of the top value picks for ML applications, but with this level of performance that clearly isn't the case. I figure I must be going wrong somewhere.
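The timings in the post are consistent with a fixed iteration budget: wall-clock time scales inversely with iterations per second. A quick sanity check, assuming roughly 500 iterations at default settings (the iteration count is an assumption, not stated in the post):

```python
# Estimated render time for one image at a fixed iteration count.
def seconds_per_image(iters_per_sec: float, total_iters: int = 500) -> float:
    """Total iterations divided by throughput gives wall-clock seconds."""
    return total_iters / iters_per_sec

rtx_3060 = seconds_per_image(4.0)   # 125 s, about 2 minutes
tesla_m40 = seconds_per_image(0.4)  # 1250 s, about 21 minutes

print(rtx_3060 / 60, tesla_m40 / 60)
```

At these throughputs the M40 is exactly 10x slower per image, matching the "order of magnitude" observation regardless of the exact iteration count.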
What are some alternatives?
S2ML-Art-Generator - Multiple notebooks which allow the use of various machine learning methods to generate or modify multimedia content [Moved to: https://github.com/justin-bennington/S2ML-Generators]
vqgan-clip-generator - Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, documentation, and smooth video creation.
ArtLine - A Deep Learning based project for creating line art portraits.
Animender - An AI that recommends anime based on personal history.
TensorFlow2.0_Notebooks - Implementation of a series of neural network architectures in TensorFlow 2.0
tensorflow-deep-learning - All course materials for the Zero to Mastery Deep Learning with TensorFlow course.
Deep-Learning-With-TensorFlow - All the resources and hands-on exercises for you to get started with Deep Learning in TensorFlow
ReVersion - ReVersion: Diffusion-Based Relation Inversion from Images
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
vqgan-clip-app - Local image generation using VQGAN-CLIP or CLIP guided diffusion
pyttv - A tool for generating (music-)videos using generative models