Txt2Vectorgraphics vs Dreambooth-Stable-Diffusion

| | Txt2Vectorgraphics | Dreambooth-Stable-Diffusion |
|---|---|---|
| Mentions | 30 | 47 |
| Stars | 361 | 7,383 |
| Growth | - | - |
| Activity | 3.7 | 0.0 |
| Latest commit | about 1 year ago | over 1 year ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Txt2Vectorgraphics
- If you'd told me a few years ago that one day I would use a 3D printer to create artwork generated by an AI for a robot vacuum, I wouldn't have believed you. The future is now.
  I use the earlier version of this, https://github.com/GeorgLegato/Txt2Vectorgraphics, and it's great for output destined for Cricut, laser, or 2-color z-hop graphics on a 3D printer.
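The core trick behind those workflows is tracing a raster txt2img result into an SVG. A minimal sketch of that step, assuming the Python bindings of the vtracer project (listed under alternatives below); the file names are placeholders:

```python
# Minimal sketch: trace a Stable Diffusion PNG into an SVG suitable for
# Cricut/laser workflows. Assumes `pip install vtracer`; paths are placeholders.
import vtracer

vtracer.convert_image_to_svg_py(
    "txt2img_output.png",   # raster image produced by txt2img
    "txt2img_output.svg",   # traced vector output
    colormode="binary",     # force a black/white trace, as for 2-color prints
)
```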
- How to mimic the flash effect in SD
  Indeed. Same guy who created this cool extension: https://github.com/GeorgLegato/Txt2Vectorgraphics
- SD-webui: Beta for Vectorstudio-extension, please test; check video
- Preview: Vectorstudio extension for SD-Webui, Part I
  https://github.com/GeorgLegato/Txt2Vectorgraphics - the repo will be renamed to vectorstudio, containing the features shown in this video (SVG editor, send to ControlNet / img2img).
- Best model for black line illustrated art
- Question: is it possible to create such illustrations with stable diffusion?
  In addition, I used my Txt2Vectorgraphics script to produce SVG graphics as a side effect and to ensure black-and-white output only: GeorgLegato/Txt2Vectorgraphics: Custom Script for Automatic1111 StableDiffusion-WebUI (github.com)
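The "black-and-white only" part of that workflow amounts to a simple thresholding step. A small sketch, assuming Pillow and placeholder file names:

```python
# Hard-threshold a txt2img result to pure black and white before tracing.
# Assumes Pillow (`pip install Pillow`); file names are placeholders.
from PIL import Image

img = Image.open("txt2img_output.png").convert("L")           # to grayscale
bw = img.point(lambda p: 255 if p > 128 else 0).convert("1")  # binarize
bw.save("txt2img_bw.png")
```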
- Logos
- Update: Txt2Vectorgraphics 0.3 - ControlNet + SVG preview
- Update: Txt2Vectorgraphics 0.3
  Release 0.3: SVG in gallery, ControlNet support · GeorgLegato/Txt2Vectorgraphics (github.com)
- Any good models or ideas for single line art/scribbles?
Dreambooth-Stable-Diffusion
- Where can I train my own LoRA?
- I am having an error with ControlNet (RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`)
  I did search online for an answer, but I'm a PC noob and didn't know what to do when I found this solution at this link: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/113
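For context, CUBLAS_STATUS_ALLOC_FAILED during `cublasCreate` usually means the GPU ran out of memory before cuBLAS could initialize. A hedged sketch of the common mitigation (freeing cached memory and shrinking the batch), assuming PyTorch; `train_step` is a hypothetical function standing in for whatever work triggered the error:

```python
# Hedged sketch: retry a GPU workload with smaller batches when VRAM runs out.
# `train_step` is hypothetical; only the retry logic is the point here.
import torch

def run_with_fallback(train_step, batch, min_size=1):
    while len(batch) >= min_size:
        try:
            return train_step(batch)
        except RuntimeError as err:
            msg = str(err)
            if "CUBLAS_STATUS_ALLOC_FAILED" not in msg and "out of memory" not in msg:
                raise                         # unrelated error: re-raise
            torch.cuda.empty_cache()          # release cached allocations
            batch = batch[: len(batch) // 2]  # halve the batch and retry
    raise RuntimeError("could not fit even the minimum batch size in VRAM")
```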
- True to life photorealism v2
- How can I create a custom image generation model?
  Do you know some projects or guided tutorials that could help me? How many drawings in the desired style would I then have to provide to train the AI model? I found Dreambooth on Stable Diffusion, but it seems to be for another use case.
- How to Make Your Own Anime (Linux/Mac Tutorial follow along)
  This seems to be an issue with the code and/or the environment itself. There is an open bug for this where others have provided some suggestions on how to fix it: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/47
- AI generated portraits of Myself as different classes: Looking for opinion!
  Could you provide some more detail on how this works? Did you just use this GitHub repository, or did you put together your own implementation?
- Looking for an AI model to transform a video of me (full body) into an animated avatar. Does something like this exist?
- Ray Liotta as Tommy Vercetti from GTA Vice City
  I think the best way to do this would be to train Dreambooth on a number of photos of Ray Liotta first, and use Stable Diffusion instead. https://github.com/XavierXiao/Dreambooth-Stable-Diffusion
- Luddites don't have an issue with AI, just that it "steals" from them (it doesn't). But they also have an issue with using your own child's drawings as a reference.
  Dreambooth. There are other ways, but that is the gold standard. It takes even more VRAM than regular Stable Diffusion, so if you don't have a very beefy card (e.g. a 4090 with 24 GB of VRAM), various websites let you do it online for a small fee. You then download a new model that has all the old stuff (e.g. the 4 GB SD 1.5 file) plus your new images. Like I said, there are other ways that are easier, but when people show great results they are usually talking about Dreambooth.
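To make the "download a new model" step concrete: a minimal sketch of loading and sampling a Dreambooth-trained checkpoint, assuming the Hugging Face diffusers library, a model exported in diffusers format, and "sks person" as the instance token used during training (the path and token are assumptions):

```python
# Minimal sketch: generate from a Dreambooth-trained model with diffusers.
# The model path and the "sks person" token are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./my-dreambooth-model",      # hypothetical path to the trained weights
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a portrait photo of sks person, studio lighting").images[0]
image.save("portrait.png")
```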
- Bunch of misinformation being spread in this thread
  THE CODE (an unofficial implementation; for the exact wording on how few images you need, read the paper) is designed with extremely little data in mind. I don't know how else to phrase it, dude. Do you think the training is a magic black box that runs on snail neurons? If you train a Dreambooth model, the Jupyter notebook makes calls to Python files; those are the files. That is the code.
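To illustrate "the notebook makes calls to Python files": training in that repo ultimately boils down to invoking its main.py training script. A hedged sketch of such a call, with flag names recalled from the XavierXiao/Dreambooth-Stable-Diffusion README and all paths as placeholders (treat both as assumptions):

```python
# Hedged sketch of the call the Dreambooth notebook effectively makes.
# Flags follow the XavierXiao/Dreambooth-Stable-Diffusion README as best
# recalled; paths are placeholders and should be treated as assumptions.
import subprocess

subprocess.run([
    "python", "main.py",
    "--base", "configs/stable-diffusion/v1-finetune_unfrozen.yaml",
    "-t",
    "--actual_resume", "sd-v1-4-full-ema.ckpt",   # base SD checkpoint
    "-n", "my_dreambooth_run",                    # job name
    "--gpus", "0,",
    "--data_root", "./training_images",           # a handful of subject photos
    "--reg_data_root", "./regularization_images",
    "--class_word", "person",
], check=True)
```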
What are some alternatives?
deforum-for-automatic1111-webui - Deforum extension script for AUTOMATIC1111's Stable Diffusion webui [Moved to: https://github.com/deforum-art/sd-webui-deforum]
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
vtracer - Raster to Vector Graphics Converter
stable-diffusion-webui - Stable Diffusion web UI
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
SHARK - High Performance Machine Learning Distribution
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
SD-Regularization-Images-Style-Dreambooth
StableTuner - Finetuning SD in style.
sd-multi - Run multiple forks of Stable Diffusion
Dreambooth-SD-optimized - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion