Dreambooth-Stable-Diffusion
Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles. (by JoePenna)
So, I did a little DreamBooth experiment and trained the model on some stills from Kurzgesagt videos, using this DreamBooth fork. Most results are from a checkpoint trained for 2,500 steps on 28 images. Training took around 70 minutes locally on a 3090. Applying the style produces vibrant, flat 2D images with rounded corners and no outlines. Bonus content (nsfw).
You mean like this: https://github.com/GeorgLegato/Txt2Vectorgraphics ?
I used this repository for the regularization images, and these options for training: --class_word "style" --token "kurzgesagt"