Fine-tuned the model on Kurzgesagt videos with DreamBooth. Here are some results.

This page summarizes the projects mentioned and recommended in the original post on /r/StableDiffusion

  • Dreambooth-Stable-Diffusion

    Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles. (by JoePenna)

  • So, I did a little DreamBooth experiment and trained the model on some stills from Kurzgesagt videos. I used this DreamBooth fork. Most results are from a checkpoint trained for 2,500 steps on 28 images. Training took around 70 minutes on a 3090, locally. Applying the style produces vibrant, flat 2D images with rounded corners and no outlines. Bonus content (NSFW).

  • Txt2Vectorgraphics

    Custom script for the AUTOMATIC1111 Stable Diffusion web UI.

  • You mean like this https://github.com/GeorgLegato/Txt2Vectorgraphics ?

  • I've used this repository for regularization images, and these options for training: `--class_word "style" --token "kurzgesagt"`
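
    As a rough sketch, a training run with the JoePenna fork might look like the command below. Only `--class_word "style"` and `--token "kurzgesagt"` come from the original post; every other flag, path, and filename is a placeholder assumption about that fork's CLI and may differ between versions.

    ```shell
    # Hypothetical DreamBooth training invocation (JoePenna fork).
    # All paths and the step count are illustrative placeholders;
    # only --class_word and --token are taken from the post.
    python main.py \
        --base configs/stable-diffusion/v1-finetune_unfrozen.yaml \
        -t \
        --actual_resume ./sd-v1-4.ckpt \
        -n kurzgesagt_style \
        --data_root ./training_images \
        --reg_data_root ./regularization_images \
        --class_word "style" \
        --token "kurzgesagt"
    ```

    Using `"style"` as the class word (rather than, say, `"person"`) tells the regularization step to preserve the model's generic notion of an art style while the rare token `"kurzgesagt"` absorbs the look of the training stills.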

