What causes this type of artifacting during LoRA training?

This page summarizes the projects mentioned and recommended in the original post on /r/StableDiffusion

  • kohya_ss

  • I'm using kohya_ss for LoRA training. There is a (slightly outdated) video here to help you learn how to use it. It's not foolproof or straightforward, though, hence my question!

  • sd_dreambooth_extension

  • Hmm, looks like it's burnt (overtrained). In the Dreambooth extension for A1111 there's a slider called "Learning Rate Warmup Steps", and setting it to 500 keeps it from overtraining quickly. kohya_ss probably has a similar option; from your screenshot, I'd guess it's "LR Warmup (% of steps)". For the scheduler I use "constant_with_warmup", which is the default in the Dreambooth extension and has worked fine for me (see the sketch after this list).

  • stable-diffusion-webui-ux

    Stable Diffusion web UI UX

  • I'm using kohya_ss for training and A1111 (stable-diffusion-webui-ux) for generating. I'm not sure what kohya_ss uses for its own sample generation...
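Warmup helps because the learning rate ramps from zero up to its target over the first N steps, so the model isn't hit with the full LR on early, high-variance gradients, which is what tends to "burn" a LoRA fast. Below is a minimal sketch of a "constant_with_warmup" schedule using diffusers' get_scheduler (which, as far as I know, kohya's sd-scripts use under the hood). The total step count, warmup percentage, learning rate, and the stand-in parameter are illustrative assumptions, not values from the post.

```python
# Minimal sketch: constant-with-warmup LR schedule, as discussed above.
# All concrete numbers here are assumptions for illustration only.
import torch
from diffusers.optimization import get_scheduler

total_steps = 2000                              # assumed total training steps
warmup_pct = 25                                 # kohya_ss expresses warmup as % of total steps
warmup_steps = total_steps * warmup_pct // 100  # 25% of 2000 -> 500 steps

# Stand-in for the LoRA weights; a real trainer passes the network's parameters.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-4)

scheduler = get_scheduler(
    "constant_with_warmup",       # LR ramps 0 -> 1e-4 over warmup, then stays flat
    optimizer=optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)

for step in range(total_steps):
    optimizer.step()              # gradients omitted; this sketch only tracks the LR
    scheduler.step()
    if step in (0, warmup_steps // 2, warmup_steps, total_steps - 1):
        print(step, scheduler.get_last_lr()[0])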
