custom-diffusion

Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023) (by adobe-research)

Custom-diffusion Alternatives

Similar projects and alternatives to custom-diffusion

NOTE: The number of mentions indicates how often a project appears in the same posts as custom-diffusion, plus user-suggested alternatives. A higher count therefore suggests a closer or more popular custom-diffusion alternative.

custom-diffusion reviews and mentions

Posts with mentions or reviews of custom-diffusion. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-03-08.
  • ELITE: new fine-tuning technique that can be trained in less than a second
    2 projects | /r/StableDiffusion | 8 Mar 2023
    I think https://github.com/adobe-research/custom-diffusion
  • What's the best technology for training faces these days?
    1 project | /r/StableDiffusion | 15 Feb 2023
    I attempted Custom Diffusion, which also did not yield face outputs anywhere near as photorealistic as Dreambooth's.
  • [Discussion] Stable Diffusion Models with Subject/Keyword References
    1 project | /r/MachineLearning | 28 Jan 2023
  • Suggestions for creating prompts with two people that you've added via fine-tuning?
    1 project | /r/StableDiffusion | 12 Jan 2023
    Curious if anyone has had better experiences? I've also been trying Adobe's Custom Diffusion code, which probably works better than Textual Inversion and Dreambooth, but the code they provide is very buggy and hard to use with Automatic1111.
  • Version 0.1.0 of LoRA released! (alternative to Dreambooth, 3mb sharable files)
    8 projects | /r/StableDiffusion | 9 Jan 2023
    Well, actually it is https://github.com/adobe-research/custom-diffusion. It's Adobe, so this is probably a once-in-a-lifetime thing :P
  • How would I go about creating an app like "Lensa" with Stable Diffusion?
    5 projects | /r/StableDiffusion | 7 Jan 2023
    A few comments:
    - The "caveat" I mentioned above is that a few apps run Stable Diffusion locally on the user's device. Apple recently released a tool to convert Stable Diffusion models to CoreML, its proprietary format for machine learning models, which runs remarkably well on Apple devices with a Neural Engine (newer Macs, iPhones, and iPads). However, this technology is in its infancy, almost certainly can't train a Stable Diffusion model, and isn't anywhere near as fast as running Dreambooth or Stable Diffusion on powerful servers. In the long run it may become possible to do all of this processing on a user's device, but we're likely a long way from that.
    - Dreambooth itself isn't hard to run and play around with, nor is it hard to integrate into an automated server pipeline, though what it does under the hood is pretty amazing. Dreambooth also isn't the only way to fine-tune a Stable Diffusion model on custom photos; other people and companies (like Adobe) have found other ways to create impressive AI-generated images from user-provided photos (see their Custom Diffusion GitHub).
    - Given how long Lensa has been around, and that it is decently funded (it raised $6M in 2019, I believe), it's very likely the company has developed its own in-house way of training Stable Diffusion models, much like the Adobe work referenced above. But if any of us were to build an app that works like Lensa, the natural starting point would be Dreambooth, since it's well built out, easy to integrate, and gives similar results.
    - A very popular way to run Dreambooth is a Google Colab notebook, like the one from TheLastBen's GitHub. Since most of us don't have powerful GPUs and are just experimenting with Dreambooth/Stable Diffusion, the Colab notebook walks you step by step through setting up an environment but runs the work on Google's servers. Besides sparing you the need for a powerful computer, the nice part is that you can read through the notebook's code, which is a good foundation for implementing Dreambooth in your own scripts on the "backend server" of such an app (a minimal generation sketch follows this list).
  • How does tiktok’s AI portrait filter work?
    1 project | /r/StableDiffusion | 5 Jan 2023
    If it is Stable Diffusion-based, I'd guess a few things. Given the small size that would be needed to handle this for every possible user, I wonder if it's using something that isn't Dreambooth-based, like Adobe's [Custom Diffusion](https://github.com/adobe-research/custom-diffusion), or some in-house variant of it that's able to generate a small file that could be processed for each user.
  • How to get the smallest models or portions of Dreambooth-trained models for a specific subject
    4 projects | /r/StableDiffusion | 26 Dec 2022
    Adobe Research also has "Custom Diffusion" out: https://github.com/adobe-research/custom-diffusion. It has a similar goal of ~megabytes-sized outputs (a rough parameter-count sketch of why the outputs stay that small follows this list). Warning: it carries a proprietary license.
  • Custom Diffusion - Adobe Research
    1 project | /r/StableDiffusion | 19 Dec 2022
  • What's the difference between custom-diffusion and Dreambooth?
    1 project | /r/StableDiffusion | 19 Dec 2022
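
The Lensa discussion above sketches an app architecture: fine-tune a model per user (e.g. with Dreambooth) on a backend server, then generate portraits from the resulting checkpoint. Below is a minimal sketch of that generation step, assuming a Dreambooth-style checkpoint has already been saved in Hugging Face diffusers format; the path, placeholder token, and prompt are hypothetical examples, not part of any project referenced here.

```python
import torch
from diffusers import StableDiffusionPipeline

MODEL_DIR = "/models/user_1234_dreambooth"  # hypothetical per-user checkpoint path

# Load the fine-tuned pipeline once per worker; fp16 keeps VRAM usage modest.
pipe = StableDiffusionPipeline.from_pretrained(MODEL_DIR, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "sks person" stands in for whatever placeholder token the fine-tune was trained on.
image = pipe(
    "portrait of sks person, studio lighting, 85mm photo",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("portrait.png")
```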
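On the "~megabytes-sized outputs" mentioned in the list above: the Custom Diffusion paper fine-tunes only the cross-attention key/value projections (plus new token embeddings), so the saved delta is a small fraction of a full checkpoint. The sketch below estimates that fraction by counting parameters in a diffusers UNet; the repo ID is just an example of a public Stable Diffusion v1.x model, and the module-name filter assumes diffusers' attn2.to_k/to_v naming convention.

```python
from diffusers import UNet2DConditionModel

# Example public SD v1.x UNet; any v1.x checkpoint has the same structure.
unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)

total = sum(p.numel() for p in unet.parameters())

# Cross-attention layers are named "attn2" in diffusers; Custom Diffusion
# updates only their key/value projections (to_k / to_v).
kv = sum(
    p.numel()
    for name, p in unet.named_parameters()
    if "attn2.to_k" in name or "attn2.to_v" in name
)

print(f"full UNet:      {total / 1e6:7.1f}M params (~{2 * total / 1e6:.0f} MB in fp16)")
print(f"cross-attn K/V: {kv / 1e6:7.1f}M params (~{2 * kv / 1e6:.0f} MB in fp16)")
```

The second line is roughly the size of the delta Custom Diffusion needs to ship per concept, which is why its outputs land in the tens of megabytes rather than gigabytes.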

Stats

Basic custom-diffusion repo stats
Mentions: 11
Stars: 1,778
Activity: 5.6
Last commit: 4 months ago
