k-diffusion VS Fooocus

Compare k-diffusion vs Fooocus and see what their differences are.

k-diffusion

Karras et al. (2022) diffusion models for PyTorch (by crowsonkb)

Fooocus

Focus on prompting and generating (by lllyasviel)
                 k-diffusion   Fooocus
Mentions         20            34
Stars            2,078         35,143
Growth           -             -
Activity         8.4           9.8
Latest commit    6 days ago    2 days ago
Language         Python        Python
License          MIT License   GNU General Public License v3.0 only
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

k-diffusion

Posts with mentions or reviews of k-diffusion. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-11.
  • Fooocus: OSS of prompts and generations based on A111 and ComfyUI
    2 projects | news.ycombinator.com | 11 Aug 2023
    Here's my attempt at an explanation without jargon. You can just read the last paragraph; the first four are just context.

    These image models are trained on 1000 steps of noise, where at 0 no noise is added to the training image and at 1000 the image is pure noise. The model's goal is to denoise the image, and it does this knowing how much noise the image has. This lets the model learn how much it should change the image: at high noise it changes a lot of pixels and builds the overall "structure" of the image, and at low noise it changes fewer pixels and focuses on adding details.
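
    A minimal sketch of that training setup, assuming a simple linear mixing schedule and a noise-predicting model (both are illustrative placeholders; real models use fancier schedules and parameterizations):

        import torch
        import torch.nn.functional as F

        def training_step(model, clean_images, num_steps=1000):
            # Pick a random noise level for each image in the batch.
            t = torch.randint(0, num_steps, (clean_images.shape[0],))
            noise = torch.randn_like(clean_images)
            # Mix image and noise: at t=0 the image is untouched,
            # near t=1000 it is almost pure noise.
            alpha = (1.0 - t.float() / num_steps).view(-1, 1, 1, 1)
            noisy = alpha * clean_images + (1 - alpha) * noise
            # The model sees the noisy image *and* the noise level,
            # and tries to predict the noise that was added.
            predicted_noise = model(noisy, t)
            return F.mse_loss(predicted_noise, noise)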

    To use the model you start with pure noise, and the model iteratively denoises it until a clean image shows up. A naive approach takes 1000 steps: you run the model 1000 times, each time feeding the previous result back in and telling the model that the noise decreased by one step, until it reaches 0 noise. This takes a long time, up to 15 minutes to generate an image on a mid-range consumer GPU.
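
    In code, that naive approach is just a loop counting the noise level down (model here is the same hypothetical noise predictor as above, with the per-step update simplified to a bare model call):

        import torch

        @torch.no_grad()
        def naive_sample(model, shape, num_steps=1000):
            x = torch.randn(shape)                  # start from pure noise
            for t in reversed(range(num_steps)):    # 999, 998, ..., 0
                # Feed the previous result back in and tell the model
                # the noise level dropped by one step.
                x = model(x, torch.tensor([t]))
            return x                                # 1000 model evaluations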

    Turns out that when you give the model pure noise and tell it there are 1000 steps of noise, the result is not an image with 999 steps of noise, but an image that looks like it has much less. This means you can probably skip 50-100 steps of denoising per iteration and still get a very good picture. The issue is: which steps do you pick? You could again take a naive approach and just take every 50th step for a total of 20 steps, but it turns out there are better ways.
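
    For instance, the naive skip is an evenly spaced subset of the 1000 steps, while better schedules space the steps non-uniformly. A sketch (the power-law spacing loosely follows the Karras et al. (2022) idea; the endpoints are made up):

        import numpy as np

        n = 20        # model evaluations we are willing to spend

        # Naive: every 50th step, evenly spaced.
        uniform = np.linspace(999, 0, n)

        # Power-law: steps cluster at low noise, where fine detail
        # is decided, and spread out at high noise.
        rho = 7.0
        ramp = np.linspace(0, 1, n)
        power_law = 999 * (1 - ramp) ** rho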

    This is where samplers come in. Essentially, a sampler takes the number of steps you want to spend denoising an image (usually ~20) and--among other things--picks which steps to use each iteration. The most popular samplers are the ones in the k-diffusion repo[1], or k-samplers for short. Do note that samplers do much more than just pick the steps: they are responsible for running the denoising process itself, and some of them even add a small amount of noise back after each denoising step.
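
    With the k-diffusion library itself, building a schedule and running a sampler looks roughly like this (a sketch: the sigma endpoints and image size are made up, and the model must first be wrapped in one of the repo's denoiser wrappers, which is omitted here):

        import torch
        from k_diffusion import sampling

        # A 20-step noise schedule with the Karras et al. spacing.
        sigmas = sampling.get_sigmas_karras(n=20, sigma_min=0.03, sigma_max=15.0)

        # The sampler owns the whole denoising loop: it calls the
        # wrapped model once per step and integrates the results.
        x = torch.randn(1, 3, 64, 64) * sigmas[0]   # start from pure noise
        # wrapped_model: the repo's denoiser wrapper around your model
        samples = sampling.sample_euler(wrapped_model, x, sigmas)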

    The newest open source model, SDXL, is actually 2 models: a base model that can generate images as normal, and a refiner model that is specialized in adding details to images. A typical workflow is to ask the base model for 25 steps of denoising but only run the first 20, then use the refiner model to do the rest. According to the OP, this was being done without keeping the state of the sampler; that is, they were running 2 samplers separately, one for the base model, and then starting a fresh one for the refiner model. Since the samplers use historical data for optimization, the end result was not ideal.
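
    A sketch of the fix the OP describes: build one schedule up front and hand its tail to the refiner, instead of restarting a sampler from scratch (make_schedule and sample are hypothetical stand-ins for whatever the UI actually calls):

        # One 25-step schedule shared by both models.
        sigmas = make_schedule(n=25)

        # The base model handles the first 20 steps...
        x = sample(base_model, x, sigmas[:21])
        # ...and the refiner continues from the same point in the
        # schedule, rather than starting over with its own history.
        x = sample(refiner_model, x, sigmas[20:])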

    [1] https://github.com/crowsonkb/k-diffusion

  • Is it possible to install dpm++ 2s a karras on InvokeAI? 🙏
    1 project | /r/StableDiffusion | 8 Jun 2023
    I believe all the advanced samplers are defined upstream in this repo by crowsonkb. As for "loading them" into Invoke, you would need to modify the InvokeAI source code to define new samplers. The good news is that since it's all in Python, you don't need to do any compiling.
  • Why does UniPC sampler use DDIM for Hires Fix?
    3 projects | /r/StableDiffusion | 26 Mar 2023
  • Can someone ELI5 the differences between samplers?
    1 project | /r/StableDiffusion | 25 Feb 2023
    The K Diffusion samplers are probably the most advanced currently.
  • Is there a resource that has list of samplers for SD? Like https://upscale.wiki/wiki/Model_Database for upscalers?
    1 project | /r/StableDiffusion | 12 Feb 2023
    I don't know of any Sampler that is not already in A1111, and this is the closest thing to a "list of Samplers for SD".
  • Different Samplers?
    1 project | /r/StableDiffusion | 22 Jan 2023
    This is the main source of all the Samplers we see in the various SD UI's. The source code has references to published papers behind the samplers. Aside from this, I haven't found a wiki for them.
  • Image editing with just text prompt. New Instruct2Pix2Pix paper. Demo link in comments
    4 projects | /r/StableDiffusion | 21 Jan 2023
    git clone https://github.com/crowsonkb/k-diffusion.git
  • The sampler vibe started with LMS, then there was a big migration to using EULER A. Are many now moving to DPM++ e.g. DPM++ 2S a Karras and why?
    1 project | /r/StableDiffusion | 15 Dec 2022
    Am curious to see what drives these choices. I think LMS was the default in Dreamstudio when Stable Diffusion was released. Then Euler A became the default in AUTOMATIC1111, which I think explained a lot. But now that many people are more literate about samplers, it looks like these decisions are more deliberate. With a lot more samplers implemented in https://github.com/crowsonkb/k-diffusion and added to AUTOMATIC1111, is speed the main driver (DPM++ is a lot about speed https://arxiv.org/abs/2211.01095), and what about image quality? What are your thoughts?
  • Can anyone explain differences between sampling methods and their uses to me in simple terms, because all the info I've found so far is either very contradicting or complex and goes over my head
    2 projects | /r/StableDiffusion | 9 Dec 2022
    Almost all other samplers come from work done by @RiversHaveWings, aka Katherine Crowson, which is mostly contained in her work at this repository. She is listed as the principal researcher at Stability AI. Her notes for those samplers are as follows:
  • K-diffusion: Karras et al. (2022) diffusion models for PyTorch
    1 project | news.ycombinator.com | 4 Dec 2022

Fooocus

Posts with mentions or reviews of Fooocus. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-28.
  • AI, but at what cost? The energy-inefficient AI era is already here
    2 projects | dev.to | 28 Mar 2024
    But we can come to a pretty realistic (although not as accurate) conclusion if we put our minds to it. I chose Fooocus for this example, which is the most straightforward (and, I believe, most popular) Stable Diffusion GUI out there. Let's start simple:
  • How to Persist Data in Google Colab Using JuiceFS
    1 project | dev.to | 28 Mar 2024
    # Install the JuiceFS client.
    !curl -sSL https://d.juicefs.com/install | sh -
    # Mount the JuiceFS file system.
    !juicefs mount rediss://:[email protected]/1 myjfs -d
    # Create the directory structure for Fooocus models in JuiceFS.
    !mkdir -p myjfs/models/{checkpoints,loras,embeddings,vae_approx,upscale_models,inpaint,controlnet,clip_vision}
    # Clone the Fooocus repository.
    !git clone https://github.com/lllyasviel/Fooocus.git
  • Stable Cascade
    8 projects | news.ycombinator.com | 13 Feb 2024
    That looks very impressive unless the demo is cherrypicked, would be great if this could be implemented into a frontend like Fooocus https://github.com/lllyasviel/Fooocus
  • Stable Code 3B: Coding on the Edge
    7 projects | news.ycombinator.com | 16 Jan 2024
    You might be thinking of Fooocus: https://github.com/lllyasviel/Fooocus

    The Stable Diffusion web interface that got a lot of people's attention originally was Automatic1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui

    Fooocus is definitely more beginner-friendly. It does a lot of the prompt engineering for you. Automatic1111 has a ton of plugins, most notably ControlNet, which gives you fine-grained control over the images, but there is a learning curve.

  • Ask HN: How are you using ChatGPT for yourself?
    2 projects | news.ycombinator.com | 26 Dec 2023
    I just installed this last night on my laptop:

    https://github.com/lllyasviel/Fooocus

    Highly recommend:

    >"Looking up from the deck of golden gate bridge at the towers and metal work, the towers rise and arch back in an ominous and foreboding manner. more artistic, like an alphonse mucha propaganda poster - slightly fish-eye feeling" -- https://i.imgur.com/vyNg79f.jpg

    the local UI at 127.0.0.1 - https://i.imgur.com/wRwghuN.jpg

  • It took SEVEN MINUTES to do this using fooocus on a 1060 3g with 16 ram. Can I make it faster?
    1 project | /r/StableDiffusion | 11 Dec 2023
    Why not use the Google Colab notebook while it's still a free option: https://github.com/lllyasviel/Fooocus. It's not bad. I've been using the Colab Fooocus notebook and A1111 on Sage with the 4 free hours of GPU time. The Colab has the Juggernaut model preloaded, but I combined some code from another notebook to add other models and LoRAs.
  • Could I use SDXL on a 4gb VRAM?
    1 project | /r/StableDiffusion | 10 Dec 2023
  • Looking for an open source Image generator with no limits
    1 project | /r/ImageGenerators | 9 Dec 2023
    I'm trying to test the abilities of image generators and the risks that can come with them, and I'm looking for an image generator that works locally and has no limits. I used the Fooocus project from GitHub and the Juggernaut model; it's capable of generating nude pictures but not fully nude pictures, and it doesn't work well with bloody scenes. Any recommendations for a better model?
  • What is the licensing of SD models/frameworks?
    1 project | /r/StableDiffusion | 9 Dec 2023
    I recently saw this video from Fireship and I started wondering about the licensing of SD models and frameworks. Fireship shows Fooocus and advertises it as a cool solution. What I started wondering about is: Fooocus downloads a couple of models: Juggernaut XL, some ControlNets, some LoRAs. What licensing is tied to all of this? The one I am most interested in is JuggernautXL; on civitai it's listed as having a CreativeML Open RAIL++-M license, but in the description there's a remark: "For business inquires, commercial licensing, custom models, and consultation contact me under [email protected]". There are a lot of separate parts going on in AI frameworks and it's a bit unclear whether I can use this commercially.
  • What AI is best for this kind of pictures?
    1 project | /r/civitai | 7 Dec 2023
    For running locally without a lot of hassle, I would recommend using https://github.com/lllyasviel/Fooocus and the "Sticker" style, which is available within the UI (Advanced tab). If you need a lot of text directly in your images, you may have difficulties with SD and other AI models. In this case LoRAs could help. Example: https://civitai.com/posts/880523 (check details for the used prompt). There I used the following LoRA (a LoRA is a kind of specialized submodel for a style, concept, or person).

What are some alternatives?

When comparing k-diffusion and Fooocus you can also consider the following projects:

stable-diffusion - k_diffusion wrapper included for k_lms sampling. fixed for notebook.

ComfyUI-AIT