| | improved-aesthetic-predictor | stable-diffusion |
|---|---|---|
| Mentions | 8 | 142 |
| Stars | 717 | 2,438 |
| Growth | - | - |
| Activity | 3.6 | 9.8 |
| Last Commit | 7 months ago | over 1 year ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
improved-aesthetic-predictor
- [D] Is accurately estimating image quality even possible?
- [D] Looking for publicly available generative image datasets labeled with human preferences or scores. Any recommendations?
Perhaps you could use an "aesthetic-evaluator": https://github.com/christophschuhmann/improved-aesthetic-predictor (probably not SoTA but it works)
- What are the current must-have extensions for Automatic1111?
"Calculates aesthetic score for generated images using CLIP+MLP Aesthetic Score Predictor based on Chad Scorer"
- Is there an AI which can judge the “aesthetic potential” of an image? [D]
- Dreamer's Guide to Getting Started w/ Stable Diffusion!
stable-diffusion-v1-2: Resumed from stable-diffusion-v1-1. 515,000 steps at resolution 512x512 on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size >= 512x512, estimated aesthetics score > 5.0, and an estimated watermark probability < 0.5. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an improved aesthetics estimator).
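The v1-2 filtering described above can be sketched as a simple predicate over per-image metadata. This is a hypothetical reconstruction: the field names (`width`, `height`, `aesthetic`, `pwatermark`) follow the LAION-5B metadata convention but are assumptions here, not the actual pipeline code.

```python
# Hedged sketch of the "laion-improved-aesthetics" style filtering:
# keep images that are at least 512px on both sides, score above the
# aesthetic threshold, and fall below the watermark-probability cap.

def keep_sample(meta, min_side=512, min_aesthetic=5.0, max_watermark=0.5):
    """Return True if an image record passes the v1-2 style filters."""
    return (
        min(meta["width"], meta["height"]) >= min_side
        and meta["aesthetic"] > min_aesthetic
        and meta["pwatermark"] < max_watermark
    )

records = [
    {"width": 640, "height": 512, "aesthetic": 5.4, "pwatermark": 0.1},  # passes
    {"width": 300, "height": 512, "aesthetic": 6.0, "pwatermark": 0.1},  # too small
    {"width": 512, "height": 512, "aesthetic": 4.2, "pwatermark": 0.1},  # low score
]
kept = [r for r in records if keep_sample(r)]
```

In practice this predicate would run over the LAION metadata parquet files rather than an in-memory list.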
- [P] Waifu-Diffusion: a Stable Diffusion model finetuned on 56k Danbooru images
The data used for fine-tuning has come from a random sample of 56k Danbooru images, which were filtered based on CLIP Aesthetic Scoring where only images with an aesthetic score greater than 6.0 were used.
- Waifu-Diffusion v1-2: A SD 1.4 model finetuned on 56k Danbooru images for 5 epochs
All they've said is they randomly picked 56,000 images that had an aesthetic score greater than 6.0. The score is created by this model. https://github.com/christophschuhmann/improved-aesthetic-predictor
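The scoring model linked above follows a simple recipe: a frozen CLIP image embedding is L2-normalised and fed to a small MLP regression head that outputs one scalar aesthetic score. The sketch below illustrates that shape with random stand-in weights (not the released checkpoint, which is a deeper head trained on human ratings), using NumPy instead of the repo's PyTorch code.

```python
import numpy as np

# Minimal sketch of the CLIP+MLP idea behind improved-aesthetic-predictor.
# EMB_DIM matches CLIP ViT-L/14's image embedding size; the weights here
# are random placeholders, so the score is meaningless except as a demo.

rng = np.random.default_rng(0)
EMB_DIM = 768

def aesthetic_score(embedding, w1, b1, w2, b2):
    x = embedding / np.linalg.norm(embedding)  # L2-normalise, as the predictor does
    h = np.maximum(x @ w1 + b1, 0.0)           # hidden layer with ReLU
    return float(h @ w2 + b2)                  # single scalar score

w1, b1 = rng.normal(size=(EMB_DIM, 64)) * 0.02, np.zeros(64)
w2, b2 = rng.normal(size=64) * 0.02, 5.0

emb = rng.normal(size=EMB_DIM)  # stand-in for a CLIP image embedding
score = aesthetic_score(emb, w1, b1, w2, b2)
```

With the real checkpoint, scores roughly span 1-10, which is why thresholds like 5.0 and 6.0 appear in the posts above.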
- How to combine stable diffusion with a model which predicts aesthetics score?
Does anyone know how you could combine a model like Aesthetic Score Predictor with stable diffusion? They used this model to filter training images by score. It seems like a lot of people just tune their prompt to make images more aesthetic by adding certain words.
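One straightforward answer to the question above, short of guidance during sampling, is reranking: generate several candidates and keep the highest-scoring ones. The `generate` and `score` callables below are hypothetical stand-ins for a Stable Diffusion sampler and the CLIP+MLP predictor.

```python
# Hedged sketch: rerank a batch of generated candidates by predicted
# aesthetic score. Nothing here is from either repo's actual API.

def rerank(prompt, generate, score, n_candidates=4, keep=2):
    """Generate n candidates for `prompt`, return the `keep` best by score."""
    images = [generate(prompt, seed=i) for i in range(n_candidates)]
    return sorted(images, key=score, reverse=True)[:keep]

# Toy stand-ins so the sketch runs end to end: each "image" is its seed,
# and scores are a fixed lookup table.
fake_scores = {0: 4.1, 1: 6.3, 2: 5.0, 3: 5.9}
best = rerank(
    "a castle at sunset",
    generate=lambda prompt, seed: seed,
    score=lambda image: fake_scores[image],
)
```

The same loop works unchanged once `generate` wraps a real sampler and `score` wraps the predictor on a CLIP embedding of each output.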
stable-diffusion
- [Stable Diffusion] Help needed increasing the maximum file size on a local install
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- It's time!
- Anybody running SD on a Macbook Pro? What are you using and how did you install it?
Yes, you can install it with Python! https://github.com/lstein/stable-diffusion works with macOS, and you can control all the common parameters via their WebUI or CLI :)
- How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I'm using lstein fork ("dream") and when I create an image from the terminal, it also writes back to the terminal like this:
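Since the fork echoes a reproduction command after each render, one way to save the arguments is to append those echoed lines to a file and parse them back later. The sketch below assumes a dream-CLI-style line (quoted prompt followed by short flags like `-s50` and `-S1234567`); the exact flag set is an assumption, not confirmed from the fork's code.

```python
import shlex

# Hedged sketch: split an echoed dream-style command line into the
# prompt and a dict of its flags, so logged lines can be replayed.

def parse_dream_line(line):
    """Split an echoed dream command into (prompt, flags)."""
    tokens = shlex.split(line)
    prompt = tokens[0]
    # Map each "-Xvalue" token to {"-X": "value"}.
    flags = {t[:2]: t[2:] for t in tokens[1:] if t.startswith("-")}
    return prompt, flags

prompt, flags = parse_dream_line('"a castle at sunset" -s50 -S1234567')
```

Note the fork also embeds generation metadata inside the output PNGs, so the images themselves carry their arguments as well.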
- I Resurrected “Ugly Sonic” with Stable Diffusion Textual Inversion
- AI Seamless Texture Generator Built-In to Blender
> Whenever I ask for something like ‘seamless tiling xxxxxx’ it kinda sorta gets the idea, but the resulting texture doesn’t quite tile right.
Getting seamless tiling requires more than just having "seamless tiling" in the prompt. It also depends on whether the fork you're using has that feature at all.
https://github.com/lstein/stable-diffusion has the feature, but you need to pass it outside the prompt. So if you use the `dream.py` prompt CLI, you can pass it `"Hats on the ground" --seamless` and it should be perfectly tileable.
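Under the hood, flags like `--seamless` typically work by switching the model's convolutions to circular padding, so pixels past one edge are taken from the opposite edge and the output wraps. The NumPy sketch below illustrates the padding behaviour itself (a hedged illustration of the common community patch, not the fork's exact code).

```python
import numpy as np

# Circular ("wrap") padding: the border is filled from the opposite
# edge, so a convolution sees the image as if it already tiled.
# The usual --seamless implementation applies this padding mode to
# every Conv2d layer in the diffusion model.

img = np.arange(16, dtype=float).reshape(4, 4)
wrapped = np.pad(img, 1, mode="wrap")  # pad 1 pixel on every side
```

Because the top padding row comes from the bottom of the image (and left from right), features computed near one edge match the opposite edge, which is what makes the final texture tile cleanly.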
- Auto SD Workflow - Update 0.2.0 - "Collections", Password Protection, Brand new UI + more
From https://github.com/lstein/stable-diffusion
- Stable Diffusion GUIs for Apple Silicon
Stable Diffusion Dream Script: This is the original site/script for supporting macOS. I found this soon after Stable Diffusion was publicly released and it was the site which inspired me to try out using Stable Diffusion on a mac. They have a web-based UI (as well as command-line scripts) and a lot of documentation on how to get things working.
- Still can't believe this technology is real. My talentless 2 minute sketch on the left.
I’m pretty sure it works for M2 as well - basically the newer ARM-based Macs. The instructions to get it working are detailed! https://github.com/lstein/stable-diffusion
What are some alternatives?
waifu-diffusion - stable diffusion finetuned on weeb stuff
DiffusionToolkit - Metadata-indexer and Viewer for AI-generated images
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements.
stable-diffusion-webui - Stable Diffusion web UI
discord-rpc-for-automatic1111-webui - Silent extension (no tab) for AUTOMATIC1111's Stable Diffusion WebUI adding connection to Discord RPC, so it would show a fancy table in the Discord profile.
diffusers-uncensored - Uncensored fork of diffusers
sd-wildcards - A collection of wildcards for Stable Diffusion
txt2imghd - A port of GOBIG for Stable Diffusion
stable-diffusion-webui-aesthetic-image-scorer
dream-textures - Stable Diffusion built-in to Blender