civitai
stablediffusion
| | civitai | stablediffusion |
|---|---|---|
| Mentions | 638 | 108 |
| Stars | 5,474 | 35,536 |
| Growth | 5.7% | 4.5% |
| Activity | 10.0 | 5.4 |
| Latest commit | 7 days ago | 3 months ago |
| Language | TypeScript | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
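The activity number can be read as a decile rank among tracked projects. The site's exact formula isn't published here, so as an illustration only, a recency-weighted commit score and its percentile rank might be sketched like this (the half-life parameter and the helper names are assumptions):

```python
from datetime import date

def activity_score(commit_dates, today, half_life_days=30.0):
    # Toy recency-weighted commit count: each commit's weight halves
    # every `half_life_days`, so recent commits count more than old ones.
    # Illustrative stand-in, not the site's actual formula.
    return sum(0.5 ** ((today - d).days / half_life_days) for d in commit_dates)

def decile(score, all_scores):
    # Map a score to a 0.0-10.0 scale by its percentile rank
    # among all tracked projects.
    below = sum(1 for s in all_scores if s <= score)
    return round(10.0 * below / len(all_scores), 1)

today = date(2024, 3, 1)
active = activity_score([date(2024, 2, 28), date(2024, 2, 25)], today)
stale = activity_score([date(2023, 12, 1)], today)
scores = [active, stale, 0.2, 0.9, 1.5]
print(decile(active, scores))  # most active of the 5 tracked -> 10.0
```

Under this toy scheme, a project scoring 10.0 is the most actively developed of the projects being compared, matching the "top 10%" reading above.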
civitai
-
JavaScript Bloat in 2024
Remember The Website Obesity Crisis [1] article from 2015? Since then [2] things have only gotten worse, and it has been almost 10 years already (at the end of 2024).
Is it foolish to say that in 10 more years you won't be able to navigate the web on a circa-2015 PC? If nothing changes, it seems like it.
My old MacBook from 2013 with the latest Firefox already cannot handle loading the https://civitai.com web page with its 23.98 MB of JavaScript; it just hangs for half a minute while trying to render this disaster of a web frontend.
[1] https://idlewords.com/talks/website_obesity.htm
[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
-
Ask HN: Those making $500/month on side projects in 2024 – Show and tell
Soz, I don't like to cross link my accounts, especially with all this TSWift shenanigans going around. But, if you look at https://civitai.com/ plenty of people have links to their ko-fi accounts where you can commission them (heck you may even find me somewhere on there).
- Google Imagen 2
-
‘Nudify’ Apps That Use AI to ‘Undress’ Women in Photos Are Soaring in Popularity
Tons of models to pick from on Civitai. I have a bunch downloaded for making NPCs for DnD.
-
[serious] we as a community need to do something about this.
With places where you can get upgraded/specialized diffusion models or LoRAs, which themselves have extensions that tie in and make the experience more seamless
-
Sir Nicolas Cage Owner of Jewelry shop
Do you have a good PC/laptop with a good GPU? If so, start with this. A1111 WebUI is no longer being updated, so here's a new one: https://github.com/LykosAI/StabilityMatrix/ On this site you can download checkpoints and LoRAs; you have to sign up (it's free, and once you do, click on the eye and enable everything): https://civitai.com/ You can get prompts from this site (use the search): https://playgroundai.com/ Edited: took the direct link out and reposted
-
IT Veteran... why am I struggling with all of this?
You should also try dabbling in AI art. Full motion video is becoming increasingly prevalent (albeit a bit rough, as it's still growing). Stable Diffusion Automatic1111 is free. Get to downloading, and try LoRAs with a Stable Diffusion XL checkpoint from Civitai. The future is now, old man.
-
SDXL Turbo: A Real-Time Text-to-Image Generation Model
> Civit dot ai
The site you are thinking of is https://civitai.com/ not "civit dot ai".
-
Stable Video Diffusion
> Haven't you seen the insane quality of videos on civitai?
I have not, so I went to https://civitai.com/ which I guess is what you're talking about? But I cannot find a single video there, just images and models.
-
Best ai image generator without a nsfw filter?
You can get many user-created models from https://civitai.com/ You'll need to log in to see all models. You'll see some models labeled XL. This means they are using SDXL, a newer version of Stable Diffusion. These are much more resource intensive to use than Stable Diffusion 1.5.
stablediffusion
-
Generating AI Images from your own PC
With this tutorial's help, you can generate images with AI on your own computer with Stable Diffusion.
-
Reimagine XL: this is just Controlnet with a credit system right?
New stable diffusion finetune (Stable unCLIP 2.1, Hugging Face) at 768x768 resolution, based on SD2.1-768. This model allows for image variations and mixing operations as described in Hierarchical Text-Conditional Image Generation with CLIP Latents, and, thanks to its modularity, can be combined with other models such as KARLO. Comes in two variants: Stable unCLIP-L and Stable unCLIP-H, which are conditioned on CLIP ViT-L and ViT-H image embeddings, respectively. Instructions are available here.
-
What am I doing wrong please?
Another question, if that's ok? Stable Diffusion 2.0 - https://github.com/Stability-AI/stablediffusion - if I wanted to use that, do I follow along their instructions and it will work on the M1 still, or you advise against it?
-
Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Stable Diffusion (2D Image Generation and Animation)
https://github.com/CompVis/stable-diffusion (Stable Diffusion V1)
https://huggingface.co/CompVis/stable-diffusion (Stable Diffusion Checkpoints 1.1-1.4)
https://huggingface.co/runwayml/stable-diffusion-v1-5 (Stable Diffusion Checkpoint 1.5)
https://github.com/Stability-AI/stablediffusion (Stable Diffusion V2)
https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main (Stable Diffusion Checkpoint 2.1)
Stable Diffusion Automatic1111 WebUI and Extensions
https://github.com/AUTOMATIC1111/stable-diffusion-webui (WebUI - easier to use)
PLEASE NOTE: many extensions can be installed from the WebUI by clicking "Available" or "Install from URL", but you may still need to download the model checkpoints!
https://github.com/Mikubill/sd-webui-controlnet (ControlNet extension - use various models to control your image generation; useful for animation and temporal consistency)
https://huggingface.co/lllyasviel/ControlNet/tree/main/models (ControlNet checkpoints - Canny, Normal, OpenPose, Depth, etc.)
https://github.com/thygate/stable-diffusion-webui-depthmap-script (Depth Map extension - generate high-resolution depth maps and animated videos, or export to 3D modeling programs)
https://github.com/graemeniedermayer/stable-diffusion-webui-normalmap-script (Normal Map extension - generate high-resolution normal maps for use in 3D programs)
https://github.com/d8ahazard/sd_dreambooth_extension (DreamBooth extension - train your own objects, people, or styles into Stable Diffusion)
https://github.com/deforum-art/sd-webui-deforum (Deforum - generate weird 2D animations)
https://github.com/deforum-art/sd-webui-text2video (Deforum Text2Video - generate videos from text prompts using ModelScope or VideoCrafter)
-
Leaked deck raises questions over Stability AI’s Series A pitch to investors
Most of the latent and stable diffusion authors also work at Stability AI, as do many other generative AI research leaders in media.
Naming rights on the model were not part of the compute grant; we give grants incredibly freely and also provide support. Naming was suggested by the researchers in this case.
We don't just put out compute, but made sure to clear up everything from Stable Diffusion 2 onwards 100% trained by Robin and team: https://github.com/Stability-AI/stablediffusion
- I Used Stable Diffusion and Dreambooth to Create an Art Portrait of My Dog
- Futurism: "The Company Behind Stable Diffusion Appears to Be At Risk of Going Under"
-
ComfyUI now supports unCLIP and I figured out how to create unCLIP checkpoints from normal SD2.1 768-v checkpoints.
For those who don't know, unCLIP is a way of using images as concepts in your prompt, in addition to text. CLIPVision extracts the concepts from the input images, and those concepts are what gets passed to the model.
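The idea can be sketched schematically. This is not the actual Stable Diffusion or ComfyUI code; the pooling step, the toy shapes, and the function names are stand-ins chosen for illustration. The point is that a fixed-size "concept" vector from the image joins the text tokens as extra conditioning:

```python
def clip_vision_embed(image):
    # Stand-in for CLIPVision: pool a toy "image" (a list of pixel vectors)
    # into one fixed-size concept vector. Real CLIP uses a vision transformer.
    n = len(image)
    dim = len(image[0])
    return [sum(px[d] for px in image) / n for d in range(dim)]

def unclip_conditioning(text_tokens, image_emb, strength=1.0):
    # unCLIP-style conditioning: the image "concept" vector is appended to
    # the text-token embeddings as one extra conditioning token, so the
    # diffusion model sees the image concept alongside the text prompt.
    return text_tokens + [[strength * x for x in image_emb]]

# Toy shapes: 77 text tokens and a 4-pixel image, embedding dim 8.
text_tokens = [[0.0] * 8 for _ in range(77)]
image = [[1.0] * 8 for _ in range(4)]
cond = unclip_conditioning(text_tokens, clip_vision_embed(image))
print(len(cond), len(cond[0]))  # 78 8: 77 text tokens plus 1 image-concept token
```

The `strength` knob mirrors how unCLIP workflows let you weight the image concept against the text prompt.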
- Does anyone know a white-label Automatic1111 equivalent?
-
How to convert SD checkpoint file to format required by HF diffusers library?
https://github.com/Stability-AI/stablediffusion/blob/main/configs/stable-diffusion/v2-inference.yaml (SD2.0/SD2.1)
What are some alternatives?
stable-diffusion-webui-colab - stable diffusion webui colab
huggingface_hub - The official Python client for the Huggingface Hub.
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
stable.art - Photoshop plugin for Stable Diffusion with Automatic1111 as backend (locally or with Google Colab)
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
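The `lora` entry above refers to low-rank adaptation: instead of fine-tuning a full weight matrix W, you train two small matrices A and B and use W + (alpha/r)·B·A as the effective weight. A minimal pure-Python sketch of the arithmetic (illustrative only; real implementations apply this to the attention weights of the diffusion model):

```python
def matmul(A, B):
    # Naive matrix multiply for small illustrative matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_update(W, A, B, alpha, r):
    # LoRA: effective weight = W + (alpha / r) * B @ A, where B is (out, r)
    # and A is (r, in) with rank r much smaller than out/in. Only A and B
    # are trained; the frozen base weight W is never modified.
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# Toy 4x4 frozen weight with a rank-1 adapter: 8 trainable numbers vs 16.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
B = [[1.0], [0.0], [0.0], [0.0]]   # shape (4, 1)
A = [[0.0, 2.0, 0.0, 0.0]]         # shape (1, 4)
W_eff = lora_update(W, A, B, alpha=1.0, r=1)
print(W_eff[0])  # [1.0, 2.0, 0.0, 0.0]: only row 0 is shifted by the adapter
```

Because the adapter is a separate low-rank delta, LoRA files for diffusion models stay small and can be swapped or merged at load time without touching the base checkpoint.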