Graphite
stablediffusion
| | Graphite | stablediffusion |
|---|---|---|
| Mentions | 45 | 108 |
| Stars | 5,503 | 35,536 |
| Growth | 5.4% | 4.5% |
| Activity | 9.6 | 5.4 |
| Last commit | 1 day ago | 3 months ago |
| Language | Rust | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Graphite
- Any good beginner open source projects for a guy with a math background?
If you're interested in either computational geometry, layout/packing/constraints, or functional programming language concepts, those are all the math-related concepts that we're currently interacting with for Graphite, a 2D vector graphics editor that's aiming to become the next Blender (but for 2D instead of 3D). If that sounds interesting, I'd love to help get you started if you want to join our Discord and I can explain the math-related work that we need to get done. Cheers!
- Things I wish I knew before moving 50K lines of code to React Server Components
Not sure which web-based spreadsheet app you're talking about, because there are many that do use these frameworks. Here's a PS/AI clone built with a Svelte frontend: https://graphite.rs
- Graphite: In-development raster and vector 2D graphics editor that is FOSS
- What’s everyone working on this week (25/2023)?
Wanted to contribute to a good Rust-based project last week, so I started searching and found a good Reddit thread featuring several great projects. Looked through it and found Graphite. I liked the concept, though I know almost nothing about graphic design.
- Any open source projects willing to take in juniors?
If you're interested in helping us build a 2D graphics editing suite for designers and artists, consider contributing to Graphite. Getting started instructions are here. We code review PRs closely and give feedback to help you improve, and offer advice and mentorship via our Discord while you're learning and coding.
- Contributing to Open Source
If graphical apps suit your fancy, Graphite tries hard to make new contributors feel at home.
- Rust = most fun language?
Yesterday I submitted my first contribution to open source: https://github.com/GraphiteEditor/Graphite
- SD just released an open source version of their GUI called StableStudio
I run an open source 2D graphics editor project and our license is Apache 2.0 (which is basically the same as MIT) which provides much more freedom than the GPL does, since it's not copyleft. We have a Stable Diffusion feature built in, and we want to provide a hosted component so users can utilize that feature without self-hosting. A1111 being AGPL likely means we have to find an alternate backend. I'm looking into other options like SHARK (and would love some ideas if anyone else has suggestions).
- Any new open source projects in Rust looking for contributors? I want to start my journey as an OSS contributor.
Graphite is an in-development 2D creative tool for vector and raster graphics editing (basically, the goal is to make a better Inkscape and Gimp, plus way more). If that's interesting to you, we try really hard to have an inviting community that makes it approachable to get up and running with contributing to the project. Come say hi on our Discord and I can help get you set up. Or read our quick contributing tutorial/intro.
- What’s everyone working on this week (19/2023)?
And remember to give the project a ⭐ on the 🐙🐈 repo! https://github.com/GraphiteEditor/Graphite
stablediffusion
- Generating AI Images from your own PC
With this tutorial's help, you can generate AI images on your own computer with Stable Diffusion.
- Reimagine XL: this is just ControlNet with a credit system, right?
New stable diffusion finetune (Stable unCLIP 2.1, Hugging Face) at 768x768 resolution, based on SD2.1-768. This model allows for image variations and mixing operations as described in Hierarchical Text-Conditional Image Generation with CLIP Latents, and, thanks to its modularity, can be combined with other models such as KARLO. Comes in two variants: Stable unCLIP-L and Stable unCLIP-H, which are conditioned on CLIP ViT-L and ViT-H image embeddings, respectively. Instructions are available here.
- What am I doing wrong, please?
Another question, if that's OK? Stable Diffusion 2.0 - https://github.com/Stability-AI/stablediffusion - if I wanted to use that, do I follow their instructions and will it still work on the M1, or would you advise against it?
- Tools for AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Stable Diffusion (2D image generation and animation):
- https://github.com/CompVis/stable-diffusion (Stable Diffusion V1)
- https://huggingface.co/CompVis/stable-diffusion (Stable Diffusion checkpoints 1.1-1.4)
- https://huggingface.co/runwayml/stable-diffusion-v1-5 (Stable Diffusion checkpoint 1.5)
- https://github.com/Stability-AI/stablediffusion (Stable Diffusion V2)
- https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main (Stable Diffusion checkpoint 2.1)

Stable Diffusion Automatic1111 WebUI and extensions:
- https://github.com/AUTOMATIC1111/stable-diffusion-webui (WebUI - easier to use)

PLEASE NOTE: many extensions can be installed from the WebUI by clicking "Available" or "Install from URL", but you may still need to download the model checkpoints!

- https://github.com/Mikubill/sd-webui-controlnet (ControlNet extension - use various models to control your image generation; useful for animation and temporal consistency)
- https://huggingface.co/lllyasviel/ControlNet/tree/main/models (ControlNet checkpoints - Canny, Normal, OpenPose, Depth, etc.)
- https://github.com/thygate/stable-diffusion-webui-depthmap-script (Depth Map extension - generate high-resolution depth maps and animated videos, or export to 3D modeling programs)
- https://github.com/graemeniedermayer/stable-diffusion-webui-normalmap-script (Normal Map extension - generate high-resolution normal maps for use in 3D programs)
- https://github.com/d8ahazard/sd_dreambooth_extension (DreamBooth extension - train your own objects, people, or styles into Stable Diffusion)
- https://github.com/deforum-art/sd-webui-deforum (Deforum - generate weird 2D animations)
- https://github.com/deforum-art/sd-webui-text2video (Deforum Text2Video - generate videos from text prompts using ModelScope or VideoCrafter)
- Leaked deck raises questions over Stability AI’s Series A pitch to investors
Most of the latent and stable diffusion authors also work at Stability AI, as do many other generative AI research leaders in media.
Naming rights on the model were not part of the compute grant; we give those grants incredibly freely, along with support. Naming was suggested by the researchers in this case.
We don't just put out compute, but made sure to clear things up: everything from Stable Diffusion 2 onwards was 100% trained by Robin and team: https://github.com/Stability-AI/stablediffusion
- I Used Stable Diffusion and Dreambooth to Create an Art Portrait of My Dog
- Futurism: "The Company Behind Stable Diffusion Appears to Be At Risk of Going Under"
- ComfyUI now supports unCLIP and I figured out how to create unCLIP checkpoints from normal SD2.1 768-v checkpoints.
For those that don't know what unCLIP is: it's a way of using images as concepts in your prompt, in addition to text. CLIPVision extracts the concepts from the input images, and those concepts are what gets passed to the model.
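The idea can be sketched with plain arrays. This is a hedged illustration of the conditioning concept only, not the actual unCLIP or ComfyUI code; the `mix_conditioning` helper, the blend formula, and the 768-dimensional embeddings are all assumptions made for the example:

```python
import numpy as np

def mix_conditioning(text_emb, image_emb, image_strength=0.5):
    """Toy illustration of unCLIP-style conditioning: an image embedding
    (as CLIPVision would extract from an input image) is blended into the
    conditioning vector that normally comes from the text prompt alone."""
    # Normalize both embeddings so neither dominates by magnitude alone.
    text_emb = text_emb / np.linalg.norm(text_emb)
    image_emb = image_emb / np.linalg.norm(image_emb)
    # Weighted blend: image_strength=0 gives pure text conditioning,
    # image_strength=1 conditions purely on the image's concepts.
    return (1 - image_strength) * text_emb + image_strength * image_emb

# Stand-ins for real CLIP embeddings (e.g. 768-dim for CLIP ViT-L).
rng = np.random.default_rng(0)
text_emb = rng.standard_normal(768)
image_emb = rng.standard_normal(768)

cond = mix_conditioning(text_emb, image_emb, image_strength=0.3)
print(cond.shape)  # (768,)
```

In the real pipeline the image concepts enter through the model's cross-attention conditioning rather than a simple weighted sum, but the knob is the same: how strongly the input image's concepts steer generation relative to the text prompt.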
- Does anyone know a white-label Automatic1111 equivalent?
- How to convert an SD checkpoint file to the format required by the HF diffusers library?
https://github.com/Stability-AI/stablediffusion/blob/main/configs/stable-diffusion/v2-inference.yaml (SD2.0/SD2.1)
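A common route is the conversion script that ships in the `diffusers` repository, pointed at that inference config. This is a sketch, not a definitive recipe: the script's location and flags can change between diffusers versions, and the checkpoint/output paths here are placeholders.

```shell
# Clone diffusers to get the conversion script (its path may differ by version).
git clone https://github.com/huggingface/diffusers
cd diffusers

# Convert a v2 .ckpt into the diffusers directory layout, using the matching
# v2-inference.yaml config from the Stability-AI repo linked above.
python scripts/convert_original_stable_diffusion_to_diffusers.py \
  --checkpoint_path /path/to/model.ckpt \
  --original_config_file /path/to/v2-inference.yaml \
  --dump_path ./converted-model
```

The resulting `./converted-model` directory can then be loaded with `StableDiffusionPipeline.from_pretrained("./converted-model")`.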
What are some alternatives?
lora - Using low-rank adaptation to quickly fine-tune diffusion models.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
civitai - A repository of models, textual inversions, and more
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
egui - egui: an easy-to-use immediate mode GUI in Rust that runs on both web and native
stable-diffusion-webui - Stable Diffusion web UI
SHARK - SHARK - High Performance Machine Learning Distribution
Txt2Vectorgraphics - Custom script for AUTOMATIC1111's Stable Diffusion WebUI.
waifu-diffusion - stable diffusion finetuned on weeb stuff
Method-Draw - Method Draw, the SVG Editor for Method of Action