Grounded-Segment-Anything vs sd-webui-segment-everything

| | Grounded-Segment-Anything | sd-webui-segment-everything |
|---|---|---|
| Mentions | 11 | 2 |
| Stars | 13,615 | 690 |
| Growth | 3.5% | - |
| Activity | 8.0 | 10.0 |
| Latest Commit | about 1 month ago | about 1 year ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Grounded-Segment-Anything
- Tooling for bulk image data set manipulation?
- Is there a way to do segmentation of a person's clothing?
Grounded SAM is a project that tries to combine these steps in a single workflow, though I'm not sure how far along it has come. It might be worth checking out, but it also isn't too difficult to combine the three models by hand.
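For readers who want to combine the three models by hand, the data flow is: a text prompt drives an open-set detector (Grounding DINO) to get boxes, the boxes prompt SAM to get masks, and the masks drive Stable Diffusion inpainting. The sketch below is purely structural — each stage is a labeled stub with hypothetical return values, not the real model calls (which the comments name), so only the hand-off between stages is shown.

```python
# Hypothetical sketch of the three-stage Grounded-SAM-style pipeline.
# Each stage is a stub; the comments name the real component it stands in for.

def detect_boxes(image, text_prompt):
    """Stage 1 (Grounding DINO): text prompt -> bounding boxes."""
    # Real pipeline: run Grounding DINO inference with the text prompt.
    return [(40, 60, 200, 300)]  # one (x0, y0, x1, y1) box, stub value

def segment_masks(image, boxes):
    """Stage 2 (SAM): bounding boxes -> per-box binary masks."""
    # Real pipeline: prompt SAM with each box to get a pixel mask.
    return [{"box": b, "mask": "binary-mask-stub"} for b in boxes]

def inpaint(image, masks, text_prompt):
    """Stage 3 (Stable Diffusion): inpaint inside the masked regions."""
    # Real pipeline: run an SD inpainting model on the masked areas.
    return {"image": image, "edited_regions": len(masks), "prompt": text_prompt}

boxes = detect_boxes("photo.png", "jacket")
masks = segment_masks("photo.png", boxes)
result = inpaint("photo.png", masks, "red leather jacket")
print(result["edited_regions"])  # prints 1
```

The point is only that the stages are loosely coupled — boxes and masks are the entire interface between them, which is why swapping any one model for an alternative is straightforward.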
- [P] ImageBind with SAM: A simple demo to generate masks with different modalities
We built a simple demo, ImageBind-SAM, which aims to segment using different modalities
- Why isn't Grounding DINO working?
GroundingDINO install failed. Please submit an issue to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues.
- You can now use text+SAM+SD inpainting/LoRA Training in SD-WebUI-Segment-Anything Extension
This is because the C++ extension somehow was not compiled. Check https://github.com/IDEA-Research/Grounded-Segment-Anything/issues/53 to see whether it's working; otherwise search through similar issues. Let me know which solution works. Remember that `export` on Windows is `set`, and you should make sure that CUDA_HOME exists in your environment variables.
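A minimal sketch of the environment setup described above, before building GroundingDINO's C++/CUDA extension. The CUDA path below is an assumption — point it at wherever your CUDA toolkit is actually installed.

```shell
# Make CUDA_HOME visible to the build (path is an assumed example).
export CUDA_HOME=/usr/local/cuda        # Linux/macOS (bash)

# Windows cmd.exe has no `export`; the equivalent is:
#   set CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7

echo "CUDA_HOME=$CUDA_HOME"
```

After setting the variable, re-run the extension's install step in the same shell so the compiler can find the CUDA toolkit.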
- [D] Data Annotation Done by Machine Learning/AI?
- SD Webui + Segment Everything
I'm glad to do so after I implement this.
- [R] Grounded-Segment-Anything: Automatically Detect, Segment and Generate Anything with Image and Text Inputs
- [P] Grounded-Segment-Anything: Zero-shot Detection and Segmentation
here is the GitHub link: https://github.com/IDEA-Research/Grounded-Segment-Anything
sd-webui-segment-everything
- SD Webui + Segment Everything
I just created an extension to use SAM for Stable Diffusion inpainting. The results seem pretty cool. The extension is at https://github.com/continue-revolution/sd-webui-segment-everything
- Stable Diffusion + Meta AI's SAM
I have created an extension. See https://github.com/continue-revolution/sd-webui-segment-everything
What are some alternatives?
segment-anything - The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
stable-diffusion-webui-vae-tile-infer - Yet another vae tiling inferer, extension script for AUTOMATIC1111/stable-diffusion-webui.
ABG_extension
sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
CLIP - Contrastive Language-Image Pretraining; predicts the most relevant text snippet given an image
batch-face-swap - Automatically detects faces and replaces them
GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
sd-webui-easy-tag-insert - An Extension for Automatic1111 Webui that helps inserting prompts
a1111-batch-interrogate - Example batch scripts using the A1111 SD Webui API [Moved to: https://github.com/d3x-at/a1111-api-examples]
stable-diffusion-webui-vid2vid - Translate a video to some AI generated stuff, extension script for AUTOMATIC1111/stable-diffusion-webui.
dddetailer - Detection Detailer hijack edition
multi-subject-render - Generate multiple complex subjects all at once!