segment-anything
| | ComfyUI-extension-tutorials | segment-anything |
|---|---|---|
| Mentions | 3 | 59 |
| Stars | 433 | 44,983 |
| Growth | - | 2.1% |
| Activity | 8.7 | 0.0 |
| Latest commit | 7 days ago | 6 days ago |
| Language | Jupyter Notebook | - |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ComfyUI-extension-tutorials
- Having trouble getting FaceDetailer to work, help plz!
- SAM SEGS Detector workflows?
-
Generate new version of a living-room with specific furniture
I suggest using ComfyUI for this. The Impact-Pack offers several useful nodes for this workflow.
segment-anything
-
Documenting my pin collection with Segment Anything: Part 1
pip install 'git+https://github.com/facebookresearch/segment-anything.git'
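The post's own code isn't shown here, but after the pip install above, automatic mask generation with segment-anything follows roughly this shape. `sam_model_registry` and `SamAutomaticMaskGenerator` come from the project's README; the checkpoint filename and the `largest_mask` helper are assumptions for illustration:

```python
# Sketch of automatic mask generation with segment-anything, assuming the
# package above is installed and a ViT-H checkpoint has been downloaded
# (sam_vit_h_4b8939.pth is the filename used in the official README).
import numpy as np

def largest_mask(masks):
    # Each entry SAM returns is a dict whose "segmentation" key holds a
    # boolean HxW array; pick the one covering the most pixels.
    return max(masks, key=lambda m: m["segmentation"].sum())

def generate_masks(image_rgb, checkpoint="sam_vit_h_4b8939.pth"):
    # Deferred import so the helper above works without SAM installed.
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    return SamAutomaticMaskGenerator(sam).generate(image_rgb)
```

For a pin collection, `largest_mask` is one way to pick the pin itself out of the many masks SAM proposes per photo.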
-
Show HN: Shortbread App – AI-powered, romantic comics for women
In Shortbread Studio, artists can start from a text prompt, a sketch, a web image or a pose reference as a basis, create an initial panel and quickly modify and regenerate until they get what they want. If you spend 5 seconds, you get a decent panel. If you spend 10 minutes, you can push the limits and get pro results.
Built-in Post-Processing: The editor’s features include liquify, upscale, remove background, and outpainting to extend an image. This allows artists to remove, add, or modify parts of an image on a pixel level without drawing by hand. We combine this with segmentation models like Segment Anything (https://github.com/facebookresearch/segment-anything) to support intelligently selecting and editing a part of an image.
Google-docs like collaboration: The Studio runs in the browser and supports comments and collaboration.
LLM Powered Copy Editor: Comics need text. An AI agent proofreads speech bubbles, fixes lettering and identifies grammar mistakes.
Read our comics: https://shortbreadapp.com/ (free to download + read, iOS + Android)
All of the above are built by a team of 3 engineers including myself. I will be around to answer any questions in this thread!
-
SamGIS - Some notes about Segment Anything
"SAM" is a foundation model aiming for performing "zero-shot" image segmentation:
-
What things are happening in ML that we can't hear over the din of LLMs?
- segment anything: https://github.com/facebookresearch/segment-anything
-
Zero-Shot Prediction Plugin for FiftyOne
In computer vision, this is known as zero-shot learning, or zero-shot prediction, because the goal is to generate predictions without explicitly being given any example predictions to learn from. With the advent of high quality multimodal models like CLIP and foundation models like Segment Anything, it is now possible to generate remarkably good zero-shot predictions for a variety of computer vision tasks, including:
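The zero-shot classification idea described above can be sketched in plain numpy: embed the image and one text prompt per label with a CLIP-style model, then classify by cosine similarity. The embeddings here are stand-ins; only the similarity-and-softmax step is shown:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels, logit_scale=100.0):
    # CLIP-style zero-shot classification: cosine similarity between the
    # image embedding and one text embedding per candidate label,
    # followed by a softmax over the scaled similarities.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * (txt @ img)   # one score per label
    logits -= logits.max()               # numerically stable softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return labels[int(np.argmax(probs))], probs
```

No labeled examples are needed: changing the label set only changes which text prompts get embedded.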
-
Generate new version of a living-room with specific furniture
Render a new living room using a controlnet model of your choice to keep the basic structure. Load the original living room image and look for the furniture you want to change with a Segment Anything Model to create a mask. Use that mask on the new living room to inpaint new furniture.
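The masking step of that workflow can be sketched in plain numpy. This is a hedged sketch: the function name and the naive dilation are invented here, but SAM does return boolean masks, and inpainting pipelines commonly take a 0/255 mask image:

```python
import numpy as np

def prepare_inpaint_inputs(image, mask, dilate=3):
    # Expand a SAM boolean mask a few pixels (naive 4-neighbour dilation;
    # np.roll wraps at image borders, which is fine for interior objects)
    # so the inpainting model repaints slightly past the furniture's edges.
    m = mask.copy()
    for _ in range(dilate):
        m = m | np.roll(m, 1, 0) | np.roll(m, -1, 0) \
              | np.roll(m, 1, 1) | np.roll(m, -1, 1)
    # Return the image with the region blanked plus a uint8 0/255 mask.
    blanked = image.copy()
    blanked[m] = 0
    return blanked, m.astype(np.uint8) * 255
```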
-
How Do I read Github Pages? It is so exhausting, I always struggle, oh and I am on windows
Hello, so I am trying to run some programs, Python scripts, from this page: https://github.com/facebookresearch/segment-anything, and found myself spending hours without succeeding in even understanding what is written on that page. And I think this is ultimately related to programming.
-
Autodistill: A new way to create CV models
Some of the foundation/base models include:
- GroundedSAM (Segment Anything Model)
- DETIC
- GroundingDINO
-
How to Fine-Tune Foundation Models to Auto-Label Training Data
Webinar from last week on how to fine-tune VFMs, specifically Meta's Segment Anything Model (SAM).
What you'll need to follow along the fine-tuning walkthrough:
- Images, ground-truth masks, and optionally, prompts from the Stamp Verification (StaVer) dataset on Kaggle (https://www.kaggle.com/datasets/rtatman/stamp-verification-s...)
- Model weights for SAM from the official GitHub repo (https://github.com/facebookresearch/segment-anything)
- A good understanding of the model architecture: the Segment Anything paper (https://ai.meta.com/research/publications/segment-anything/)
- GPU infra: an NVIDIA A100 should do for this fine-tuning
- A data curation and model evaluation tool: Encord Active (https://github.com/encord-team/encord-active)
- Colab walkthrough for fine-tuning: https://colab.research.google.com/github/encord-team/encord-...
I'd love to get your thoughts and feedback. Thank you.
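The webinar's exact loss isn't stated above, but fine-tuning a mask decoder against ground-truth masks typically optimizes something like a soft Dice loss, sketched here in plain numpy (the torch version used in practice has the same arithmetic):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: pred holds mask probabilities in [0, 1], target is a
    # binary ground-truth mask. 0 means perfect overlap, 1 means none.
    # eps keeps the ratio defined when both masks are empty.
    inter = (pred * target).sum()
    union = pred.sum() + target.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)
```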
-
Deploying a ML model (segment-anything) to GCP - how would you do it?
I now want users to be able to use the segment-anything model (https://github.com/facebookresearch/segment-anything) in my app. It's in pytorch if that matters. How it should work is that
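Whatever GCP service ends up hosting it, one common serving shape is to load the checkpoint once at process start and handle per-request prompts. This is a sketch under assumptions, not a GCP recipe: the JSON fields and the `predict_masks` hook are invented here:

```python
import json

def handle_request(body, predict_masks):
    # Parse a JSON request carrying point prompts for the already-loaded
    # model (predict_masks) and return a JSON-serializable response.
    # Loading the SAM checkpoint per request would dominate latency, so
    # real deployments load it once when the server process starts.
    req = json.loads(body)
    points = req["points"]  # e.g. [[x, y], ...]
    masks = predict_masks(req["image_id"], points)
    return json.dumps({"image_id": req["image_id"], "num_masks": len(masks)})
```

Keeping the handler a pure function like this makes it easy to unit-test with a stubbed `predict_masks` before touching any cloud infrastructure.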
What are some alternatives?
Segment-Everything-Everywhere-All-At-Once - [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
backgroundremover - Background Remover lets you Remove Background from images and video using AI with a simple command line interface that is free and open source.
stable-diffusion-webui-Layer-Divider - Layer-Divider, an extension for stable-diffusion-webui using the segment-anything model (SAM)
Grounded-Segment-Anything - Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
mmdetection - OpenMMLab Detection Toolbox and Benchmark
ultralytics - NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
Layer-Divider-WebUI - Gradio based WebUI with a SAM (segment-anything)
yolo_tracking - BoxMOT: pluggable SOTA tracking modules for segmentation, object detection and pose estimation models
napari-segment-anything - Segment Anything Model (SAM) native Qt UI