Track-Anything vs sd-webui-segment-anything

| | Track-Anything | sd-webui-segment-anything |
|---|---|---|
| Mentions | 16 | 17 |
| Stars | 6,113 | 3,204 |
| Growth | - | - |
| Activity | 8.1 | 6.3 |
| Latest commit | 3 months ago | 8 days ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
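The exact activity formula isn't published here, but the idea that "recent commits have higher weight" can be sketched as an exponentially decayed sum over commit ages. This is purely illustrative, not the site's actual metric, and the half-life parameter is an arbitrary assumption:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit score: each commit contributes
    0.5 ** (age / half_life), so recent commits count more.
    Illustrative formula only, not the aggregator's real metric."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Four recent commits outscore four old ones, even though the
# raw commit counts are identical.
recent = activity_score([1, 3, 7, 10])
stale = activity_score([60, 90, 120, 150])
```

Any monotonically decaying weight (linear, exponential, step) produces the same qualitative ranking; the exponential form just makes the "half-life" of a commit's influence explicit.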
Track-Anything
- Keying/masking person on a footage
  I was doing rotoscoping for a silhouette of a girl dancing in front of a building, then I saw this amazing tool: https://github.com/gaomingqi/Track-Anything
- Advice for multi-animal tracking for scientific research?
  My question is, how can we modernize this pipeline? We've experimented a bit with the new SAM-based track-anything tool, and it seems promising, but we don't actually want to "track anything"; we only want to track fish. What would you do in 2023 to extract tracks of one specific class of object from long video datasets? I'm hoping for any advice at all.
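A common answer to this kind of question is to put a class-specific detector in front of a generic tracker: detect only the target class in each frame, then associate detections across frames (the per-frame boxes could also seed a SAM-based tracker). As a minimal sketch, assuming boxes already come from a fish-only detector, the association step can be a greedy IoU link:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def link_tracks(frames, iou_thresh=0.3):
    """Greedily link per-frame detections into tracks by IoU overlap.
    `frames` is a list of detection lists, one list of boxes per frame."""
    tracks = []  # each track is a list of (frame_idx, box)
    for t, boxes in enumerate(frames):
        for box in boxes:
            best = None
            for tr in tracks:
                last_t, last_box = tr[-1]
                # Only extend tracks that ended in the previous frame.
                if last_t == t - 1 and iou(last_box, box) >= iou_thresh:
                    if best is None or iou(last_box, box) > iou(best[-1][1], box):
                        best = tr
            if best is not None:
                best.append((t, box))
            else:
                tracks.append([(t, box)])
    return tracks

# Per-frame detections for two slowly moving fish (boxes are illustrative):
frames = [
    [(0, 0, 10, 10), (50, 50, 60, 60)],
    [(1, 0, 11, 10), (51, 50, 61, 60)],
    [(2, 0, 12, 10), (52, 50, 62, 60)],
]
tracks = link_tracks(frames)  # two fish -> two tracks of three boxes each
```

Real pipelines (SORT, ByteTrack, or the SAM-based track-anything tool mentioned above) add motion models and re-identification on top of this same detect-then-associate skeleton.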
- [D] Which open source models can replicate wonder dynamics's drag'n'drop cg characters?
  The Track-Anything tool already implements this.
- Tutorial for Track-Anything, an interactive tool to segment, track, and inpaint anything in videos.
- Github for Track Anything
- Segment Anything for Video - Track Anything! 🤖
  With this tool, you can automatically isolate objects, make edits using inpainting, and track objects with precision. It's a game-changer for creative projects. Even though it does not work well with shadows yet, we expect rapid evolution of these technologies. GitHub: https://github.com/gaomingqi/Track-Anything
- How to adapt an existing Python project to my specific use case without bringing in unnecessary dependencies or reinventing the wheel
  I would like to know if there are any guidelines to follow when adapting an existing Python project for my own use case. Specifically, I want to customize the output of the Track-Anything project by incorporating my own processing steps. However, I do not want to import the entire codebase; rather, I only want to import the minimum amount of code necessary to produce the same object-tracking output, without reimplementing functions that are already available.
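A common pattern for this situation is a thin adapter: import only the tracker entry point you need and wrap it so your custom steps run on its output, leaving the upstream code untouched. The sketch below uses a stub in place of the real import; `UpstreamTracker` and its `track` method are hypothetical stand-ins, not actual Track-Anything APIs:

```python
class UpstreamTracker:
    """Stub standing in for the one upstream class you would import.
    Returns one binary mask (nested lists of 0/1) per input frame."""
    def track(self, frames):
        return [[[1 if px > 0 else 0 for px in row] for row in f] for f in frames]

class TrackerAdapter:
    """Wraps a tracker and chains custom post-processing steps."""
    def __init__(self, tracker, postprocessors=()):
        self.tracker = tracker
        self.postprocessors = list(postprocessors)

    def track(self, frames):
        masks = self.tracker.track(frames)
        for step in self.postprocessors:  # your custom steps, in order
            masks = [step(m) for m in masks]
        return masks

def invert_mask(mask):
    """Example custom step: swap foreground and background."""
    return [[1 - v for v in row] for row in mask]

adapter = TrackerAdapter(UpstreamTracker(), postprocessors=[invert_mask])
masks = adapter.track([[[0, 5], [3, 0]]])
```

Because the adapter depends only on the tracker's call signature, you pull in just that one import path; everything else in the upstream repo stays out of your dependency graph.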
- Track-Anything should get implemented in Kdenlive
- SUSTech VIP Lab Proposes Track Anything Model (TAM) That Achieves High-Performance Interactive Tracking and Segmentation in Videos
  Here is the GitHub: https://github.com/gaomingqi/track-anything
sd-webui-segment-anything
- Textual inversion. The best way to prepare photos of a person?
  One idea would be to use Segment Anything to cut out the character/face from the background and then replace it with random backgrounds that you generate with Stable Diffusion. Here's an extension for Automatic1111 :) https://github.com/continue-revolution/sd-webui-segment-anything
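The cut-and-replace step described above is plain alpha compositing: keep foreground pixels where the segmentation mask is 1 and fill everything else from a background image. A minimal sketch on nested-list "images" (a real pipeline would use SAM masks with Pillow or NumPy arrays, and generated rather than random backgrounds):

```python
import random

def composite(foreground, mask, background):
    """Keep foreground pixels where mask == 1, background elsewhere."""
    return [
        [fg if m else bg for fg, m, bg in zip(frow, mrow, brow)]
        for frow, mrow, brow in zip(foreground, mask, background)
    ]

def random_background(height, width, palette):
    """Stand-in for a generated background: random pixels from a palette."""
    return [[random.choice(palette) for _ in range(width)] for _ in range(height)]

fg = [["F", "F"], ["F", "F"]]
mask = [[1, 0], [0, 1]]          # e.g. produced by Segment Anything
bg = random_background(2, 2, ["B"])
out = composite(fg, mask, bg)
```

Varying the background while keeping the subject fixed is exactly what helps textual inversion: the training signal concentrates on the person rather than the scene.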
- How hard is it to "code" a tool based on segment-anything and Stable Diffusion?
  Check out this code: https://github.com/continue-revolution/sd-webui-segment-anything
- Can I use Interrogate CLIP or something similar to get image position data?
- Best way to mask images automatically?
- Information is currently available.
  Segment Anything is the extension that you're looking for.
- What's your favorite small tweaks to make? I'll go first
- Show HN: Image background removal without annoying subscriptions
  If anyone is already running auto1111, or is simply uninterested in paying, there's an addon that does this very well: https://github.com/KutsuyaYuki/ABG_extension. Additionally, I've had very good results using the masks generated by Facebook's SAM, which is also available as an addon here: https://github.com/continue-revolution/sd-webui-segment-anyt...
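Before using a raw segmentation mask for background removal, a common cleanup step is keeping only the largest connected foreground region, so stray specks in the mask don't survive the cutout. A small sketch with a breadth-first flood fill (illustrative; tools like the SAM addon above typically handle this internally):

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected region of 1s in a binary mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = set()
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                comp, queue = set(), deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:  # flood-fill one connected region
                    y, x = queue.popleft()
                    comp.add((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return [[1 if (y, x) in best else 0 for x in range(w)] for y in range(h)]

noisy = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],   # lone speck at (1, 3) gets dropped
    [0, 0, 0, 0],
]
clean = largest_component(noisy)
```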
- The main reason why people will keep using open source vs Photoshop and other big-tech generative AIs
- Stable Diffusion + Segment Anything App and Tutorial
  There's an A1111 extension already that I think does the same thing (I've had it installed for a few weeks now): https://github.com/continue-revolution/sd-webui-segment-anything
- YourVision: Stable Diffusion + Segment Anything
  Use this and inpainting: https://github.com/continue-revolution/sd-webui-segment-anything
What are some alternatives?
- stable-diffusion-webui - Stable Diffusion web UI
- stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
- segment-anything - The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
- stable-diffusion-webui-rembg - Removes backgrounds from pictures. Extension for webui.
- sam-clip - Use Grounding DINO, Segment Anything, and CLIP to label objects in images.
- stable-diffusion-webui-directml - Stable Diffusion web UI
- XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
- ddetailer
- Auto-Photoshop-StableDiffusion-Plugin - A user-friendly plug-in that makes it easy to generate Stable Diffusion images inside Photoshop using either Automatic or ComfyUI as a backend.
- sd-webui-segment-everything - Segment Anything for Stable Diffusion Webui [Moved to: https://github.com/continue-revolution/sd-webui-segment-anything]
- EditAnything - Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM)