Track-Anything vs E2FGVI

| | Track-Anything | E2FGVI |
|---|---|---|
| Mentions | 16 | 1 |
| Stars | 6,113 | 952 |
| Growth | - | 0.0% |
| Activity | 8.1 | 1.3 |
| Latest commit | 3 months ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Track-Anything
- Keying/masking a person in footage
  I was doing rotoscoping for a silhouette of a girl dancing in front of a building, then I saw this amazing tool: https://github.com/gaomingqi/Track-Anything
- Advice for multi-animal tracking for scientific research?
  My question is, how can we modernize this pipeline? We've experimented a bit with the new SAM-based track-anything tool, and it seems promising, but we actually don't want to "track anything"; we only want to track fish. What would you do in 2023 to extract tracks of one specific class of object from long video datasets? I'm hoping for any advice at all.
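One common answer to the "track one class only" question is to run a per-frame detector filtered to that class and then link the detections across frames. As a hedged illustration (not the Track-Anything pipeline itself, and with the detector left out of scope), here is a minimal greedy IoU-based linker over already-filtered boxes:

```python
# Sketch: link single-class detections (e.g. "fish" boxes) across frames
# into tracks by greedy IoU matching. The per-frame detector (SAM, YOLO,
# etc.) is assumed to exist upstream and is not reproduced here.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def link_tracks(frames, iou_thresh=0.3):
    """frames: list of per-frame box lists.
    Returns {track_id: [(frame_idx, box), ...]}."""
    tracks, active, next_id = {}, {}, 0  # active: track_id -> last box
    for t, boxes in enumerate(frames):
        matched = {}
        for box in boxes:
            # Greedily attach each box to the best-overlapping active track.
            best_id, best_iou = None, iou_thresh
            for tid, last in active.items():
                if tid in matched:
                    continue  # one box per track per frame
                score = iou(box, last)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:  # no overlap: start a new track
                best_id, next_id = next_id, next_id + 1
                tracks[best_id] = []
            tracks[best_id].append((t, box))
            matched[best_id] = box
        active = matched  # tracks unmatched this frame are terminated
    return tracks
```

Note this greedy linker terminates a track as soon as a detection is missed; real tools keep tracks alive across short occlusions (XMem's memory model exists largely for this reason).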
- [D] Which open source models can replicate Wonder Dynamics' drag'n'drop CG characters?
  The Track-Anything tool already implements this.
- Tutorial for Track-Anything, an interactive tool to segment, track, and inpaint anything in videos.
- GitHub for Track Anything
- Segment Anything for Video - Track Anything! 🤖
  With this tool, you can automatically isolate objects, make edits using inpainting, and track objects with precision. It's a game-changer for creative projects. Even though it does not work well with shadows yet, we expect a rapid evolution of these technologies. GitHub: https://github.com/gaomingqi/Track-Anything
- How to adapt an existing Python project to my specific use case without bringing in unnecessary dependencies or reinventing the wheel
  I would like to know if there are any guidelines to follow when adapting an existing Python project for my own use case. Specifically, I want to customize the output of the Track-Anything project by incorporating my own processing steps. However, I do not want to import the entire codebase. Rather, I only want to import the minimum amount of code necessary to produce the same output with object tracking, without having to reimplement functions that are already available.
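A common pattern for this kind of adaptation is a thin adapter: import (or vendor) only the tracker entry point and wrap it behind your own interface, so custom post-processing never touches the rest of the codebase. The sketch below is hypothetical; `.track(frame)` stands in for whatever entry point you actually extract from Track-Anything, which has its own API:

```python
# Sketch of an adapter layer around an extracted tracker. The tracker
# object and its .track(frame) -> mask method are placeholders, not
# Track-Anything's real API.

class TrackerAdapter:
    def __init__(self, tracker):
        self._tracker = tracker  # any object exposing .track(frame) -> mask

    def track(self, frame):
        mask = self._tracker.track(frame)
        return self.postprocess(mask)  # custom steps live here, not upstream

    def postprocess(self, mask):
        # Override or extend with your own processing; identity by default.
        return mask
```

With this shape, swapping the upstream tracker (or updating it) only requires the adapter's constructor argument to change, and your processing steps stay in one place.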
- Track-Anything should get implemented in Kdenlive
- SUSTech VIP Lab Proposes Track Anything Model (TAM) That Achieves High-Performance Interactive Tracking and Segmentation in Videos
  Here is the GitHub: https://github.com/gaomingqi/track-anything
E2FGVI
- [D] Which open source models can replicate Wonder Dynamics' drag'n'drop CG characters?
  Use a segmentation model (SAM) combined with an inpainting model (E2FGVI) and XMem to cut out the live-action subject.
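The final "cut out" step of that SAM + XMem + E2FGVI pipeline reduces to mask compositing. As a hedged sketch (the mask would come from the segmentation/propagation models, which are not reproduced here):

```python
# Sketch: given an H x W boolean mask for one frame, split the frame into
# a subject layer (RGBA, alpha from the mask) and a background with a
# hole where the subject was, ready for a video inpainter such as E2FGVI.
import numpy as np

def cut_out(frame, mask):
    """frame: H x W x 3 uint8 image; mask: H x W bool.
    Returns (subject_rgba, background)."""
    h, w, _ = frame.shape
    subject = np.zeros((h, w, 4), dtype=np.uint8)
    subject[..., :3] = frame
    subject[..., 3] = mask.astype(np.uint8) * 255  # alpha channel from mask
    background = frame.copy()
    background[mask] = 0  # hole to be filled by the inpainting model
    return subject, background
```

In the full pipeline this runs per frame, with XMem propagating the first-frame SAM mask through the clip so the subject layer stays consistent over time.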
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
segment-anything - The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
ProPainter - [ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting
sam-clip - Use Grounding DINO, Segment Anything, and CLIP to label objects in images.
RePaint - Official PyTorch Code and Models of "RePaint: Inpainting using Denoising Diffusion Probabilistic Models", CVPR 2022
sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
lama - 🦙 LaMa Image Inpainting, Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022
unimatch - [TPAMI'23] Unifying Flow, Stereo and Depth Estimation
openpose - OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation