| | Track-Anything | XMem |
|---|---|---|
| Mentions | 16 | 11 |
| Stars | 6,113 | 1,596 |
| Growth | - | - |
| Activity | 8.1 | 6.3 |
| Latest commit | 3 months ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Track-Anything
- Keying/masking a person in footage
  I was doing rotoscoping for a silhouette of a girl dancing in front of a building, then I saw this amazing tool: https://github.com/gaomingqi/Track-Anything
- Advice for multi-animal tracking for scientific research?
  My question is, how can we modernize this pipeline? We've experimented a bit with the new SAM-based Track-Anything tool, and it seems promising, but we don't actually want to "track anything"; we only want to track fish. What would you do in 2023 to extract tracks of one specific class of object from long video datasets? I'm hoping for any advice at all.
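The "track only one class" idea in the question above can be sketched as a detector-side class filter followed by greedy IoU association of per-frame detections to tracks. This is a minimal illustration, not Track-Anything's or any tracker's actual API; the `(box, class_name)` detection format and `"fish"` label are assumptions for the example:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def update_tracks(tracks, detections, target_class="fish", iou_thresh=0.3):
    """Match this frame's detections of one class to existing tracks, greedily.
    `detections`: list of (box, class_name) pairs from any detector (assumed format).
    `tracks`: dict mapping track id -> last known box; updated in place."""
    dets = [box for box, cls in detections if cls == target_class]  # class filter
    unmatched, used = [], set()
    for box in dets:
        best_id, best_iou = None, iou_thresh
        for tid, tbox in tracks.items():
            if tid in used:
                continue
            score = iou(box, tbox)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:
            unmatched.append(box)
        else:
            tracks[best_id] = box
            used.add(best_id)
    for box in unmatched:  # start a new track for each unmatched detection
        tracks[max(tracks, default=-1) + 1] = box
    return tracks
```

Running a class-specific detector first and only associating boxes of that class is the usual way to avoid "tracking anything"; a real pipeline would add Hungarian matching and track-termination logic on top of this greedy core.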
- [D] Which open source models can replicate Wonder Dynamics' drag'n'drop CG characters?
  The Track-Anything tool already implements this.
- Tutorial for Track-Anything, an interactive tool to segment, track, and inpaint anything in videos.
- GitHub for Track-Anything
- Segment Anything for Video - Track Anything! 🤖
  With this tool, you can automatically isolate objects, make edits using inpainting, and track objects with precision. It's a game-changer for creative projects. Even though it does not yet handle shadows well, we expect a rapid evolution of these technologies. GitHub: https://github.com/gaomingqi/Track-Anything
- How to adapt an existing Python project to my specific use case without bringing in unnecessary dependencies or reinventing the wheel
  I would like to know if there are any guidelines to follow when adapting an existing Python project for my own use case. Specifically, I want to customize the output of the Track-Anything project by incorporating my own processing steps. However, I do not want to import the entire codebase. Rather, I only want to import the minimum amount of code necessary to produce the same output with object tracking, without having to reimplement functions that are already available.
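One common pattern for this is to import only the submodule you need rather than the package's top-level entry point, which often pulls in the GUI and inpainting dependencies. A tiny sketch of that lazy, targeted import; the Track-Anything module path in the comment is hypothetical, its real layout may differ:

```python
import importlib

def load_component(module_path, attr):
    """Import a single attribute from one submodule, avoiding a package-wide
    import (and the transitive dependencies that come with it)."""
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# Hypothetical usage against a vendored copy of the repo's tracker module:
# BaseTracker = load_component("tracker.base_tracker", "BaseTracker")
```

In practice, vendoring just the tracking submodule (plus its weights-loading code) into your project and wrapping it behind a function like this keeps your own processing steps decoupled from the rest of the codebase.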
- Track-Anything should get implemented in Kdenlive
- SUSTech VIP Lab Proposes Track Anything Model (TAM) That Achieves High-Performance Interactive Tracking and Segmentation in Videos
  Here is the GitHub: https://github.com/gaomingqi/track-anything
XMem
- [D] Which open source models can replicate Wonder Dynamics' drag'n'drop CG characters?
  Use a segmentation model (SAM) combined with an inpainting model (E2FGVI) and XMem to cut out the live-action subject.
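The final compositing step of that pipeline, splitting a frame into "subject removed" and "subject cut-out" using the segmentation mask and an inpainted clean plate, can be sketched in a few lines of NumPy. This is an illustrative sketch of the mask arithmetic only, not the actual SAM/E2FGVI/XMem interfaces:

```python
import numpy as np

def composite(frame, mask, clean_plate):
    """Split a frame using a segmentation mask.
    `frame`: HxWx3 uint8 image; `mask`: boolean HxW array (True = subject),
    e.g. from a segmenter; `clean_plate`: inpainted background frame.
    Returns (subject erased from frame, subject alone as RGBA cut-out)."""
    removed = np.where(mask[..., None], clean_plate, frame)   # fill subject with plate
    cutout = np.dstack([frame, mask.astype(np.uint8) * 255])  # alpha from mask
    return removed, cutout
```

XMem's role in the real pipeline is propagating `mask` across every frame from a single annotated frame; this function would then run per frame on the propagated masks.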
- Track-Anything: a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything and XMem.
  Never mind, just found the occlusion video on https://github.com/hkchengrex/XMem - holy shit
- XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
- [D] Most Important AI Papers This Year So Far, In My Opinion + Proto-AGI Speculation at the End
  XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model (added because of the Atkinson-Shiffrin memory model). Paper: https://arxiv.org/abs/2207.07115 GitHub: https://github.com/hkchengrex/XMem
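The Atkinson-Shiffrin idea behind XMem is a three-store hierarchy: a sensory register for the current frame, a small working memory of recent frames, and a compact long-term store built by consolidating and pruning working-memory entries. A toy sketch of that consolidation flow, not XMem's actual implementation (which stores attention key/value features and prunes by usage rather than by position):

```python
from collections import deque

class MemoryBank:
    """Toy three-store memory inspired by the Atkinson-Shiffrin model."""

    def __init__(self, working_capacity=5, longterm_capacity=20):
        self.sensory = None                             # most recent frame feature
        self.working = deque(maxlen=working_capacity)   # recent frames, FIFO
        self.longterm = []                              # consolidated store
        self.longterm_capacity = longterm_capacity

    def add(self, feature):
        self.sensory = feature
        if len(self.working) == self.working.maxlen:
            # consolidate the oldest working entry before the deque evicts it
            self._consolidate(self.working[0])
        self.working.append(feature)

    def _consolidate(self, feature):
        self.longterm.append(feature)
        if len(self.longterm) > self.longterm_capacity:
            # crude stand-in for XMem's usage-based pruning: thin the store
            self.longterm = self.longterm[::2]
```

The bounded long-term store is what lets this kind of architecture segment arbitrarily long videos without memory growing with video length.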
- [D] Most Popular AI Research July 2022 pt. 2 - Ranked Based On GitHub Stars
- I trained a neural net to watch Super Smash Bros
  Yeah, MiVOS would speed up your tagging a lot. I was also curious if you saw XMem, which just came out. I found that worked really well too.
- University of Illinois Researchers Develop XMem; A Long-Term Video Object Segmentation Architecture Inspired By Atkinson-Shiffrin Memory Model
- [R] Unicorn 🦄: Towards Grand Unification of Object Tracking (Video Demo)
  Have you checked XMem?
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
segment-anything - The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
flash-attention - Fast and memory-efficient exact attention
sam-clip - Use Grounding DINO, Segment Anything, and CLIP to label objects in images.
NAFNet - The state-of-the-art image restoration model without nonlinear activation functions.
sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
deeplab2 - DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.
Cream - This is a collection of our NAS and Vision Transformer work. [Moved to: https://github.com/microsoft/AutoML]
EfficientZero - Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021.
multiface - Hosts the Multiface dataset, which is a multi-view dataset of multiple identities performing a sequence of facial expressions.
NUWA - A unified 3D Transformer Pipeline for visual synthesis