rtic-gcn-pytorch vs GroundingDINO

| | rtic-gcn-pytorch | GroundingDINO |
|---|---|---|
| Mentions | 2 | 5 |
| Stars | 20 | 5,075 |
| Growth | - | 8.3% |
| Activity | 0.0 | 6.3 |
| Last Commit | over 2 years ago | 5 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Autodistill: A new way to create CV models
Some of the foundation/base models include:

* GroundedSAM (Segment Anything Model)
* DETIC
* GroundingDINO
-
Is there a way to do segmentation of a person's clothing?
While Segment Anything can detect objects based on text prompts, that's not its strong suit. To get the best results, folks usually combine it with Grounding DINO, which is a great object detection model. You run Grounding DINO with the text prompt "skirt"; this gives you a bounding box that you pass to Segment Anything, which returns a segmentation mask you can then use for inpainting with SD.
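The hand-off described above is largely a matter of box formats: Grounding DINO's reference inference code returns boxes as normalized (cx, cy, w, h), while SAM's `SamPredictor.predict` expects absolute (x1, y1, x2, y2) pixel coordinates. A minimal sketch of that glue step (the function name is mine, not from either library):

```python
import numpy as np

def dino_box_to_sam_box(box_cxcywh, image_w, image_h):
    """Convert one normalized (cx, cy, w, h) box, as returned by
    Grounding DINO's reference inference code, into the absolute
    (x1, y1, x2, y2) pixel box that SAM's SamPredictor expects."""
    cx, cy, w, h = box_cxcywh
    x1 = (cx - w / 2) * image_w
    y1 = (cy - h / 2) * image_h
    x2 = (cx + w / 2) * image_w
    y2 = (cy + h / 2) * image_h
    return np.array([x1, y1, x2, y2])

# Example: a box centered in a 640x480 image, spanning half of each dimension.
print(dino_box_to_sam_box((0.5, 0.5, 0.5, 0.5), 640, 480))
# → [160. 120. 480. 360.]
```

The resulting array is what you would pass as the `box=` argument to segment-anything's `SamPredictor.predict(...)` after calling `set_image` on the same frame.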
-
Searching for Guidance on Developing an AI Bot for SSBU Training
Now, let's delve into the technological aspects of this project. The combination of Facebook's Segment Anything and Grounding Dino tools will automate annotations for image processing, which is key to this AI endeavor. I'm also intrigued by Mojo, a new programming language designed specifically for AI developers, which will soon be open-source.
-
[D] Object Detection Machine Learning
Right now we are trying out Grounding DINO on this, but it is producing a lot of noise and detecting things that are not cracks.
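A common first mitigation for this kind of noise (my suggestion, not something from the thread) is to filter detections by confidence and by the matched phrase; Grounding DINO's reference `predict` already exposes `box_threshold` and `text_threshold`, and the same post-filtering can be sketched in plain Python (function name and thresholds are illustrative):

```python
def filter_detections(boxes, scores, phrases, target="crack", min_score=0.4):
    """Keep only detections whose matched phrase contains the target word
    and whose confidence clears min_score. The threshold is illustrative;
    tune it against a held-out set of labeled images."""
    return [
        (box, score, phrase)
        for box, score, phrase in zip(boxes, scores, phrases)
        if score >= min_score and target in phrase.lower()
    ]

# Toy detections in (box, score, phrase) form, mimicking predict() output.
detections = [
    ([10, 10, 50, 50], 0.82, "crack"),
    ([60, 60, 90, 90], 0.31, "crack"),       # low confidence: dropped
    ([5, 5, 20, 20], 0.77, "shadow line"),   # wrong phrase: dropped
]
boxes, scores, phrases = zip(*detections)
print(filter_detections(boxes, scores, phrases))
# → [([10, 10, 50, 50], 0.82, 'crack')]
```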
- [D] Data Annotation Done by Machine Learning/AI?
What are some alternatives?
clean-code-dotnet - :bathtub: Clean Code concepts and tools adapted for .NET
segment-anything - The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Detic - Code release for "Detecting Twenty-thousand Classes using Image-level Supervision".
clean-code-javascript - :bathtub: Clean Code concepts adapted for JavaScript
ultralytics - NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
SmashBot - The AI that beats you at Melee
super-gradients - Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
LAVIS - LAVIS - A One-stop Library for Language-Vision Intelligence
OFA - Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
autodistill - Images to inference with no labeling (use foundation models to train supervised models).