detectron2
openpose
| | detectron2 | openpose |
|---|---|---|
| Mentions | 49 | 36 |
| Stars | 28,671 | 29,802 |
| Growth | 1.9% | 1.3% |
| Activity | 7.5 | 5.2 |
| Latest commit | 6 days ago | 8 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
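The weighting described above can be illustrated with a recency-weighted commit score — a minimal pure-Python sketch assuming a simple exponential decay (the site's exact formula is not published; the half-life is made up for the example):

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age / half_life), so recent commits weigh more
    than older ones, as the metric description suggests."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Two projects with the same commit count: the one with recent
# commits scores higher than the one with only old commits.
recent = activity_score([1, 2, 3, 5, 8])
stale = activity_score([100, 120, 150, 200, 300])
```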
detectron2
-
Ask HN: How to train an image recognition AI
I don’t do AI professionally, only as a hobby, so this may not be the best way. But from the way you described it, the user may be taking the picture from a bit further away, with other objects in the frame. So you may want to look into some sort of segmentation, or use a bounding box. This could help the user make sure they are looking at documents for the correct machine.
I think something like detectron2 [1] could help. It is under the Apache 2.0 license, so it is commercially friendly. That said, the pre-trained weights may not be usable for commercial purposes, so you’ll want to check on that.
[1] https://github.com/facebookresearch/detectron2
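A minimal inference sketch for the suggestion above, assuming a COCO-pretrained Mask R-CNN from the detectron2 model zoo (the model choice, score threshold, and file name are illustrative, not from the post):

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# COCO-pretrained Mask R-CNN; any model-zoo config would do here.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # illustrative threshold

predictor = DefaultPredictor(cfg)
image = cv2.imread("photo.jpg")           # BGR, as detectron2 expects
outputs = predictor(image)
boxes = outputs["instances"].pred_boxes   # crop to these before OCR
```

Cropping the photo to the highest-scoring box before further processing is one way to keep the user focused on the right document.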
-
Instance segmentation of small objects in grainy drone imagery
And not enough true positives either. Add more augmentations in the config. Also make sure the config is set correctly, so that Detectron2 isn't skipping background images: https://github.com/facebookresearch/detectron2/issues/80
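Per the linked issue, the relevant switch is the dataloader's empty-annotation filter — a config fragment (key name as in detectron2's default config; `cfg` is assumed to be an existing config object):

```python
# Keep images with no annotations (background-only) in training
# instead of filtering them out (detectron2's default is True).
cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS = False
```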
- Openpose alternatives (humanSD & Densepose)
-
Problems with importing tensormask from detectron2.projects
I followed the setup at https://github.com/facebookresearch/detectron2/tree/main/projects/TensorMask, but I still cannot import it, even though from detectron2.projects import point_rend works easily for the PointRend project.
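A likely cause, worth verifying against the TensorMask README: unlike PointRend, TensorMask is not bundled inside the detectron2 package, so it has to be installed from its project directory and imported under its own top-level name — a setup sketch, assuming a local detectron2 checkout (the path is a placeholder):

```shell
# Install the TensorMask project in editable mode from a detectron2 checkout
pip install -e /path/to/detectron2/projects/TensorMask
# Then import it as its own package, not via detectron2.projects:
#   from tensormask import add_tensormask_config
```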
-
Problems with Lazy Config detectron2 (MViTv2)
I have to use this config file with the dataloader in https://github.com/facebookresearch/detectron2/blob/main/projects/MViTv2/configs/common/coco_loader.py. I figured that I can use cfg.dataloader.train.dataset.names = "my_dataset_train" for this.
-
"[D]" Problems with Lazy Config detectron2 (MViTv2)
I want to use this config file https://github.com/facebookresearch/detectron2/blob/main/projects/MViTv2/configs/mask_rcnn_mvitv2_t_3x.py the same way I typically use a YAML config file, but it gives so many errors, one after another, that I have lost count at this point.
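Lazy (Python-file) configs are loaded through detectron2's LazyConfig API rather than merge_from_file — a minimal sketch (the path and dataset name are illustrative):

```python
from detectron2.config import LazyConfig, instantiate

# Python-file configs use LazyConfig.load, not cfg.merge_from_file
cfg = LazyConfig.load("projects/MViTv2/configs/mask_rcnn_mvitv2_t_3x.py")

# Fields are overridden with plain attribute access, e.g. a custom dataset:
cfg.dataloader.train.dataset.names = "my_dataset_train"

# Objects are described lazily; instantiate() builds them when needed.
model = instantiate(cfg.model)
```

Passing a Python config to code that expects a YAML-style CfgNode is the usual source of the cascade of errors described above.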
-
AI Real Time (lgd for cn)
Which is built on https://github.com/facebookresearch/detectron2
-
List of AI-Models
-
good computer vision or deep learning projects in github
Detectron2 (GitHub: https://github.com/facebookresearch/detectron2) is a Facebook AI Research library with state-of-the-art object detection and segmentation algorithms in PyTorch.
- Object Detection using PyTorch: Would you recommend a framework (Detectron2, MMDetection, ...) or a project from scratch?
openpose
-
AI "Artists" Are Lazy, and the Ultimate Goal of AI Image Generation (hint: its sloth)
Open Pose, a multi-person keypoint detection library for body, face, hands, and foot estimation [10], is used for posing generated characters;
-
Analyze defects and errors in the created images
OpenPose
-
[D] Which open source models can replicate wonder dynamics's drag'n'drop cg characters?
Perhaps something like OpenPose for pose estimation?
-
Do we have Locally Run AI mocap yet?
OpenPose looks like what you're looking for; it seems to have plugins for Unity. I can't say I've used it, though.
-
Let's take a break!
You are correct. OpenPose has two keypoints for the eyes and two more for the ears. By saying where the ears are, you automatically influence the angle of the head. You can see more about it on this GitHub page; just scroll a tiny bit and you can see a GIF of the nodes overlaid on humans.
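To illustrate how ear placement encodes head rotation, here is a hedged pure-Python sketch: with OpenPose-style 2D keypoints, the relative eye-to-ear spacing on each side gives a rough yaw cue. The heuristic and the keypoint values are made up for the example; they are not part of OpenPose:

```python
def yaw_cue(l_eye, r_eye, l_ear, r_ear):
    """Rough head-turn cue from 2D (x, y) keypoints.
    When the head turns, one ear-to-eye gap widens on screen
    while the other shrinks; the difference of the two
    horizontal gaps is a crude, unitless yaw indicator."""
    left_gap = abs(l_eye[0] - l_ear[0])
    right_gap = abs(r_ear[0] - r_eye[0])
    return left_gap - right_gap  # 0 for a frontal face

# Frontal face: symmetric gaps, cue near zero.
frontal = yaw_cue((45, 50), (55, 50), (35, 52), (65, 52))
# Turned head: asymmetric gaps, nonzero cue.
turned = yaw_cue((48, 50), (56, 50), (30, 52), (60, 52))
```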
-
Accelerate Machine Learning Local Development and Test Workflows with Nvidia Docker
```dockerfile
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
# https://hub.docker.com/r/nvidia/cuda

ENV DEBIAN_FRONTEND=noninteractive

# install the dependencies for building OpenPose
RUN apt-get update && # The rest is ignored for brevity.

RUN pip3 install --no-cache-dir # The rest is ignored for brevity.

# install cmake, clone OpenPose and download models
RUN wget https://cmake.org/files/v3.20/cmake-3.20.2-linux-x86_64.tar.gz && \
    # The rest is ignored for brevity.

WORKDIR /openpose/build

# Build OpenPose. Cudnn 8 causes memory issues; this is why we are using
# a base with CUDA 10 and Cudnn 7.
RUN alias python=python3 && cmake -DBUILD_PYTHON=OFF -DWITH_GTK=OFF -DUSE_CUDNN=ON ..

# Fix for CUDA 10.0 and Cudnn 7 based on the post below.
# https://github.com/CMU-Perceptual-Computing-Lab/openpose/issues/1753#issuecomment-792431838
RUN sed -ie 's/set(AMPERE "80 86")/#&/g' ../cmake/Cuda.cmake && \
    sed -ie 's/set(AMPERE "80 86")/#&/g' ../3rdparty/caffe/cmake/Cuda.cmake && \
    make -j`nproc` && \
    make install

WORKDIR /openpose
```
- nub needs some directions
-
full body tracking with WiFi signals by utilizing deep learning architectures
One of the best cam-only libraries (no depth sensor) I've seen is OpenPose; I ran it through a 360 camera and it was able to track body, face, and fingers really well, even with spherical distortion from the 360 cam. example 360
- How to do body tracking for (real) camera
- How to get rotation (yaw/pitch/roll) from face detection keypoints?
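For the roll component of the question above, a self-contained sketch: with two eye keypoints in image coordinates, roll is just the angle of the line between them (yaw and pitch generally need a 3D model fit, e.g. OpenCV's solvePnP). This helper is illustrative, not from any library:

```python
import math

def roll_from_eyes(left_eye, right_eye):
    """Head roll in degrees from two (x, y) eye keypoints.
    Image y grows downward, so positive dy means the right
    eye sits lower on screen than the left one."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Level eyes -> 0 degrees of roll; right eye 10 px lower and
# 10 px to the right of the left eye -> 45 degrees of roll.
level = roll_from_eyes((40, 50), (60, 50))
tilted = roll_from_eyes((40, 50), (50, 60))
```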
What are some alternatives?
mmdetection - OpenMMLab Detection Toolbox and Benchmark
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
AlphaPose - Real-Time and Accurate Full-Body Multi-Person Pose Estimation&Tracking System
U-2-Net - The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."
mmpose - OpenMMLab Pose Estimation Toolbox and Benchmark.
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
lightweight-human-pose-estimation.pytorch - Fast and accurate human pose estimation in PyTorch. Contains implementation of "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose" paper.
rembg - Rembg is a tool to remove image backgrounds
BlazePose-tensorflow - A third-party Tensorflow Implementation for paper "BlazePose: On-device Real-time Body Pose tracking".
deep-text-recognition-benchmark - Text recognition (optical character recognition) with deep learning methods, ICCV 2019
MocapNET - We present MocapNET, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) A novel and compact 2D pose NSRM representation. (b) A human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose by also allowing for the decomposition of the body to an upper and lower kinematic hierarchy. This permits the recovery of the human pose even in the case of significant occlusions. (c) An efficient Inverse Kinematics solver that refines the neural-network-based solution providing 3D human pose estimations that are consistent with the limb sizes of a target person (if known). All the above yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET) while maintaining real-time performance