openpose
freemocap
| | openpose | freemocap |
|---|---|---|
| Mentions | 36 | 11 |
| Stars | 29,802 | 3,065 |
| Growth | 1.3% | 3.2% |
| Activity | 5.2 | 8.6 |
| Latest commit | 9 days ago | 6 days ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openpose
- AI "Artists" Are Lazy, and the Ultimate Goal of AI Image Generation (hint: it's sloth)
Open Pose, a multi-person keypoint detection library for body, face, hands, and foot estimation [10], is used for posing generated characters;
- Analyze defects and errors in the created images
OpenPose
- [D] Which open source models can replicate Wonder Dynamics's drag'n'drop CG characters?
Perhaps something like OpenPose for pose estimation?
- Do we have Locally Run AI mocap yet?
OpenPose looks like what you're looking for, it seems to have plugins for Unity. I can't say I've used it though.
- Let's take a break!
You are correct. OpenPose has two keypoints for the eyes and two more for the ears. By saying where the ears are, you automatically influence the angle of the head. You can see more about it on this GitHub page. Just scroll a tiny bit and you can see a GIF of the nodes overlaid on humans.
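The geometry hinted at above — that the eye positions relative to the ear positions encode head rotation — can be sketched in a few lines. This is not OpenPose code; the `(x, y)` keypoint tuples and the arcsine mapping are illustrative assumptions for a rough, frontal-ish view:

```python
import math

def head_yaw_estimate(l_eye, r_eye, l_ear, r_ear):
    """Crude yaw estimate from eye/ear keypoints given as (x, y) pixel tuples.

    In a frontal view the eye midpoint sits over the ear midpoint; as the
    head turns, the eyes shift horizontally relative to the ears. We
    normalise that offset by half the ear span and map it to degrees.
    """
    eye_mid_x = (l_eye[0] + r_eye[0]) / 2.0
    ear_mid_x = (l_ear[0] + r_ear[0]) / 2.0
    half_span = abs(r_ear[0] - l_ear[0]) / 2.0 or 1.0
    # Clamp to [-1, 1] so asin is defined even for noisy detections.
    offset = max(-1.0, min(1.0, (eye_mid_x - ear_mid_x) / half_span))
    return math.degrees(math.asin(offset))

# Symmetric keypoints -> head facing the camera, yaw ~ 0 degrees.
print(head_yaw_estimate((-10, 0), (10, 0), (-20, 0), (20, 0)))
```

This ignores pitch, roll, and perspective entirely; it only illustrates why fixing the ear keypoints constrains the head angle.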
- Accelerate Machine Learning Local Development and Test Workflows with Nvidia Docker
```dockerfile
# https://hub.docker.com/r/nvidia/cuda
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
ENV DEBIAN_FRONTEND=noninteractive

# install the dependencies for building OpenPose
RUN apt-get update && # The rest is ignored for brevity.
RUN pip3 install --no-cache-dir # The rest is ignored for brevity.

# install cmake, clone OpenPose and download models
RUN wget https://cmake.org/files/v3.20/cmake-3.20.2-linux-x86_64.tar.gz && \ # The rest is ignored for brevity.

WORKDIR /openpose/build
RUN alias python=python3 && cmake -DBUILD_PYTHON=OFF -DWITH_GTK=OFF -DUSE_CUDNN=ON ..

# Build OpenPose. cuDNN 8 causes memory issues, which is why we use a base
# image with CUDA 10 and cuDNN 7.
# Fix for CUDA 10.0 and cuDNN 7 based on the post below:
# https://github.com/CMU-Perceptual-Computing-Lab/openpose/issues/1753#issuecomment-792431838
RUN sed -ie 's/set(AMPERE "80 86")/#&/g' ../cmake/Cuda.cmake && \
    sed -ie 's/set(AMPERE "80 86")/#&/g' ../3rdparty/caffe/cmake/Cuda.cmake && \
    make -j`nproc` && \
    make install

WORKDIR /openpose
```
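Assuming the Dockerfile above is saved in the current directory, a build and run might look like the following. The image tag and mounted paths are illustrative, and the host needs an NVIDIA driver plus the NVIDIA container toolkit for `--gpus` to work:

```shell
# Build the image (the tag name is arbitrary).
docker build -t openpose-cuda10 .

# Run with GPU access, mounting a host folder of videos to process.
docker run --rm --gpus all \
    -v "$PWD/videos:/videos" \
    openpose-cuda10 \
    ./build/examples/openpose/openpose.bin \
        --video /videos/input.mp4 --display 0 --write_json /videos/out/
```

`--display 0` disables the GUI (there is no X server in the container) and `--write_json` dumps per-frame keypoints, which is the typical headless workflow.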
- noob needs some directions
- full body tracking with WiFi signals by utilizing deep learning architectures
One of the best cam-only libraries (no depth sensor) I've seen is OpenPose. I ran it through a 360 camera and it was able to track body, face, and fingers really well, even with the spherical distortion from the 360 cam. example 360
- How to do body tracking for (real) camera
- How to get rotation (yaw/pitch/roll) from face detection keypoints?
freemocap
- Motion Capture AI judge - Where my app devs at?
- Questions about motion capture and 3D animation
You could try https://freemocap.org/ before investing in a Rokoko.
- 3d anime with mocap
I didn't try it yet, but maybe it could be useful for you https://freemocap.org/
- FreeMoCap – Free Motion Capture for Everyone
- Mocap suit for those who want to develop animation or 3D game.
There's also FreeMoCap, a markerless mocap solution that you can run using just a couple of cheap webcams. It's also a non-profit research project with the most open 'copyleft' license you can find.
- What to use for simple body/head tracking (non-VR)
FreeMoCap (nvm, requires Charuco board)
- FreeMoCap - A free open source markerless motion capture system for everyone ✨💀✨
- Collaboration on computer vision juggling trackers
Thank you for your hard work! I think the most promising work right now is this: FreeMoCap not only tracks balls and hands, but the whole body too. I know the approach is different, as it only works with multiple camera views and a static tracker, but Jon (its creator) also made a Discord, and people are actively giving suggestions and building tools (like one to transfer the recorded position data to a rig in Blender). So I think that, rather than hundreds of people each doing the same thing alone every time, having a community pushing the same project forward is great.
- Are there any cheap ways to do mo-cap?
I would take a look at the FreeMoCap project. It's still in development and very rough around the edges, but they're working towards a more user-friendly version in hopefully the next few months.
- Create 3D poses from an image - AI motion capture
Markerless motion capture has been developed various times in open source projects. Here's one that allows you to record motion and import it into Blender: https://github.com/jonmatthis/freemocap, and the subreddit: r/FreeMoCap
What are some alternatives?
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
juggling-vision-py
AlphaPose - Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System
hawkeye - Hawkeye juggling video analysis
detectron2 - Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
JugglingTracker_Python - Python program using OpenCV which tracks juggling balls, calculates and displays the velocity of each ball. Designed and built as a final project in Computer Vision academic course.
mmpose - OpenMMLab Pose Estimation Toolbox and Benchmark.
Juggling-Coach - OpenCV.js Juggling Motion Tracker
lightweight-human-pose-estimation.pytorch - Fast and accurate human pose estimation in PyTorch. Contains implementation of "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose" paper.
BlazePose-tensorflow - A third-party Tensorflow Implementation for paper "BlazePose: On-device Real-time Body Pose tracking".
MocapNET - We present MocapNET, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) a novel and compact 2D pose NSRM representation; (b) a human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose, also allowing for the decomposition of the body into an upper and lower kinematic hierarchy, which permits the recovery of the human pose even under significant occlusions; (c) an efficient Inverse Kinematics solver that refines the neural-network-based solution, providing 3D human pose estimations consistent with the limb sizes of a target person (if known). All the above yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET) while maintaining real-time performance.
jetson-inference - Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.