PeopleSansPeople
Unity's privacy-preserving human-centric synthetic data generator (by Unity-Technologies)
VirtualHumanBatchProcessing
By DavidLSmyth
| | PeopleSansPeople | VirtualHumanBatchProcessing |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 294 | 7 |
| Growth | 2.4% | - |
| Activity | 3.0 | 10.0 |
| Last commit | 2 months ago | about 3 years ago |
| Language | C# | Python |
| License | Apache License 2.0 | - |
The number of mentions indicates the total mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PeopleSansPeople
Posts with mentions or reviews of PeopleSansPeople.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-11-07.
- PI wants me to make a synthetic dataset
Also, check this Unity repo out.
- Generating human motion synthetic data?
I was trying to train a model that sits on top of one of the pose estimation models (PoseNet, MoveNet, MediaPipe) and detects the action performed (waving, swipe right, etc.), and I was planning on generating synthetic data for it. I saw that there's a Unity project, PeopleSansPeople, but it isn't suited to training a model for action recognition. I would like something that simulates a human doing a simple action, to which I could add randomness. I was thinking of either using Unity or writing something that models the human keypoints (the output of pose estimation) and simulates them. Does something like this already exist that you might know about?
- [P] Can't finish my master's thesis. What to do?
- [R] PeopleSansPeople: Unity's Human-Centric Synthetic Data Generator. GitHub link in comments.
Source code: https://github.com/Unity-Technologies/PeopleSansPeople
- [R] PeopleSansPeople: Unity's Human-Centric Synthetic Data Generator
Webpage: https://unity-technologies.github.io/PeopleSansPeople/
Paper: https://arxiv.org/abs/2112.09290
Source code: https://github.com/Unity-Technologies/PeopleSansPeople
Papers with Code: https://paperswithcode.com/paper/peoplesanspeople-a-synthetic-data-generator and https://paperswithcode.com/dataset/peoplesanspeople
Demo video: https://youtu.be/mQ_DUdB70dc

Summary: PeopleSansPeople is a human-centric data generator from Unity Technologies that contains highly parametric, simulation-ready 3D human assets, a parameterized lighting and camera system, parameterized environment generators, and fully manipulable, extensible domain randomizers. PeopleSansPeople can generate RGB images with sub-pixel-perfect 2D/3D bounding boxes, COCO-compliant human keypoints, and semantic/instance segmentation masks in JSON annotation files. All of this is packaged in macOS and Linux executable binaries capable of generating datasets of 1M+ images. In addition, we release a template Unity environment to lower the barrier to entry and get you started creating your own highly parameterized human-centric synthetic data generator. We affectionately named our generator PeopleSansPeople because it targets human-centric computer vision without using human data, which carries serious privacy, safety, ethical, bias, and legal concerns.

Benchmarks: The domain randomization used for our benchmarks is a naïve, brute-force sweep through pre-chosen parameter ranges; as a result we generate psychedelic-looking scenes, which nevertheless turned out to train more performant models for human-centric computer vision. Using PeopleSansPeople, we benchmarked a Detectron2 Keypoint R-CNN variant.
Results indicate that synthetic pre-training with our data outperforms training on real data alone or pre-training with ImageNet, in both limited and abundant data regimes. We envisage that this freely available data generator will enable a wide range of research into the emerging field of simulation-to-real transfer learning in the critical area of human-centric computer vision.
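Since the generator emits COCO-compliant JSON annotations, its output can be consumed with only the standard library. Below is a minimal sketch of reading keypoints and bounding boxes from such a file; the field names follow the COCO annotation format, but the sample data here is purely illustrative, not actual PeopleSansPeople output:

```python
import json

# Illustrative COCO-style annotation snippet (not real generator output).
sample = json.loads("""
{
  "images": [{"id": 1, "file_name": "rgb_1.png", "width": 640, "height": 480}],
  "annotations": [{
    "id": 1, "image_id": 1, "category_id": 1,
    "bbox": [100.0, 50.0, 80.0, 200.0],
    "num_keypoints": 2,
    "keypoints": [120.0, 60.0, 2, 140.0, 90.0, 1]
  }],
  "categories": [{"id": 1, "name": "person",
                  "keypoints": ["nose", "left_eye"]}]
}
""")

names = sample["categories"][0]["keypoints"]
for ann in sample["annotations"]:
    x, y, w, h = ann["bbox"]   # COCO bbox: top-left x, top-left y, width, height
    kps = ann["keypoints"]     # flat [x, y, visibility, x, y, visibility, ...] triplets
    for i, name in enumerate(names):
        kx, ky, vis = kps[3 * i: 3 * i + 3]
        print(f"{name}: ({kx}, {ky}) visibility={vis}")
```

Visibility follows the COCO convention (0 = not labeled, 1 = labeled but occluded, 2 = labeled and visible), which is useful when filtering keypoints before training a pose model.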
VirtualHumanBatchProcessing
Posts with mentions or reviews of VirtualHumanBatchProcessing.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-05-05.
- Generating human motion synthetic data?
I was indirectly involved in some similar research work; maybe these repos will help a little: https://github.com/DavidLSmyth/VirtualHumanBatchProcessing and https://github.com/DavidLSmyth/MotionCaptureClassifier. I'd advise using Blender for this. The BVH animation file format will probably be the easiest for you to work with; you can get joint positions for each frame pretty easily. The CMU mocap dataset might be suitable.
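Getting per-frame values out of a BVH file really is straightforward: after the skeleton HIERARCHY section, the MOTION section lists a frame count, a frame time, and then one whitespace-separated line of channel values per frame. A minimal sketch, using a tiny hand-written clip (not a real capture) for illustration:

```python
# Tiny illustrative BVH clip: one root joint, two frames of motion data.
bvh_text = """HIERARCHY
ROOT Hips
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  End Site
  {
    OFFSET 0.0 5.0 0.0
  }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 90.0 0.0 0.0 0.0 0.0
1.0 90.5 0.0 5.0 0.0 0.0
"""

lines = bvh_text.splitlines()
motion_start = lines.index("MOTION")
num_frames = int(lines[motion_start + 1].split(":")[1])
frame_time = float(lines[motion_start + 2].split(":")[1])

# Each frame line holds the channel values (positions/rotations) declared
# in the hierarchy, in declaration order.
frames = [
    [float(v) for v in line.split()]
    for line in lines[motion_start + 3: motion_start + 3 + num_frames]
]
print(f"{num_frames} frames at {1 / frame_time:.1f} fps; first frame: {frames[0]}")
```

For real multi-joint files you would also walk the HIERARCHY section to map each channel column back to its joint, which is what makes a library (or Blender's importer) worthwhile beyond quick experiments.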
What are some alternatives?
When comparing PeopleSansPeople and VirtualHumanBatchProcessing you can also consider the following projects:
Robotics-Object-Pose-Estimation - A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
com.unity.perception - Perception toolkit for sim2real training and validation in Unity
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
tdk-demo - This is a collection of TDK demo projects that use different databases and options