pytorch-AdaIN vs PeopleSansPeople
| | pytorch-AdaIN | PeopleSansPeople |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 1,001 | 294 |
| Growth | - | 2.4% |
| Activity | 0.0 | 3.0 |
| Latest commit | 3 months ago | about 2 months ago |
| Language | Python | C# |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytorch-AdaIN
-
[P] Can't finish my master's thesis. What to do?
Thanks for the comment. I am using the 3D views directly. I tried some form of style transfer (https://github.com/naoto0804/pytorch-AdaIN) from real to synthetic data, but the result wasn't that appealing, so I moved on. I guess it could be because the real data I have generally has low resolution (typically 120x40). I'll have a look at the model you suggested, though; it seems interesting.
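For context, the core operation behind pytorch-AdaIN is compact enough to sketch: adaptive instance normalization re-normalizes the content features so their per-channel mean and standard deviation match those of the style features. Below is a minimal NumPy sketch of that operation; the function name and array shapes are illustrative and are not the repo's actual API (which applies this to VGG encoder features and then decodes back to an image).

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization over (C, H, W) feature maps.

    Shifts the per-channel statistics of the content features to match
    those of the style features:
        AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y)
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps   # eps avoids divide-by-zero
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```

In the full method this runs on deep feature maps, not raw pixels, and a trained decoder turns the re-normalized features back into an image.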
PeopleSansPeople
-
PI wants me to make a synthetic dataset.
Also, check this Unity repo out
-
Generating human motion synthetic data ?
I was trying to train a model that sits on top of one of the pose estimation models (PoseNet, MoveNet, MediaPipe) and detects the action performed (waving, swipe right, etc.), and I was planning to generate synthetic data for it. I saw that there's a Unity project, PeopleSansPeople, but it isn't suited to training a model for action recognition. I would like something that simulates a human doing a simple action, to which I could add randomness. I was thinking of either using Unity, or writing something that models the human keypoints (the output of pose estimation) and simulates them. Is there anything that already exists that you might know about?
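The keypoint-simulation idea from the post above can be prototyped cheaply without any rendering: procedurally animate a small set of 2D keypoints (a waving wrist, say) and add random jitter as domain randomization. Everything here, the skeleton, the joint names, and the motion model, is a hypothetical sketch, not any existing library's API:

```python
import math
import random

def simulate_wave(num_frames=30, amplitude=0.08, cycles=2.0, jitter=0.01, seed=0):
    """Generate a sequence of 2D poses for a hypothetical 'waving' action.

    Each frame is a dict of joint name -> (x, y) in normalized image
    coordinates. The right wrist oscillates horizontally while Gaussian
    jitter randomizes every joint in every frame.
    """
    rng = random.Random(seed)
    frames = []
    for t in range(num_frames):
        phase = math.sin(2 * math.pi * cycles * t / num_frames)
        pose = {
            "right_shoulder": (0.60, 0.40),
            "right_elbow":    (0.68, 0.30),
            "right_wrist":    (0.68 + amplitude * phase, 0.18),
        }
        # Domain randomization: small Gaussian jitter on every joint.
        frames.append({
            name: (x + rng.gauss(0, jitter), y + rng.gauss(0, jitter))
            for name, (x, y) in pose.items()
        })
    return frames
```

Sequences like these could be fed straight into the downstream action classifier as training data, since the classifier only sees keypoints, never pixels.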
-
[P] Can't finish my master's thesis. What to do?
-
[R] PeopleSansPeople: Unity's Human-Centric Synthetic Data Generator. GitHub link in comments.
Source code: https://github.com/Unity-Technologies/PeopleSansPeople
-
[R] PeopleSansPeople: Unity's Human-Centric Synthetic Data Generator
Webpage: https://unity-technologies.github.io/PeopleSansPeople/
Paper: https://arxiv.org/abs/2112.09290
Source code: https://github.com/Unity-Technologies/PeopleSansPeople
Papers with Code: https://paperswithcode.com/paper/peoplesanspeople-a-synthetic-data-generator and https://paperswithcode.com/dataset/peoplesanspeople
Demo video: https://youtu.be/mQ_DUdB70dc

Summary: PeopleSansPeople is a human-centric data generator from Unity Technologies that contains highly parametric, simulation-ready 3D human assets, a parameterized lighting and camera system, parameterized environment generators, and fully manipulable and extensible domain randomizers. PeopleSansPeople can generate RGB images with sub-pixel-perfect 2D/3D bounding boxes, COCO-compliant human keypoints, and semantic/instance segmentation masks, with annotations in JSON files. Everything is packaged as macOS and Linux executable binaries capable of generating datasets of 1M+ images. In addition, we release a template Unity environment to lower the barrier of entry and get you started creating your own highly parameterized human-centric synthetic data generator. We affectionately named our generator PeopleSansPeople because it targets human-centric computer vision without using human data, which carries serious privacy, safety, ethical, bias, and legal concerns.

Benchmarks: The domain randomization we used for our benchmarks is naïve: brute-forced sweeps through pre-chosen parameter ranges. As a result we end up generating psychedelic-looking scenes, which nonetheless turned out to train more performant models for human-centric computer vision. Using PeopleSansPeople, we benchmarked a Detectron2 Keypoint R-CNN variant. Results indicate that synthetic pre-training with our data outperforms training on real data alone or pre-training with ImageNet, in both limited and abundant data regimes. We envisage that this freely available data generator will enable a wide range of research into the emerging field of simulation-to-real transfer learning in the critical area of human-centric computer vision.
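Since PeopleSansPeople emits COCO-compliant keypoint annotations, downstream code can consume them using the standard COCO convention: each annotation stores a flat `[x1, y1, v1, x2, y2, v2, ...]` list, where `v` is a visibility flag (0 = not labeled, 1 = labeled but occluded, 2 = visible). A minimal, dependency-free sketch of reading such a record follows; the sample annotation is made up for illustration (real COCO person annotations carry 17 keypoints):

```python
def parse_coco_keypoints(annotation):
    """Split a COCO-style flat keypoint list into (x, y, visibility) triplets."""
    kps = annotation["keypoints"]
    assert len(kps) % 3 == 0, "COCO keypoints come in (x, y, v) triplets"
    return [tuple(kps[i:i + 3]) for i in range(0, len(kps), 3)]

# Hypothetical two-keypoint annotation for illustration only.
sample = {"keypoints": [120.0, 80.0, 2, 0.0, 0.0, 0], "num_keypoints": 1}
triplets = parse_coco_keypoints(sample)
visible = [t for t in triplets if t[2] > 0]
```

Unlabeled joints are conventionally stored as `(0, 0, 0)`, which is why filtering on the visibility flag matters before computing any keypoint metric.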
What are some alternatives?
pytorch-neural-style-transfer - Reconstruction of the original paper on neural style transfer (Gatys et al.). I've additionally included reconstruction scripts which allow you to reconstruct only the content or the style of the image - for better understanding of how NST works.
Robotics-Object-Pose-Estimation - A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
contrastive-unpaired-translation - Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)
com.unity.perception - Perception toolkit for sim2real training and validation in Unity
neural-style-pt - PyTorch implementation of neural style transfer algorithm
VirtualHumanBatchProcessing
prism - High Resolution Style Transfer in PyTorch with Color Control and Mixed Precision :art:
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
stylized-neural-painting - Official PyTorch implementation of the paper "Stylized Neural Painting" (CVPR 2021).
tdk-demo - This is a collection of TDK demo projects that use different databases and options
style-transfer-app - An asynchronous dual application (web + Telegram bot) for stylizing images.
style-transfer-pytorch - Neural style transfer in PyTorch.