PeopleSansPeople vs stable-diffusion

| | PeopleSansPeople | stable-diffusion |
|---|---|---|
| Mentions | 5 | 383 |
| Stars | 294 | 65,624 |
| Growth | 2.4% | 1.3% |
| Activity | 3.0 | 0.0 |
| Latest commit | 2 months ago | 30 days ago |
| Language | C# | Jupyter Notebook |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PeopleSansPeople
- PI wants me to make a synthetic dataset.
  Also, check this Unity repo out.
- Generating human motion synthetic data?
  I was trying to train a model that sits on top of one of the pose estimation models (PoseNet, MoveNet, MediaPipe) and detects the action performed (waving, swipe right, etc.), and I was planning on generating synthetic data for it. I saw that there's a Unity project, PeopleSansPeople, but it isn't suited to training a model for action recognition. I would like something that simulates a human doing a simple action and lets me add randomness to it. I was thinking of either using Unity or writing something that models the human keypoints (the output of pose estimation) and simulates them. I am wondering if something like this already exists that you might know about?
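The second idea in that post, modeling the keypoints directly instead of rendering humans, is cheap to prototype. A minimal sketch of what that could look like (the pose template, joint motions, and noise levels are all illustrative assumptions, not taken from any existing project):

```python
# Sketch: synthesize the pose estimator's *output* directly -- COCO-style
# keypoint sequences for a "waving" action with added randomness -- rather
# than rendering full 3D humans. All values below are illustrative.
import numpy as np

COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def synth_wave_sequence(frames=60, noise_std=0.01, rng=None):
    """Return an array of shape (frames, 17, 2) of normalized (x, y)
    keypoints for a standing pose whose right arm waves side to side."""
    rng = np.random.default_rng(rng)
    # Rough normalized template for a front-facing standing pose, x/y in [0, 1].
    base = np.array([
        [0.50, 0.10], [0.48, 0.09], [0.52, 0.09], [0.45, 0.10], [0.55, 0.10],
        [0.40, 0.25], [0.60, 0.25], [0.35, 0.40], [0.65, 0.40],
        [0.33, 0.55], [0.67, 0.55], [0.43, 0.55], [0.57, 0.55],
        [0.42, 0.75], [0.58, 0.75], [0.42, 0.95], [0.58, 0.95],
    ])
    seq = np.repeat(base[None], frames, axis=0)
    t = np.linspace(0, 4 * np.pi, frames)
    # Raise the right arm and oscillate elbow/wrist horizontally to "wave".
    seq[:, 8, 1] = 0.22                         # right elbow near shoulder height
    seq[:, 10, 1] = 0.12                        # right wrist above head line
    seq[:, 10, 0] = 0.65 + 0.08 * np.sin(t)     # wrist sweeps left/right
    seq[:, 8, 0] = 0.63 + 0.03 * np.sin(t)      # elbow follows slightly
    # Domain randomization: per-frame jitter on every joint.
    seq += rng.normal(0.0, noise_std, seq.shape)
    return seq

clip = synth_wave_sequence(rng=0)
print(clip.shape)  # (60, 17, 2) -> feed windows of these to an action classifier
```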
- [P] Can't finish my master's thesis. What to do?
- [R] PeopleSansPeople: Unity's Human-Centric Synthetic Data Generator. GitHub link in comments.
  Source code: https://github.com/Unity-Technologies/PeopleSansPeople
- [R] PeopleSansPeople: Unity's Human-Centric Synthetic Data Generator
  Webpage: https://unity-technologies.github.io/PeopleSansPeople/
  Paper: https://arxiv.org/abs/2112.09290
  Source code: https://github.com/Unity-Technologies/PeopleSansPeople
  Papers with code: https://paperswithcode.com/paper/peoplesanspeople-a-synthetic-data-generator and https://paperswithcode.com/dataset/peoplesanspeople
  Demo video: https://youtu.be/mQ_DUdB70dc
  Summary: PeopleSansPeople is a human-centric data generator from Unity Technologies that contains highly parametric, simulation-ready 3D human assets, a parameterized lighting and camera system, parameterized environment generators, and fully manipulable, extensible domain randomizers. PeopleSansPeople can generate RGB images with sub-pixel-perfect 2D/3D bounding boxes, COCO-compliant human keypoints, and semantic/instance segmentation masks in JSON annotation files, all packaged in macOS and Linux executable binaries capable of generating datasets of 1M+ images. In addition, we release a template Unity environment to lower the barrier of entry and get you started creating your own highly parameterized human-centric synthetic data generator. We affectionately named our generator PeopleSansPeople because it targets human-centric computer vision without using human data, which carries serious privacy, safety, ethical, bias, and legal concerns.
  Benchmarks: The domain randomizations we used for our benchmarks are naïve, brute-forced sweeps through pre-chosen parameter ranges; as such, we end up generating psychedelic-looking scenes, which turned out to train more performant models for human-centric computer vision. Using PeopleSansPeople, we benchmarked a Detectron2 Keypoint R-CNN variant. Results indicate that synthetic pre-training with our data outperforms training on real data alone or pre-training with ImageNet, in both limited and abundant data regimes. We envisage that this freely available data generator will enable a wide range of research into the emerging field of simulation-to-real transfer learning in the critical area of human-centric computer vision.
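Since the annotations are COCO-compliant JSON, a standard COCO reader should be able to inspect them once they are exported in that layout. A minimal sketch (the path is a placeholder, and pycocotools assumes the usual images/annotations/categories structure):

```python
# Sketch: inspect a COCO-style keypoint annotation file such as the ones
# PeopleSansPeople describes. The filename below is a placeholder.
from pycocotools.coco import COCO

coco = COCO("annotations/keypoints.json")  # placeholder path

person_cat = coco.getCatIds(catNms=["person"])
img_ids = coco.getImgIds(catIds=person_cat)
print(f"{len(img_ids)} images with person annotations")

ann_ids = coco.getAnnIds(imgIds=img_ids[0], catIds=person_cat)
for ann in coco.loadAnns(ann_ids):
    # COCO keypoints are flat [x1, y1, v1, x2, y2, v2, ...] triplets,
    # where v is visibility (0 = unlabeled, 1 = occluded, 2 = visible).
    kps = ann["keypoints"]
    visible = sum(1 for v in kps[2::3] if v == 2)
    print(f"bbox={ann['bbox']}, visible keypoints={visible}/{len(kps) // 3}")
```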
stable-diffusion
- Top 7 Text-to-Image Generative AI Models
  Stable Diffusion: It is based on a kind of diffusion model called a latent diffusion model, which is trained to remove noise from images in an iterative process. It is one of the first text-to-image models that can run on consumer hardware, and it has its code and model weights publicly available.
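For context, a minimal sketch of that iterative denoising loop in practice, via the Hugging Face diffusers library (one of several ways to run Stable Diffusion; the model id and prompt are just examples):

```python
# Sketch: run Stable Diffusion with diffusers. Internally the pipeline starts
# from random latent noise and, for num_inference_steps iterations, predicts
# and removes noise in latent space before decoding to pixels -- the
# "iterative process" the post refers to.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("an astronaut riding a horse", num_inference_steps=50).images[0]
image.save("astronaut.png")
```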
- Go is bigger than crab!
  Which is a 1-click install of Stable Diffusion with an alternative web interface. You can choose a different approach, but this one is pretty simple, and I am new to this stuff.
- Why & How to check Invisible Watermark
  "...an invisible watermarking of the outputs, to help viewers identify the images as machine-generated."
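The reference Stable Diffusion scripts embed this watermark with the invisible-watermark package (the string "StableDiffusionV1", DWT-DCT method), so checking an image is a short script. A sketch (the file path is a placeholder; a garbled result just means no such watermark is present):

```python
# Sketch: decode the invisible watermark that the reference Stable Diffusion
# scripts embed via the invisible-watermark package.
import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread("suspect_image.png")     # placeholder path
decoder = WatermarkDecoder("bytes", 136)  # "StableDiffusionV1" = 17 bytes * 8 bits
wm_bytes = decoder.decode(bgr, "dwtDct")  # same DWT-DCT method used at encode time
print(wm_bytes.decode("utf-8", errors="replace"))  # "StableDiffusionV1" if watermarked
```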
- How to create an Image generating AI?
  It sounds like you just want to set up Stable Diffusion to run locally. I don't think your computer's specs will be able to do it; you need a graphics card with a decent amount of VRAM. Stable Diffusion is in Python, as is almost every open-source AI project I've seen. If you can get your hands on a system with an Nvidia RTX card with as much VRAM as possible, you're in business. I have an RTX 3060 with 12 GB of VRAM, and I can run Stable Diffusion and a whole variety of open-source LLMs, as well as other projects like face swap (Roop), Tortoise TTS, SadTalker, etc.
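For reference, a sketch of the memory-saving knobs that help Stable Diffusion fit on a mid-range card like the 12 GB RTX 3060 mentioned above (again via diffusers; the model id is an example, not the only way to run it):

```python
# Sketch: check available VRAM, then load the pipeline with the common
# memory-saving options for consumer GPUs.
import torch
from diffusers import StableDiffusionPipeline

print(torch.cuda.get_device_name(0),
      torch.cuda.get_device_properties(0).total_memory // 2**20, "MiB")

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,       # half precision: roughly half the VRAM of float32
).to("cuda")
pipe.enable_attention_slicing()      # slices attention: slower, but much less VRAM

image = pipe("a lighthouse at dusk", height=512, width=512).images[0]
image.save("lighthouse.png")
```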
- Two video cards... one dedicated to Stable Diffusion... the other for everything else on my PC?
  Use specific GPU on multi GPU systems · Issue #87 · CompVis/stable-diffusion · GitHub
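A common pattern for this setup (a sketch, not quoted from that issue; shown with diffusers as an example) is to restrict which physical GPU the process can see, or to address the second device explicitly:

```python
# Sketch: pin Stable Diffusion to the second GPU so the first stays free
# for the rest of the system.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # must be set before CUDA initializes

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # "cuda" now resolves to physical GPU 1

# Alternative, without the env var: pipe.to("cuda:1")
```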
- Automatic1111 - Multiple GPUs
- Has Google simply become unusable by now?
- Why are people so against compensation for artists?
  I dealt with this in one of my posts. At least SD 1.1 through 1.5 are all trained with a batch size of 2048. The version pretty much everyone uses (1.5) is first pretrained at a resolution of 256x256 for 237K steps on laion2B-en; by the end of those steps it will have seen roughly 500M images from laion2B-en. After that it is pretrained for 194K steps at 512x512 on laion-high-resolution, a 170M-image subset of laion5B. Finally it is trained for 1,110K steps on LAION-aesthetics v2 5+. This is easily verified by glancing at the SD 1.5 model card, though that card doesn't specify exactly which aesthetics set was used for part of the training; for that you have to look at the CompVis GitHub repo. Thus, at the end of it all, both the most recent images and the majority of images seen will have come from LAION-aesthetics v2 5+ (with every image seen approximately 4 times). Realistically, a lot of what was learned from pretraining on the 2B set will have been lost; it only provided a good starting point for the weights.
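As a sanity check, the arithmetic in that post holds up. A quick sketch (reading the original "1.110K" as 1,110K steps, consistent with summing the v1-2 and v1-5 training stages, and assuming roughly 600M images in LAION-aesthetics v2 5+):

```python
# Back-of-the-envelope check of the training figures quoted above.
batch = 2048

stage1_steps = 237_000                           # 256x256 on laion2B-en
print(stage1_steps * batch / 1e6)                # ~485M images seen ("roughly 500M")

aesthetic_steps = 1_110_000                      # 512x512 on LAION-aesthetics v2 5+
aesthetic_size = 600e6                           # assumed size of the 5+ subset
print(aesthetic_steps * batch / aesthetic_size)  # ~3.8 -> "approximately 4 times"
```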
- Is SDXL really open-source?
  stable diffusion · CompVis/stable-diffusion@2ff270f · GitHub
- I want to ask the AI to draw me as a Pokemon anime character and then draw six Pokemon of my choice next to me. What are my best choices that are free, under $15, or under $30?
What are some alternatives?
Robotics-Object-Pose-Estimation - A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
GFPGAN - GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
com.unity.perception - Perception toolkit for sim2real training and validation in Unity
Real-ESRGAN - Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
VirtualHumanBatchProcessing
diffusers-uncensored - Uncensored fork of diffusers
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
tdk-demo - This is a collection of TDK demo projects that use different databases and options
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
onnx - Open standard for machine learning interoperability
fast-stable-diffusion - fast-stable-diffusion + DreamBooth