MotionBERT
[ICCV 2023] PyTorch Implementation of "MotionBERT: A Unified Perspective on Learning Human Motion Representations" (by Walter0807)
ShaderMotion
By lox9973
| | MotionBERT | ShaderMotion |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 1,154 | - |
| Growth | 2.6% | - |
| Activity | 2.5 | - |
| Last commit | 3 months ago | - |
| Language | Python | - |
| License | Apache License 2.0 | - |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MotionBERT
Posts with mentions or reviews of MotionBERT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-08.
- How to transmit custom signal to vrchat world (AI guided 3d keypoints)
  Thank you for the answer. The method I am using is one of the monocular 3D pose estimation studies, and is as follows: https://github.com/Walter0807/MotionBERT
- Code and models now available for 'MotionBERT: Unified Pretraining for Human Motion Analysis'
  Code: https://github.com/Walter0807/MotionBERT
ShaderMotion
Posts with mentions or reviews of ShaderMotion. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-08.
- How to transmit custom signal to vrchat world (AI guided 3d keypoints)
  This only works for your own avatar, though. If you want to make something that is displayed in a single world, you can look into ShaderMotion, which is a system for bringing humanoid poses into VRC worlds. The data is sent via a video feed (as blinking pixels), and a shader then reads out those pixels and "animates" the model accordingly. (A rough sketch of this pixel-encoding idea follows after this list.)
- Anyone know of a good way to free cam and record?
  If you really want to get into it, you can capture your motions with ShaderMotion and then play the animations back on your avatar as actions.
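
The "blinking pixels" mechanism described above boils down to packing pose values into the color channels of a video frame so that a reader on the receiving end (in ShaderMotion's case, a shader) can decode them back. The following is a minimal, hypothetical Python sketch of that general idea only; the channel layout, 16-bit quantization, and function names are assumptions for illustration and do not reflect ShaderMotion's actual encoding.

```python
# Illustrative sketch: quantize joint angles into pixel color channels and decode them back.
# This is NOT ShaderMotion's real format; layout and precision are assumptions.
import numpy as np

def encode_pose(angles_deg: np.ndarray) -> np.ndarray:
    """Quantize angles in [-180, 180] degrees to 16 bits, split across two
    8-bit color channels per joint (R = high byte, G = low byte)."""
    normalized = (angles_deg + 180.0) / 360.0            # map to [0, 1]
    quantized = np.round(normalized * 65535).astype(np.uint16)
    pixels = np.zeros((len(angles_deg), 3), dtype=np.uint8)
    pixels[:, 0] = quantized >> 8                         # high byte in red
    pixels[:, 1] = quantized & 0xFF                       # low byte in green
    return pixels                                         # one RGB pixel per joint

def decode_pose(pixels: np.ndarray) -> np.ndarray:
    """Inverse of encode_pose: rebuild the angles from the pixel bytes."""
    quantized = (pixels[:, 0].astype(np.uint16) << 8) | pixels[:, 1]
    return quantized.astype(np.float64) / 65535 * 360.0 - 180.0

if __name__ == "__main__":
    pose = np.array([-90.0, 0.0, 45.5, 179.9])            # example joint rotations
    frame_pixels = encode_pose(pose)
    recovered = decode_pose(frame_pixels)
    print(np.max(np.abs(recovered - pose)))               # quantization error well under 0.01 deg
```

In a real pipeline, these per-joint pixels would be written into a corner of each video frame and updated every frame, which is why the region appears to blink; the receiving shader samples those texels and drives the avatar's bones from the decoded values.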
What are some alternatives?
When comparing MotionBERT and ShaderMotion you can also consider the following projects:
UNet - Network system for VRChat UDON
StyleDomain - Official Implementation for "StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation" (ICCV 2023)
OSCMotion
MotioNet - A deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video [ToG 2020]
midi-websocket - Send midi through websockets