| | add-thin | MotionDiffuse |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 12 | 784 |
| Growth | - | - |
| Activity | 5.5 | 10.0 |
| Last commit | 2 months ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
[R] MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model + Gradio Demo
GitHub: https://github.com/mingyuan-zhang/MotionDiffuse
What are some alternatives?
prolificdreamer - Official code of ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation (NeurIPS 2023 Spotlight)
StableVideo - [ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing
text-to-motion - Official implementation for "Generating Diverse and Natural 3D Human Motions from Texts (CVPR2022)."
text2room - Text2Room generates textured 3D meshes from a given text prompt using 2D text-to-image models (ICCV2023).
AvatarCLIP - [SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
Make-It-3D - [ICCV 2023] Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior
MotionGPT - [NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs