Make-It-3D vs MotionDiffuse

| | Make-It-3D | MotionDiffuse |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 1,693 | 784 |
| Growth | - | - |
| Activity | 6.9 | 10.0 |
| Latest commit | 7 months ago | about 1 year ago |
| Language | Python | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
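The site does not publish the exact formula behind the activity number, but the description above (recent commits weighted more than older ones) suggests a recency-decayed commit count. The sketch below is purely illustrative: the function name `activity_score` and the 90-day half-life are assumptions, not the site's actual metric.

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=90.0, now=None):
    # Recency-weighted commit count: each commit contributes
    # 2 ** (-age_in_days / half_life_days), so a commit made today
    # counts ~1.0 and one made a half-life ago counts ~0.5.
    # Illustrative only -- not the comparison site's real formula.
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 2.0 ** (-age_days / half_life_days)
    return score

# Deterministic example: commits 1, 30, and 365 days old.
NOW = datetime(2024, 1, 1, tzinfo=timezone.utc)
commits = [NOW - timedelta(days=d) for d in (1, 30, 365)]
score = activity_score(commits, now=NOW)
```

Under a scheme like this, a project with many recent commits scores higher than one with the same total commits spread over years, matching the relative ranking described above.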
Make-It-3D
Meet Make-It-3D: An Artificial Intelligence (AI) Framework for High-Fidelity 3D Object Generation from a Single Image
GitHub: https://github.com/junshutang/Make-It-3D
MotionDiffuse
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model (with Gradio demo)
GitHub: https://github.com/mingyuan-zhang/MotionDiffuse
What are some alternatives?
DreamCraft3D - [ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
StableVideo - [ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing
NeuralRecon - Code for "NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video", CVPR 2021 oral
text-to-motion - Official implementation for "Generating Diverse and Natural 3D Human Motions from Texts (CVPR2022)."
SegmentAnythingin3D - Segment Anything in 3D with NeRFs (NeurIPS 2023)
text2room - Text2Room generates textured 3D meshes from a given text prompt using 2D text-to-image models (ICCV2023).
learning-topology-synthetic-data - Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)
AvatarCLIP - [SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
AvatarPoser - Official Code for ECCV 2022 paper "AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing"
MotionGPT - [NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs
unsupervised-depth-completion-visual-inertial-odometry - Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)