Make-A-Video is a state-of-the-art AI system that generates videos from text
I've actually been trying it out this week, and it's currently processing the video generation, like their example shows. I was able to follow their training steps using their dataset and generate the lighting/depth maps for the milk carton example, but the video generation is taking a long time (over 24 hours on a 3070 Ti with 8GB of VRAM).
From what I understand, NeROIC isn't really meant to generate a 3D model that can be imported directly into Blender (or other software). It takes more work to turn the meshes it generates into something usable. See https://github.com/snap-research/NeROIC/issues/10
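For anyone going that route, the minimum glue is converting whatever raw vertex/face arrays you end up with into a format Blender can import, such as Wavefront OBJ. A minimal sketch (the mesh data below is an illustrative placeholder, not actual NeROIC output, and this is not NeROIC's own export path):

```python
# Sketch: write a raw triangle mesh to a Wavefront OBJ file,
# which Blender (File > Import > Wavefront) can read directly.
# The vertices/faces here are placeholder data for illustration.

def write_obj(path, vertices, faces):
    """Write a triangle mesh as a Wavefront OBJ file.

    vertices: list of (x, y, z) coordinates.
    faces: list of (i, j, k) 0-based vertex indices;
           OBJ face indices are 1-based, so we offset by 1.
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for i, j, k in faces:
            f.write(f"f {i + 1} {j + 1} {k + 1}\n")

# Example: a single triangle in the XY plane.
write_obj("triangle.obj",
          vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          faces=[(0, 1, 2)])
```

In practice you'd also want mesh cleanup (removing degenerate faces, filling holes) before import, which is the "more work" the issue thread describes.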
Who is building StableDiffusion/DALL-E but for 3D assets?
6 projects | /r/machinelearningnews | 7 Sep 2022
[D] Diffusers applied to 3D model generation
7 projects | /r/MachineLearning | 23 Aug 2022
What are some alternatives?
text2mesh - 3D mesh stylization driven by a text input in PyTorch
jukebox - Code for the paper "Jukebox: A Generative Model for Music"
make-a-video-pytorch - Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
CenterSnap - Pytorch code for ICRA'22 paper: "Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation"