Thin-Plate-Spline-Motion-Model VS stable-diffusion-webui-depthmap-script

Compare Thin-Plate-Spline-Motion-Model vs stable-diffusion-webui-depthmap-script and see how they differ.

                Thin-Plate-Spline-Motion-Model    stable-diffusion-webui-depthmap-script
Mentions        28                                64
Stars           3,289                             1,582
Growth          -                                 -
Activity        1.9                               8.3
Latest commit   3 months ago                      about 1 month ago
Language        Jupyter Notebook                  Python
License         MIT License                       MIT License
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
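
To make the weighting and ranking concrete, here is a minimal Python sketch of one way such a percentile-style activity rating could be derived from commit history. The exponential weighting, the 90-day half-life, and the function names are illustrative assumptions, not the site's published formula.

from datetime import datetime, timezone

def weighted_commit_score(commit_dates, half_life_days=90):
    """Sum exponentially decayed weights so recent commits count more than
    older ones; commit_dates must be timezone-aware datetimes.
    (Illustrative only - the half-life value is an arbitrary choice.)"""
    now = datetime.now(timezone.utc)
    return sum(0.5 ** ((now - d).days / half_life_days) for d in commit_dates)

def activity_rating(project_score, all_scores):
    """Map a project's score to a 0-10 rating via its percentile rank, so a
    rating of 9.0 places the project in the top 10% of tracked projects."""
    below = sum(s < project_score for s in all_scores)
    return round(10 * below / len(all_scores), 1)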

Thin-Plate-Spline-Motion-Model

Posts with mentions or reviews of Thin-Plate-Spline-Motion-Model. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-12.

stable-diffusion-webui-depthmap-script

Posts with mentions or reviews of stable-diffusion-webui-depthmap-script. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.

What are some alternatives?

When comparing Thin-Plate-Spline-Motion-Model and stable-diffusion-webui-depthmap-script you can also consider the following projects:

first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation

MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"

Wav2Lip - This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs

a1111-sd-zoe-depth - a1111 sd WebUI extension version of ZoeDepth

DFL-Colab - DeepFaceLab fork which provides an IPython Notebook for using DFL with Google Colab

multi-subject-render - Generate multiple complex subjects all at once!

SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

depthmap2mask - Create masks out of depthmaps in img2img

articulated-animation - Code for the paper Motion Representations for Articulated Animation

point-e - Point cloud diffusion for 3D model synthesis

CVPR2022-DaGAN - Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation

stable-diffusion-webui-dataset-tag-editor - Extension to edit dataset captions for SD web UI by AUTOMATIC1111