Thin-Plate-Spline-Motion-Model VS dressing-in-order

Compare Thin-Plate-Spline-Motion-Model vs dressing-in-order and see how they differ.

Thin-Plate-Spline-Motion-Model

[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation (by yoyo-nb)

dressing-in-order

(ICCV'21) Official code of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee and Svetlana Lazebnik (by cuiaiyu)
               Thin-Plate-Spline-Motion-Model   dressing-in-order
Mentions       28                               1
Stars          3,289                            490
Growth         -                                -
Activity       1.9                              3.8
Latest Commit  3 months ago                     5 months ago
Language       Jupyter Notebook                 Jupyter Notebook
License        MIT License                      GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
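The exact weighting behind the activity number isn't published here, but "recent commits have higher weight" suggests a recency-weighted commit count. The sketch below is a hypothetical Python illustration, not the site's actual method: the exponential decay, the 30-day half-life, and the names `activity_score` and `half_life_days` are all assumptions made for the example.

    from datetime import datetime, timedelta, timezone

    def activity_score(commit_dates, half_life_days=30.0):
        """Recency-weighted commit count: a commit's weight halves for
        every `half_life_days` of age, so recent commits count more.
        (Hypothetical formula, for illustration only.)"""
        now = datetime.now(timezone.utc)
        score = 0.0
        for d in commit_dates:
            age_days = (now - d).total_seconds() / 86400.0
            score += 0.5 ** (age_days / half_life_days)
        return score

    # Example: commits 1, 40, and 200 days ago contribute roughly
    # 0.98, 0.40, and 0.01 respectively, so the newest one dominates.
    now = datetime.now(timezone.utc)
    commits = [now - timedelta(days=n) for n in (1, 40, 200)]
    print(round(activity_score(commits), 2))

Under a scheme like this, a project with a burst of recent commits scores higher than one with the same total number of commits spread over years, which matches the relative ranking described above.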

Thin-Plate-Spline-Motion-Model

Posts with mentions or reviews of Thin-Plate-Spline-Motion-Model. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-12.

dressing-in-order

Posts with mentions or reviews of dressing-in-order. We have used some of these posts to build our list of alternatives and similar projects.
  • Opinion for AR try-on application
    1 project | /r/augmentedreality | 16 Dec 2021
    Thank you for your reply. What I plan to do is in 2D space. I found something similar on GitHub: https://github.com/cuiaiyu/dressing-in-order. The idea is to make sure you can find the best combination of different clothes.

What are some alternatives?

When comparing Thin-Plate-Spline-Motion-Model and dressing-in-order you can also consider the following projects:

first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation

HugsVision - HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision

Wav2Lip - This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs

SpecVQGAN - Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021)

DFL-Colab - DeepFaceLab fork which provides IPython Notebook to use DFL with Google Colab

SadTalker - [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

articulated-animation - Code for the paper "Motion Representations for Articulated Animation"

stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI

CVPR2022-DaGAN - Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation

vid2vid - PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.

EasyMocap - Make human motion capture easier.

sd_dreambooth_extension