Video-Motion-Customization Alternatives
Similar projects and alternatives to Video-Motion-Customization based on common topics and language
- TokenFlow: Official PyTorch implementation of "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" (ICLR 2024)
- storyteller: Multimodal AI story teller built with Stable Diffusion, GPT, and neural text-to-speech (by jaketae)
- LAMP: Official implementation of "LAMP: Learn a Motion Pattern by Few-Shot Tuning a Text-to-Image Diffusion Model" (few-shot text-to-video diffusion)
- Awesome-Video-Diffusion: A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
- Gen-L-Video: Official implementation of "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising"
Video-Motion-Customization reviews and mentions
- Code for video motion customization has been released!
- VMC: Video Motion Customization
Text-to-video diffusion models have advanced video generation significantly. However, customizing these models to generate videos with tailored motions presents a substantial challenge. Specifically, they encounter hurdles in (a) accurately reproducing motion from a target video, and (b) creating diverse visual variations. For example, straightforward extensions of static image customization methods to video often lead to intricate entanglements of appearance and motion data. To tackle this, we present the Video Motion Customization (VMC) framework, a novel one-shot tuning approach crafted to adapt the temporal attention layers within video diffusion models. Our approach introduces a novel motion distillation objective that uses residual vectors between consecutive frames as a motion reference. The diffusion process then preserves low-frequency motion trajectories while mitigating high-frequency motion-unrelated noise in image space. We validate our method against state-of-the-art video generative models across diverse real-world motions and contexts. Our code, data, and the project demo can be found at https://video-motion-customization.github.io/
Code: https://github.com/HyeonHo99/Video-Motion-Customization
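The abstract's motion distillation objective uses residual vectors between consecutive frames as the motion reference. A rough sketch of that core idea, in NumPy for illustration (the actual VMC implementation operates on diffusion latents in PyTorch; the function name and shapes here are assumptions, not the repository's API):

```python
import numpy as np

def frame_residuals(frames: np.ndarray) -> np.ndarray:
    """Compute residual vectors between consecutive frames.

    frames: array of shape (T, C, H, W) for a T-frame video (or latent video).
    Returns an array of shape (T-1, C, H, W): each entry is the difference
    between a frame and its predecessor, serving as a motion reference.
    """
    return frames[1:] - frames[:-1]

# Example: 8 "latent" frames with 4 channels at 32x32 resolution.
video = np.random.randn(8, 4, 32, 32)
residuals = frame_residuals(video)
print(residuals.shape)  # (7, 4, 32, 32)
```

Representing motion as frame-to-frame residuals (rather than raw frames) is what lets the tuning objective target motion while leaving per-frame appearance comparatively free to vary.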
Stats
HyeonHo99/Video-Motion-Customization is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of Video-Motion-Customization is Python.