| | Wav2Lip | TTS |
|---|---|---|
| Mentions | 34 | 231 |
| Stars | 9,308 | 29,420 |
| Growth | - | 4.0% |
| Activity | 4.8 | 9.4 |
| Latest commit | 15 days ago | 9 days ago |
| Language | Python | Python |
| License | - | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Wav2Lip
-
Show HN: Sync (YC W22) – an API for fast and affordable lip-sync at scale
Hey HN, we’re sync. (https://synclabs.so/). We’re building fast + lightweight audio-visual models to create, modify, and understand humans in video.
You can learn more about us and our company in this video: https://bit.ly/3TV27rd
Our first api lets you lip-sync a person in a video to audio in any language, zero-shot. You can check out some examples here (https://bit.ly/3IT3UXk).
Here’s a demo showing how it works and how to sync your first video / audio: https://bit.ly/4ablRwo
Our playground + api is live, you can play with our models here: https://app.synclabs.so/
Four years ago we open-sourced Wav2Lip (https://github.com/Rudrabha/Wav2Lip), the first model to lip-sync anyone to any audio w/o having to train for each speaker. Even now, it's the most widely used lip-syncing model to date (almost 9k GitHub stars).
Human lip-sync enables interesting features for many products – you can use it to seamlessly translate videos from one language to another, create personalized ads / video messages to send to your customers, or clone yourself so you never have to record a piece of content again.
We’re excited about this area of research / the models we’re building because they can be impactful in many ways:
[1] we can dissolve language as a barrier
check out how we used it to dub the entire 2-hour Tucker Carlson interview with Putin speaking fluent English: https://vimeo.com/914605299
imagine millions gaining access to knowledge, entertainment, and connection — regardless of their native tongue.
realtime at the edge takes us further: live multilingual broadcasts + video calls, even walking around Tokyo w/ a Vision Pro 2 speaking English while everyone else speaks Japanese.
[2] we can move the human-computer interface beyond text-based-chat
keyboards / mice are lossy + low bandwidth. human communication is rich and goes beyond just the words we say. what if we could compute w/ a face-to-face interaction?
Many people get carried away w/ the fact LLMs can generate, but forget they can also read. The same is true for these audio/visual models — generation unlocks a portion of the use-cases, but understanding humans from video unlocks huge potential.
Embedding context around expressions + body language in inputs / outputs would help us interact w/ computers in a more human way.
[3] and more
powerful models small enough to run at the edge could unlock a lot:
eg.
-
Ideas to recreate audio
If you're technically inclined, you can use https://github.com/Rudrabha/Wav2Lip to sync the lip movements to the new audio.
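For reference, the Wav2Lip repository performs lip-syncing through a single inference script. A minimal invocation looks roughly like this — a sketch, assuming you have cloned the repo, installed its requirements, and downloaded a pretrained checkpoint; all file paths here are placeholders:

```shell
# Sketch: sync the face in source_video.mp4 to new_audio.wav.
# wav2lip_gan.pth is one of the pretrained checkpoints the repo
# links to; check the repo README for the exact download locations.
python inference.py \
  --checkpoint_path checkpoints/wav2lip_gan.pth \
  --face source_video.mp4 \
  --audio new_audio.wav \
  --outfile results/synced_video.mp4
```

The `--face` input can be a video or a still image; the script detects the face, regenerates the mouth region frame by frame to match the audio, and writes the result to `--outfile`.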
-
How to make deep fake lip sync using Wav2Lip
This is the Github link : https://github.com/Rudrabha/Wav2Lip
-
Dark Brandon going hard
Video mapping onto Audio: Now you have audio with coherent back-and-forth dialogue. To get the looped video puppets, you find a relatively stable interview clip (in this channel and many of Athene's other ones, the clips of the people just stay in one place). Then feed the audio + video clip into a lipsync algorithm like this: https://bhaasha.iiit.ac.in/lipsync/
- Is it possible to sync a lip and facial expression animation with audio in real time?
-
A little bedtime story by the AI nanny | Stable Diffusion + GPT = a match made in latent space
It's not animating really, just lip sync and face restoration, here I used: https://github.com/Rudrabha/Wav2Lip and https://github.com/TencentARC/GFPGAN respectively.
-
Elevenlabs voice clone and janky avatarify with wav2lip added.
I just used the web-based wav2lip demo: https://bhaasha.iiit.ac.in/lipsync/ Haven't used the plan in a while; however, the Colab gives much better results. This was just a quick, dusty example done entirely on the phone.
- retromash - The Tide is High / Thinking Out Loud (Blondie, Ed Sheeran)
-
Who knows how to create long-form & cheap AI avatar content? The three main platforms (Synthesia, Movio, & D-ID) all charge over $20 a month for ~ 15 minutes of content, but this TikTok user streamed for 90 hours… how did he pull that off?
https://github.com/Rudrabha/Wav2Lip Demo: https://youtu.be/0fXaDCZNOJc
- Video editing with AI
TTS
-
OpenAI deems its voice cloning tool too risky for general release
lol this marketing technique is getting very old. https://github.com/coqui-ai/TTS is already amazing and open source.
-
What things are happening in ML that we can't hear over the din of LLMs?
Not sure how relevant this is, but note that Coqui TTS (the realistic TTS) has already shut down.
https://coqui.ai
-
Base TTS (Amazon): The largest text-to-speech model to-date
I've used coqui.ai's TTS models[0] and library[1] to great success. I was able to get a cloned voice rendered in about 80% of the audio clip's length, and I believe you can also stream the response. Do note the model license for XTTS: it is one they wrote themselves and it carries some restrictions.
[0] https://huggingface.co/coqui/XTTS-v2
[1] https://github.com/coqui-ai/TTS
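For context, the Coqui TTS library referenced above exposes a small Python API for this kind of voice cloning. A minimal sketch with the XTTS-v2 model might look like the following — note that `speaker.wav` is a placeholder for your own reference clip, and the model weights (several GB) are downloaded on first use, so this is not a drop-in runnable snippet:

```python
# Sketch of voice cloning with Coqui TTS's XTTS-v2 model.
# Assumes the TTS package is installed (pip install TTS) and that
# you accept the XTTS model license on first download.
from TTS.api import TTS

# Load the multilingual XTTS-v2 model (downloads weights on first run).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice in speaker.wav and render the text to output.wav.
tts.tts_to_file(
    text="Hello, this is a cloned voice.",
    speaker_wav="speaker.wav",  # short reference recording of the target voice
    language="en",
    file_path="output.wav",
)
```

The same model name string appears in other comments below; the library also supports streaming synthesis for lower perceived latency.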
- FLaNK Stack Weekly 12 February 2024
- Coqui Is Shutting Down
-
Coqui.ai Is Shutting Down
My only exposure to Coqui was their text-to-speech software. If I remember correctly, the website was a commercialized service with TTS and probably some other related things. I hope the software work continues in the open.
https://github.com/coqui-ai/TTS
-
Hello guys, any selfhosted alternative to eleven labs?
Coqui.ai TTS (https://github.com/coqui-ai/TTS)
-
Demo of Anagnorisis - completely local recommendation system powered by Llama 2. Radio mode. Work in progress.
"tts_models/multilingual/multi-dataset/xtts_v2" model from https://github.com/coqui-ai/TTS. It gives pretty good results and works with reference audio, so it's pretty easy to change the voice. By the way, the source code of the project is open: https://github.com/volotat/Anagnorisis, but be ready, the code is pretty raw for now.
-
XTTS voice cloning with only a few seconds of audio
A recent update to their GitHub also has a no-code Gradio UI to facilitate fine-tuning and inference locally. https://github.com/coqui-ai/TTS/releases/tag/v0.21.3
-
At a loss trying to get coqui_tts extension to load
No API token found for 🐸Coqui Studio voices - https://coqui.ai
What are some alternatives?
stylegan2 - StyleGAN2 - Official TensorFlow Implementation
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
Thin-Plate-Spline-Motion-Model - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
first-order-model - This repository contains the source code for the paper First Order Motion Model for Image Animation
silero-models - Silero Models: pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple
chatgpt-raycast - ChatGPT raycast extension
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
DeepFaceLive - Real-time face swap for PC streaming or video calls
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
GFPGAN - GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
piper - A fast, local neural text to speech system