| | Awesome-Video-Diffusion | SpeechT5 |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 2,478 | 1,037 |
| Growth | 9.3% | 6.7% |
| Activity | 8.9 | 7.1 |
| Latest commit | 14 days ago | 20 days ago |
| Language | - | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
- [HELP] Speech2Speech translator with speaker voice preservation

Hey! I'm doing a somewhat similar project, but for TTS / voice cloning. This might not be directly relevant for you, but it could be one way to solve your problem. We based our project on SpeechT5, which is a multimodal setup that can take in audio or text and output audio or text. It uses speaker embeddings to handle multiple speakers, so you could use Meta's S2ST to translate the audio and then use this model to preserve the voice by doing audio-to-audio speech conversion. Here's a Hugging Face tutorial that covers speech conversion with SpeechT5: https://huggingface.co/blog/speecht5
- Nvidia Text2Video
- Foundation models for speech analysis/synthesis/modification
- [2210.03730] SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder Based Speech-Text Pre-training

The idea of separating text from speech is important. Models released today: https://github.com/microsoft/SpeechT5/tree/main/SpeechUT
What are some alternatives?
awesome-speech-recognition-speech-synthesis-papers - Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC)
PaddleSpeech - Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.
video-diffusion-pytorch - Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to video generation - in PyTorch
espnet - End-to-End Speech Processing Toolkit
storyteller - Multimodal AI Story Teller, built with Stable Diffusion, GPT, and neural text-to-speech
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
ReuseAndDiffuse - Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation
CLAP - Contrastive Language-Audio Pretraining
Gen-L-Video - The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".
AudioMAE - This repo hosts the code and models of "Masked Autoencoders that Listen".