ubisoft-laforge-daft-exprt vs EmotiVoice

| | ubisoft-laforge-daft-exprt | EmotiVoice |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 114 | 6,369 |
| Growth | 0.0% | - |
| Activity | 0.0 | 8.9 |
| Last commit | about 1 year ago | 3 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ubisoft-laforge-daft-exprt
-
Using deepfake voice programs for development - Possible/practical?
Ubisoft has their Daft-Exprt project on GitHub that does a tolerable job of prosody/tone transfer, which is pretty much necessary to naturalize the output if you're building a cloning pipeline that isn't using a service's packaged voices. Without this I wouldn't even consider an AI speech pipeline, given how tightly constrained the range of tone is even with something like Replica Studios' actor voices.
-
Using AI voices or Sound Fonts (e.g. Undertale or Animal Crossing)
Ubisoft has some stuff that naturalizes speech pretty well via prosody transfer: https://github.com/ubisoft/ubisoft-laforge-daft-exprt
-
Anyone have experience with AI voices?
Prosody transfer (I use https://github.com/ubisoft/ubisoft-laforge-daft-exprt) uses an example speech segment to change the timing, intonation, and other properties of a different segment of speech, such as taking evenly paced ML-generated speech and turning it into Captain Kirk's iambic pentameter.
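Daft-Exprt ships as research code with its own training and synthesis scripts, so rather than guess at its API, here is a minimal, hedged sketch (plain librosa, with a hypothetical reference.wav path) of the kind of reference-prosody features, namely pitch contour, energy, and pacing, that such a prosody-transfer model conditions generation on; it is not the project's actual interface.

```python
# Sketch only: extracts the reference-prosody features (pitch, energy, pacing)
# that prosody-transfer models such as Daft-Exprt condition synthesis on.
# "reference.wav" is a hypothetical local file.
import librosa
import numpy as np

y, sr = librosa.load("reference.wav", sr=22050)

# Fundamental frequency contour (intonation) via probabilistic YIN.
f0, voiced_flag, _ = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
    sr=sr,
)

# Frame-level energy (loudness contour).
energy = librosa.feature.rms(y=y)[0]

# Rough pacing proxy: fraction of the clip that is voiced.
total_s = len(y) / sr
hop_s = 512 / sr  # pyin's default hop_length is 512 samples
voiced_fraction = voiced_flag.sum() * hop_s / total_s

print(f"mean F0 (intonation): {np.nanmean(f0):.1f} Hz")
print(f"mean frame energy: {energy.mean():.4f}")
print(f"voiced fraction (pacing proxy): {voiced_fraction:.2f}")
```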
EmotiVoice
- FLaNK Stack Weekly 12 February 2024
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
Interested to see how it performs for Mandarin Chinese speech synthesis, especially with prosody and emotion. The highest quality open source model I've seen so far is EmotiVoice[0], which I've made a CLI wrapper around to generate audio for flashcards.[1] For EmotiVoice, you can apparently also clone your own voice with a GPU, but I have not tested this.[2]
[0] https://github.com/netease-youdao/EmotiVoice
[1] https://github.com/siraben/emotivoice-cli
[2] https://github.com/netease-youdao/EmotiVoice/wiki/Voice-Clon...
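For a flashcard-style batch workflow like the one described above, one hedged way to script it is against EmotiVoice's OpenAI-compatible TTS server. The sketch below is an assumption-laden illustration: the host, port, endpoint path, voice id, and payload fields are guesses to verify against the EmotiVoice README and wiki, not a confirmed schema.

```python
# Hedged sketch only: the endpoint path, port, model/voice values, and fields
# below are assumptions to verify against EmotiVoice's own documentation.
import requests

resp = requests.post(
    "http://localhost:8000/v1/audio/speech",  # assumed local EmotiVoice server
    json={
        "model": "emoti-voice",      # placeholder model name
        "input": "你好，世界",        # "Hello, world" - Mandarin text for a flashcard
        "voice": "8051",             # assumed speaker id
        "response_format": "mp3",
    },
    timeout=60,
)
resp.raise_for_status()

with open("card_audio.mp3", "wb") as f:
    f.write(resp.content)
```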
-
Microsoft releases Windows AI studio to run and fine tune models locally
Interesting. I'll have to check to be sure, but I think something may be happening automagically if you have reasonably up-to-date Nvidia drivers on the host OS, because I was able to run the EmotiVoice TTS Docker image (which requires an Nvidia GPU) from WSL2.
https://github.com/netease-youdao/EmotiVoice
- FLaNK Stack Weekly for 13 November 2023
- EmotiVoice: A Multi-Voice and Prompt-Controlled TTS Engine
What are some alternatives?
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
Cgml - GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.
hifi-gan - HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
STYLER - Official repository of STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech, INTERSPEECH 2021
draw-a-ui - Draw a mockup and generate html for it
vits - VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
MockingBird - Clone a voice in 5 seconds to generate arbitrary speech in real-time
Parallel-Tacotron2 - PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
lhotse - Tools for handling speech data in machine learning projects.
voice100 - Voice100 includes neural TTS/ASR models. Inference of Voice100 is low cost as its models are tiny and only depend on CNN without autoregression.
clipea - Like Clippy but for the CLI. A blazing fast AI helper for your command line