| | seamless_communication | dragonfly |
|---|---|---|
| Mentions | 11 | 17 |
| Stars | 10,423 | 374 |
| Growth | 1.9% | 0.3% |
| Activity | 8.6 | 8.1 |
| Latest commit | 13 days ago | 5 days ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seamless_communication
- FLaNK Stack for 04 December 2023
- This week in AI - all the Major AI developments in a nutshell
Meta AI introduced a suite of AI language translation models that preserve expression and improve streaming [Details | GitHub]:
- SeamlessExpressive enables the transfer of tone, emotional expression and vocal style in speech translation. You can try a demo of SeamlessExpressive using your own voice as input here.
- SeamlessStreaming is a new model that enables streaming speech-to-speech and speech-to-text translation with <2 seconds of latency and nearly the same accuracy as an offline model. In contrast to conventional systems, which translate once the speaker has finished their sentence, SeamlessStreaming translates while the speaker is still talking, intelligently deciding when it has enough context to output the next translated segment.
- SeamlessM4T v2 is a foundational multilingual and multitask model for both speech and text. It is the successor to SeamlessM4T, with performance improvements across ASR, speech-to-speech, speech-to-text and text-to-speech tasks.
- Seamless is a model that merges the capabilities of SeamlessExpressive, SeamlessStreaming and SeamlessM4T v2 into one.
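All four models ship in the seamless_communication repo. Here is a minimal sketch of its inference API based on the repo's README at the time; the model and vocoder names, and the exact predict() signature, are assumptions that may have changed across releases:

```python
import torch
from seamless_communication.inference import Translator

# Model and vocoder names taken from the repo README (may differ in newer releases).
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dtype = torch.float16 if device.type == "cuda" else torch.float32
translator = Translator("seamlessM4T_v2_large", "vocoder_v2", device, dtype)

# Speech-to-speech translation: English audio in, French text + audio out.
# "input_en.wav" is a hypothetical file path.
text_output, speech_output = translator.predict(
    input="input_en.wav",
    task_str="S2ST",
    tgt_lang="fra",
)
print(text_output)  # the intermediate translated text
```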
- Seamless: Meta's New Speech Models
The license details are listed on the project's GitHub:
https://github.com/facebookresearch/seamless_communication#l...
- Open Source Libraries
facebookresearch/seamless_communication: Speech translation
- FLaNK Stack Weekly 28 August 2023
- Meta: Code Llama, an AI Tool for Coding
I wish that Meta would release models like SeamlessM4T[0] under the same license as Llama, or an even better one.
There seem to be opportunities for people to use technology like this to improve lives, if it were licensed correctly, but I don't see how any commercial offering would compete with anything that Meta does.
Whisper is licensed more permissively and does a great job with speech-to-text in some languages, and it can translate, but only into English; it can't translate between a large number of languages, and it doesn't have any kind of text-to-speech or speech-to-speech capabilities.
[0]: https://github.com/facebookresearch/seamless_communication
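For context on the Whisper comparison above: the openai-whisper package exposes its one-directional translation as a task flag. A minimal sketch (the audio file name is hypothetical):

```python
import whisper

model = whisper.load_model("small")
# task="translate" renders non-English speech as English text;
# there is no built-in path to any other target language.
result = model.transcribe("interview_de.wav", task="translate")
print(result["text"])
```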
- Meta introduces SeamlessM4T, a foundational multimodal model that seamlessly translates and transcribes across speech and text for up to 100 languages
What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages? While recent breakthroughs in text-based models have pushed machine translation coverage beyond 200 languages, unified speech-to-speech translation models have yet to achieve similar strides. More specifically, conventional speech-to-speech translation systems rely on cascaded systems composed of multiple subsystems performing translation progressively, putting scalable and high-performing unified speech translation systems out of reach. To address these gaps, we introduce SeamlessM4T—Massively Multilingual & Multimodal Machine Translation—a single model that supports speech-to-speech translation, speech-to-text translation, text-to-speech translation, text-to-text translation, and automatic speech recognition for up to 100 languages. To build this, we used 1 million hours of open speech audio data to learn self-supervised speech representations with w2v-BERT 2.0. Subsequently, we created a multimodal corpus of automatically aligned speech translations, dubbed SeamlessAlign. Filtered and combined with human-labeled and pseudo-labeled data (totaling 406,000 hours), we developed the first multilingual system capable of translating from and into English for both speech and text. On Fleurs, SeamlessM4T sets a new standard for translations into multiple target languages, achieving an improvement of 20% BLEU over the previous state-of-the-art in direct speech-to-text translation. Compared to strong cascaded models, SeamlessM4T improves the quality of into-English translation by 1.3 BLEU points in speech-to-text and by 2.6 ASR-BLEU points in speech-to-speech. On CVSS and compared to a 2-stage cascaded model for speech-to-speech translation, SeamlessM4T-Large's performance is stronger by 58%. Preliminary human evaluations of speech-to-text translation outputs evinced similarly impressive results; for translations from English, XSTS scores for 24 evaluated languages are consistently above 4 (out of 5). For into-English directions, we see significant improvement over Whisper-Large-v2's baseline for 7 out of 24 languages. To further evaluate our system, we developed Blaser 2.0, which enables evaluation across speech and text with similar accuracy compared to its predecessor when it comes to quality estimation. Tested for robustness, our system performs better against background noises and speaker variations in speech-to-text tasks (average improvements of 38% and 49%, respectively) compared to the current state-of-the-art model. Critically, we evaluated SeamlessM4T on gender bias and added toxicity to assess translation safety. Compared to the state-of-the-art, we report up to 63% of reduction in added toxicity in our translation outputs. Finally, all contributions in this work—including models, inference code, finetuning recipes backed by our improved modeling toolkit Fairseq2, and metadata to recreate the unfiltered 470,000 hours of SeamlessAlign—are open-sourced and accessible at https://github.com/facebookresearch/seamless_communication
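For readers unfamiliar with the metric, corpus-level BLEU scores like those quoted above are conventionally computed with a tool such as sacrebleu. A toy illustration (the sentences are invented, not from the paper):

```python
import sacrebleu

# One hypothesis translation and one reference stream.
hypotheses = ["The cat sat on the mat."]
references = [["The cat is sitting on the mat."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")  # higher is better; 100 = exact match
```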
- Seamless Communication – new translation (text, speech) model from Facebook
- Meta Releases SeamlessM4T, a Multimodal AI Model for Speech and Text Translation
281M and 235M param models too.
https://github.com/facebookresearch/seamless_communication/b...
I don't really know how the metrics they list compare to Whisper's. I'm very curious whether these models are fast enough for realtime speech-to-text. I think Whisper technically could do it, but it was difficult to pull off, or something like that?
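One common workaround for near-realtime use is to feed Whisper short rolling chunks rather than whole recordings. A rough sketch, not necessarily the approach any of the linked projects use; the chunk length, model size and sounddevice capture are all assumptions:

```python
import sounddevice as sd
import whisper

model = whisper.load_model("base.en")  # smaller models are faster
SAMPLE_RATE = 16_000                   # Whisper expects 16 kHz mono
CHUNK_SECONDS = 5

while True:
    # Record a short chunk from the default microphone, then transcribe it.
    audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()
    result = model.transcribe(audio.flatten(), fp16=False)
    print(result["text"])
```

This trades accuracy for latency: words straddling chunk boundaries can be lost, which is why projects like openai-whisper-realtime (listed under alternatives below) experiment with overlapping windows.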
- SeamlessM4T: All-in-one multimodal translation model
code: https://github.com/facebookresearch/seamless_communication
paper: https://ai.meta.com/research/publications/seamless-m4t/
demo: https://seamless.metademolab.com/
dragonfly
- Ways to make gaming less painful?
- Seamless: Meta's New Speech Models
https://github.com/dictation-toolbox/dragonfly
- Ask HN: How do you get started with adding voice commands to a computer system?
https://github.com/dictation-toolbox/dragonfly
https://github.com/daanzu/kaldi-active-grammar
- If you're interested in eye-tracking, I'm interested in funding you
As someone who suffered some severe mobility impairment a few years ago and relied extensively on eye tracking for just over a year, https://precisiongazemouse.org/ (Windows) and https://talonvoice.com/ (multiplatform) are great. In my experience the hardware is already surprisingly good, in that you get accuracy to within an inch or half an inch depending on your training. Rather, it's all about the UX wrapped around it, as a few other comments have raised.
IMO Talon wins* for that by supporting voice recognition and mouth noises (think lip popping), which are less fatiguing than one-eye blinks for common actions like clicking. The creator is active here sometimes.
(* An alternative is to roll your own sort of thing with https://github.com/dictation-toolbox/dragonfly and other tools as I did, but it's a lot more effort)
- Ask HN: Would you recommend OpenAI Whisper for Speech to text?
I've experimented with Whisper. I don't know of a way to do commands without parsing dictation. Bottom line: to my knowledge, the model has to be passed 30 seconds of audio, so if your utterance is 5 seconds, you'll need 25 seconds of silence as padding.
Depending on the platform you're targeting, Dragonfly may be a better fit for commands:
https://github.com/dictation-toolbox/dragonfly
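The 30-second window is visible in Whisper's own API: the openai-whisper package pads or trims every input to exactly 30 s before the encoder sees it. A minimal sketch (the file name is hypothetical):

```python
import whisper

model = whisper.load_model("base")
audio = whisper.load_audio("utterance.wav")  # e.g. a 5-second clip
audio = whisper.pad_or_trim(audio)           # padded with silence to 30 s
mel = whisper.log_mel_spectrogram(audio).to(model.device)
result = whisper.decode(model, mel, whisper.DecodingOptions(fp16=False))
print(result.text)
```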
- Software I’m Thankful For
- Whisper – open source speech recognition by OpenAI
- Found out I have an enchondroma tumour in my hand & it's impacting my typing abilities
What, you don't have years of experience typing one-handed? Oh well, you'll become an expert now. I've seen this tool used to program Python with Dragon NaturallySpeaking; maybe give it a go... https://github.com/dictation-toolbox/dragonfly
- Ask HN: Anyone voice code? I had a stroke and can't use my left side
I have been coding entirely by voice for approximately 10 years now (by hand long before that). Most of that time I have been using the Dragonfly (https://github.com/dictation-toolbox/dragonfly) library to construct my own customized voice coding system. The library is highly flexible and open source, allowing you to easily customize everything to suit what you need to be productive. It is perhaps the power user analogue to Dragon Naturally Speaking. With it, you can certainly be highly productive coding by voice. In fact, I develop kaldi-active-grammar (https://github.com/daanzu/kaldi-active-grammar), a free and open source speech recognition backend usable by Dragonfly, itself entirely by voice. There's also a community of voice coders using Dragonfly and other tools that build on top of it, such as Caster (https://github.com/dictation-toolbox/Caster).
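For a flavour of what such a customized system looks like, here is a minimal sketch of a Dragonfly command grammar. The phrases and keystrokes are invented examples, and in practice grammars usually live in modules picked up by a loader script for your speech engine (Kaldi, Dragon, WSR):

```python
from dragonfly import Grammar, Key, MappingRule, Text

class VoiceCodingRule(MappingRule):
    # Spoken phrase -> emulated keystrokes.
    mapping = {
        "new function": Text("def ():") + Key("left:3"),
        "print that": Text("print()") + Key("left"),
        "save file": Key("c-s"),
    }

grammar = Grammar("voice coding")
grammar.add_rule(VoiceCodingRule())
grammar.load()  # the running engine now listens for these phrases
```

Caster, linked above, builds a full programming command set out of primitives like these.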
- Ask HN: Who Wants to Collaborate?
What are some alternatives?
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
community - Voice command set for Talon, community-supported.
lmdeploy - LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
kaldi-active-grammar - Python Kaldi speech recognition with grammars that can be set active/inactive dynamically at decode-time
supervision - We write your reusable computer vision tools. 💜
Caster - Dragonfly-Based Voice Programming and Accessibility Toolkit
ai-town - An MIT-licensed, deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize.
Diverse-Stardew-Valley
aider - aider is AI pair programming in your terminal
crkbd - Corne keyboard, a split keyboard with 3x6 column staggered keys and 3 thumb keys.
llama-gpt - A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
openai-whisper-realtime - A quick experiment to achieve almost realtime transcription using Whisper.