| | open_flamingo | speechbrain |
|---|---|---|
| Mentions | 4 | 26 |
| Stars | 3,493 | 7,948 |
| Growth | 2.8% | 3.2% |
| Activity | 6.8 | 9.8 |
| Last commit | 8 days ago | 8 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open_flamingo
- [D] Are there any multimodal AI models I can use to provide a paired text *and* image input, to then generate an expanded descriptive text output?

  Maybe the recent OpenFlamingo gives you better results (they have a demo on HF).
- [D] Multi modal for visual qna based on a given image. Need suggestions.
- Open Flamingo: An open-source framework for training large multimodal models
- Announcing OpenFlamingo: An open-source framework for training vision-language models with in-context learning | LAION

  Code here: https://github.com/mlfoundations/open_flamingo
speechbrain
- SpeechBrain 1.0: A free and open-source AI toolkit for all things speech
- FLaNK Stack Weekly 22 January 2024
- [D] Training ASR model using SpeechBrain

  You likely have a very broken sample in one of your batches. It looks like your training actually went through a few batches before it threw the error at you. A quick Google search shows a similar issue in the GitHub repo: https://github.com/speechbrain/speechbrain/issues/649
- Whisper.cpp

  https://github.com/ggerganov/whisper.cpp https://speechbrain.github.io/
- [D] What is the best open source text to speech model?

  I don't know if it's the best, but SpeechBrain is supposed to be state of the art.
- [D] What's stopping you from working on speech and voice?

  https://github.com/speechbrain/speechbrain
- Specific Voice recognition
- How to get high-quality, low-cost Speech-to-Text transcription?
- [D] Speech Enhancement SOTA
- Speaker diarization
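The SpeechBrain thread above traces a mid-training crash to a single broken sample in a batch. As a rough pre-flight check before launching training, the dataset can be scanned for WAV files that are unreadable or suspiciously short. This is a standalone sketch using only the Python standard library, not a SpeechBrain API; `find_broken_wavs` and its duration threshold are hypothetical:

```python
import wave
from pathlib import Path

def find_broken_wavs(folder, min_seconds=0.1):
    """Return paths of WAV files that fail to parse or are shorter
    than min_seconds (either condition can crash a training batch)."""
    broken = []
    for path in sorted(Path(folder).glob("*.wav")):
        try:
            with wave.open(str(path), "rb") as w:
                duration = w.getnframes() / w.getframerate()
            if duration < min_seconds:
                broken.append(path)
        except (wave.Error, EOFError):
            # file is truncated, empty, or not a RIFF/WAV at all
            broken.append(path)
    return broken
```

Any files the scan flags can then be dropped from the CSV/JSON manifest before it is handed to the training recipe, rather than discovered via a mid-epoch stack trace.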
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
espnet - End-to-End Speech Processing Toolkit
Emu - Emu Series: Generative Multimodal Models from BAAI
pyannote-audio - Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
pykale - Knowledge-Aware machine LEarning (KALE): accessible machine learning from multiple sources for interdisciplinary research, part of the 🔥PyTorch ecosystem. ⭐ Star to support our work!
Resemblyzer - A python package to analyze and compare voices with deep learning
icl-ceil - [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning".
ukrainian-onnx-model - An ONNX model for speech recognition of the Ukrainian language
SincNet - SincNet is a neural architecture for efficiently processing raw audio samples.
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
speech-to-text-benchmark - speech to text benchmark framework
Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.