Whisper vs whisper
| | Whisper | whisper |
|---|---|---|
| Mentions | 32 | 343 |
| Stars | 7,126 | 60,303 |
| Growth | - | 5.9% |
| Activity | 6.5 | 6.4 |
| Latest commit | 6 months ago | 16 days ago |
| Language | C++ | Python |
| License | Mozilla Public License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Whisper
-
Nvidia Speech and Translation AI Models Set Records for Speed and Accuracy
I've been using WhisperDesktop ( https://github.com/Const-me/Whisper ) with great success on a 3090 for fast & accurate transcription of often poor-quality, hours-long, multi-speaker Euro-English audio files. If there's an easy way to compare, I'm certainly going to give this a try.
-
AMD's CDNA 3 Compute Architecture
Why would you want OpenCL? Pretty sure D3D11 compute shaders are going to be adequate for a Torch backend, and they even work on Linux with Wine: https://github.com/Const-me/Whisper/issues/42 Native Vulkan compute shaders would be even better.
Why would you want unified address space? At least in my experience, it’s often too slow to be useful. DMA transfers (CopyResource in D3D11, copy command queue in D3D12, transfer queue in VK) are implemented by dedicated hardware inside GPUs, and are way more efficient.
-
Amazon Bedrock Is Now Generally Available
https://github.com/ggerganov/whisper.cpp
https://github.com/Const-me/Whisper
I had fun with both of these. They will both do realtime transcription. But you will have to download the model weights…
-
Why Nvidia Keeps Winning: The Rise of an AI Giant
Gamers don’t care about FP64 performance, and it seems nVidia is using that for market segmentation. The FP64 performance for RTX 4090 is 1.142 TFlops, for RTX 3090 Ti 0.524 TFlops. AMD doesn’t do that, FP64 performance is consistently better there, and it has been this way for quite a few years. For example, the figure for 3090 Ti (a $2000 card from 2022) is similar to Radeon RX Vega 56, a $400 card from 2017 which can do 0.518 TFlops.
And another thing: nVidia forbids usage of GeForce cards in data centers, while AMD allows that. I don’t know how specifically they define datacenter, whether it’s enforceable, or whether it’s been tested in courts of various jurisdictions. I just don’t want to find out the answers to these questions at the legal expense of my employer. I believe they would prefer not to cut corners like that.
I think nVidia only beats AMD due to the ecosystem: for GPGPU that’s CUDA (and especially the included first-party libraries like BLAS, FFT, DNN and others), also due to the support in popular libraries like TensorFlow. However, it’s not that hard to ignore the ecosystem, and instead write some compute shaders in HLSL. Here’s a non-trivial open-source project unrelated to CAE, where I managed to do just that with decent results: https://github.com/Const-me/Whisper That software even works on Linux, probably due to Valve’s work on DXVK 2.0 (a compatibility layer which implements D3D11 on top of Vulkan).
-
Ask HN: What is your recommended speech to text/audio transcription tool?
Currently, I use a GUI for Whisper AI (https://github.com/Const-me/Whisper) to upload MP3s of interviews to get text transcripts. However, I'm hoping to find another tool that would recognize and split out the text per speaker.
Does such a thing exist?
- From audio to text, any advice?
-
Ask HN: Any recommendations for cheap, high-quality transcription software
I just used Whisper over the weekend to transcribe 5 hours of meeting, worked nicely and it can be run on a single GPU locally. https://github.com/ggerganov/whisper.cpp
There are a few wrappers available with GUI like https://github.com/Const-me/Whisper
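For running Whisper locally as the comments above describe, a minimal sketch using the `openai-whisper` pip package is below. The VRAM table is approximate figures from the model table in the openai/whisper README, and `pick_model` is a hypothetical helper (not part of any library) for choosing the largest model that fits a given GPU.

```python
# Rough helper for picking a Whisper model size that fits in GPU memory.
# The VRAM figures are the approximate requirements listed in the
# openai/whisper README; treat them as ballpark numbers, not guarantees.
APPROX_VRAM_GB = {"tiny": 1, "base": 1, "small": 2, "medium": 5, "large": 10}

def pick_model(vram_gb: float) -> str:
    """Return the largest model whose approximate VRAM need fits."""
    fitting = [m for m, need in APPROX_VRAM_GB.items() if need <= vram_gb]
    if not fitting:
        raise ValueError("not enough VRAM for even the tiny model")
    return fitting[-1]  # dict preserves insertion order: smallest -> largest

def transcribe(path: str, vram_gb: float = 10) -> str:
    import whisper  # pip install openai-whisper; downloads weights on first use
    model = whisper.load_model(pick_model(vram_gb))
    return model.transcribe(path)["text"]

if __name__ == "__main__":
    # A 3090/4090-class card has enough memory for the large model.
    print(pick_model(24))
```

On a card with 24 GB of VRAM this selects `large`; on a 4 GB card it falls back to `small`.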
- Voice recognition software for German
- Const-me/Whisper: High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
- I built a massive search engine to find video clips by spoken text
whisper
-
Why I Care Deeply About Web Accessibility And You Should Too
Let’s not talk about local models as the hardware requirements are way beyond most of these people’s reach. I have a MacBook Air with an M2 chip and 8GB of RAM and can hardly run Whisper locally, so I use this HuggingFace space.
-
How I built NotesGPT – a full-stack AI voice note app
Last week, I launched notesGPT, a free and open source voice note app that has had 35,000 visitors, 7,000 users, and over 1,000 GitHub stars in the last week. It allows you to record a voice note, transcribes it using Whisper, and uses Mixtral via Together to extract action items and display them in an action items view. It’s also fully open source and comes equipped with authentication, storage, vector search, action items, and is fully responsive on mobile for ease of use.
-
Ask HN: Can AI break a speech audio into individual words?
I found a pretty good discussion on the topic here:
https://github.com/openai/whisper/discussions/1243
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
There is a plot of language performance on their repo: https://github.com/openai/whisper
I am not aware of a multi-lingual leaderboard for speech recognition models.
- Ask HN: AI that allows you to make phone calls in a language you don't speak?
-
Ask HN: Favorite Podcast Episodes of 2023?
I don't know how OP does it, but here's how I'd do it:
* Generate a transcript by running Whisper against the podcast audio file: https://github.com/openai/whisper
* Upload transcript to ChatGPT and ask it to summarize.
* Automate all the above.
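The steps above can be sketched as follows. This assumes the `openai-whisper` pip package for step one; the summarization call is deliberately left as a placeholder, since any LLM would do. The `chunk_text` helper is there because a long podcast transcript can exceed a model's context window, and it is a naive sentence-boundary splitter, not any library's API.

```python
# Sketch of the transcribe -> summarize pipeline described above.

def chunk_text(text: str, max_chars: int = 12000) -> list[str]:
    """Naively split a transcript into chunks under max_chars,
    breaking on sentence boundaries ('. ')."""
    chunks, current = [], ""
    for sentence in text.replace("\n", " ").split(". "):
        candidate = (current + ". " if current else "") + sentence
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def summarize_podcast(audio_path: str) -> list[str]:
    import whisper  # pip install openai-whisper
    text = whisper.load_model("base").transcribe(audio_path)["text"]
    # Each chunk would then be sent to ChatGPT (or any LLM) with a
    # summarization prompt; here we just build the prompts.
    return [f"Summarize this transcript excerpt:\n{c}" for c in chunk_text(text)]
```

Step three ("automate") is then just running this on a schedule over a podcast feed.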
-
Need advice
Ahh, that makes sense. I've been building something like that, but only from other languages into English using Whisper
-
Subtitle is now open-source
Whisper already generates subtitles[0], supporting VTT and SRT so this is just a thin wrapper around that.
[0]: https://github.com/openai/whisper/blob/e58f28804528831904c3b...
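To show how thin such a wrapper is: `model.transcribe()` returns a `segments` list of dicts with `start`, `end`, and `text` keys, and turning that into SRT is only a timestamp formatter away. This is an illustrative sketch; the `openai-whisper` package already ships equivalent writers, so you would normally not write this yourself.

```python
# Minimal sketch: turning Whisper's segment output into SRT text.
# `segments` mirrors the shape of result["segments"] from model.transcribe().

def srt_timestamp(seconds: float) -> str:
    """Format seconds as the SRT 'HH:MM:SS,mmm' timestamp."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments: list[dict]) -> str:
    lines = []
    for i, seg in enumerate(segments, start=1):
        lines.append(str(i))
        lines.append(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}")
        lines.append(seg["text"].strip())
        lines.append("")  # blank line between cues
    return "\n".join(lines)

print(to_srt([{"start": 0.0, "end": 2.5, "text": " Hello world."}]))
```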
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
> although it does require you to wear headphones so the bot doesn't hear itself and get interrupted.
Maybe you can rely on some sort of speaker identification to sort this out?
https://github.com/openai/whisper/discussions/264
-
Federated Finetuning of OpenAI's Whisper on Raspberry Pi 5
v3 only comes in one flavor: large.
I don’t think you’re going to have a good time running the large model on a Pi of any kind.
The large models are 32x slower than the tiny models, roughly.[0]
I’m seeing people report that the Pi 4 can transcribe 30 seconds of audio in somewhere between 30 seconds and 60 seconds with the tiny model.
You can do the math: at 32x, that’s 16 to 32 minutes to transcribe 30 seconds of audio with the large model. Not a good time for most people.
The Pi 5 could be 2x to 3x faster.
I should benchmark my Pi 4 sometime (or Pi 5, if it ever shows up).
[0]: https://github.com/openai/whisper/blob/main/README.md#availa...
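The back-of-envelope numbers in that comment check out: the ~32x figure is the relative-speed column in the Whisper README's model table, and the Pi 4 timings are the range reported above.

```python
# Back-of-envelope check of the estimate above: if a Pi 4 takes 30-60 s of
# wall time per 30 s of audio with the tiny model, and large is ~32x slower
# (per the Whisper README's relative-speed column), the large model needs:
AUDIO_SECONDS = 30
TINY_WALL_SECONDS = (30, 60)   # reported Pi 4 range with the tiny model
LARGE_SLOWDOWN = 32            # tiny vs. large, roughly

for wall in TINY_WALL_SECONDS:
    minutes = wall * LARGE_SLOWDOWN / 60
    print(f"{AUDIO_SECONDS} s of audio -> ~{minutes:.0f} min with the large model")
```

That gives roughly 16 and 32 minutes, matching the estimate in the comment.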
What are some alternatives?
whisper.cpp - Port of OpenAI's Whisper model in C/C++
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
just-an-email - App to share files & texts between your devices without installing anything
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
ggml - Tensor library for machine learning
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
beaker - An experimental peer-to-peer Web browser
cookwherever - Cook Wherever is an open source project that attempts to make cooking more accessible and engaging for everyone.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.