| | frogbase | Whisper |
|---|---|---|
| Mentions | 14 | 32 |
| Stars | 754 | 7,282 |
| Growth | - | - |
| Activity | 4.3 | 6.5 |
| Latest commit | 7 months ago | 7 months ago |
| Language | Python | C++ |
| License | MIT License | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
frogbase
-
For people who tried whisper
If you’re looking to use a local deployment and are comfortable with Python projects, this one was fairly easy to set up: https://github.com/hayabhay/whisper-ui
-
I have a two-step process for taking notes on transcripts that I'd like to share, but am also looking for feedback for the final step
I recommend using ChatGPT to learn some very basic Python/programming, though. I like this one: https://github.com/hayabhay/whisper-ui, which uses Streamlit for an easy-to-use UI for bulk transcriptions.
-
(Preferably) Self Hosted Podcasts with searchable transcripts
This tool covers 2 of the 4 items you mentioned: https://github.com/hayabhay/whisper-ui
-
Whisper's AI Modular Future
What utilities related to Whisper do you wish existed? What have you had to build yourself?
On the end-user application side, I wish there was something that let me pick a podcast of my choosing, get it fully transcribed, and get embeddings search plus Q&A on top of that podcast or set of chosen podcasts. I've seen ones for specific podcasts, but I'd like one where I can choose the podcast. (Probably won't build it)
Also on the end user side, I wish there was an Otter alternative (still paid $30/mo, but unlimited minutes per month) that had longer transcription limits. (Started building this, not much interest from users though)
Things I've seen on the dev tool side:
Gladia (API call version of Whisper)
Whisper.cpp
Whisper webservice (https://github.com/ahmetoner/whisper-asr-webservice) - via this thread
Live microphone demo (not real time, it still does it in chunks) https://github.com/mallorbc/whisper_mic
Streamlit UI https://github.com/hayabhay/whisper-ui
Whisper playground https://github.com/saharmor/whisper-playground
Real time whisper https://github.com/shirayu/whispering
Whisper as a service https://github.com/schibsted/WAAS
Improved timestamps and speaker identification https://github.com/m-bain/whisperX
MacWhisper https://goodsnooze.gumroad.com/l/macwhisper
Crossplatform desktop Whisper that supports semi-realtime https://github.com/chidiwilliams/buzz
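The embeddings-search idea mentioned above can be sketched in a few lines. This is a toy illustration, not any of the listed projects: the `embed` function here is a bag-of-words stand-in for a real sentence-embedding model, and the segment data mimics the timestamped output Whisper produces.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(segments, query, top_k=3):
    """Rank transcript segments by similarity to the query, best first."""
    q = embed(query)
    scored = [(cosine(embed(seg["text"]), q), seg) for seg in segments]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [seg for score, seg in scored[:top_k] if score > 0]

# Hypothetical Whisper output: segments with timestamps and text.
segments = [
    {"start": 0.0, "end": 4.2, "text": "Welcome to the show"},
    {"start": 4.2, "end": 9.8, "text": "Today we discuss GPU inference for speech recognition"},
    {"start": 9.8, "end": 15.1, "text": "Sponsors make this podcast possible"},
]
hits = search(segments, "speech recognition on GPUs")
```

The timestamps carried alongside each hit are what make this useful for podcasts: a match can jump straight to the moment in the audio.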
-
[P] Whisper-UI Update: You can now bulk-transcribe, save & search transcriptions with Streamlit & SQLAlchemy 2.0 [details in the comments]
Github Repo: https://github.com/hayabhay/whisper-ui
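The save-and-search workflow described in the title can be sketched with the stdlib alone. The project itself uses SQLAlchemy 2.0 and Streamlit; this illustration swaps in `sqlite3` with an FTS5 full-text index, and the schema and file names are made up for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# FTS5 virtual table: full-text index over saved transcript segments.
con.execute("CREATE VIRTUAL TABLE segments USING fts5(media, start, text)")
con.executemany(
    "INSERT INTO segments VALUES (?, ?, ?)",
    [
        ("episode1.mp3", "00:00:05", "we talk about self-hosting a podcast"),
        ("episode1.mp3", "00:12:40", "whisper runs locally on the GPU"),
        ("episode2.mp3", "00:03:10", "searching transcripts with full-text search"),
    ],
)
# MATCH searches all indexed columns; rank orders results by relevance.
rows = con.execute(
    "SELECT media, start FROM segments WHERE segments MATCH ? ORDER BY rank",
    ("whisper",),
).fetchall()
```

Keeping one row per segment (rather than one per file) is what lets a search return the timestamp where a phrase was spoken.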
-
Self-host Whisper As a Service with GUI and queueing. Schibsted created a transcription service for our journalists to transcribe audio interviews and podcasts really quickly.
People may also like this tool which is a bit more about searching the contents. https://github.com/hayabhay/whisper-ui
- Show HN: Self-host Whisper As a Service with GUI and queueing
-
Audio equivalent of paperless?
There is Whisper UI to create metadata (speech-to-text).
- Whisper-UI: You can now bulk-transcribe, save & search transcriptions from YouTube with OpenAI's Whisper, Streamlit & SQLAlchemy 2.0
- Whisper-UI Update: You can now bulk-transcribe, save & search transcriptions with Streamlit & SQLAlchemy 2.0
Whisper
-
Nvidia Speech and Translation AI Models Set Records for Speed and Accuracy
I've been using WhisperDesktop ( https://github.com/Const-me/Whisper ) with great success on a 3090 for fast & accurate transcription of often poor quality euro-english hours long multispeaker audio files. If there's an easy way to compare I'm certainly going to give this a try.
-
AMD's CDNA 3 Compute Architecture
Why would you want OpenCL? Pretty sure D3D11 compute shaders gonna be adequate for a Torch backend, and they even work on Linux with Wine: https://github.com/Const-me/Whisper/issues/42 Native Vulkan compute shaders would be even better.
Why would you want unified address space? At least in my experience, it’s often too slow to be useful. DMA transfers (CopyResource in D3D11, copy command queue in D3D12, transfer queue in VK) are implemented by dedicated hardware inside GPUs, and are way more efficient.
-
Amazon Bedrock Is Now Generally Available
https://github.com/ggerganov/whisper.cpp
https://github.com/Const-me/Whisper
I had fun with both of these. They will both do realtime transcription, but you will have to download the model files first…
-
Why Nvidia Keeps Winning: The Rise of an AI Giant
Gamers don’t care about FP64 performance, and it seems nVidia is using that for market segmentation. The FP64 performance for RTX 4090 is 1.142 TFlops, for RTX 3090 Ti 0.524 TFlops. AMD doesn’t do that, FP64 performance is consistently better there, and have been this way for quite a few years. For example, the figure for 3090 Ti (a $2000 card from 2022) is similar to Radeon RX Vega 56, a $400 card from 2017 which can do 0.518 TFlops.
And another thing: nVidia forbids usage of GeForce cards in data centers, while AMD allows that. I don’t know how specifically they define datacenter, whether it’s enforceable, or whether it’s tested in courts of various jurisdictions. I just don’t want to find out answers to these questions at the legal expenses of my employer. I believe they would prefer to not cut corners like that.
I think nVidia only beats AMD due to the ecosystem: for GPGPU that’s CUDA (and especially the included first-party libraries like BLAS, FFT, DNN and others), also due to the support in popular libraries like TensorFlow. However, it’s not that hard to ignore the ecosystem, and instead write some compute shaders in HLSL. Here’s a non-trivial open-source project unrelated to CAE, where I managed to do just that with decent results: https://github.com/Const-me/Whisper That software even works on Linux, probably due to Valve’s work on DXVK 2.0 (a compatibility layer which implements D3D11 on top of Vulkan).
-
Ask HN: What is your recommended speech to text/audio transcription tool?
Currently, I use a GUI for Whisper AI (https://github.com/Const-me/Whisper) to upload MP3s of interviews to get text transcripts. However, I'm hoping to find another tool that would recognize and split out the text per speaker.
Does such a thing exist?
- From audio to text, any advice?
-
Ask HN: Any recommendations for cheap, high-quality transcription software
I just used Whisper over the weekend to transcribe 5 hours of meeting, worked nicely and it can be run on a single GPU locally. https://github.com/ggerganov/whisper.cpp
There are a few wrappers available with GUI like https://github.com/Const-me/Whisper
- Voice recognition software for German
- Const-me/Whisper: High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
- I built a massive search engine to find video clips by spoken text
What are some alternatives?
whisper.cpp - Port of OpenAI's Whisper model in C/C++
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
transcribe-anything - Input a local file or url and this service will transcribe it using Whisper AI. Completely private and Free 🤯🤯🤯
TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
just-an-email - App to share files & texts between your devices without installing anything
whisper-playground - Build real time speech2text web apps using OpenAI's Whisper https://openai.com/blog/whisper/
ggml - Tensor library for machine learning
beaker - An experimental peer-to-peer Web browser