Insanely-fast-whisper Alternatives
Similar projects and alternatives to insanely-fast-whisper
- dstack: an open-source orchestration engine for running AI workloads at scale in any cloud or data center. https://discord.gg/u8SmfwPpMd
- wordcab-transcribe: 💬 ASR FastAPI server using faster-whisper and Multi-Scale Auto-Tuning Spectral Clustering for diarization.
- cog-whisper-diarization: Cog implementation of a transcription + diarization pipeline with Whisper & Pyannote.
insanely-fast-whisper reviews and mentions
- Show HN: I Built an Open Source API with Insanely Fast Whisper and Fly GPUs
Hi HN! Since the launch of JigsawStack.com we've been trying to dive deeper into fully managed AI APIs built and fine-tuned for specific use cases. Audio/video transcription was one of the more basic ones, and we wanted the best open source model, which at this point is OpenAI's Whisper large v3, based on the number of languages it supports and its accuracy.

The thing is, the model is huge and requires tons of GPU power to run efficiently at scale. Even OpenAI doesn't provide an API for its best transcription model, offering only Whisper v2 at a pretty high price. I tried running the Whisper large v3 model on multiple cloud providers, from Modal.com and Replicate to Hugging Face's dedicated interface, and transcription took a long time: about ~30 mins for 150 mins of audio, and that doesn't include the machine startup time for on-demand GPUs. Keep in mind that at JigsawStack we aim to return any heavy computation in under 25s (or 2 mins for async cases) and any basic computation in under 2s.
While exploring Replicate, I came across this project https://github.com/Vaibhavs10/insanely-fast-whisper by Vaibhav Srivastav, which optimises the hell out of the Whisper large v3 model with a variety of techniques like batching and FlashAttention 2. This reduces computation time by almost 30x; check out the amazing repo for more stats! Open source wins again!!
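For a sense of what those optimisations look like in code, here is a minimal sketch using the Hugging Face Transformers ASR pipeline, assuming a CUDA GPU and the flash-attn package are installed (the model ID is real; the exact parameter values are illustrative, see the repo's README for its own invocation):

```python
import torch
from transformers import pipeline

# Whisper large v3 in half precision with FlashAttention 2 enabled.
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="cuda:0",
    model_kwargs={"attn_implementation": "flash_attention_2"},
)

# Long audio is split into 30s chunks that are transcribed in batches,
# which is where most of the speedup comes from.
outputs = pipe(
    "audio.mp3",
    chunk_length_s=30,
    batch_size=24,
    return_timestamps=True,
)
print(outputs["text"])
```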
First we tried Replicate's dedicated on-demand GPU service to run this model, but that didn't help: the cold startup/booting time of a GPU alone made the benefits of the optimised model pretty useless for our use case. We then tried Hugging Face and Modal.com and got the same results; with an A100 80GB GPU we were seeing an average of ~2 mins of startup time to load the machine and model image. It didn't make sense for us to keep an always-on GPU running due to the crazy high cost. At this point I was inches away from giving up.
The next day I got an email from Fly.io: "Congrats, Yoeven D Khemlani has GPU access!" I had totally forgotten that Fly started providing GPUs, and I'm a big fan of their infra reliability and ease of deployment. We also run a bunch of our GraphQL servers for JigsawStack on Fly's infra!
I quickly picked up some Python and Docker by referring to a bunch of other GitHub repos and Fly's GPU tutorials, then wrote the API layer around the optimised version of Whisper large v3 and deployed it on Fly's GPU machines.
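As a rough illustration of such an API layer (a hypothetical minimal sketch, not JigsawStack's actual code), a FastAPI server wrapping the pipeline from the earlier snippet might look like this:

```python
# Hypothetical minimal transcription API; not JigsawStack's implementation.
import tempfile

import torch
from fastapi import FastAPI, UploadFile
from transformers import pipeline

app = FastAPI()

# Load the model once at startup so each request only pays inference cost.
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="cuda:0",
    model_kwargs={"attn_implementation": "flash_attention_2"},
)

@app.post("/transcribe")
async def transcribe(file: UploadFile):
    # Persist the upload to disk; the pipeline accepts a file path.
    with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as tmp:
        tmp.write(await file.read())
        path = tmp.name
    result = pipe(path, chunk_length_s=30, batch_size=24, return_timestamps=True)
    return {"text": result["text"]}
```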
And wow, the results were pretty amazing: the startup time of the machine averaged ~20 seconds, compared to ~2 mins at the other providers, with all the performance benefits of the optimised Whisper. I've added some more stats in the GitHub repo. The more interesting thing to me is cost ↓
Based on 10 mins of audio:
- Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
There's a better parallel/batching approach that works on the 30s chunks, resulting in 40x. From HF, at https://github.com/Vaibhavs10/insanely-fast-whisper

This is again not native PyTorch, so there's still room for better RTFx (real-time factor) numbers.
- Insanely Fast Whisper: Transcribe 300 minutes of audio in less than 98 seconds
Founder of Replicate here. We open pull requests on models[0] to get them running on Replicate, so people can try out a demo of the model and run it with an API. They're also packaged with Cog[1] so you can run them as a Docker image.
Somebody happened to stumble across our fork of the model and submitted it. We didn't submit it, nor did we intend for it to be an ad. I hope the submission gets replaced with the upstream repo so the author gets full credit. :)
[0] https://github.com/Vaibhavs10/insanely-fast-whisper/pull/42
[1] https://github.com/replicate/cog
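For readers unfamiliar with Cog: it packages a model as a Docker image by pairing a small `cog.yaml` build file with a Python predictor. A minimal sketch of what such a predictor could look like for this model (hypothetical, not Replicate's actual packaging):

```python
# predict.py — hypothetical minimal Cog predictor for a Whisper model.
import torch
from cog import BasePredictor, Input, Path
from transformers import pipeline

class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts: load Whisper onto the GPU.
        self.pipe = pipeline(
            "automatic-speech-recognition",
            model="openai/whisper-large-v3",
            torch_dtype=torch.float16,
            device="cuda:0",
        )

    def predict(
        self,
        audio: Path = Input(description="Audio file to transcribe"),
    ) -> str:
        # Chunked, batched transcription of the uploaded audio file.
        result = self.pipe(str(audio), chunk_length_s=30, batch_size=24)
        return result["text"]
```

With that in place, `cog predict -i audio=@sample.mp3` builds the image and runs the model locally.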
Stats
Vaibhavs10/insanely-fast-whisper is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of insanely-fast-whisper is Jupyter Notebook.
Popular Comparisons
- insanely-fast-whisper VS whisperX
- insanely-fast-whisper VS whisper_streaming
- insanely-fast-whisper VS insanely-fast-whisper-api
- insanely-fast-whisper VS faster-whisper
- insanely-fast-whisper VS cog
- insanely-fast-whisper VS wordcab-transcribe
- insanely-fast-whisper VS mlx-examples