insanely-fast-whisper vs dstack

| | insanely-fast-whisper | dstack |
|---|---|---|
| Mentions | 6 | 17 |
| Stars | 6,527 | 1,123 |
| Growth | - | 6.2% |
| Activity | 8.9 | 9.8 |
| Latest commit | 3 days ago | about 22 hours ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
insanely-fast-whisper
-
Show HN: I Built an Open Source API with Insanely Fast Whisper and Fly GPUs
Hi HN! Since the launch of JigsawStack.com, we've been trying to dive deeper into fully managed AI APIs built and fine-tuned for specific use cases. Audio/video transcription is one of the more basic ones, and we wanted the best open-source model for it; at this point that's OpenAI's Whisper large v3, based on the number of languages it supports and its accuracy.
The thing is, the model is huge and requires tons of GPU power to run efficiently at scale. Even OpenAI doesn't provide an API for their best transcription model, offering only Whisper v2 at a pretty high price. I tried running the Whisper large v3 model on multiple cloud providers, from Modal.com and Replicate to Hugging Face's dedicated interface, and transcription took a long time: about ~30 mins for 150 mins of audio, and that doesn't include the machine startup time for on-demand GPUs. Keep in mind that at JigsawStack we aim to return any heavy computation in under 25s (or 2 mins for async cases) and any basic computation in under 2s.
While exploring Replicate, I came across this project, https://github.com/Vaibhavs10/insanely-fast-whisper by Vaibhav Srivastav, which optimises the hell out of the Whisper large v3 model with a variety of techniques like batching and FlashAttention 2. This reduces computation time by almost 30x; check out the amazing repo for more stats. Open source wins again!!
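For reference, those optimisations largely boil down to a few arguments on the Transformers ASR pipeline. Here's a minimal sketch along the lines of the repo's README; the input file name and the exact batch size are illustrative, and FlashAttention 2 requires the flash-attn package:

```python
# Minimal sketch of the optimisations described above: fp16 weights,
# 30s chunking with batched inference, and FlashAttention 2.
import torch
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="cuda:0",
    model_kwargs={"attn_implementation": "flash_attention_2"},  # needs flash-attn
)

out = pipe(
    "audio.mp3",            # illustrative input file
    chunk_length_s=30,      # split long audio into 30s chunks
    batch_size=24,          # transcribe many chunks per forward pass
    return_timestamps=True,
)
print(out["text"])
```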
First we tried using Replicate's dedicated on-demand GPU service to run this model, but that didn't help: the cold startup/boot time of a GPU alone made the benefits of the optimised model pretty useless for our use case. Then we tried Hugging Face and Modal.com and got the same results: with an A100 80GB GPU, we were seeing an average of ~2 mins of startup time to load the machine and model image. It didn't make sense for us to keep an always-on GPU running due to the crazy high cost. At this point I was inches away from giving up.
The next day I got an email from Fly.io: "Congrats, Yoeven D Khemlani has GPU access!" I had totally forgotten that Fly started providing GPUs, and I'm a big fan of their infra reliability and ease of deployment. We also run a bunch of our GraphQL servers for JigsawStack on Fly's infra!
I quickly picked up some Python and Docker by referring to a bunch of other GitHub repos and Fly's GPU tutorials, then wrote the API layer around the optimised version of Whisper v3 and deployed it on Fly's GPU machines.
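The API layer itself can be tiny. Here's a hypothetical sketch of what such a wrapper might look like; the FastAPI framework, the /transcribe route, and the parameter names are illustrative assumptions, not JigsawStack's actual code:

```python
# Hypothetical API layer: FastAPI wrapping the optimised pipeline.
# Requires fastapi and python-multipart; route/names are illustrative.
import torch
from fastapi import FastAPI, UploadFile
from transformers import pipeline

app = FastAPI()
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="cuda:0",
)

@app.post("/transcribe")
async def transcribe(file: UploadFile):
    audio = await file.read()  # raw bytes; the pipeline decodes them via ffmpeg
    out = pipe(audio, chunk_length_s=30, batch_size=24)
    return {"text": out["text"]}
```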
And wow, the results were pretty amazing: the startup time of the machine averaged ~20 seconds, compared to ~2 mins at the other providers, with all the performance benefits of the optimised Whisper. I've added some more stats in the GitHub repo. The more interesting thing to me is the cost, based on 10 mins of audio:
-
Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
There's a better parallelisation/batching approach that works on the 30s chunks, resulting in 40x. From HF: https://github.com/Vaibhavs10/insanely-fast-whisper
This is, again, not native PyTorch, so there's still room for better RTF (real-time factor) numbers.
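To put those multipliers in context, here's a quick back-of-the-envelope calculation using the figures quoted above (the numbers are illustrative, not a benchmark):

```python
# Back-of-the-envelope: what a ~40x real-time speedup means for long audio.
audio_min = 150    # e.g. the 150 minutes of audio mentioned earlier
speedup = 40       # the ~40x figure claimed for batched 30s chunks
print(f"{audio_min} min of audio -> ~{audio_min / speedup * 60:.0f} s to transcribe")
# 150 min / 40 ≈ 3.75 min (~225 s), versus ~30 min unoptimised
```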
-
Insanely Fast Whisper: Transcribe 300 minutes of audio in less than 98 seconds
Founder of Replicate here. We open pull requests on models[0] to get them running on Replicate, so people can try out a demo of the model and run it with an API. They're also packaged with Cog[1] so you can run them as a Docker image.
Somebody happened to stumble across our fork of the model and submitted it. We didn't submit it nor intend for it to be an ad. I hope the submission gets replaced with the upstream repo so the author gets full credit. :)
[0] https://github.com/Vaibhavs10/insanely-fast-whisper/pull/42
[1] https://github.com/replicate/cog
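For context, Cog packages a model as a small Python predictor class plus a Docker image. A hypothetical predictor for this model might look like the sketch below; it is illustrative only, and the actual packaging lives in the fork referenced in [0]:

```python
# Hypothetical Cog predictor wrapping the Whisper pipeline (illustrative).
import torch
from cog import BasePredictor, Input, Path
from transformers import pipeline

class Predictor(BasePredictor):
    def setup(self) -> None:
        # Load the model once when the container boots.
        self.pipe = pipeline(
            "automatic-speech-recognition",
            model="openai/whisper-large-v3",
            torch_dtype=torch.float16,
            device="cuda:0",
        )

    def predict(self, audio: Path = Input(description="Audio file")) -> str:
        # Chunk long audio and batch the chunks, as the repo does.
        out = self.pipe(str(audio), chunk_length_s=30, batch_size=24)
        return out["text"]
```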
dstack
-
Pyinfra: Automate Infrastructure Using Python
We're building a similar tool, except we focus on AI workloads. We also support on-prem clusters now, in addition to GPU clouds. https://github.com/dstackai/dstack
-
Show HN: Open-source alternative to HashiCorp/IBM Vault
Not exactly this, but something related: at https://github.com/dstackai/dstack, we're building an alternative to K8s for AI infra.
-
Ask HN: How does deploying a fine-tuned model work
You can use https://github.com/dstackai/dstack to deploy your model to the most affordable GPU clouds. It supports auto-scaling and other features.
Disclaimer: I’m the creator of dstack.
- FLaNK Stack Weekly 19 Feb 2024
-
Show HN: I Built an Open Source API with Insanely Fast Whisper and Fly GPUs
Great job on the project! It looks fantastic. Thanks to your post, I discovered Fly's GPUs. We are currently developing a platform called https://github.com/dstackai/dstack that enables users to run any model on any cloud. I am curious if it would be possible to add support for Fly.io as well. If you are interested in collaborating on this, please let me know!
- Show HN: Dstack – an open-source engine for running GPU workloads
-
[P] I built a tool to compare cloud GPUs. How should I improve it?
I also noticed that the creator of this app, dstack, is affiliated with TensorDock, which is the top result for most if not all queries. If that's the case, perhaps a direct link to the cheapest machine could be provided? I haven't used TensorDock, so I don't know if that's mechanically possible.
-
Running dev environments and ML tasks cost-effectively in any cloud
Here's the repository with all the important links, including documentation, examples, and more: https://github.com/dstackai/dstack
-
Dstack Hub
Hey everyone, I'm happy to release dstack Hub, an open-source tool that helps teams manage their ML workflows more effectively without vendor lock-in.
dstack Hub extends dstack [1] with workflow scheduling capabilities and user management. Here's how it works: run dstack Hub via Docker, use its UI to configure projects and cloud credentials, then pass the URL and personal token to the dstack CLI. Now you can run workflows through the CLI, and the Hub will orchestrate them in the cloud on your behalf.
This is a beta release and we plan to continuously improve it. We'd love to hear your feedback and answer any questions!
[1] https://github.com/dstackai/dstack
-
Running Stable Diffusion Locally & in Cloud with Diffusers & dstack
To help you overcome this challenge, we've written an article that guides you through the simple steps of using both diffusers and dstack to generate images from prompts, both locally and in the cloud, with a simple example.
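As a taste of the local path such an article covers, a minimal diffusers sketch looks like the following; the model id and prompt are illustrative assumptions, not necessarily the ones used in the article:

```python
# Minimal local text-to-image sketch with diffusers (illustrative model id).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("output.png")
```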
What are some alternatives?
insanely-fast-whisper

whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
whisper_streaming - Whisper realtime streaming for long speech-to-text transcription and translation
insanely-fast-whisper-api - An API to transcribe audio with OpenAI's Whisper Large v3!
faster-whisper - Faster Whisper transcription with CTranslate2
insanely-fast-whisper - Incredibly fast Whisper-large-v3

dstack

msdocs-python-django-azure-container-apps - Python web app using Django that can be deployed to Azure Container Apps.
dstack-examples - A collection of examples demonstrating how to use dstack
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
lambdapi - Serverless runtime environment tailored for code produced by LLMs. Automatic API generation from your code, support for multiple programming languages, and integrated file and database storage solutions.
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!