cog VS insanely-fast-whisper

Compare cog vs insanely-fast-whisper and see what their differences are.

                 cog                  insanely-fast-whisper
Mentions         20                   6
Stars            7,133                6,337
Growth           8.2%                 -
Activity         9.4                  9.0
Latest commit    7 days ago           30 days ago
Language         Python               Jupyter Notebook
License          Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cog

Posts with mentions or reviews of cog. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-01.

insanely-fast-whisper

Posts with mentions or reviews of insanely-fast-whisper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-18.
  • Show HN: I Built an Open Source API with Insanely Fast Whisper and Fly GPUs
    3 projects | news.ycombinator.com | 18 Feb 2024
    Hi HN! Since the launch of JigsawStack.com we've been trying to dive deeper into fully managed AI APIs built and fine-tuned for specific use cases. Audio/video transcription was one of the more basic ones, and we wanted the best open-source model; at this point that's OpenAI's Whisper large-v3, based on the number of languages it supports and its accuracy.

    The thing is, the model is huge and requires tons of GPU power to run efficiently at scale. Even OpenAI doesn't provide an API for their best transcription model, offering only Whisper v2 at a pretty high price. I tried running the Whisper large-v3 model on multiple cloud providers, from Modal.com and Replicate to Hugging Face's dedicated inference offering, and it takes a long time: about ~30 mins to transcribe 150 mins of audio, and this doesn't include the machine startup time for on-demand GPUs. Keep in mind that at JigsawStack we aim to return any heavy computation in under 25 s (or 2 mins for async cases) and any basic computation in under 2 s.

    While exploring Replicate, I came across this project, https://github.com/Vaibhavs10/insanely-fast-whisper by Vaibhav Srivastav, which optimises the hell out of the Whisper large-v3 model with a variety of techniques like batching and FlashAttention 2. This reduces computation time by almost 30x; check out the amazing repo for more stats! Open source wins again!!
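
    A minimal sketch of the technique behind that speedup: insanely-fast-whisper builds on the Hugging Face Transformers ASR pipeline, combining chunked batching with FlashAttention 2. The parameter values below are illustrative defaults, not the repo's exact settings:

    ```python
    # Sketch: chunked, batched Whisper large-v3 inference with FlashAttention 2.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-large-v3",
        torch_dtype=torch.float16,
        device="cuda:0",
        # Requires the flash-attn package and a supported NVIDIA GPU.
        model_kwargs={"attn_implementation": "flash_attention_2"},
    )

    out = pipe(
        "audio.mp3",
        chunk_length_s=30,   # split long audio into 30 s chunks
        batch_size=24,       # run many chunks per forward pass
        return_timestamps=True,
    )
    print(out["text"])
    ```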

    First we tried using Replicate's dedicated on-demand GPU service to run this model, but that didn't help: the cold startup/boot time of a GPU alone made the benefits of the optimised model pretty useless for our use case. We then tried Hugging Face and Modal.com and got the same results; with an A100 80GB GPU, we were seeing an average of ~2 mins of startup time to load the machine and model image. It didn't make sense for us to keep an always-on GPU running because of the crazy high cost. At this point I was inches away from giving up.

    The next day I got an email from Fly.io: "Congrats, Yoeven D Khemlani has GPU access!" I'd totally forgotten that Fly had started providing GPUs, and I'm a big fan of their infra reliability and ease of deployment. We also run a bunch of our GraphQL servers for JigsawStack on Fly's infra!

    I quickly picked up some Python and Docker by referring to a bunch of other GitHub repos and Fly's GPU tutorials, then wrote the API layer around the optimised version of Whisper v3 and deployed it on Fly's GPU machines.
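
    A hypothetical sketch of what such an API layer could look like (FastAPI here; the endpoint and names are illustrative, not JigsawStack's actual service):

    ```python
    # Hypothetical API layer: accept an audio upload, transcribe, return JSON.
    import tempfile

    import torch
    from fastapi import FastAPI, UploadFile
    from transformers import pipeline

    app = FastAPI()

    # Load the model once at process start so requests don't pay the load cost.
    pipe = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-large-v3",
        torch_dtype=torch.float16,
        device="cuda:0",
    )

    @app.post("/transcribe")
    async def transcribe(file: UploadFile):
        # Write the upload to disk so the pipeline can decode it with ffmpeg.
        with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as tmp:
            tmp.write(await file.read())
            path = tmp.name
        result = pipe(path, chunk_length_s=30, batch_size=24)
        return {"text": result["text"]}
    ```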

    And wow, the results were pretty amazing: the startup time of the machine averaged ~20 seconds, compared to ~2 mins with the other providers, with all the performance benefits of the optimised Whisper. I've added some more stats in the GitHub repo. The more interesting thing to me is cost↓

    Based on 10 mins of audio:

  • Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
    10 projects | news.ycombinator.com | 13 Dec 2023
    There's a better parallel/batching approach that works on the 30 s chunks, resulting in 40X. From HF at https://github.com/Vaibhavs10/insanely-fast-whisper

    This is again not native PyTorch, so there's still room for better RTFx numbers.

  • Insanely Fast Whisper: Transcribe 300 minutes of audio in less than 98 seconds
    8 projects | news.ycombinator.com | 14 Nov 2023
    Founder of Replicate here. We open pull requests on models [0] to get them running on Replicate, so people can try out a demo of the model and run it with an API. They're also packaged with Cog [1], so you can run them as a Docker image.

    Somebody happened to stumble across our fork of the model and submitted it. We didn't submit it, nor did we intend for it to be an ad. I hope the submission gets replaced with the upstream repo so the author gets full credit. :)

    [0] https://github.com/Vaibhavs10/insanely-fast-whisper/pull/42

    [1] https://github.com/replicate/cog
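
    For context on what "packaged with Cog" means: Cog wraps a model in a Python Predictor class (pointed at by a cog.yaml file) and builds it into a Docker image that serves an HTTP prediction API. A minimal sketch, illustrative rather than Replicate's actual Whisper predictor:

    ```python
    # predict.py: a minimal Cog predictor, referenced from cog.yaml as
    # `predict: "predict.py:Predictor"`. The loader below is hypothetical.
    from cog import BasePredictor, Input, Path

    class Predictor(BasePredictor):
        def setup(self):
            # Runs once when the container starts: load weights here
            # so individual predictions stay fast.
            self.model = load_whisper_model()  # hypothetical loader

        def predict(self, audio: Path = Input(description="Audio file to transcribe")) -> str:
            # Each `cog predict` call or API request runs this method.
            return self.model.transcribe(str(audio))
    ```

    Running `cog build` then produces a Docker image you can run anywhere, which is how the Replicate forks mentioned above are distributed.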

What are some alternatives?

When comparing cog and insanely-fast-whisper you can also consider the following projects:

nixpacks - App source + Nix packages + Docker = Image

pytorch_wavelets - Pytorch implementation of 2D Discrete Wavelet (DWT) and Dual Tree Complex Wavelet Transforms (DTCWT) and a DTCWT based ScatterNet

whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

piku - The tiniest PaaS you've ever seen. Piku allows you to do git push deployments to your own servers.

whisper_streaming - Whisper realtime streaming for long speech-to-text transcription and translation

heroku-review-app-actions - GitHub action to automate managing review apps on your Heroku account

insanely-fast-whisper-api - An API to transcribe audio with OpenAI's Whisper Large v3!

tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators

faster-whisper - Faster Whisper transcription with CTranslate2

memray - Memray is a memory profiler for Python

insanely-fast-whisper - Incredibly fast Whisper-large-v3