whisperX vs whisper-turbo

| | whisperX | whisper-turbo |
|---|---|---|
| Mentions | 24 | 11 |
| Stars | 9,064 | 1,569 |
| Growth | - | - |
| Activity | 8.4 | 8.9 |
| Latest commit | 7 days ago | 2 months ago |
| Language | Python | TypeScript |
| License | BSD 4-Clause "Original" or "Old" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
whisperX
- Easy video transcription and subtitling with Whisper, FFmpeg, and Python
It uses this, which does support diarization: https://github.com/m-bain/whisperX
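For reference, transcription plus diarization in whisperX looks roughly like this, following its README (a minimal sketch: the audio path and Hugging Face token are placeholders, and the pyannote diarization models require accepting their terms on Hugging Face):

```python
import whisperx

device = "cuda"
audio_file = "audio.mp3"  # placeholder input

# 1. Transcribe with the batched faster-whisper backend
model = whisperx.load_model("large-v2", device, compute_type="float16")
audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=16)

# 2. Align the output for accurate word-level timestamps
model_a, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], model_a, metadata, audio, device,
                        return_char_alignments=False)

# 3. Assign speaker labels via pyannote diarization
diarize_model = whisperx.DiarizationPipeline(use_auth_token="hf_...", device=device)  # placeholder token
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)
print(result["segments"])  # segments now carry speaker IDs
```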
- SOTA ASR Tooling: Long-Form Transcription
The author compared various Whisper implementations:
"We found that WhisperX is the best framework for transcribing long audio files efficiently and accurately. It’s much better than using the standard openai-whisper library."
https://github.com/m-bain/whisperX
- Deploying whisperX on AWS SageMaker as Asynchronous Endpoint
```python
import os

# Directory and file paths
dir_path = './models-v1'
inference_file_path = os.path.join(dir_path, 'code/inference.py')
requirements_file_path = os.path.join(dir_path, 'code/requirements.txt')

# Create the directory structure
os.makedirs(os.path.dirname(inference_file_path), exist_ok=True)

# inference.py content
inference_content = '''# inference.py
import io
import json
import logging
import os
import tempfile
import time

import boto3
import torch
import whisperx

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
s3 = boto3.client('s3')


def model_fn(model_dir, context=None):
    """Load and return the WhisperX model necessary for audio transcription."""
    print("Entering model_fn")
    logging.info("Loading WhisperX model")
    model = whisperx.load_model(
        whisper_arch=f"{model_dir}/guillaumekln/faster-whisper-large-v2",
        device=DEVICE,
        language="en",
        compute_type="float16",
        vad_options={'model_fp': f"{model_dir}/whisperx/vad/pytorch_model.bin"})
    print("Loaded WhisperX model")
    print("Exiting model_fn with model loaded")
    return {'model': model}


def input_fn(request_body, request_content_type):
    """Process and load audio from S3, given the request body containing S3 bucket and key."""
    print("Entering input_fn")
    if request_content_type != 'application/json':
        raise ValueError("Invalid content type. Must be application/json")
    request = json.loads(request_body)
    s3_bucket = request['s3bucket']
    s3_key = request['s3key']
    # Download the file from S3
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    s3.download_file(Bucket=s3_bucket, Key=s3_key, Filename=temp_file.name)
    print(f"Downloaded audio from S3: {s3_bucket}/{s3_key}")
    print("Exiting input_fn")
    return temp_file.name


def predict_fn(input_data, model, context=None):
    """Perform transcription on the provided audio file and delete the file afterwards."""
    print("Entering predict_fn")
    start_time = time.time()
    whisperx_model = model['model']
    logging.info("Loading audio")
    audio = whisperx.load_audio(input_data)
    logging.info("Transcribing audio")
    transcription_result = whisperx_model.transcribe(audio, batch_size=16)
    try:
        os.remove(input_data)  # input_data contains the path to the temp file
        print(f"Temporary file {input_data} deleted.")
    except OSError as e:
        print(f"Error: {input_data} : {e.strerror}")
    end_time = time.time()
    elapsed_time = end_time - start_time
    logging.info(f"Transcription took {int(elapsed_time)} seconds")
    print(f"Exiting predict_fn, processing took {int(elapsed_time)} seconds")
    return transcription_result


def output_fn(prediction, accept, context=None):
    """Prepare the prediction result for the response."""
    print("Entering output_fn")
    if accept != "application/json":
        raise ValueError("Accept header must be application/json")
    response_body = json.dumps(prediction)
    print("Exiting output_fn with response prepared")
    return response_body, accept
'''

# Write the inference.py file
with open(inference_file_path, 'w') as file:
    file.write(inference_content)

# requirements.txt content
requirements_content = '''speechbrain==0.5.16
faster-whisper==0.7.1
git+https://github.com/m-bain/whisperx.git@1b092de19a1878a8f138f665b1467ca21b076e7e
ffmpeg-python
'''

# Write the requirements.txt file
with open(requirements_file_path, 'w') as file:
    file.write(requirements_content)
```
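Once deployed, a SageMaker asynchronous endpoint is invoked with the S3 URI of a request payload rather than the payload itself, and the input_fn above expects that payload to name the audio's bucket and key. A minimal client sketch, assuming hypothetical endpoint and bucket names:

```python
import json
import boto3

# Hypothetical names: replace with your endpoint and bucket
ENDPOINT_NAME = "whisperx-async-endpoint"
BUCKET = "my-bucket"

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

# Upload the JSON request the inference.py above expects
payload = {"s3bucket": BUCKET, "s3key": "audio/episode.mp3"}
s3.put_object(Bucket=BUCKET, Key="requests/request.json",
              Body=json.dumps(payload))

# Async invocation passes the payload's S3 URI, not the payload itself
response = runtime.invoke_endpoint_async(
    EndpointName=ENDPOINT_NAME,
    InputLocation=f"s3://{BUCKET}/requests/request.json",
    ContentType="application/json",
)
print(response["OutputLocation"])  # S3 URI where the transcript JSON will land
```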
- OpenVoice: Versatile Instant Voice Cloning
Whisper doesn't, but WhisperX <https://github.com/m-bain/whisperX/> does. I am using it right now and it's perfectly serviceable.
For reference, I'm transcribing research-related podcasts, meaning speech doesn't overlap much, which would be a problem for WhisperX from what I understand. There are also a lot of accents, which strain Whisper (though it's still doing well) but surely help WhisperX. It did have issues figuring out the number of speakers on its own, but that wasn't a problem for my use case.
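On that last point, the whisperX README notes that the diarization pipeline accepts speaker-count bounds, so when the count is known it can be constrained rather than inferred. A short sketch with placeholder values:

```python
import whisperx

device = "cuda"
audio = whisperx.load_audio("podcast.mp3")  # placeholder input

diarize_model = whisperx.DiarizationPipeline(use_auth_token="hf_...", device=device)  # placeholder token
# Constrain the speaker count instead of letting pyannote guess it
diarize_segments = diarize_model(audio, min_speakers=2, max_speakers=4)
```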
- FLaNK 15 Jan 2024
- Subtitle is now open-source
I've had good results with whisperx when I needed to generate captions. https://github.com/m-bain/whisperX
There is currently a problem with diarization, but otherwise, it is SOTA.
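whisperX's CLI can write subtitle files itself, but as an illustration of what caption generation involves, here is a small hypothetical helper that renders whisperX-style segments (dicts with 'start', 'end', and 'text' keys) as SRT:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,345."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments: list[dict]) -> str:
    """Render whisperX-style segments as an SRT document."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(seg['start'])} --> "
                      f"{srt_timestamp(seg['end'])}\n{seg['text'].strip()}\n")
    return "\n".join(blocks)
```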
- Insanely Fast Whisper: Transcribe 300 minutes of audio in less than 98 seconds
https://github.com/m-bain/whisperX/issues/569
WhisperX with the new model. It's not fast.
- Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
How much faster, in real wall-clock time, is this on batched data than https://github.com/m-bain/whisperX ?
- Whisper self-hosted: what's the most cost-efficient way?
Check out whisperX.
- Whisper Turbo: transcribe 20x faster than realtime using Rust and WebGPU
Neat to see a new implementation, although I'll note that for those looking for a drop-in replacement for the whisper library, both faster-whisper (https://github.com/guillaumekln/faster-whisper) and WhisperX (https://github.com/m-bain/whisperX) are easier to use (PyTorch-based, no web browser required) and a lot faster (WhisperX runs at up to 70x realtime).
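For reference, basic faster-whisper usage per its README looks like this (model size and audio path are placeholders):

```python
from faster_whisper import WhisperModel

# Placeholder model size and audio path
model = WhisperModel("large-v2", device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.mp3", beam_size=5)
print(f"Detected language '{info.language}' with probability {info.language_probability:.2f}")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```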
whisper-turbo
- Whisper Turbo: speech recognition in the browser using WebGPU
- Show HN: Shadeup – A language that makes WebGPU easier
Even just the ability to accelerate LLMs in the browser on any device, without an installation, is awesome.
For example, fleetwood.dev has a really cool project that does audio transcription in the browser on the GPU: https://whisper-turbo.com/#
- Run Whisper on WebGPU with a few lines of JS
- Run LLMs on my own Mac, fast and efficient. Only 2 MBs
- Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
You'd be surprised how capable old GPUs are! I've had great success with people running Whisper-Turbo in the browser on really old hardware: https://whisper-turbo.com/
- Running Whisper on Rust and WebGPU
- Workers AI: serverless GPU-powered inference on Cloudflare’s global network
Whisper large is only 1.5B params, so why not run it client-side with something like https://github.com/FL33TW00D/whisper-turbo
(Disclaimer: I am the author)
- Whisper Turbo – Run Whisper Directly in the Browser with Rust and WebGPU
- Whisper Turbo: transcribe 20x faster than realtime using Rust and WebGPU
What are some alternatives?
whisper.cpp - Port of OpenAI's Whisper model in C/C++
faster-whisper - Faster Whisper transcription with CTranslate2
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
WhisperInput - Offline voice input panel & keyboard with punctuation for Android.
willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative
insanely-fast-whisper - Incredibly fast Whisper-large-v3
discourse-ai
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment
project-2501 - Project 2501 is an open-source AI assistant, written in C++.
ControlNet - Let us control diffusion models!
get-beam - Run GPU inference and training jobs on serverless infrastructure that scales with you.