mlx vs whisperX

| | mlx | whisperX |
|---|---|---|
| Mentions | 23 | 24 |
| Stars | 14,739 | 9,284 |
| Growth | 8.5% | - |
| Activity | 9.8 | 8.4 |
| Last commit | 3 days ago | 2 days ago |
| Language | C++ | Python |
| License | MIT License | BSD 4-Clause "Original" or "Old" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mlx
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Yes, we are also looking at integrating MLX [1], which is optimized for Apple Silicon and built by an incredible team of individuals, a few of whom were behind the original Torch [2] project. There's also TensorRT-LLM [3] by Nvidia, optimized for their recent hardware.
All of this of course acknowledging that llama.cpp is an incredible project with competitive performance and support for almost any platform.
[1] https://github.com/ml-explore/mlx
[2] https://en.wikipedia.org/wiki/Torch_(machine_learning)
[3] https://github.com/NVIDIA/TensorRT-LLM
-
Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
If you're able to purchase a separate GPU, the most popular option is to get an NVIDIA RTX 3090 or RTX 4090.
Apple M2 or M3 Macs are becoming a viable option because of MLX https://github.com/ml-explore/mlx . If you are getting an M-series Mac for LLMs, I'd recommend getting something with 24GB or more of RAM.
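As a rough illustration of what running a model locally on such a Mac can look like, here is a minimal sketch using the companion mlx-lm package; the package usage and the quantized model repo id are assumptions for illustration, not something from the comment above.

```python
# pip install mlx-lm  (Apple Silicon only)
from mlx_lm import load, generate

# Example 4-bit quantized model from the mlx-community hub;
# any compatible repo id can be substituted.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

text = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in one sentence.",
    max_tokens=100,
    verbose=True,  # stream tokens and print generation stats
)
print(text)
```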
- MLX Community Projects
- FLaNK 15 Jan 2024
- Why the M2 is more advanced than it seemed
-
I made an app that runs Mistral 7B 0.2 LLM locally on iPhone Pros
1) No Neural Engine API
2) CoreML has challenges modeling LLMs efficiently right now
3) Not Enough Benefit (For the Cost... Yet!)
This is my best understanding based on my own work and research for a local LLM iOS app. Read on for more in-depth justifications of each point!
---
1) No Neural Engine API
- There is no developer API to use the Neural Engine programmatically, so CoreML is the only way to use it.
2) CoreML has challenges modeling LLMs efficiently right now.
- Its most-optimized use cases seem tailored for image models, as it works best with fixed input lengths[1][2], which are fairly limiting for general language modeling (are all prompts, sentences and paragraphs the same number of tokens? do you want to pad all your inputs?). A conversion sketch illustrating flexible shapes follows this list.
- CoreML features limited support for the leading approaches for compressing LLMs (quantization, whether weights-only or activation-aware). Falcon-7b-instruct (fp32) in CoreML is 27.7GB [3], Llama-2-chat (fp16) is 13.5GB [4] — neither will fit in memory on any currently shipping iPhone. They'd only barely fit on the newest, highest-end iPad Pros.
- HuggingFace's swift-transformers[5] is a CoreML-focused library under active development to eventually help developers with many of these problems, in addition to an `exporters` cli tool[6] that wraps Apple's `coremltools` for converting PyTorch or other models to CoreML.
3) Not Enough Benefit (For the Cost... Yet!)
- ANE & GPU (Metal) have access to the same unified memory. They are both subject to the same restrictions on background execution (you simply can't use them in the background, or your app is killed[7]).
- So the main benefit from unlocking the ANE would be multitasking: running an ML task in parallel with non-ML tasks that might also require the GPU, e.g. SwiftUI Metal shaders, background audio processing (shoutout Overcast!), screen recording/sharing, etc. Absolutely worthwhile to achieve, but given the significant work required and the current lack of ecosystem around CoreML for LLMs specifically, the benefits become less clear.
- Apple's hot new ML library, MLX, only uses Metal for GPU[8], just like Llama.cpp. More nuanced differences arise on closer inspection related to MLX's focus on unified memory optimizations. So perhaps we can squeeze out some performance from unified memory in Llama.cpp, but CoreML will be the only way to unlock ANE, which is lower priority according to lead maintainer Georgi Gerganov as of late this past summer[9], likely for many of the reasons enumerated above.
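To make the fixed-vs-flexible input-shape point in (2) concrete, here is a minimal, illustrative coremltools sketch of converting a traced PyTorch model with a ranged sequence length; the toy model, shape bounds, and deployment target are assumptions, not from the comment above.

```python
import numpy as np
import torch
import coremltools as ct

class TinyTextModel(torch.nn.Module):  # stand-in for a real language model
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Embedding(32000, 64)
        self.proj = torch.nn.Linear(64, 32000)

    def forward(self, input_ids):
        return self.proj(self.embed(input_ids))

example = torch.randint(0, 32000, (1, 64))
traced = torch.jit.trace(TinyTextModel().eval(), example)

# RangeDim lets the converted model accept 1..512 tokens; fixed shapes
# (or ct.EnumeratedShapes) are generally what the ANE handles best.
seq_len = ct.RangeDim(lower_bound=1, upper_bound=512, default=64)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_ids", shape=(1, seq_len), dtype=np.int32)],
    minimum_deployment_target=ct.target.iOS16,
)
mlmodel.save("TinyTextModel.mlpackage")
```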
I've learned most of this while working on my own private LLM inference app, cnvrs[10] — would love to hear your feedback or thoughts!
Britt
---
[1] https://github.com/huggingface/exporters/pull/37
[2] https://apple.github.io/coremltools/docs-guides/source/flexi...
[3] https://huggingface.co/tiiuae/falcon-7b-instruct/tree/main/c...
[4] https://huggingface.co/coreml-projects/Llama-2-7b-chat-corem...
[5] https://github.com/huggingface/swift-transformers
[6] https://github.com/huggingface/exporters
[7] https://developer.apple.com/documentation/metal/gpu_devices_...
[8] https://github.com/ml-explore/mlx/issues/18
[9] https://github.com/ggerganov/llama.cpp/issues/1714#issuecomm...
[10] https://testflight.apple.com/join/ERFxInZg
-
Ferret: An End-to-End MLLM by Apple
Maybe MLX is meant to fill this gap?
https://github.com/ml-explore/mlx
-
PowerInfer: Fast Large Language Model Serving with a Consumer-Grade GPU [pdf]
This is basically a fork of llama.cpp. I created a PR to see the diff and added my comments on it: https://github.com/ggerganov/llama.cpp/pull/4543
One thing that caught my interest is this line from their readme:
> PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers.
Apple's Metal/M3 is perfect for this because the CPU and GPU share memory, so there's no need to do any data transfers. Check out MLX from Apple: https://github.com/ml-explore/mlx
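For illustration, a minimal MLX sketch of how unified memory shows up in the API: the same arrays can be consumed by CPU- and GPU-scheduled ops with no explicit transfer. The shapes and ops here are arbitrary examples, not PowerInfer's actual hot/cold neuron split.

```python
import mlx.core as mx

# Arrays live in unified memory; there is no .to(device) / copy step.
a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# The device is a property of the operation, not of the array:
# one op runs on the GPU (Metal), another on the CPU, both reading
# the same buffers.
c = mx.matmul(a, b, stream=mx.gpu)
d = mx.add(a, b, stream=mx.cpu)

mx.eval(c, d)  # MLX is lazy; force the computation
print(c.shape, d.shape)
```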
-
Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
How does this compare to insanely-fast-whisper though? https://github.com/Vaibhavs10/insanely-fast-whisper
I think that not using optimizations allows this to be a 1:1 comparison, but if the optimizations are not ported to MLX, then it would still be better to use a 4090.
Having looked at MLX recently, I think it's definitely going to get traction on Macs - and iOS when Swift bindings are released https://github.com/ml-explore/mlx/issues/15 (although there might be some C++20 compilation issue blocking right now).
-
[D] M3 MAX 64GB VS RTX 3080
The software is already there; check out the new ML framework from Apple: https://github.com/ml-explore/mlx
whisperX
-
Easy video transcription and subtitling with Whisper, FFmpeg, and Python
It uses this, which does support diarization: https://github.com/m-bain/whisperX
-
SOTA ASR Tooling: Long-Form Transcription
The author compared various Whisper implementations:
"We found that WhisperX is the best framework for transcribing long audio files efficiently and accurately. It’s much better than using the standard openai-whisper library."
https://github.com/m-bain/whisperX
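For context, the end-to-end WhisperX flow looks roughly like this sketch (model size, batch size, file name, and the Hugging Face token are placeholders; diarization requires access to the gated pyannote models):

```python
import whisperx

device = "cuda"  # or "cpu"
audio = whisperx.load_audio("long_interview.mp3")

# 1. Batched transcription on a faster-whisper backend
model = whisperx.load_model("large-v2", device, compute_type="float16")
result = model.transcribe(audio, batch_size=16)

# 2. Word-level alignment
align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

# 3. Optional speaker diarization (pyannote, gated behind an HF token)
diarize_model = whisperx.DiarizationPipeline(use_auth_token="hf_...", device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)

for segment in result["segments"]:
    print(segment.get("speaker"), segment["text"])
```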
-
Deploying whisperX on AWS SageMaker as Asynchronous Endpoint
```python
import os

# Directory and file paths
dir_path = './models-v1'
inference_file_path = os.path.join(dir_path, 'code/inference.py')
requirements_file_path = os.path.join(dir_path, 'code/requirements.txt')

# Create the directory structure
os.makedirs(os.path.dirname(inference_file_path), exist_ok=True)

# Inference.py content
inference_content = '''# inference.py
import io
import json
import logging
import os
import tempfile
import time

import boto3
import torch
import whisperx

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
s3 = boto3.client('s3')


def model_fn(model_dir, context=None):
    """
    Load and return the WhisperX model necessary for audio transcription.
    """
    print("Entering model_fn")
    logging.info("Loading WhisperX model")
    model = whisperx.load_model(
        whisper_arch=f"{model_dir}/guillaumekln/faster-whisper-large-v2",
        device=DEVICE,
        language="en",
        compute_type="float16",
        vad_options={'model_fp': f"{model_dir}/whisperx/vad/pytorch_model.bin"})
    print("Loaded WhisperX model")
    print("Exiting model_fn with model loaded")
    return {'model': model}


def input_fn(request_body, request_content_type):
    """
    Process and load audio from S3, given the request body containing S3 bucket and key.
    """
    print("Entering input_fn")
    if request_content_type != 'application/json':
        raise ValueError("Invalid content type. Must be application/json")

    request = json.loads(request_body)
    s3_bucket = request['s3bucket']
    s3_key = request['s3key']

    # Download the file from S3
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    s3.download_file(Bucket=s3_bucket, Key=s3_key, Filename=temp_file.name)
    print(f"Downloaded audio from S3: {s3_bucket}/{s3_key}")

    print("Exiting input_fn")
    return temp_file.name


def predict_fn(input_data, model, context=None):
    """
    Perform transcription on the provided audio file and delete the file afterwards.
    """
    print("Entering predict_fn")
    start_time = time.time()

    whisperx_model = model['model']

    logging.info("Loading audio")
    audio = whisperx.load_audio(input_data)

    logging.info("Transcribing audio")
    transcription_result = whisperx_model.transcribe(audio, batch_size=16)

    try:
        os.remove(input_data)  # input_data contains the path to the temp file
        print(f"Temporary file {input_data} deleted.")
    except OSError as e:
        print(f"Error: {input_data} : {e.strerror}")

    end_time = time.time()
    elapsed_time = end_time - start_time
    logging.info(f"Transcription took {int(elapsed_time)} seconds")
    print(f"Exiting predict_fn, processing took {int(elapsed_time)} seconds")
    return transcription_result


def output_fn(prediction, accept, context=None):
    """
    Prepare the prediction result for the response.
    """
    print("Entering output_fn")
    if accept != "application/json":
        raise ValueError("Accept header must be application/json")
    response_body = json.dumps(prediction)
    print("Exiting output_fn with response prepared")
    return response_body, accept
'''

# Write the inference.py file
with open(inference_file_path, 'w') as file:
    file.write(inference_content)

# Requirements.txt content
requirements_content = '''speechbrain==0.5.16
faster-whisper==0.7.1
git+https://github.com/m-bain/whisperx.git@1b092de19a1878a8f138f665b1467ca21b076e7e
ffmpeg-python
'''

# Write the requirements.txt file
with open(requirements_file_path, 'w') as file:
    file.write(requirements_content)
```
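Once the model artifacts and this inference code are packaged and deployed, invoking the asynchronous endpoint would look roughly like the sketch below. The endpoint name, bucket names, and keys are placeholders; the JSON body matches the `s3bucket`/`s3key` contract that `input_fn` above expects.

```python
import json
import boto3

# Hypothetical names; substitute your own endpoint, buckets, and keys.
ENDPOINT_NAME = "whisperx-async-endpoint"
REQUEST_BUCKET = "my-inference-requests"

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

# Async endpoints read the request payload from S3, so upload it first.
payload = {"s3bucket": "my-audio-bucket", "s3key": "podcasts/episode-001.mp3"}
s3.put_object(Bucket=REQUEST_BUCKET, Key="requests/episode-001.json",
              Body=json.dumps(payload))

response = runtime.invoke_endpoint_async(
    EndpointName=ENDPOINT_NAME,
    InputLocation=f"s3://{REQUEST_BUCKET}/requests/episode-001.json",
    ContentType="application/json",
)
# The transcription JSON will land at this S3 URI when the job finishes.
print(response["OutputLocation"])
```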
-
OpenVoice: Versatile Instant Voice Cloning
Whisper doesn't, but WhisperX <https://github.com/m-bain/whisperX/> does. I am using it right now and it's perfectly serviceable.
For reference, I'm transcribing research-related podcasts, meaning speech doesn't overlap a lot (overlapping speech would be a problem for WhisperX, from what I understand). There are also a lot of accents, which are straining for Whisper (though it's still doing well) but surely help WhisperX. It did have issues with figuring out the number of speakers on its own, but that wasn't a problem for my use case.
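If the number of speakers is known up front, WhisperX's diarization pipeline can be given bounds instead of being left to estimate it. A small sketch (the token, device, and file name are placeholders):

```python
import whisperx

audio = whisperx.load_audio("podcast_episode.mp3")
diarize_model = whisperx.DiarizationPipeline(use_auth_token="hf_...", device="cuda")

# Constrain the speaker search instead of letting pyannote guess.
diarize_segments = diarize_model(audio, min_speakers=2, max_speakers=3)
```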
- FLaNK 15 Jan 2024
-
Subtitle is now open-source
I've had good results with whisperx when I needed to generate captions. https://github.com/m-bain/whisperX
There is currently a problem with diarization, but otherwise, it is SOTA.
-
Insanely Fast Whisper: Transcribe 300 minutes of audio in less than 98 seconds
https://github.com/m-bain/whisperX/issues/569
WhisperX with the new model. It's not fast.
-
Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
How much faster in real wall-clock time is this in batched data than https://github.com/m-bain/whisperX ?
-
Whisper self-hosted: what's the most cost-efficient way?
Check out whisperX
-
Whisper Turbo: transcribe 20x faster than realtime using Rust and WebGPU
Neat to see a new implementation, although I'll note that for those looking for a drop-in replacement for the whisper library, I believe that both faster-whisper https://github.com/guillaumekln/faster-whisper and https://github.com/m-bain/whisperX are easier (PyTorch-based, doesn't require a web browser), and a lot faster (WhisperX is up to 70X realtime).
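For reference, a rough sketch of the faster-whisper usage pattern being referred to (model size and file name are placeholders):

```python
from faster_whisper import WhisperModel

# CTranslate2 backend; runs on CUDA or CPU.
model = WhisperModel("large-v2", device="cuda", compute_type="float16")

segments, info = model.transcribe("talk.mp3", beam_size=5)
print(f"Detected language {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```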
What are some alternatives?
cog-whisper-diarization - Cog implementation of transcribing + diarization pipeline with Whisper & Pyannote
whisper.cpp - Port of OpenAI's Whisper model in C/C++
Cgml - GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
llama.cpp - LLM inference in C/C++
faster-whisper - Faster Whisper transcription with CTranslate2
enchanted - Enchanted is an iOS and macOS app for chatting with private self-hosted language models such as Llama2, Mistral or Vicuna using Ollama.
insanely-fast-whisper - Incredibly fast Whisper-large-v3
swift-transformers - Swift Package to implement a transformers-like API in Swift
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment
mlx-examples - Examples in the MLX framework
ControlNet - Let us control diffusion models!