Whisper VS ggml

Compare Whisper vs ggml and see what their differences are.

Whisper

High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model (by Const-me)
              Whisper                      ggml
Mentions      32                           69
Stars         7,182                        9,642
Growth        -                            -
Activity      6.5                          9.8
Last commit   7 months ago                 8 days ago
Language      C++                          C
License       Mozilla Public License 2.0   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
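
The exact weighting isn't published, but a recency-weighted commit score along the following lines illustrates the idea. This is a hypothetical sketch, not the site's actual formula; the function name and the 30-day half-life are made up for illustration.

    #include <cmath>
    #include <vector>

    // Hypothetical sketch of an activity metric: each commit is weighted by
    // its age with exponential decay, so recent commits count for more.
    // The 30-day half-life is an arbitrary choice for illustration only.
    double activityScore(const std::vector<double>& commitAgesInDays) {
        const double halfLifeDays = 30.0;
        const double lambda = std::log(2.0) / halfLifeDays;
        double score = 0.0;
        for (double age : commitAgesInDays)
            score += std::exp(-lambda * age); // weight decays with commit age
        return score;
    }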

Whisper

Posts with mentions or reviews of Whisper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-18.
  • Nvidia Speech and Translation AI Models Set Records for Speed and Accuracy
    1 project | news.ycombinator.com | 18 Apr 2024
    I've been using WhisperDesktop ( https://github.com/Const-me/Whisper ) with great success on a 3090 for fast & accurate transcription of often poor-quality, hours-long, multi-speaker Euro-English audio files. If there's an easy way to compare, I'm certainly going to give this a try.
  • AMD's CDNA 3 Compute Architecture
    7 projects | news.ycombinator.com | 17 Dec 2023
    Why would you want OpenCL? Pretty sure D3D11 compute shaders are going to be adequate for a Torch backend, and they even work on Linux with Wine: https://github.com/Const-me/Whisper/issues/42 Native Vulkan compute shaders would be even better.

    Why would you want unified address space? At least in my experience, it’s often too slow to be useful. DMA transfers (CopyResource in D3D11, copy command queue in D3D12, transfer queue in VK) are implemented by dedicated hardware inside GPUs, and are way more efficient.
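
    For context on the DMA point above, a typical D3D11 read-back goes through a staging buffer and CopyResource. This is a minimal sketch under stated assumptions (an existing device, context, and source buffer; no error handling), not code from the project:

        #include <d3d11.h>
        #include <cstring>

        // Minimal sketch: read a GPU buffer back to the CPU via a staging
        // resource. CopyResource is executed by the GPU's copy hardware (DMA),
        // which is the efficiency point made above. Assumes `device`,
        // `context`, and `gpuBuffer` of size `byteWidth` already exist.
        void readBack(ID3D11Device* device, ID3D11DeviceContext* context,
                      ID3D11Buffer* gpuBuffer, UINT byteWidth, void* dest) {
            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth = byteWidth;
            desc.Usage = D3D11_USAGE_STAGING;           // CPU-readable staging
            desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

            ID3D11Buffer* staging = nullptr;
            device->CreateBuffer(&desc, nullptr, &staging);

            context->CopyResource(staging, gpuBuffer);  // DMA copy on the GPU

            D3D11_MAPPED_SUBRESOURCE mapped = {};
            context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped); // waits for copy
            std::memcpy(dest, mapped.pData, byteWidth);
            context->Unmap(staging, 0);
            staging->Release();
        }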

  • Amazon Bedrock Is Now Generally Available
    2 projects | news.ycombinator.com | 28 Sep 2023
    https://github.com/ggerganov/whisper.cpp

    https://github.com/Const-me/Whisper

    I had fun with both of these. They will both do real-time transcription. But you will have to download the training data sets…

  • Why Nvidia Keeps Winning: The Rise of an AI Giant
    3 projects | news.ycombinator.com | 6 Jul 2023
    Gamers don't care about FP64 performance, and it seems nVidia is using that for market segmentation. The FP64 performance for the RTX 4090 is 1.142 TFlops; for the RTX 3090 Ti, 0.524 TFlops. AMD doesn't do that; FP64 performance is consistently better there, and it has been this way for quite a few years. For example, the figure for the 3090 Ti (a $2000 card from 2022) is similar to the Radeon RX Vega 56, a $400 card from 2017 which can do 0.518 TFlops.

    And another thing: nVidia forbids usage of GeForce cards in data centers, while AMD allows that. I don’t know how specifically they define datacenter, whether it’s enforceable, or whether it’s tested in courts of various jurisdictions. I just don’t want to find out answers to these questions at the legal expenses of my employer. I believe they would prefer to not cut corners like that.

    I think nVidia only beats AMD due to the ecosystem: for GPGPU that’s CUDA (and especially the included first-party libraries like BLAS, FFT, DNN and others), also due to the support in popular libraries like TensorFlow. However, it’s not that hard to ignore the ecosystem, and instead write some compute shaders in HLSL. Here’s a non-trivial open-source project unrelated to CAE, where I managed to do just that with decent results: https://github.com/Const-me/Whisper That software even works on Linux, probably due to Valve’s work on DXVK 2.0 (a compatibility layer which implements D3D11 on top of Vulkan).

  • Ask HN: What is your recommended speech to text/audio transcription tool?
    1 project | news.ycombinator.com | 12 Jun 2023
    Currently, I use a GUI for Whisper AI (https://github.com/Const-me/Whisper) to upload MP3s of interviews to get text transcripts. However, I'm hoping to find another tool that would recognize and split out the text per speaker.

    Does such a thing exist?

  • From audio to text, any advice?
    1 project | /r/Universitaly | 8 Jun 2023
  • Ask HN: Any recommendations for cheap, high-quality transcription software
    2 projects | news.ycombinator.com | 29 May 2023
    I just used Whisper over the weekend to transcribe 5 hours of meetings; it worked nicely, and it can be run on a single GPU locally. https://github.com/ggerganov/whisper.cpp

    There are a few wrappers available with a GUI, like https://github.com/Const-me/Whisper

  • Voice recognition software for German
    2 projects | /r/software | 20 May 2023
  • Const-me/Whisper: High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
    1 project | /r/thirdbrain | 15 May 2023
  • I built a massive search engine to find video clips by spoken text
    3 projects | /r/videos | 10 May 2023

ggml

Posts with mentions or reviews of ggml. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-11.
  • LLMs on your local Computer (Part 1)
    7 projects | dev.to | 11 Mar 2024
    git clone https://github.com/ggerganov/ggml
    cd ggml
    mkdir build
    cd build
    cmake ..
    make -j4 gpt-j
    ../examples/gpt-j/download-ggml-model.sh 6B
  • GGUF, the Long Way Around
    2 projects | news.ycombinator.com | 29 Feb 2024
    Cool. I was just learning about GGUF by creating my own parser for it based on the spec https://github.com/ggerganov/ggml/blob/master/docs/gguf.md (for educational purposes)
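
    The fixed header described in that spec is small enough to parse in a few lines. A minimal sketch (little-endian host assumed, error handling kept short); per the spec, the file starts with the magic "GGUF", then a version, a tensor count, and a metadata key/value count:

        #include <cstdint>
        #include <cstdio>

        // Minimal sketch: read the fixed GGUF header fields described in
        // https://github.com/ggerganov/ggml/blob/master/docs/gguf.md
        int main(int argc, char** argv) {
            if (argc < 2) { std::fprintf(stderr, "usage: %s file.gguf\n", argv[0]); return 1; }
            std::FILE* f = std::fopen(argv[1], "rb");
            if (!f) { std::perror("fopen"); return 1; }

            char magic[4];
            uint32_t version = 0;
            uint64_t tensorCount = 0, kvCount = 0;
            if (std::fread(magic, 1, 4, f) != 4 ||
                magic[0] != 'G' || magic[1] != 'G' || magic[2] != 'U' || magic[3] != 'F') {
                std::fprintf(stderr, "not a GGUF file\n");
                std::fclose(f);
                return 1;
            }
            std::fread(&version, sizeof(version), 1, f);
            std::fread(&tensorCount, sizeof(tensorCount), 1, f);
            std::fread(&kvCount, sizeof(kvCount), 1, f);
            std::printf("GGUF v%u: %llu tensors, %llu metadata keys\n", version,
                        (unsigned long long)tensorCount, (unsigned long long)kvCount);
            std::fclose(f);
            return 0;
        }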
  • Ask HN: People who switched from GPT to their own models. How was it?
    3 projects | news.ycombinator.com | 26 Feb 2024
    If you don't care about the details of how those model servers work, then something that abstracts out the whole process like LM Studio or Ollama is all you need.

    However, if you want to get into the weeds of how this actually works, I recommend you look up model quantization and some libraries like ggml[1] that actually do that for you.

    [1] https://github.com/ggerganov/ggml
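
    For a sense of what quantization involves: ggml's simplest integer formats store weights in small blocks, each holding one floating-point scale plus low-bit integers. A rough sketch in the spirit of Q8_0 (the block layout is simplified here; ggml's actual structs differ):

        #include <algorithm>
        #include <cmath>
        #include <cstdint>

        // Rough sketch of symmetric 8-bit block quantization, similar in
        // spirit to ggml's Q8_0: each block of 32 floats becomes one scale
        // plus 32 int8 values. Simplified layout, for illustration only.
        struct BlockQ8 {
            float  scale;   // per-block scale factor
            int8_t q[32];   // quantized weights
        };

        BlockQ8 quantizeBlock(const float* x) {
            float amax = 0.0f;
            for (int i = 0; i < 32; ++i) amax = std::max(amax, std::fabs(x[i]));
            BlockQ8 b;
            b.scale = amax / 127.0f;
            const float inv = b.scale != 0.0f ? 1.0f / b.scale : 0.0f;
            for (int i = 0; i < 32; ++i)
                b.q[i] = (int8_t)std::lround(x[i] * inv); // round to nearest
            return b;
        }

        // Reconstruct an approximate weight from its block.
        float dequantize(const BlockQ8& b, int i) { return b.scale * b.q[i]; }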

  • GGUF File Format
    1 project | news.ycombinator.com | 31 Dec 2023
  • Google just shipped libggml from llama-cpp into its Android AICore
    2 projects | /r/LocalLLaMA | 9 Dec 2023
    Because the library is called ggml, but it supports gguf.
  • Q-Transformer
    2 projects | news.ycombinator.com | 30 Nov 2023
    Apparently this guy, like a bunch of others such as https://github.com/ggerganov/ggml, is implementing transformers from papers for people who want them. Pretty cool.
  • [P] Inference Vision Transformer (ViT) in plain C/C++ with ggml
    2 projects | /r/MachineLearning | 26 Nov 2023
    You can access it here: https://github.com/staghado/vit.cpp It has been added to the ggml library on GitHub: https://github.com/ggerganov/ggml
  • Falcon 180B Released
    1 project | news.ycombinator.com | 6 Sep 2023
    https://github.com/ggerganov/ggml

    One note is that prompt ingestion is extremely slow on CPU compared to GPU. So short prompts are fine (as tokens can be streamed once the prompt is ingested), but long prompts feel extremely sluggish.

  • Stable Diffusion in pure C/C++
    8 projects | news.ycombinator.com | 19 Aug 2023
    I did a quick run under a profiler, and on my AVX2 laptop the slowest part (>50%) was matrix multiplication (sgemm).

    In the current version of GGML, if OpenBLAS is enabled, they convert matrices to FP32 before running sgemm.

    If OpenBLAS is disabled, on the AVX2 platform they convert FP16 to FP32 on every FMA operation, which is even worse (due to repetition). After that, both ggml_vec_dot_f16 and ggml_vec_dot_f32 took first place in the profiler.

    Source: https://github.com/ggerganov/ggml/blob/master/src/ggml.c#L10...
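
    In outline, the OpenBLAS path described above widens the FP16 matrix to FP32 once and then runs a single sgemm, rather than widening inside every FMA. A hedged sketch (the converter below handles normal numbers only and stands in for ggml's table-based conversion; it is not ggml's actual code):

        #include <cstdint>
        #include <cstring>
        #include <vector>
        #include <cblas.h>

        // Simplified FP16 -> FP32 conversion: normals only, subnormals
        // flushed to zero, no infinities or NaNs. For illustration.
        static float fp16_to_fp32(uint16_t h) {
            uint32_t sign = (uint32_t)(h >> 15) << 31;
            uint32_t exp  = (h >> 10) & 0x1F;
            uint32_t mant = h & 0x3FF;
            uint32_t bits = (exp == 0) ? sign                          // +/- 0
                          : sign | ((exp + 112) << 23) | (mant << 13); // rebias
            float f;
            std::memcpy(&f, &bits, sizeof(f));
            return f;
        }

        // C = A (M x K, fp16) * B (K x N, fp32), row-major: convert A in one
        // up-front pass, then hand the whole multiply to OpenBLAS sgemm.
        void matmulF16F32(const uint16_t* A, const float* B, float* C,
                          int M, int N, int K) {
            std::vector<float> A32(size_t(M) * K);
            for (size_t i = 0; i < A32.size(); ++i)
                A32[i] = fp16_to_fp32(A[i]);  // single widening pass
            cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        M, N, K, 1.0f, A32.data(), K, B, N, 0.0f, C, N);
        }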

  • Accessing Llama 2 from the command-line with the LLM-replicate plugin
    16 projects | news.ycombinator.com | 18 Jul 2023
    For those getting started, the easiest one click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/

    This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend and supports GPU acceleration, and LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.

    I just saw a slick new tool, https://ollama.ai/, that will let you install llama2-7b with a single `ollama run llama2` command. It has a very simple one-click installer for Apple Silicon Macs (but you need to build from source for anything else atm). It looks like it only supports llamas OOTB, but it also seems to use llama.cpp (via a Go adapter) on the backend. It seemed to be CPU-only on my MBA, but I didn't poke around too much and it's brand new, so we'll see.

    Anyone on HN should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama

    For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/

    I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/

What are some alternatives?

When comparing Whisper and ggml you can also consider the following projects:

whisper.cpp - Port of OpenAI's Whisper model in C/C++

llama.cpp - LLM inference in C/C++

whisper - Robust Speech Recognition via Large-Scale Weak Supervision

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM

TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.

alpaca-lora - Instruct-tune LLaMA on consumer hardware

just-an-email - App to share files & texts between your devices without installing anything

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

beaker - An experimental peer-to-peer Web browser

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

cookwherever - Cook Wherever is an open source project that attempts to make cooking more accessible and engaging for everyone.

llm - An ecosystem of Rust libraries for working with large language models