whisper
whisper.cpp
| | whisper | whisper.cpp |
|---|---|---|
| Mentions | 342 | 185 |
| Stars | 58,758 | 29,005 |
| Stars growth (monthly) | 6.7% | - |
| Activity | 6.8 | 9.8 |
| Last commit | 9 days ago | 7 days ago |
| Language | Python | C |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
whisper
-
How I built NotesGPT – a full-stack AI voice note app
Last week, I launched notesGPT, a free and open source voice note app that has had 35,000 visitors, 7,000 users, and over 1,000 GitHub stars in the week since launch. It allows you to record a voice note, transcribes it using Whisper, and uses Mixtral via Together to extract action items and display them in an action items view. It’s also fully open source, comes equipped with authentication, storage, vector search, and action items, and is fully responsive on mobile for ease of use.
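The flow described (voice note → Whisper transcription → LLM action-item extraction) is essentially function composition. The sketch below is a structural illustration only, not notesGPT's actual code; the function names and stand-in implementations are invented for the example:

```python
from typing import Callable, List

def voice_note_pipeline(audio: bytes,
                        transcribe: Callable[[bytes], str],
                        extract_items: Callable[[str], List[str]]) -> dict:
    """Structural sketch of the notesGPT flow: a transcription step
    (Whisper in the real app) feeds an extraction step (Mixtral via
    Together in the real app)."""
    transcript = transcribe(audio)
    items = extract_items(transcript)
    return {"transcript": transcript, "action_items": items}

# Tiny stand-ins so the sketch runs without any model:
fake_transcribe = lambda audio: "buy milk and call the bank"
fake_extract = lambda text: [s.strip() for s in text.split("and")]

result = voice_note_pipeline(b"...", fake_transcribe, fake_extract)
print(result["action_items"])  # → ['buy milk', 'call the bank']
```

Swapping the stand-ins for real Whisper and Together API calls changes only the two callables, not the pipeline shape.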
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
There is a plot of language performance on their repo: https://github.com/openai/whisper
I am not aware of a multi-lingual leaderboard for speech recognition models.
- Ask HN: AI that allows you to make phone calls in a language you don't speak?
-
Subtitle is now open-source
Whisper already generates subtitles[0], supporting VTT and SRT, so this is just a thin wrapper around that.
[0]: https://github.com/openai/whisper/blob/e58f28804528831904c3b...
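To illustrate how thin such a wrapper is: Whisper's `transcribe` output includes a list of segments with `start`/`end` times in seconds, and turning those into SRT takes only a few lines. This is a sketch, not Whisper's own writer (which lives in `whisper/utils.py`); the sample segments mimic the shape of Whisper's `result["segments"]`:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timestamp SRT expects."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """segments: iterable of dicts shaped like Whisper's result['segments']."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(seg['start'])} --> "
                      f"{srt_timestamp(seg['end'])}\n{seg['text'].strip()}\n")
    return "\n".join(blocks)

# Whisper-shaped sample segments:
segs = [{"start": 0.0, "end": 2.5, "text": " Hello there."},
        {"start": 2.5, "end": 5.0, "text": " General Kenobi."}]
print(segments_to_srt(segs))
```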
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
> although it does require you to wear headphones so the bot doesn't hear itself and get interrupted.
Maybe you can rely on some sort of speaker identification to sort this out?
-
OpenSubtitles is not open anymore
You can do it live or you can do it from a video file. There are more than just English models as well, though the performance varies by language. https://github.com/openai/whisper
- New models and developer products announced at OpenAI DevDay
-
OpenAI releases Whisper v3, new generation open source ASR model
Do you know if these implementations also support leveraging the M1/M2 GPU, such as shown here? https://github.com/openai/whisper/pull/382
Good improvements for many languages, numbers here
https://github.com/openai/whisper/blob/main/language-breakdo...
whisper.cpp
-
LLMs on your local Computer (Part 1)
The ggml library is one of the first libraries for local LLM inference. It’s a pure C library that converts models to run on several devices, including desktops, laptops, and even mobile devices. It can therefore also be considered a tinkering tool for trying new optimizations that are then incorporated into other downstream projects. This library is at the heart of several other projects, powering LLM inference on desktops and even mobile phones. Subprojects for running specific LLMs or LLM families exist, such as whisper.cpp.
-
Voxos.ai – An Open-Source Desktop Voice Assistant
I'm not sure if it is _fully_ openai compatible, but whispercpp has a server bundled that says it is "OAI-like": https://github.com/ggerganov/whisper.cpp/tree/master/example...
I don't have any direct experience with it... I've only played around with whisper locally, using scripts.
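For a sense of what talking to that bundled server looks like, here is a sketch that builds (without sending) the kind of multipart POST it accepts. The endpoint path (`/inference`), port, and form field names are assumptions based on the server example's README in the whisper.cpp repo; check your checkout before relying on them:

```python
import urllib.request

def build_inference_request(wav_bytes: bytes,
                            url: str = "http://127.0.0.1:8080/inference"):
    """Build (without sending) a multipart/form-data POST of the shape
    the whisper.cpp example server accepts: an audio "file" field plus
    a "response_format" field. Endpoint and field names are assumptions."""
    boundary = "----whisper-cpp-sketch"
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="file"; filename="audio.wav"\r\n'
        "Content-Type: audio/wav\r\n\r\n"
    ).encode() + wav_bytes + b"\r\n" + (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="response_format"\r\n\r\n'
        "json\r\n"
        f"--{boundary}--\r\n"
    ).encode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"})

req = build_inference_request(b"RIFF....WAVE")  # placeholder bytes, not real audio
print(req.get_method(), req.full_url)
```

Sending it is then a single `urllib.request.urlopen(req)` call against a running server.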
-
Jarvis: A Voice Virtual Assistant in Python (OpenAI, ElevenLabs, Deepgram)
Unless I'm misunderstanding, `whisper.cpp` seems to support streaming, and the repository includes a native example[0] and a WASM example[1] with a demo site[2].
[0]: https://github.com/ggerganov/whisper.cpp/tree/master/example...
-
I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now.
Usage 1: Good to transcribe audio. An example use case could be to summarize YouTube videos or long courses. Usage 2: You talk with voice to your AI that responds with text (later with audio too). - https://github.com/ggerganov/whisper.cpp
-
Scrybble is the ReMarkable highlights to Obsidian exporter I have been looking for
🗣️🎙️ whisper.cpp (offline speech-to-text transcription, models trained by OpenAI, CLI based, browser based)
- FLaNK Stack Weekly 06 Nov 2023
-
Talk-Llama
https://github.com/ggerganov/whisper.cpp/issues/352#issuecom...
I'm not sure what changed, but basically I purged ffmpeg and libsdl2-dev and ran `make` in the root of the repo. Then I installed libsdl2 and ffmpeg and ran `make talk-llama`.
It's quite slow on a 4-core i7-8550U with 16 GB of RAM.
basically, in the root of the repo:
$ sudo apt purge ffmpeg
I'm getting a "floating point exception" when running ./talk-llama on arch and debian. Already checked sdl2lib and ffmpeg (because of this issue: https://github.com/ggerganov/whisper.cpp/issues/1325) but nothing seems to fix it. Anyone else?
- Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
What are some alternatives?
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
faster-whisper - Faster Whisper transcription with CTranslate2
Whisper - High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
NeMo - NeMo: a framework for generative AI
bark - 🔊 Text-Prompted Generative Audio Model
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama.cpp - LLM inference in C/C++
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.