Top 23 Whisper Open-Source Projects
-
quivr
Your GenAI second brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) & apps using Langchain, GPT-3.5/4 Turbo, Anthropic, VertexAI, Ollama, Groq, and other LLMs, and share it with users! A local and private alternative to OpenAI GPTs & ChatGPT, powered by retrieval-augmented generation.
-
PaddleSpeech
Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.
-
buzz
Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
-
embark-framework
Framework for serverless Decentralized Applications using Ethereum, IPFS and other platforms
-
FunASR
A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models. | A speech recognition toolkit with a rich set of high-performance open-source pretrained models, supporting speech recognition, voice activity detection, text post-processing, and more, with service deployment capability.
-
distil-whisper
Distilled variant of Whisper for speech recognition: 6x faster, 50% smaller, and within 1% of the original's word error rate.
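The "within 1% word error rate" figure refers to WER, the standard ASR metric: the word-level edit distance between the model's hypothesis and a reference transcript, divided by the reference word count. A minimal sketch of the metric (illustrative only, not distil-whisper's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("the cat sat on the mat", "the cat sat on mat")` yields 1/6 ≈ 0.167: one deleted word out of six reference words.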
-
chatgpt-telegram-bot
🤖 A Telegram bot that integrates with OpenAI's official ChatGPT APIs to provide answers, written in Python (by n3d1117)
-
inference
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
-
ruby-openai
OpenAI API + Ruby! 🤖❤️ Now with Assistants, Threads, Messages, Runs and Text to Speech 🍾
-
willow
Open-source, local, and self-hosted voice assistant alternative to Amazon Echo/Google Home
-
whisper-timestamped
Multilingual Automatic Speech Recognition with word-level timestamps and confidence
-
subsai
🎞️ Subtitles generation tool (Web-UI + CLI + Python package) powered by OpenAI's Whisper and its variants 🎞️
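Subtitle generators like this ultimately emit formats such as SRT, where each cue carries an `HH:MM:SS,mmm` timestamp. A small helper for converting Whisper's float-second offsets into SRT cues (an illustrative sketch, not subsai's actual code):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a float offset in seconds as an SRT 'HH:MM:SS,mmm' timestamp."""
    ms = round(seconds * 1000)
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def srt_cue(index: int, start: float, end: float, text: str) -> str:
    """Render one numbered SRT cue block."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
```

For instance, `srt_timestamp(3661.5)` returns `01:01:01,500`.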
Project mention: privateGPT VS quivr - a user suggested alternative | libhunt.com/r/privateGPT | 2024-01-12
Project mention: Show HN: I created automatic subtitling app to boost short videos | news.ycombinator.com | 2024-04-09
whisper.cpp [1] has a karaoke example that uses ffmpeg's drawtext filter to display rudimentary karaoke-like captions. It also supports diarisation. Perhaps it could be a starting point to create a better script that does what you need.
--
1: https://github.com/ggerganov/whisper.cpp/blob/master/README....
PaddlePaddle/PaddleSpeech
Project mention: Buzz: Transcribe and translate audio offline on your personal computer | news.ycombinator.com | 2024-03-21
Project mention: Easy video transcription and subtitling with Whisper, FFmpeg, and Python | news.ycombinator.com | 2024-04-06
It uses this, which does support diarization: https://github.com/m-bain/whisperX
For our real-time STT needs, we'll employ a fantastic library called faster-whisper.
Project mention: FunASR: Fundamental End-to-End Speech Recognition Toolkit | news.ycombinator.com | 2024-01-13
Project mention: GreptimeAI + Xinference - Efficient Deployment and Monitoring of Your LLM Applications | dev.to | 2024-01-24
Xorbits Inference (Xinference) is an open-source platform to streamline the operation and integration of a wide array of AI models. With Xinference, you're empowered to run inference using any open-source LLMs, embedding models, and multimodal models either in the cloud or on your own premises, and create robust AI-driven applications. It provides a RESTful API compatible with the OpenAI API, a Python SDK, CLI, and WebUI. Furthermore, it integrates third-party developer tools like LangChain, LlamaIndex, and Dify, facilitating model integration and development.
ruby-openai
Fair points but with all due respect completely misses the point and context. My comment was a reply to a new user interested in esphome on a post about esphome.
You're talking about CircuitPython, 35KB web replies, PSRAM, UF2 bootloader, etc. These are comparatively very advanced topics and you didn't mention esphome once.
The comfort and familiarity of Amazon for what is already a new, intimidating, and challenging subject is of immeasurable value for a novice. They can click those links, fill a cart, and have stuff show up tomorrow with all of the usual ease, friendliness, and reliability of Amazon. If they get frustrated or it doesn't work out they can shove it in the box and get a full refund Amazon-style.
You're suggesting wandering all over the internet, ordering stuff from China, multiple vendors, etc while describing a bunch of things that frankly just won't matter to them. I say this as someone who has been an esphome and home assistant user since day one. The approach I described has never failed or remotely bothered me and over the past ~decade I've seen it suggested to new users successfully time and time again.
In terms of PSRAM, to my knowledge the only things it is utilized for in the esphome ecosystem are higher-resolution displays and more advanced voice assistant scenarios that almost always require the -S3 anyway and are very advanced, challenging use cases. I'm very familiar with displays, voice, the S3, and PSRAM, but more on that in a second...
> live with one less LX7 core and no Bluetooth
I'm the founder of Willow[0] and when comparing Willow to esphome the most frequent request we get is supporting bluetooth functionality i.e. esphome bluetooth proxy[1]. This is an extremely popular use case in the esphome/home assistant community. Not having bluetooth while losing a core and paying more is a bigger issue than pin spacing.
It's also a pretty obscure board, and while that's not a big deal to you and me, if you look around at docs, guides, etc., you'll see the cheap-o boards from Amazon are by far the most popular and common (unsurprisingly). Another plus for a new user.
Speaking of Willow (and back to PSRAM again) even the voice assistant satellite functionality of Home Assistant doesn't fundamentally require it - the most popular device doesn't have it either[2].
Very valuable comment with a lot of interesting information, just doesn't apply to context.
[0] - https://heywillow.io/
[1] - https://esphome.io/components/bluetooth_proxy.html
[2] - https://www.home-assistant.io/voice_control/thirteen-usd-voi...
https://github.com/MahmoudAshraf97/whisper-diarization
This project has been alright for transcribing audio with speaker diarization. A bit finicky. The OpenAI model is better than other paid products (Descript, Riverside), so I'm looking forward to trying MacWhisper.
Project mention: Show HN: AI Dub Tool I Made to Watch Foreign Language Videos with My 7-Year-Old | news.ycombinator.com | 2024-02-28
Yes. But Whisper's word-level timings are actually quite inaccurate out of the box. There are some Python libraries that mitigate that. I tested several of them; whisper-timestamped seems to be the best one. [0]
[0] https://github.com/linto-ai/whisper-timestamped
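Once per-word timings are available, turning them into readable captions is mostly bookkeeping: group consecutive words until a long pause or a length limit is hit. A hypothetical sketch (the `text`/`start`/`end` dict keys are assumptions modeled on whisper-timestamped's output, not its exact schema):

```python
def chunk_words(words, max_chars=40, max_gap=0.8):
    """Group word-level timings into caption chunks, breaking on long
    pauses or when a chunk grows past max_chars."""
    chunks, current = [], None
    for w in words:
        if current is None:
            current = {"text": w["text"], "start": w["start"], "end": w["end"]}
            continue
        gap = w["start"] - current["end"]
        if gap > max_gap or len(current["text"]) + 1 + len(w["text"]) > max_chars:
            # Close the current chunk and start a new one at this word.
            chunks.append(current)
            current = {"text": w["text"], "start": w["start"], "end": w["end"]}
        else:
            current["text"] += " " + w["text"]
            current["end"] = w["end"]
    if current is not None:
        chunks.append(current)
    return chunks
```

Given words at 0.0-0.4s, 0.5-0.9s, and 3.0-3.4s, the 2.1-second pause before the third word starts a new chunk, so the first two words end up in one caption and the third in another.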
Or, can it? https://github.com/aallam/openai-kotlin
Project mention: Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot | news.ycombinator.com | 2024-01-29
Everything runs locally, we use:
- WhisperLive for the transcription - https://github.com/collabora/WhisperLive
Project mention: Porting CP/M to the Brother SuperPowerNote Z80 laptop thing [video] | news.ycombinator.com | 2023-12-13
Adding Whisper subtitles was really easy and they're dramatically better than the automatic Google ones (I did it via https://github.com/abdeladim-s/subsai, which was really easy to use). So there is now a reasonably good transcript available in the video comments.
Project mention: Next.js and GPT-4: A Guide to Streaming Generated Content as UI Components | dev.to | 2024-01-25
ModelFusion is an AI integration library that I am developing. It enables you to integrate AI models into your JavaScript and TypeScript applications. You can install it with the following command:
Whisper related posts
- Show HN: Open-source Google Docs for audio transcriptions (Whisper)
- Show HN: I created automatic subtitling app to boost short videos
- Easy video transcription and subtitling with Whisper, FFmpeg, and Python
- SOTA ASR Tooling: Long-Form Transcription
- Deploying whisperX on AWS SageMaker as Asynchronous Endpoint
- Buzz: Transcribe and translate audio offline on your personal computer
- Voxos.ai – An Open-Source Desktop Voice Assistant
A note from our sponsor - InfluxDB
www.influxdata.com | 24 Apr 2024
Index
What are some of the best open-source Whisper projects? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | quivr | 32,240 |
| 2 | whisper.cpp | 30,942 |
| 3 | PaddleSpeech | 10,120 |
| 4 | buzz | 9,778 |
| 5 | whisperX | 8,869 |
| 6 | faster-whisper | 8,723 |
| 7 | embark-framework | 3,775 |
| 8 | cheetah | 3,781 |
| 9 | FunASR | 3,110 |
| 10 | distil-whisper | 3,125 |
| 11 | openai | 2,713 |
| 12 | chatgpt-telegram-bot | 2,686 |
| 13 | inference | 2,512 |
| 14 | ruby-openai | 2,405 |
| 15 | willow | 2,361 |
| 16 | whisper-diarization | 1,985 |
| 17 | whisper-timestamped | 1,501 |
| 18 | yt-whisper | 1,313 |
| 19 | openai-kotlin | 1,264 |
| 20 | auto-subtitle | 1,164 |
| 21 | WhisperLive | 1,143 |
| 22 | subsai | 1,051 |
| 23 | modelfusion | 883 |