whisper-asr-webservice vs modal-examples

| | whisper-asr-webservice | modal-examples |
|---|---|---|
| Mentions | 11 | 9 |
| Stars | 1,664 | 569 |
| Growth | - | 5.1% |
| Activity | 7.8 | 9.5 |
| Latest commit | 13 days ago | 3 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
whisper-asr-webservice
- How I converted a podcast into a knowledge base using Orama search and OpenAI whisper and Astro
-
Bazarr AI subs
Check https://github.com/openai/whisper & https://github.com/ahmetoner/whisper-asr-webservice
-
Bulk download subtitles
I see that bazarr has already been mentioned. If there are no subtitles available, you can also generate them by connecting bazarr to the whisper AI model, which you can self-host locally. I run everything in containers; I've tried it a few times and it works quite well for me! It does, however, use some computational resources to generate the subtitles, and how long processing takes depends on the chosen model accuracy.
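A minimal compose sketch for self-hosting the service the commenter describes, assuming the `onerahmet/openai-whisper-asr-webservice` Docker Hub image and its `ASR_MODEL`/`ASR_ENGINE` environment variables (the model name and port here are illustrative defaults, not from the comment):

```yaml
# docker-compose.yml sketch (assumed image/env names; verify against the repo's README)
services:
  whisper-asr:
    image: onerahmet/openai-whisper-asr-webservice:latest
    ports:
      - "9000:9000"          # service listens on 9000; point bazarr's whisper provider here
    environment:
      - ASR_MODEL=base       # larger models are more accurate but slower
      - ASR_ENGINE=openai_whisper
```

Bazarr's whisper subtitle provider can then be pointed at `http://<host>:9000`.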
-
Writeout.ai – Transcribe and translate any audio files. Free and open source
You (essentially) need GPU but here you go:
https://github.com/ahmetoner/whisper-asr-webservice
For your requirements the medium.en model (max) should be satisfactory.
-
Whispers AI Modular Future
What utilities related to Whisper do you wish existed? What have you had to build yourself?
On the end user application side, I wish there was something that let me pick a podcast of my choosing, get it fully transcribed, and get embeddings search plus answer Q&A on top of that podcast or set of chosen podcasts. I've seen ones for specific podcasts, but I'd like one where I can choose the podcast. (Probably won't build it)
Also on the end user side, I wish there was an Otter alternative (still paid $30/mo, but unlimited minutes per month) that had longer transcription limits. (Started building this, not much interest from users though)
Things I've seen on the dev tool side:
Gladia (API call version of Whisper)
Whisper.cpp
Whisper webservice (https://github.com/ahmetoner/whisper-asr-webservice) - via this thread
Live microphone demo (not real time, it still does it in chunks) https://github.com/mallorbc/whisper_mic
Streamlit UI https://github.com/hayabhay/whisper-ui
Whisper playground https://github.com/saharmor/whisper-playground
Real time whisper https://github.com/shirayu/whispering
Whisper as a service https://github.com/schibsted/WAAS
Improved timestamps and speaker identification https://github.com/m-bain/whisperX
MacWhisper https://goodsnooze.gumroad.com/l/macwhisper
Crossplatform desktop Whisper that supports semi-realtime https://github.com/chidiwilliams/buzz
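The embeddings-search idea mentioned above boils down to ranking transcript segments by cosine similarity to a query embedding. A minimal sketch using toy 2-d vectors (real systems would use model-generated embeddings, e.g. from a sentence-embedding API; the vectors here are illustrative only):

```python
import math

def cosine_similarity(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, segments):
    # segments: list of (text, embedding) pairs; returns texts ranked by similarity
    ranked = sorted(segments, key=lambda s: cosine_similarity(query_vec, s[1]), reverse=True)
    return [text for text, _ in ranked]

# toy "embeddings" standing in for real model output
segments = [
    ("intro music", [1.0, 0.0]),
    ("guest talks about GPUs", [0.0, 1.0]),
]
print(search([0.1, 0.9], segments)[0])  # → "guest talks about GPUs"
```

A Q&A layer would then feed the top-ranked segments to an LLM as context.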
-
I made a free transcription service powered by Whisper AI
I think there's been talk of doing speaker diarization with whisper-asr-webservice[0], which is also written in Python and should be able to make use of goodies such as pyannote-audio, py-webrtcvad, etc.
Whisper is great, but at the point where we get to kludging various things together it starts to make more sense to use something like Nvidia NeMo[1], which was built with all of this in mind and more.
[0] - https://github.com/ahmetoner/whisper-asr-webservice
[1] - https://github.com/NVIDIA/NeMo
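The "kludging together" of diarization and Whisper usually reduces to overlap-matching transcript segments against speaker turns. A minimal sketch of that merge step (the segment/turn dict shapes here are hypothetical, not any library's actual output format):

```python
def assign_speakers(transcript_segments, speaker_turns):
    # label each transcript segment with the speaker whose turn overlaps it most
    labeled = []
    for seg in transcript_segments:
        best_speaker, best_overlap = None, 0.0
        for turn in speaker_turns:
            overlap = min(seg["end"], turn["end"]) - max(seg["start"], turn["start"])
            if overlap > best_overlap:
                best_speaker, best_overlap = turn["speaker"], overlap
        labeled.append({**seg, "speaker": best_speaker})
    return labeled

segments = [
    {"start": 0.0, "end": 4.0, "text": "hi there"},
    {"start": 4.0, "end": 9.0, "text": "hello back"},
]
turns = [
    {"start": 0.0, "end": 4.5, "speaker": "A"},
    {"start": 4.5, "end": 9.0, "speaker": "B"},
]
print(assign_speakers(segments, turns))
```

Tools like whisperX do a more careful word-level version of this alignment.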
- whisper-asr-webservice-client - A self-hosted OpenAI Whisper API client
-
Show HN: A self-hosted OpenAI Whisper API client
(read the docs in the repo)
In terms of me not storing your data for this (I don't) I guess you'll just have to trust me?
[0] - https://github.com/ahmetoner/whisper-asr-webservice
-
[P] OpenAI Whisper ASR Webservice API released
For more details: https://github.com/ahmetoner/whisper-asr-webservice
modal-examples
-
Show HN: Real-time image autocomplete in <100 lines of code with SDXL Lightning
We made a small app for SDXL Lightning, running your own Python code on GPUs. It generates images in real time.
https://potatoes.ai/
We know there was a fal.ai post yesterday that got a lot of interest; we also made this demo yesterday and didn't share it then. Just wanted to mention it as an alternative for people who like running their own code and custom models instead of using a prebuilt API provider.
The backend code is open-source too and you can deploy it yourself: https://github.com/modal-labs/modal-examples/blob/main/06_gpu_and_ml/stable_diffusion/stable_diffusion_xl_lightning.py
-
Our startup has docs issues and it is costing us prospects. What things can you share to help us?
The startup I work at is relatively good at documentation engineering. We have written code to test the code snippets in docstrings (https://github.com/modal-labs/pytest-markdown-docs) and we have written code to do synthetic monitoring testing of the examples in our examples repo (https://github.com/modal-labs/modal-examples). We are also diligent about using Python's warnings library to handle API deprecation, and we treat deprecation warnings as errors internally, ensuring our own code samples and examples stay up-to-date.
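A toy version of the snippet-testing idea described above: pull every Python fence out of a markdown string and exec it, so a broken example raises. This is just the concept, not the actual pytest-markdown-docs implementation:

```python
import re
import textwrap

# match fenced ```python blocks (backticks written as `{3}` to keep the pattern readable)
PY_FENCE = re.compile(r"`{3}python\n(.*?)`{3}", re.DOTALL)

def run_markdown_snippets(markdown_text):
    # extract each python fence and exec it in a fresh namespace;
    # any exception (including AssertionError) fails the "test"
    for i, code in enumerate(PY_FENCE.findall(markdown_text)):
        exec(compile(textwrap.dedent(code), f"<snippet {i}>", "exec"), {})

fence = "`" * 3  # build the fence marker so this example nests cleanly in docs
doc = "Usage:\n\n" + fence + "python\nx = 1 + 1\nassert x == 2\n" + fence + "\n"
run_markdown_snippets(doc)  # raises if any snippet fails
```

Hooking something like this into pytest collection is what turns docs into a test suite.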
-
OpenLLaMA: An Open Reproduction of LLaMA
You can get it running with one Python script on Modal.com :)
https://github.com/modal-labs/modal-examples/blob/main/06_gp...
-
Whispers AI Modular Future
This demo lets you choose the podcast, and is open-source: https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
https://github.com/modal-labs/modal-examples/tree/main/06_gp...
Transcribes 1hr of audio in roughly 1min, using parallelisation across CPUs.
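The speedup comes from fan-out: the actual Modal example splits audio on silence (via ffmpeg) and transcribes the pieces in parallel containers. The windowing idea alone can be sketched with fixed-size overlapping windows (a simplification of what the example really does):

```python
def make_chunks(duration_s, chunk_s=120.0, overlap_s=2.0):
    # split [0, duration_s] into overlapping windows that can be transcribed in parallel
    chunks, start = [], 0.0
    while start < duration_s:
        end = min(start + chunk_s, duration_s)
        chunks.append((start, end))
        if end >= duration_s:
            break
        start = end - overlap_s  # small overlap so words on the boundary aren't lost
    return chunks

# a 1-hour episode becomes ~30 windows, each of which can go to its own worker
print(len(make_chunks(3600.0)))
```

Each (start, end) window would then be cut out of the audio and mapped over workers, with the per-chunk transcripts stitched back in order.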
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
This demo is open-source: https://github.com/modal-labs/modal-examples/tree/main/06_gp....
https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
-
Show HN: Stable Diffusion Pokémon Cards
It's become so easy to stick together ML models, often without training most or all of them yourself.
*video demo:* https://youtu.be/mQsMuM8d4Qc
*cloud platform:* https://modal.com
*code*: https://github.com/modal-labs/modal-examples/tree/main/06_gp...
-
How can machine learning help us learn languages better?
Transcription - OpenAI just released Whisper. Check out what it can do with podcasts
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
Here's the source code.
What are some alternatives?
whisper.cpp - Port of OpenAI's Whisper model in C/C++
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
generate-subtitles - Generate transcripts for audio and video content with a user-friendly UI, powered by OpenAI's Whisper, with automatic translations and yt-dlp integration for downloading videos automatically
WAAS - Whisper as a Service (GUI and API with queuing for OpenAI Whisper)
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
EasyLM - Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
whisper-asr-webservice-client
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
gitbar-2023 - New release of gitbar website
brev-cli - Connect your laptop to cloud computers. Follow to stay updated about our product