FlexGen vs whisper-asr-webservice

| | FlexGen | whisper-asr-webservice |
|---|---|---|
| Mentions | 39 | 11 |
| Stars | 9,007 | 1,644 |
| Growth | 0.8% | - |
| Activity | 3.0 | 7.8 |
| Latest commit | 15 days ago | 8 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FlexGen
- Run 70B LLM Inference on a Single 4GB GPU with This New Technique
- Colorful Custom RTX 4060 Ti GPU Clocks Outed, 8 GB VRAM Confirmed
- Local Alternatives of ChatGPT and Midjourney
LLaMA, Pythia, RWKV, Flan-T5 (self-hosted), FlexGen
- FlexGen: Running large language models on a single GPU
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
> With no real knowledge of LLMs, having only recently started to understand what LLM terms like 'model, inference, LLM model, instruction set, fine tuning' mean, what else do you think is required to make a tool like yours?
This was me a few weeks ago. I got interested in all this when FlexGen (https://github.com/FMInference/FlexGen) was announced, which made it possible to run inference with the OPT model on consumer hardware. I'm an avid user of Stable Diffusion, and I wanted to see if I could have an SD equivalent of ChatGPT.
Not understanding the details of hyperparameters or terminology, I basically asked ChatGPT to explain to me what these things are:
Explain to someone who is a software engineer with limited knowledge of ML terms or linear algebra, what is "feed forward" and "self-attention" in the context of ML and large language models. Provide examples when possible.
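Since that prompt asks for examples, here is a minimal NumPy sketch of the scaled dot-product self-attention step those terms refer to. It is purely illustrative; the shapes and weight names are invented for the example, and this is not FlexGen's or OPT's actual code.

```python
# Purely illustrative: a single-head scaled dot-product self-attention step,
# the operation the prompt above asks ChatGPT to explain. Shapes and weight
# names are invented for the example; this is not FlexGen's or OPT's code.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, w_q, w_k, w_v):
    """tokens: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_head)."""
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])   # how strongly each token attends to every other
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v                        # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```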
- Could this new FlexGen be used in place of GPTQ? Or is this different?
- OpenAI is expensive
whisper-asr-webservice
- How I converted a podcast into a knowledge base using Orama search, OpenAI Whisper, and Astro
- Bazarr AI subs
Check https://github.com/openai/whisper & https://github.com/ahmetoner/whisper-asr-webservice
- Bulk download subtitles
I see that bazarr has already been mentioned. If there are no subtitles available, you can also generate them by connecting bazarr to the Whisper AI model, which you can self-host locally. I run everything in containers; I've tried it a few times and it works quite well for me! It does, however, use some computational resources to generate the subtitles, and how long processing takes depends on the chosen model's accuracy.
- Writeout.ai – Transcribe and translate any audio files. Free and open source
You (essentially) need a GPU, but here you go:
https://github.com/ahmetoner/whisper-asr-webservice
For your requirements, the medium.en model (at most) should be satisfactory.
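For context, here is a minimal client sketch of what calling such a self-hosted instance could look like, assuming a server started from the repo's Docker image (e.g. with ASR_MODEL=medium.en) and the /asr endpoint with an audio_file form field described in the project's README; verify the exact parameters against the /docs Swagger page of your own deployment.

```python
# Minimal client sketch for a self-hosted whisper-asr-webservice instance.
# Assumes a server already running locally, e.g. started from the repo's
# Docker image with ASR_MODEL=medium.en, exposing the /asr endpoint on port 9000.
# Parameter and field names follow the project's README/Swagger docs as I read
# them; verify against the /docs page of your own deployment.
import requests

ASR_URL = "http://localhost:9000/asr"  # your self-hosted instance

def transcribe(path: str, output: str = "txt") -> str:
    with open(path, "rb") as f:
        resp = requests.post(
            ASR_URL,
            params={"task": "transcribe", "language": "en",
                    "encode": "true", "output": output},
            files={"audio_file": f},
        )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(transcribe("episode.mp3"))       # plain-text transcript
    # transcribe("episode.mp3", "srt")     # or SubRip subtitles
```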
- Whispers of A.I.'s Modular Future
What utilities related to Whisper do you wish existed? What have you had to build yourself?
On the end-user application side, I wish there was something that let me pick a podcast of my choosing, get it fully transcribed, and get embeddings search plus question answering on top of that podcast or set of chosen podcasts. I've seen ones for specific podcasts, but I'd like one where I can choose the podcast. (Probably won't build it.)
Also on the end-user side, I wish there was an Otter alternative (still paid, $30/mo, but unlimited minutes per month) that had longer transcription limits. (Started building this, but there wasn't much interest from users.)
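A rough sketch of the pipeline that first wish implies, under assumptions of my own (openai/whisper for transcription, sentence-transformers for the embeddings, and the final answer-generation step left out entirely): transcribe an episode into timestamped segments, embed them, and run a similarity search over the embeddings.

```python
# Sketch of the wished-for pipeline: transcribe an episode, embed the segments,
# and run a similarity search over them. Library choices (openai/whisper,
# sentence-transformers) are assumptions for illustration, not an existing tool
# from this thread; the final answer-generation step is omitted.
import numpy as np
import whisper
from sentence_transformers import SentenceTransformer

asr = whisper.load_model("base")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

segments = asr.transcribe("episode.mp3")["segments"]  # [{'start', 'end', 'text'}, ...]
texts = [s["text"].strip() for s in segments]
vectors = embedder.encode(texts, normalize_embeddings=True)

def search(question: str, k: int = 3):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = vectors @ q  # cosine similarity, since the embeddings are normalized
    for i in np.argsort(scores)[::-1][:k]:
        print(f'{segments[i]["start"]:7.1f}s  {texts[i]}')

search("What did the guest say about pricing?")
```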
Things I've seen on the dev tool side:
Gladia (API call version of Whisper)
Whisper.cpp
Whisper webservice (https://github.com/ahmetoner/whisper-asr-webservice) - via this thread
Live microphone demo (not real time, it still does it in chunks) https://github.com/mallorbc/whisper_mic
Streamlit UI https://github.com/hayabhay/whisper-ui
Whisper playground https://github.com/saharmor/whisper-playground
Real time whisper https://github.com/shirayu/whispering
Whisper as a service https://github.com/schibsted/WAAS
Improved timestamps and speaker identification https://github.com/m-bain/whisperX
MacWhisper https://goodsnooze.gumroad.com/l/macwhisper
Cross-platform desktop Whisper that supports semi-realtime transcription https://github.com/chidiwilliams/buzz
- I made a free transcription service powered by Whisper AI
I think there's been talk of doing speaker diarization with whisper-asr-webservice[0], which is also written in Python and should be able to make use of goodies such as pyannote-audio, py-webrtcvad, etc.
Whisper is great, but at the point where we get to kludging various things together, it starts to make more sense to use something like Nvidia NeMo[1], which was built with all of this in mind and more.
[0] - https://github.com/ahmetoner/whisper-asr-webservice
[1] - https://github.com/NVIDIA/NeMo
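To make the "kludging together" concrete, here is a rough sketch of the kind of pipeline being described: openai/whisper for timestamped segments plus pyannote-audio for speaker turns, merged by a naive overlap rule. The model names, the Hugging Face token requirement, and the merging rule are all assumptions for illustration, not something whisper-asr-webservice ships.

```python
# Rough sketch of combining Whisper segments with pyannote-audio speaker turns,
# i.e. the "kludging together" the comment describes. This is not a feature of
# whisper-asr-webservice; model names, the Hugging Face token, and the naive
# overlap rule are assumptions for illustration.
import whisper
from pyannote.audio import Pipeline

AUDIO = "meeting.wav"

# 1. Transcribe: each Whisper segment carries start/end timestamps and text.
segments = whisper.load_model("medium.en").transcribe(AUDIO)["segments"]

# 2. Diarize: pyannote yields (turn, track, speaker); recent pyannote pipelines
#    require a Hugging Face access token.
diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization",
                                    use_auth_token="hf_...")  # placeholder token
turns = list(diarizer(AUDIO).itertracks(yield_label=True))

def speaker_at(t: float) -> str:
    """Naive merge rule: whichever speaker's turn contains time t."""
    for turn, _, speaker in turns:
        if turn.start <= t <= turn.end:
            return speaker
    return "unknown"

for seg in segments:
    mid = (seg["start"] + seg["end"]) / 2
    print(f'[{speaker_at(mid)}] {seg["text"].strip()}')
```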
- whisper-asr-webservice-client - A self-hosted OpenAI Whisper API client
- Show HN: A self-hosted OpenAI Whisper API client
(read the docs in the repo)
In terms of me not storing your data for this (I don't), I guess you'll just have to trust me?
[0] - https://github.com/ahmetoner/whisper-asr-webservice
- [P] OpenAI Whisper ASR Webservice API released
For more details: https://github.com/ahmetoner/whisper-asr-webservice
What are some alternatives?
llama - Inference code for Llama models
whisper.cpp - Port of OpenAI's Whisper model in C/C++
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
text-generation-inference - Large Language Model Text Generation Inference
generate-subtitles - Generate transcripts for audio and video content with a user-friendly UI, powered by OpenAI's Whisper, with automatic translations and automatic video downloads via yt-dlp integration
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
whisper-asr-webservice-client - A self-hosted OpenAI Whisper API client
audiolm-pytorch - Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch
gitbar-2023 - New release of gitbar website