| | get-beam | whisper-turbo |
|---|---|---|
| Mentions | 9 | 11 |
| Stars | 89 | 1,594 |
| Growth | - | - |
| Activity | 7.9 | 8.9 |
| Last Commit | 21 days ago | 3 months ago |
| Language | Shell | TypeScript |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
get-beam
- Ask HN: Where to find an env with GPU for model training?
You should check out https://beam.cloud (I'm the founder); it'll give you access to plenty of cloud GPU resources for training or inference.
Right now it's pretty hard to get GPU quota on AWS/GCP, so hopefully this is useful for you.
- Cloudflare launches new AI tools to help customers deploy and run models
Cloudflare AI and Replicate are great for running off-the-shelf models, but anything custom is going to incur a 10+ minute cold start.
For running custom fine-tuned models on serverless, you could look into https://beam.cloud, which is optimized for serving custom models with extremely fast cold starts (I'm a little biased since I work there, but the numbers don't lie).
- Workers AI: serverless GPU-powered inference on Cloudflare’s global network
Serverless only works if the cold boot is fast. For context, my company runs a serverless cloud GPU product called https://beam.cloud, which we've optimized for fast cold starts. In production we see Whisper cold start in under 10s across model sizes. A lot of our users are running semi-real-time STT, and this seems to be working well for them.
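Cold-start claims like this are easy to sanity-check from a client. Below is a minimal TypeScript sketch that times a first (cold) and a second (warm) request against a serverless transcription endpoint; the URL and request payload are hypothetical placeholders, not Beam's actual API.

```typescript
// Minimal latency probe for a serverless transcription endpoint.
// ENDPOINT and the request body are hypothetical placeholders, not Beam's real API.
const ENDPOINT = "https://example-workspace.beam.cloud/transcribe"; // hypothetical URL

async function timeRequest(label: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ audio_url: "https://example.com/sample.wav" }), // placeholder payload
  });
  const seconds = (performance.now() - start) / 1000;
  console.log(`${label}: HTTP ${res.status} after ${seconds.toFixed(1)}s`);
}

// The first call lands on a cold container (includes model load); the second should hit a warm one.
await timeRequest("cold");
await timeRequest("warm");
```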
- Ultrafast serverless GPU runtime for custom SD models
I’m Eli, and my co-founder and I built Beam to run workloads on serverless cloud GPUs with hot reloading, autoscaling, and (of course) fast cold start. You don’t need Docker or AWS to use it, and everyone who signs up gets 10 hours of free GPU credit to try it out.
- [D] We built Beam: An ultrafast serverless GPU runtime
GitHub with example apps and tutorials: https://github.com/slai-labs/get-beam/tree/main/examples
- How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the Llama 2 7B model, using as little code as possible. We will be using the Alpaca LoRA training script, which automates the process of fine-tuning the model, and for the GPU we will be using Beam.
- Run CodeLlama on a Serverless GPU
whisper-turbo
- Whisper Turbo: speech recognition in the browser using WebGPU
- Show HN: Shadeup – A language that makes WebGPU easier
Even just the ability to accelerate LLMs in the browser on any device, without an installation, is awesome.
For example, fleetwood.dev has a really cool project that does audio transcription in the browser on the GPU: https://whisper-turbo.com/#
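One practical detail with in-browser inference like this: WebGPU isn't available everywhere yet, so it's worth feature-detecting before downloading model weights. A small TypeScript check using the standard WebGPU API (unrelated to whisper-turbo's own loading code) might look like this:

```typescript
// Detect WebGPU support before trying to run an in-browser model.
// Uses only the standard WebGPU API; the fallback message is just illustrative.
async function hasWebGPU(): Promise<boolean> {
  const gpu = (navigator as any).gpu;          // typed properly if @webgpu/types is installed
  if (!gpu) return false;                      // browser does not expose WebGPU at all
  const adapter = await gpu.requestAdapter();  // null if no compatible GPU/driver is found
  return adapter !== null;
}

if (await hasWebGPU()) {
  console.log("WebGPU available: safe to load the in-browser Whisper model");
} else {
  console.log("No WebGPU: fall back to a server-side transcription endpoint");
}
```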
- Run Whisper on WebGPU with a few lines of JS
- Run LLMs on my own Mac, fast and efficient. Only 2 MBs
- Distil-Whisper: distilled version of Whisper that is 6 times faster, 49% smaller
You'd be surprised how capable old GPUs are! I've had great success with people running Whisper-Turbo in the browser on really old hardware: https://whisper-turbo.com/
- Running Whisper on Rust and WebGPU
- Workers AI: serverless GPU-powered inference on Cloudflare’s global network
Whisper large is only 1.5B params; why not run it client-side with something like https://github.com/FL33TW00D/whisper-turbo
(Disclaimer: I am the author)
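For a sense of what "run it client side" looks like in practice, here is a rough TypeScript sketch. The `SessionManager`, `AvailableModels`, and `transcribe` names are assumptions loosely based on whisper-turbo's README and may not match the current API; check the repo for the real interface.

```typescript
// Rough sketch of client-side transcription with an in-browser Whisper model.
// SessionManager, AvailableModels, and transcribe() are ASSUMED names based on
// whisper-turbo's README; consult the repo for the actual, current API.
import { SessionManager, AvailableModels } from "whisper-turbo";

async function transcribeInBrowser(audio: Uint8Array): Promise<void> {
  const session = await new SessionManager().loadModel(
    AvailableModels.WHISPER_TINY,                   // smaller checkpoints download and load fastest
    () => console.log("model loaded"),              // called once the model is ready
    (progress: number) => console.log(`downloading: ${progress}%`)
  );

  if (session.isOk) {
    // Segments stream back as they are decoded, so partial results appear early.
    await session.value.transcribe(audio, true, (segment: { text: string }) =>
      console.log(segment.text)
    );
  }
}
```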
- Whisper Turbo – Run Whisper Directly in the Browser with Rust and WebGPU
- Whisper Turbo: transcribe 20x faster than realtime using Rust and WebGPU
What are some alternatives?
discourse-ai
faster-whisper - Faster Whisper transcription with CTranslate2
finetune-llama2
WhisperInput - Offline voice input panel & keyboard with punctuation for Android.
store-sentry - Manage access to in-app purchase content hosted in Cloudflare based on App Store Server Notifications
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
alpaca-lora - Instruct-tune LLaMA on consumer hardware
willow - Open source, local, and self-hosted Amazon Echo/Google Home competitive Voice Assistant alternative
project-2501 - Project 2501 is an open-source AI assistant, written in C++.
whisper.cpp - Port of OpenAI's Whisper model in C/C++
CTranslate2 - Fast inference engine for Transformer models