SpotifyTranscripts vs modal-examples

| | SpotifyTranscripts | modal-examples |
|---|---|---|
| Mentions | 1 | 9 |
| Stars | 140 | 572 |
| Growth | - | 5.6% |
| Activity | 6.3 | 9.5 |
| Latest commit | 5 months ago | 4 days ago |
| Language | JavaScript | Python |
| License | - | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
SpotifyTranscripts
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
This is great. I was working on something similar over the last few days, but since it is hard to cover every podcast, I stopped to think of a way to niche down. I feel your pain with GPUs and the scalability needed to transcribe podcasts.
I was thinking of adding something like this for the UI https://github.com/johan-akerman/SpotifyTranscripts in case you find it useful.
Good luck! It is a really nice project.
modal-examples
-
Show HN: Real-time image autocomplete in <100 lines of code with SDXL Lightning
We made a small app with SDXL Lightning that runs our own Python code on GPUs and generates images in real time.
https://potatoes.ai/
We know there was a fal.ai post yesterday that got a lot of interest; we also made this demo yesterday and didn't share it. We just wanted to mention it as an alternative for people who prefer running their own code and custom models over using a prebuilt API provider.
The backend code is open-source too and you can deploy it yourself: https://github.com/modal-labs/modal-examples/blob/main/06_gpu_and_ml/stable_diffusion/stable_diffusion_xl_lightning.py
-
Our startup has docs issues and it is costing us prospects. What things can you share to help us?
The startup I work at is relatively good at documentation engineering. We have written code to test the code snippets in docstrings (https://github.com/modal-labs/pytest-markdown-docs) and code to do synthetic monitoring of the examples in our examples repo (https://github.com/modal-labs/modal-examples). We are also diligent about using Python's warnings library to handle API deprecation, and we treat deprecation warnings as errors internally, ensuring our own code samples and examples stay up-to-date.
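The deprecation pattern described above is easy to sketch. Everything below is illustrative: the `transcribe` function and its `use_gpu` flag are hypothetical, not Modal's API.

```python
import warnings

def transcribe(audio_path, model="base", use_gpu=None):
    """Hypothetical library function, used only to illustrate the pattern."""
    if use_gpu is not None:
        # Old callers still work, but now see a DeprecationWarning.
        warnings.warn(
            "'use_gpu' is deprecated; pass device='cuda' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
    return f"transcript of {audio_path} using {model}"
```

In the internal test suite, a pytest setting like `filterwarnings = error::DeprecationWarning` promotes any use of the deprecated flag to a test failure, so the team's own docs and examples cannot keep calling a stale API unnoticed.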
-
OpenLLaMA: An Open Reproduction of LLaMA
You can get it running with one Python script on Modal.com :)
https://github.com/modal-labs/modal-examples/blob/main/06_gp...
-
Whisper's AI Modular Future
This demo lets you choose the podcast, and is open-source: https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
https://github.com/modal-labs/modal-examples/tree/main/06_gp...
Transcribes 1hr of audio in roughly 1min, using parallelisation across CPUs.
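The speedup comes from splitting the audio into chunks and transcribing them concurrently. Here is a generic sketch of that fan-out pattern, not the demo's actual code (which fans work out across containers with Modal's `.map`); `transcribe_segment` is a stand-in for a per-chunk Whisper call:

```python
from concurrent.futures import ThreadPoolExecutor

SEGMENT_SECONDS = 30  # fixed chunk length; a real pipeline might split on silence

def make_segments(duration_s):
    """Split [0, duration_s) into consecutive (start, end) windows."""
    return [
        (start, min(start + SEGMENT_SECONDS, duration_s))
        for start in range(0, duration_s, SEGMENT_SECONDS)
    ]

def transcribe_segment(segment):
    """Stand-in for a per-chunk Whisper call running on one worker."""
    start, end = segment
    return f"[{start}-{end}s] ..."

def transcribe_parallel(duration_s, workers=8):
    """Fan chunks out to workers and stitch the results back in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map preserves input order, so the transcript
        # reassembles correctly even if chunks finish out of order.
        return " ".join(pool.map(transcribe_segment, make_segments(duration_s)))
```

Because each chunk is independent, wall-clock time scales down roughly with the number of workers, which is how an hour of audio can finish in about a minute.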
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
This demo is open-source: https://github.com/modal-labs/modal-examples/tree/main/06_gp...
https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
-
Show HN: Stable Diffusion Pokémon Cards
It's become so easy to stitch together ML models, often without training most or all of them yourself.
*video demo:* https://youtu.be/mQsMuM8d4Qc
*cloud platform:* https://modal.com
*code*: https://github.com/modal-labs/modal-examples/tree/main/06_gp...
-
How can machine learning help us learn languages better?
Transcription - OpenAI just released Whisper. Check out what it can do with podcasts.
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
Here's the source code.
What are some alternatives?
Saveddit - Search and Filter through your Saved Reddit Posts
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
rapviz - 🔥🎤 See your bars broken down right in the browser. Powered by Spotify, Genius, and Railway.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
autocropper.io - API to automatically crop and output individual photos from multi-photo scans (deprecated)
WAAS - Whisper as a Service (GUI and API with queuing for OpenAI Whisper)
threaddit - A full-stack Reddit clone built with Flask and PostgreSQL on the backend and React.js on the frontend.
EasyLM - Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
realtime-transcription-playground - A real-time transcription project using React and socketio
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
brev-cli - Connect your laptop to cloud computers. Follow to stay updated about our product