whisper
wit
whisper
-
Why I Care Deeply About Web Accessibility And You Should Too
Let’s not talk about local models as the hardware requirements are way beyond most of these people’s reach. I have a MacBook Air with an M2 chip and 8GB of RAM and can hardly run Whisper locally, so I use this HuggingFace space.
-
How I built NotesGPT – a full-stack AI voice note app
Last week, I launched notesGPT, a free and open-source voice note app that has had 35,000 visitors, 7,000 users, and over 1,000 GitHub stars in its first week. It lets you record a voice note, transcribes it using Whisper, and uses Mixtral via Together to extract action items and display them in an action items view. It’s fully open source and comes equipped with authentication, storage, vector search, and action items, and is fully responsive on mobile for ease of use.
-
Ask HN: Can AI break a speech audio into individual words?
I found a pretty good discussion on the topic here:
https://github.com/openai/whisper/discussions/1243
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
There is a plot of language performance on their repo: https://github.com/openai/whisper
I am not aware of a multi-lingual leaderboard for speech recognition models.
- Ask HN: AI that allows you to make phone calls in a language you don't speak?
-
Ask HN: Favorite Podcast Episodes of 2023?
I don't know how OP does it, but here's how I'd do it:
* Generate a transcript by running Whisper against the podcast audio file: https://github.com/openai/whisper
* Upload transcript to ChatGPT and ask it to summarize.
* Automate all the above.
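One practical wrinkle in step two: a full podcast transcript usually exceeds a chat model's context window, so the automation needs to split it before summarizing. A minimal sketch (the chunk size and overlap are assumptions, tune them for the model you use):

```python
def chunk_transcript(text, max_chars=12000, overlap=200):
    """Split a long transcript into overlapping chunks that fit a model's context window."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Overlap chunks slightly so sentences cut at a boundary still appear whole somewhere.
        start = end - overlap
    return chunks
```

Each chunk gets summarized separately, then the per-chunk summaries are concatenated and summarized once more.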
-
Need advice
Ahh, that makes sense. I've been building something like that, but only from other languages into English using Whisper
-
Subtitle is now open-source
Whisper already generates subtitles[0], supporting VTT and SRT, so this is just a thin wrapper around that.
[0]: https://github.com/openai/whisper/blob/e58f28804528831904c3b...
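For context, Whisper's `transcribe()` result carries a `segments` list with `start`, `end`, and `text` fields, which is all an SRT writer needs. A minimal sketch of the timestamp formatting only, not Whisper's actual writer code:

```python
def to_srt(segments):
    """Render Whisper-style segments (start/end in seconds, text) as an SRT string."""
    def ts(t):
        # SRT timestamps look like HH:MM:SS,mmm
        h, rem = divmod(int(t * 1000), 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    blocks = []
    for i, seg in enumerate(segments, 1):
        blocks.append(f"{i}\n{ts(seg['start'])} --> {ts(seg['end'])}\n{seg['text'].strip()}\n")
    return "\n".join(blocks)
```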
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
> although it does require you to wear headphones so the bot doesn't hear itself and get interrupted.
Maybe you can rely on some sort of speaker identification to sort this out?
https://github.com/openai/whisper/discussions/264
-
Federated Finetuning of OpenAI's Whisper on Raspberry Pi 5
v3 only comes in one flavor: large.
I don’t think you’re going to have a good time running the large model on a Pi of any kind.
The large models are 32x slower than the tiny models, roughly.[0]
I’m seeing people report that the Pi 4 can transcribe 30 seconds of audio in somewhere between 30 seconds and 60 seconds with the tiny model.
You can do the math: 32× those times works out to 16 to 32 minutes to transcribe 30 seconds of audio with the large model. Not a good time for most people.
The Pi 5 could be 2x to 3x faster.
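The back-of-the-envelope arithmetic above, as a sketch (the 32× factor is the relative-speed figure from Whisper's README; the Pi timings are the rough community numbers quoted, not benchmarks of mine):

```python
def large_model_estimate(tiny_seconds, slowdown=32):
    """Scale a measured tiny-model transcription time by the large model's relative slowdown."""
    return tiny_seconds * slowdown

# Pi 4, tiny model: roughly 30-60 s to transcribe a 30 s clip
low, high = large_model_estimate(30), large_model_estimate(60)
print(low / 60, "to", high / 60, "minutes")  # 16.0 to 32.0 minutes for the same 30 s clip
```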
I should benchmark my Pi 4 sometime (or Pi 5, if it ever shows up).
[0]: https://github.com/openai/whisper/blob/main/README.md#availa...
wit
-
A list of SaaS, PaaS and IaaS offerings that have free tiers of interest to devops and infradev
wit.ai — NLP for developers.
-
LLM for chatting and command recognition
Hello everyone, I'm new to LLMs and working on my thesis project. The idea is to create a mixed reality voice assistant that can control some devices in a room and that you can have a more intelligent conversation with than other voice assistants (Alexa, Google, etc.).
I initially thought to use wit.ai to extract commands and, if a command isn't recognized, to send a request to the ChatGPT API. Then I realized wit.ai wasn't very accurate once I added multiple intents, for example distinguishing creating a list from adding an item to a list, or removing an item from removing a reminder, even though I added enough training data. Sometimes it identifies the entities correctly but gets the intent wrong, even though no intents with those entities exist in the training data. And if the user doesn't specify a command fully, I'll have to handle a lot of scenarios, like a missing time or date (or both), asking the user for them again, and extracting the values correctly afterwards.
I thought ChatGPT could actually do both if I give it instructions with the prompt and response formats I want, and when I tried it in the web version it seemed to do what I want. Then I realized how expensive their API is just to keep context in a conversation. So my next thought was to use a local LLM. Could someone recommend a small model that can be used for such a case, or what adjustments I should make to get it working? At my university I can use a machine with 32GB RAM and an Nvidia A4000 GPU. Thank you :)
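One common way to prototype this with a local LLM is to ask the model to reply in JSON and validate the reply before acting on it, falling back to a re-ask when it doesn't parse. A minimal sketch; the intent names and schema here are assumptions for illustration, not wit.ai's:

```python
import json

# Hypothetical intent set for the room-assistant scenario described above.
EXPECTED_INTENTS = {"create_list", "add_item", "remove_item", "set_reminder", "remove_reminder"}

def parse_intent(model_output):
    """Validate the LLM's JSON reply; return None so the caller can re-prompt the user."""
    try:
        data = json.loads(model_output)
    except json.JSONDecodeError:
        return None
    if data.get("intent") not in EXPECTED_INTENTS:
        return None
    return {"intent": data["intent"], "entities": data.get("entities", {})}
```

Anything that fails validation (bad JSON, unknown intent, missing time/date entities) becomes a follow-up question to the user instead of a wrong action.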
-
Properly sending a wav file via post request
I can't find anything wrong with the code you posted. It's possible that wit.ai expects some default header that Unity isn't sending (and that you aren't setting).
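For comparison, here is how the same request looks when built with Python's standard library. The endpoint and headers follow wit.ai's HTTP speech API as I recall it, so treat them as assumptions to check; diffing these headers against what Unity actually sends may surface the missing one:

```python
import urllib.request

def build_wit_speech_request(wav_bytes, token):
    """Build (but don't send) a POST to wit.ai's speech endpoint with a raw WAV body."""
    return urllib.request.Request(
        "https://api.wit.ai/speech",
        data=wav_bytes,
        headers={
            "Authorization": f"Bearer {token}",
            # wit.ai needs an explicit audio Content-Type; a wrong or missing one
            # is a common cause of otherwise-correct uploads failing.
            "Content-Type": "audio/wav",
        },
        method="POST",
    )
```

Sending it would be `urllib.request.urlopen(req)`; the point here is just to see the exact header set.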
-
Sample VR AI NPC Project (Godot 3.5.x) - Project files on Github, link below
Even though this was made for VR hopefully the scripts for wit.ai and GPT will be helpful to anyone who wants to explore this topic and doesn't know where to start.
-
Show HN: Using GPT-3 and Whisper to save 40% of doctors’ time
Hey HN,
We're Alex, Martin and Laurent. We previously founded [Wit.ai](http://wit.ai/) (W14), which we sold to Facebook in 2015. Since 2019, we've been working on Nabla (https://www.nabla.com), an intelligent assistant for health practitioners.
When GPT-3 was released in 2020, we investigated its usage in a medical context[0], with mixed results.
Since then we’ve kept exploring opportunities at the intersection of healthcare and AI, and noticed that doctors spend an awful lot of time on medical documentation (writing clinical notes, updating their EHR, etc.).
Today, we're releasing Nabla Copilot, a Chrome extension generating clinical notes from video consultations, to address this problem.
You can try it out, without installation nor sign up, on our demo page: [https://www.nabla.com/copilot-demo/](https://www.nabla.com/copilot-demo/)
Here’s how it works under the hood:
- When a doctor starts a video consultation, our Chrome extension auto-starts itself and listens to the active tab as well as the doctor’s microphone.
-
Instance type or cost for an NLP server?
Thank you, that's helpful, except that currently we're not running our own server. I'm currently using wit.ai for NLP, which is a web API service provided by Meta. I'm trying to budget what it would cost to roll out our own on a private cloud.
-
[Question] Teaching a new skill to an AI model during runtime.
I have an NLP agent that uses wit.ai to "understand" input prompts. Now, wit.ai is just an API server that calculates the most probable intent of a prompt (out of the intents it was trained on) and also extracts the entities present in the prompt.
-
Best Websites For Coders
Wit.ai: Natural Language for Developers
- Looking for the Best and easily customizable free AI library for NodeJs
-
free-for.dev
wit.ai — NLP for developers.
What are some alternatives?
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
localtunnel - expose yourself
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
Codename One - Cross-platform framework for building truly native mobile apps with Java or Kotlin. Write Once Run Anywhere support for iOS, Android, Desktop & Web.
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
Zulip - Zulip server and web application. Open-source team chat that helps teams stay productive and focused.
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
Skylight - Skylight agent for Ruby
whisper.cpp - Port of OpenAI's Whisper model in C/C++
TinyMCE - The world's #1 JavaScript library for rich text editing. Available for React, Vue and Angular
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
Friendica - Friendica Communications Platform