gpt-3
whisper
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt-3
-
GPT-4.5 or GPT-5 being tested on LMSYS?
>I wasn't talking about "state of the art LLMs," I am aware that commercial offerings are much better trained in Spanish. This was a thought experiment based on comments from people testing GPT-3.5 with Swahili.
A thought experiment based on other people's comments about another language. So... no. Fabricating failure modes from constructed ideas about how LLMs work is a frustratingly common occurrence in these kinds of discussions.
>Frustratingly, just a few months ago I read a paper describing how LLMs excessively rely on English-language representations of ideas, but now I can't find it.
Most LLMs are trained overwhelmingly on English. GPT-3's dataset was 92.6% English by word count. https://github.com/openai/gpt-3/blob/master/dataset_statisti...
That the models are as proficient as they are in other languages is evidence enough that knowledge transfer is happening. https://arxiv.org/abs/2108.13349. If you trained a model only on the Catalan tokens in GPT-3's corpus, you'd get a GPT-2-level gibberish model at best.
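For reference, here's a minimal sketch of pulling the per-language shares out of that dataset statistics CSV; the column names ("language", "number_of_words") are my assumption and may not match the actual file.

```python
# Rough sketch: compute each language's share of GPT-3's training words
# from the dataset_statistics CSV linked above. Column names are assumed
# and may differ from the real file.
import csv
import urllib.request

URL = ("https://raw.githubusercontent.com/openai/gpt-3/master/"
       "dataset_statistics/languages_by_word_count.csv")

with urllib.request.urlopen(URL) as resp:
    rows = list(csv.DictReader(resp.read().decode("utf-8").splitlines()))

total = sum(int(r["number_of_words"]) for r in rows)
for r in sorted(rows, key=lambda r: int(r["number_of_words"]), reverse=True)[:10]:
    share = 100 * int(r["number_of_words"]) / total
    print(f"{r['language']:>12}: {share:.3f}%")
```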
Anyway, these are some interesting papers:
How do languages influence each other? Studying cross-lingual data sharing during LLM fine-tuning - https://arxiv.org/pdf/2305.13286
Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer - https://arxiv.org/abs/2404.04042
Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment - https://arxiv.org/abs/2305.05940
It's not like there is perfect transfer, but the idea that there's none at all seemed so ridiculous to me (which is why I asked the first question). Models would be utterly useless in multilingual settings if that were really the case.
-
What are LLMs? An intro into AI, models, tokens, parameters, weights, quantization and more
Large models: everything above 10B parameters. This is where Llama 3, Llama 2, Mixtral 8x22B, GPT-3, and most likely GPT-4 sit.
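As a rough illustration of why parameter count dominates hardware needs, here's a back-of-the-envelope weight-memory estimate under different quantization widths (my own sketch, not from the article):

```python
# Back-of-the-envelope weight-memory estimate: parameters x bytes per weight.
# Ignores KV cache, activations and runtime overhead, so real usage is higher.
def weight_memory_gib(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

for name, params in [("7B", 7), ("13B", 13), ("70B", 70), ("GPT-3 175B", 175)]:
    line = ", ".join(
        f"{bits}-bit: {weight_memory_gib(params, bits):.0f} GiB"
        for bits in (16, 8, 4)
    )
    print(f"{name:>12} -> {line}")
```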
-
Can ChatGPT improve my L2 grammar?
Are generative AI models useful for learning a language, and if so, which languages? Over 90% of ChatGPT's training data was in English; the remaining 10% was split unevenly among 100+ languages. This suggests that output quality will vary from language to language.
-
GPT4 Can’t Ace MIT
I have doubts it was extensively trained on German data. Who knows about GPT-4, but GPT-3 is ~92% English and ~1.5% German, which means it saw "die, motherfucker, die" more often than "die Mutter".
(https://github.com/openai/gpt-3/blob/master/dataset_statisti...)
- I need help.
-
[R] PaLM 2 Technical Report
Catalan was 0.018% of GPT-3's training corpus. https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv.
- I'm seriously concerned that if I lost ChatGPT-4 I would be handicapped
- The responses I got from Bard after asking why 100 times… he was pissed 😂
-
BharatGPT: India's Own ChatGPT
>Certainly it is pleasing that they are not just doing Hindi, but some of these languages must be represented online by a very small corpus of text indeed. I wonder how effectively an LLM can be trained on such a small training set for any given language?
As long as it's not the main language, it doesn't really matter. Besides English (92.6%), the biggest language by representation (word count) is French at 1.8%. Most of the languages GPT-3 knows sit at <0.2% representation.
https://github.com/openai/gpt-3/blob/master/dataset_statisti...
Competence in the main language will bleed into the rest.
- GPT-4 gets a B on Scott Aaronson's quantum computing final exam
whisper
- Creating Automatic Subtitles for Videos with Python, Faster-Whisper, FFmpeg, Streamlit, Pillow
-
Why I Care Deeply About Web Accessibility And You Should Too
Let’s not talk about local models, as the hardware requirements are way beyond most of these people’s reach. I have a MacBook Air with an M2 chip and 8GB of RAM and can hardly run Whisper locally, so I use this HuggingFace space.
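For what it's worth, the smaller checkpoints are more forgiving on 8GB machines; a minimal sketch with the openai-whisper package (model choice and file name are placeholders):

```python
# Minimal local transcription with a small Whisper checkpoint.
# "tiny" (~39M params) or "base" fit far more comfortably in 8GB of RAM
# than the large models; accuracy is correspondingly lower.
import whisper  # pip install openai-whisper (also needs ffmpeg installed)

model = whisper.load_model("tiny")           # or "base" / "small"
result = model.transcribe("recording.mp3")   # placeholder file name
print(result["text"])
```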
-
How I built NotesGPT – a full-stack AI voice note app
Last week, I launched notesGPT, a free and open source voice note app that has had 35,000 visitors, 7,000 users, and over 1,000 GitHub stars in the last week. It lets you record a voice note, transcribes it using Whisper, and uses Mixtral via Together to extract action items and display them in an action items view. It's fully open source and comes equipped with authentication, storage, vector search, action items, and a fully responsive mobile layout for ease of use.
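This isn't the author's actual code, but the described pipeline (Whisper transcription, then an instruction-tuned model extracting action items) could be sketched roughly like this; the Together endpoint, model id, prompt, and file name are assumptions:

```python
# Rough sketch of the described flow: transcribe a voice note with Whisper,
# then ask Mixtral (via Together's OpenAI-compatible API) for action items.
# Endpoint, model id, and env var are assumptions, not notesGPT's code.
import os
import whisper
from openai import OpenAI

transcript = whisper.load_model("base").transcribe("voice_note.mp3")["text"]

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["TOGETHER_API_KEY"],
)
resp = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # assumed model id
    messages=[
        {"role": "system", "content": "Extract a bullet list of action items."},
        {"role": "user", "content": transcript},
    ],
)
print(resp.choices[0].message.content)
```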
-
Ask HN: Can AI break speech audio into individual words?
I found a pretty good discussion on the topic here:
https://github.com/openai/whisper/discussions/1243
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
There is a plot of language performance on their repo: https://github.com/openai/whisper
I am not aware of a multi-lingual leaderboard for speech recognition models.
- Ask HN: AI that allows you to make phone calls in a language you don't speak?
-
Ask HN: Favorite Podcast Episodes of 2023?
I don't know how OP does it, but here's how I'd do it (rough sketch after the list):
* Generate a transcript by running Whisper against the podcast audio file: https://github.com/openai/whisper
* Upload the transcript to ChatGPT and ask it to summarize.
* Automate all the above.
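A minimal sketch of those steps (file name, model, and prompt are placeholders; the OP's actual setup may well differ):

```python
# Sketch of the workflow above: Whisper transcript -> ChatGPT-style summary.
# Very long transcripts may need chunking to fit the model's context window.
import whisper
from openai import OpenAI  # pip install openai

transcript = whisper.load_model("base").transcribe("podcast_episode.mp3")["text"]

client = OpenAI()  # expects OPENAI_API_KEY in the environment
summary = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "user",
         "content": "Summarize this podcast transcript:\n\n" + transcript},
    ],
)
print(summary.choices[0].message.content)
```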
-
Need advice
Ahh, that makes sense. I've been building something like that, but only from other languages into English, using Whisper.
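Whisper's built-in `translate` task only goes into English, which matches that limitation; a minimal example (file name is a placeholder):

```python
# Whisper can transcribe in the source language or translate into English;
# translation in the other direction isn't supported by the model itself.
import whisper

model = whisper.load_model("small")
result = model.transcribe("spanish_audio.mp3", task="translate")  # placeholder file
print(result["text"])  # English text
```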
-
Subtitle is now open-source
Whisper already generates subtitles[0], supporting VTT and SRT, so this is just a thin wrapper around that.
[0]: https://github.com/openai/whisper/blob/e58f28804528831904c3b...
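For reference, a small sketch of getting SRT output from the Python API (the CLI does the same with --output_format srt); the get_writer helper exists in recent openai-whisper releases, but its call signature has varied between versions:

```python
# Sketch: use Whisper's own SRT/VTT writers instead of formatting segments by hand.
# The get_writer call signature may differ across openai-whisper versions.
import whisper
from whisper.utils import get_writer

model = whisper.load_model("base")
result = model.transcribe("video_audio.mp3")   # placeholder file

writer = get_writer("srt", ".")                # also accepts "vtt", "txt", "json"
writer(result, "video_audio.mp3")              # writes video_audio.srt to the output dir
```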
-
StyleTTS2 – open-source Eleven Labs quality Text To Speech
> although it does require you to wear headphones so the bot doesn't hear itself and get interrupted.
Maybe you can rely on some sort of speaker identification to sort this out?
https://github.com/openai/whisper/discussions/264
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
silero-vad - Silero VAD: pre-trained enterprise-grade Voice Activity Detector
DALLE-mtf - Open-AI's DALL-E for large scale training in mesh-tensorflow.
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
v-diffusion-pytorch - v objective diffusion inference code for PyTorch.
whisper.cpp - Port of OpenAI's Whisper model in C/C++
dalle-2-preview
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.