| | community-events | gpt-3 |
|---|---|---|
| Mentions | 8 | 41 |
| Stars | 379 | 9,406 |
| Growth | 2.1% | - |
| Activity | 7.2 | 3.5 |
| Last Commit | 5 months ago | over 3 years ago |
| Language | Jupyter Notebook | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
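The exact formula behind these numbers isn't published, but a recency-weighted activity score of this kind can be sketched roughly as below (the exponential decay and the 90-day half-life are assumptions, not the tracker's real weighting):

```python
from datetime import datetime, timezone
from math import exp

def activity_score(commit_dates, half_life_days=90):
    """Recency-weighted commit count: a commit from today contributes ~1,
    older commits decay exponentially toward 0 (half-life is an assumption)."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400
        score += exp(-age_days * 0.693 / half_life_days)  # 0.693 ~ ln(2)
    return score

def monthly_star_growth(stars_now, stars_a_month_ago):
    """Month-over-month growth in stars, as a percentage."""
    return (stars_now - stars_a_month_ago) / stars_a_month_ago * 100
```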
community-events
- Controlling Stable Diffusion with JAX & Diffusers using TPU v4
The best applications to come out of this sprint will receive prizes. You can find more information here. If you want to get started, simply join huggingface.co/discord, take the 🧨 Diffusers role, head to #jax-diffusers-ideas to share your idea or join one of the teams, and fill out this form: https://forms.gle/t3M7aNPuLL9V1sfa9
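For a sense of what the sprint covers, inference with the Flax ControlNet pipeline in Diffusers looks roughly like the sketch below (the Canny checkpoint, prompt, and edge-map path are illustrative; the sprint repo has the actual training and inference scripts):

```python
import jax
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from PIL import Image
from diffusers import FlaxControlNetModel, FlaxStableDiffusionControlNetPipeline

# Conditioning image (a pre-computed Canny edge map) and prompt -- illustrative values.
canny_image = Image.open("canny_edges.png")
prompt = "a photo of a rusty robot, studio lighting"

controlnet, controlnet_params = FlaxControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32
)
pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32
)
params["controlnet"] = controlnet_params

# One sample per TPU/GPU device; weights are replicated, inputs are sharded.
num_devices = jax.device_count()
prompt_ids = pipe.prepare_text_inputs([prompt] * num_devices)
image_ids = pipe.prepare_image_inputs([canny_image] * num_devices)

p_params = replicate(params)
prompt_ids = shard(prompt_ids)
image_ids = shard(image_ids)
rng = jax.random.split(jax.random.PRNGKey(0), num_devices)

images = pipe(
    prompt_ids=prompt_ids,
    image=image_ids,
    params=p_params,
    prng_seed=rng,
    num_inference_steps=50,
    jit=True,
).images
```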
- JAX & Diffusers to Control Stable Diffusion (with TPUs ⚡️)
It will start on the 17th of April. To join us, join huggingface.co/join/discord and take the Diffusers role from #role-assignment. After this, simply fill out the form provided in this guide to get access to TPUs later: https://github.com/huggingface/community-events/tree/main/jax-controlnet-sprint
- "Control Stable Diffusion" Sprint kicks off with free TPU-v4 from Google
- Free compute to train custom ControlNet by Hugging Face
Details and sign-up: https://github.com/huggingface/community-events/tree/main/jax-controlnet-sprint
- How can I create a dataset to refine Whisper AI from old videos with subtitles?
For the training, I strongly recommend checking out the Whisper Fine-Tuning Event. It has a Python script that trains in one command, tons of tips, and even a walkthrough video.
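One rough way to build that dataset before training: demux the audio, parse the SRT timestamps, cut one clip per subtitle line, and load the (audio, text) pairs with `datasets`. A sketch (file names are placeholders; the 16 kHz mono format is what Whisper's feature extractor expects):

```python
import re
import subprocess
from datasets import Dataset, Audio

# Matches one SRT block: "HH:MM:SS,mmm --> HH:MM:SS,mmm" followed by the text.
SRT_BLOCK = re.compile(
    r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3}) --> (\d{2}):(\d{2}):(\d{2})[,.](\d{3})\n(.*?)(?:\n\n|\Z)",
    re.DOTALL,
)

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

def clips_from_video(video="old_video.mp4", srt="old_video.srt"):
    """Cut one 16 kHz mono WAV clip per subtitle line and pair it with its text."""
    rows = {"audio": [], "sentence": []}
    for i, m in enumerate(SRT_BLOCK.finditer(open(srt, encoding="utf-8").read())):
        start = to_seconds(*m.groups()[0:4])
        end = to_seconds(*m.groups()[4:8])
        clip = f"clip_{i:05d}.wav"
        subprocess.run(
            ["ffmpeg", "-y", "-i", video, "-ss", str(start), "-to", str(end),
             "-ac", "1", "-ar", "16000", clip],
            check=True, capture_output=True,
        )
        rows["audio"].append(clip)
        rows["sentence"].append(" ".join(m.group(9).split()))
    return rows

ds = Dataset.from_dict(clips_from_video()).cast_column("audio", Audio(sampling_rate=16000))
```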
- I am using OpenAI's Whisper transcription/translation model. I am wondering if I can improve its performance by optimizing the audio files somehow. What features of audio files should I look into to make the Whisper model perform better?
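Whisper was trained on 16 kHz mono audio, so sample rate and channel count are the first things to normalize; silence trimming helps a little, heavier processing usually doesn't. A minimal sketch (the trim threshold is an arbitrary choice):

```python
import librosa
import soundfile as sf

def prepare_for_whisper(in_path, out_path="prepared.wav"):
    # Whisper's feature extractor expects 16 kHz mono; resample and downmix.
    audio, sr = librosa.load(in_path, sr=16000, mono=True)
    # Optional: trim leading/trailing silence, which mostly wastes context.
    audio, _ = librosa.effects.trim(audio, top_db=30)
    sf.write(out_path, audio, sr)
    return out_path
```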
- [N] Gradio Blocks + Hugging Face event is starting this week. A hackathon-type event from May 17th to May 31st with prizes, in which we will create interactive web demos for state-of-the-art machine learning models
We are happy to invite you to the Gradio Blocks Party - a community event in which we will create interactive demos for state-of-the-art machine learning models. Demos are powerful because they allow anyone, not just ML engineers, to try out models in the browser, give feedback on predictions, and identify trustworthy models. The event will take place from May 17th to 31st. We will be organizing this event on GitHub and the Hugging Face Discord channel. Prizes will be given at the end of the event; see the Prizes section.
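A minimal Blocks demo in the spirit of the event, wrapping a small transformers pipeline (any model call can be swapped in for `classify`):

```python
import gradio as gr
from transformers import pipeline

# Any model works here; a sentiment pipeline keeps the example small and self-contained.
classifier = pipeline("sentiment-analysis")

def classify(text):
    result = classifier(text)[0]
    return f"{result['label']} ({result['score']:.2f})"

with gr.Blocks() as demo:
    gr.Markdown("# Try the model in your browser")
    inp = gr.Textbox(label="Input text")
    out = gr.Textbox(label="Prediction")
    btn = gr.Button("Run")
    btn.click(fn=classify, inputs=inp, outputs=out)

demo.launch()
```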
- Dall-E 2
If you're interested in generative models, Hugging Face is running an event right now called the HugGAN sprint, where they're giving away free access to compute to train models like this.
You can join it by following the steps in the guide here: https://github.com/huggingface/community-events/tree/main/hu...
There will also be talks from awesome folks at EleutherAI, Google, and DeepMind.
gpt-3
- GPT4.5 or GPT5 being tested on LMSYS?
>I wasn't talking about "state of the art LLMs," I am aware that commercial offerings are much better trained in Spanish. This was a thought experiment based on comments from people testing GPT-3.5 with Swahili.
A thought experiment based on other people's comments about another language. So... no. Fabricating failure modes from constructed ideas about how LLMs work seems to be a frustratingly common occurrence in these kinds of discussions.
>Frustratingly, just a few months ago I read a paper describing how LLMs excessively rely on English-language representations of ideas, but now I can't find it.
Most LLMs are trained overwhelmingly on English. GPT-3's dataset was 92.6% English. https://github.com/openai/gpt-3/blob/master/dataset_statisti...
That the models are as proficient as they are in other languages is evidence enough that knowledge transfer is happening. https://arxiv.org/abs/2108.13349. If you trained a model on only the Catalan tokens GPT-3 saw, you'd get a GPT-2-level gibberish model at best.
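The linked statistics file can be checked directly; a quick sketch (the column layout is an assumption based on the file name, so inspect the real header first):

```python
import pandas as pd

# Raw URL for the CSV cited above; column names are a guess, check df.columns first.
url = "https://raw.githubusercontent.com/openai/gpt-3/master/dataset_statistics/languages_by_word_count.csv"
df = pd.read_csv(url)
word_col = df.columns[-1]  # assume the last column holds the word counts
df["share_pct"] = df[word_col] / df[word_col].sum() * 100
print(df.sort_values("share_pct", ascending=False).head(10))
```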
Anyway, these are some interesting papers:
How do languages influence each other? Studying cross-lingual data sharing during LLM fine-tuning - https://arxiv.org/pdf/2305.13286
Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer - https://arxiv.org/abs/2404.04042
Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment - https://arxiv.org/abs/2305.05940
It's not like there is perfect transfer, but the idea that there's none at all seemed so ridiculous to me (which is why I asked the first question). Models would be utterly useless in multilingual settings if that were really the case.
- What are LLMs? An intro into AI, models, tokens, parameters, weights, quantization and more
Large models: everything above 10B parameters. This is where Llama 3, Llama 2, Mistral 8x22B, GPT-3, and most likely GPT-4 sit.
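To put those buckets in perspective, the memory needed just for the weights is parameter count times bytes per parameter, which is what quantization reduces; a back-of-the-envelope sketch:

```python
def weight_memory_gb(n_params_billion, bits_per_param):
    """Approximate memory for the weights alone (no KV cache or activations)."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    # A 10B-parameter model: ~20 GB in fp16, ~10 GB in int8, ~5 GB in int4.
    print(f"10B params @ {bits}-bit: ~{weight_memory_gb(10, bits):.0f} GB")
```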
- Can ChatGPT improve my L2 grammar?
Are generative AI models useful for learning a language, and if so, which languages? Over 90% of ChatGPT's training data was in English. The remaining 10% was split unevenly among 100+ languages. This suggests that the quality of the outputs will vary from language to language.
- GPT4 Can't Ace MIT
I have doubts it was extensively trained on German data. Who knows about GPT-4, but GPT-3 is ~92% English and ~1.5% German, which means it saw more of "die, motherfucker, die" than of "die Mutter".
(https://github.com/openai/gpt-3/blob/master/dataset_statisti...)
- I need help.
- [R] PaLM 2 Technical Report
Catalan was 0.018% of GPT-3's training corpus. https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv
- I'm seriously concerned that if I lost ChatGPT-4 I would be handicapped
- The responses I got from Bard after asking why 100 times… he was pissed
- BharatGPT: India's Own ChatGPT
>Certainly it is pleasing that they are not just doing Hindi, but some of these languages must be represented online by a very small corpus of text indeed. I wonder how effectively an LLM can be trained on such a small training set for any given language?
As long as it's not the main language, it doesn't really matter. Besides English (92.6%), the biggest language by representation (word count) is French at 1.8%. Most of the languages GPT-3 knows sit at <0.2% representation.
https://github.com/openai/gpt-3/blob/master/dataset_statisti...
Competence in the main language will bleed into the rest.
- GPT-4 gets a B on Scott Aaronson's quantum computing final exam
What are some alternatives?
dalle-2-preview
dalle-mini - DALL·E Mini - Generate images from a text prompt
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
bevy_retro - Plugin pack for making 2D games with Bevy
DALLE-mtf - Open-AI's DALL-E for large scale training in mesh-tensorflow.
lm-human-preferences - Code for the paper Fine-Tuning Language Models from Human Preferences
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model
v-diffusion-pytorch - v objective diffusion inference code for PyTorch.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"