lm-human-preferences vs gpt-3

| | lm-human-preferences | gpt-3 |
|---|---|---|
| Mentions | 8 | 39 |
| Stars | 1,106 | 9,406 |
| Stars growth (monthly) | 5.3% | - |
| Activity | 2.7 | 3.5 |
| Last commit | 9 months ago | over 3 years ago |
| Language | Python | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lm-human-preferences
- Ask HN: Open-source GPT-3 alternatives
- OpenAI's continued success: and how they came to create the most advanced AI of 2023, ChatGPT.
- Sam Altman on the best and worst case scenario for AI - "...the good case is just so unbelievably good that you sound like a really crazy person to start talking about it."
Lest you think that sounds like too galaxy-brained a possibility, it has already happened at OpenAI (scroll down to "Bugs can optimize for bad behavior"), just with a model that was very far from being capable enough to be dangerous.
- Value head in GPT2
Found relevant code at https://github.com/openai/lm-human-preferences + all code implementations here
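For context on the "value head" mentioned above: in PPO-style fine-tuning it is essentially a linear layer applied to the transformer's final hidden states, producing one scalar value estimate per token to serve as the critic's baseline. A minimal numpy sketch — all shapes and names here are illustrative, not the repo's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for one sequence of transformer outputs.
seq_len, hidden_dim = 8, 16
hidden_states = rng.normal(size=(seq_len, hidden_dim))

# The value head: a single weight vector and bias, mapped over positions.
w = rng.normal(size=(hidden_dim,)) * 0.01
b = 0.0

# One scalar value estimate per token, used as the PPO baseline.
values = hidden_states @ w + b
print(values.shape)  # (8,)
```

The point is just that the "head" adds almost no parameters; the transformer body does all the representational work.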
- Should we stick to the devil we know?
That's why, when they're serious, they use RL for fine-tuning from human preferences (it would be hilarious if this attempt to fix the terrible bias you take as evidence of AGI threat ended up creating a Woke Singleton itself, btw); it's a powerful general approach, and I see no sign of it being applied here.
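The RL fine-tuning recipe referenced here (the one implemented in lm-human-preferences) feeds PPO a reward equal to the reward model's score minus a KL penalty toward the frozen pretrained policy, which keeps the tuned model from drifting into degenerate text. A hedged numpy sketch of that per-token reward shaping, with made-up numbers:

```python
import numpy as np

# Per-token log-probs of one sampled continuation under the fine-tuned
# policy and under the frozen pretrained policy (illustrative values).
logp_policy = np.array([-1.2, -0.8, -2.0, -1.5])
logp_pretrained = np.array([-1.0, -1.1, -1.9, -1.4])

beta = 0.1       # KL penalty coefficient (a tunable hyperparameter)
rm_score = 0.7   # reward model's score for the whole continuation

# Per-token KL penalty; the reward-model score is added only at the
# final token, matching the paper's episodic setup.
kl_per_token = logp_policy - logp_pretrained
rewards = -beta * kl_per_token
rewards[-1] += rm_score

total = rewards.sum()
```

Setting `beta` too low lets the policy exploit reward-model bugs; too high and it never moves away from the pretrained model.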
- Dall-E 2
The kind of measures they are taking, like simply deleting wholesale anything problematic, don't really have a '-1'.
But amusingly, exactly that did happen in one of their GPT experiments! https://openai.com/blog/fine-tuning-gpt-2/
- Discussion Thread
- [D] Applications for using reinforcement learning to fine-tune GPT-2
Code for https://arxiv.org/abs/1909.08593 found: https://github.com/openai/lm-human-preferences
gpt-3
- Can ChatGPT improve my L2 grammar?
Are generative AI models useful for learning a language, and if so which languages? Over 90% of ChatGPT's training data was in English. The remaining 10% of data was split unevenly between 100+ languages. This suggests that the quality of the outputs will vary from language to language.
- GPT4 Can’t Ace MIT
I doubt it was extensively trained on German data. Who knows about GPT-4, but GPT-3's training data is ~92% English and ~1.5% German, which means it saw more of "die, motherfucker, die" than of "die Mutter".
(https://github.com/openai/gpt-3/blob/master/dataset_statisti...)
- I need help.
- [R] PaLM 2 Technical Report
Catalan was 0.018% of GPT-3's training corpus: https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv
- I'm seriously concerned that if I lost ChatGPT-4 I would be handicapped
- The responses I got from Bard after asking why 100 times… he was pissed 😂
- BharatGPT: India's Own ChatGPT
>Certainly it is pleasing that they are not just doing Hindi, but some of these languages must be represented online by a very small corpus of text indeed. I wonder how effectively an LLM can be trained on such a small training set for any given language?
As long as it's not the main language, it doesn't really matter. Besides English (92.6%), the biggest language by representation (word count) is French, at 1.8%. Most of the languages GPT-3 knows sit at <0.2% representation.
https://github.com/openai/gpt-3/blob/master/dataset_statisti...
Competence in the main language will bleed into the rest.
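The percentages quoted across these threads are just word-count shares of the training corpus; the real figures live in dataset_statistics/languages_by_word_count.csv in the openai/gpt-3 repo. A minimal sketch of the arithmetic, using made-up counts rather than the repo's actual numbers:

```python
# Hypothetical per-language word counts (thousands); the actual data is
# in dataset_statistics/languages_by_word_count.csv in openai/gpt-3.
word_counts = {
    "English": 181_000,
    "French": 3_550,
    "German": 3_000,
    "Catalan": 35,
    "Other": 7_800,
}

total = sum(word_counts.values())
shares = {lang: 100 * n / total for lang, n in word_counts.items()}

# Print languages by descending share of the corpus.
for lang, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{lang}: {pct:.3f}%")
```

Even with invented numbers, the shape of the distribution matches the comments: one dominant language, a couple at low single digits, and a long tail of fractions of a percent.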
- GPT-4 gets a B on Scott Aaronson's quantum computing final exam
- [D] Dumb question: is GPT3 model open-sourced?
And from skimming their GitHub page, it seems it'd be costly to host as well.
- ChatGPT and the Daily Question Thread, re-evaluated with GPT-4.
What are some alternatives?
trl - Train transformer language models with reinforcement learning.
dalle-mini - DALL·E Mini - Generate images from a text prompt
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
DALLE-mtf - Open-AI's DALL-E for large scale training in mesh-tensorflow.
tensorrtx - Implementation of popular deep learning networks with TensorRT network definition API
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model
v-diffusion-pytorch - v objective diffusion inference code for PyTorch.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
dalle-2-preview