gpt-3 vs glide-text2im

| | gpt-3 | glide-text2im |
|---|---|---|
| Mentions | 41 | 32 |
| Stars | 9,406 | 3,470 |
| Growth | - | 0.6% |
| Activity | 3.5 | 0.0 |
| Latest commit | over 3 years ago | about 2 months ago |
| Language | Python | - |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
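The exact weighting formula isn't published; purely as an illustration, a recency-weighted score along these lines would behave the way the description suggests (a hypothetical sketch, not the site's real metric):

```python
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Toy recency-weighted activity: each commit counts for less as it
    ages, halving in weight every `half_life_days` days.
    NOTE: hypothetical illustration only; the site's formula is unknown."""
    now = time.time()
    return sum(
        0.5 ** ((now - ts) / 86400.0 / half_life_days)
        for ts in commit_timestamps
    )

# A commit pushed today contributes ~1.0; one from two months ago ~0.25,
# so recent work dominates the score.
```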
gpt-3
-
GPT4.5 or GPT5 being tested on LMSYS?
>I wasn't talking about "state of the art LLMs," I am aware that commercial offerings are much better trained in Spanish. This was a thought experiment based on comments from people testing GPT-3.5 with Swahili.
A thought experiment based on other people's comments about a different language. So... no. Fabricating failure modes from constructed ideas about how LLMs work is a frustratingly common occurrence in these discussions.
>Frustratingly, just few months ago I read a paper describing how LLMs excessively rely on English-language representations of ideas, but now I can't find it.
Most LLMs are trained overwhelmingly on English. GPT-3's dataset was 92.6% English. https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv
That the models are as proficient as they are is evidence enough that knowledge transfer is clearly happening. https://arxiv.org/abs/2108.13349. If you trained a model only on the Catalan tokens GPT-3 saw, you'd get a GPT-2-level gibberish model at best.
Anyway, these are some interesting papers:
How do languages influence each other? Studying cross-lingual data sharing during LLM fine-tuning - https://arxiv.org/pdf/2305.13286
Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer - https://arxiv.org/abs/2404.04042
Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment - https://arxiv.org/abs/2305.05940
It's not like there is perfect transfer, but the idea that there's none at all seemed so ridiculous to me (which is why I asked the first question). Models would be utterly useless in multilingual settings if that were really the case.
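The skew is easy to see directly from that CSV. A quick look with pandas (just reading and printing; check the printed header for the exact column names before filtering on them):

```python
import pandas as pd

URL = ("https://raw.githubusercontent.com/openai/gpt-3/master/"
       "dataset_statistics/languages_by_word_count.csv")

df = pd.read_csv(URL)
print(df.columns.tolist())               # inspect the actual column names first
print(df.head(10).to_string(index=False))
# English dominates (~92.6% of words), French is a distant second (~1.8%),
# and the long tail of 100+ languages sits below 0.2% each.
```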
-
What are LLMs? An intro into AI, models, tokens, parameters, weights, quantization and more
Large models: everything above 10B parameters. This is where Llama 3, Llama 2, Mixtral 8x22B, GPT-3, and most likely GPT-4 sit.
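To make "above 10B parameters" concrete, here is a back-of-the-envelope for the memory needed just to store the weights, which is also why quantization comes up in the same breath (plain arithmetic, no framework specifics):

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """GB needed to hold the weights alone (ignores activations, KV cache)."""
    return n_params * bits_per_weight / 8 / 1e9

for bits in (32, 16, 8, 4):
    print(f"10B params @ {bits:2d}-bit: {weight_memory_gb(10e9, bits):4.0f} GB")
# 32-bit: 40 GB, 16-bit: 20 GB, 8-bit: 10 GB, 4-bit: 5 GB --
# quantization is what makes 10B+ models fit on consumer hardware.
```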
-
Can ChatGPT improve my L2 grammar?
Are generative AI models useful for learning a language, and if so, which languages? Over 90% of ChatGPT's training data was in English. The remaining 10% was split unevenly across 100+ languages. This suggests that output quality will vary from language to language.
-
GPT4 Can't Ace MIT
I have doubts it was extensively trained on German data. Who knows about GPT4, but GPT3 is ~92% English and ~1.5% German, which means it saw more of "die, motherfucker, die" than of "die Mutter".
(https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv)
- Necesito ayuda. ("I need help.")
-
[R] PaLM 2 Technical Report
Catalan was 0.018% of GPT-3's training corpus. https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv
- I'm seriously concerned that if I lost ChatGPT-4 I would be handicapped
- The responses I got from Bard after asking why 100 times… he was pissed
-
BharatGPT: India's Own ChatGPT
>Certainly it is pleasing that they are not just doing Hindi, but some of these languages must be represented online by a very small corpus of text indeed. I wonder how effectively an LLM can be trained on such a small training set for any given language?
As long as it's not the main language, it doesn't really matter. Besides English (92.6%), the biggest language by representation (word count) is French at 1.8%. Most of the languages GPT-3 knows sit at <0.2% representation.
https://github.com/openai/gpt-3/blob/master/dataset_statistics/languages_by_word_count.csv
Competence in the main language will bleed into the rest.
- GPT-4 gets a B on Scott Aaronson's quantum computing final exam
glide-text2im
-
Understanding Artificial Intelligence: https://youtu.be/g1ARrNTwBHg | Part 1 - How Deep Learning Works: https://youtu.be/CA5Ggqg5x6o | Part 2 - AI Creativity and Tesla AI: https://youtu.be/jHYYggG7qq8 | Part 3 - A.I. Tackling Coding, Science, and Math Challenges: https://youtu.be/BWJWAdMZGNY
Links appearing in the video: ADOP (2021) https://arxiv.org | GLIDE (2021) https://syncedreview.com/2021/12/24/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-173/ || Source code: https://github.com/openai/glide-text2im
- [R][P] I made an app for Instant Image/Text to 3D using PointE from OpenAI
-
"Teacher villainess, DreamWorks official character design sheet turnaround, studio, Best on Artstation, 4K HD, by Nate Wragg"
The bolded part is a reference to the publicly released version of OpenAI's GLIDE, which is the predecessor of DALL-E 2. OpenAI didn't release the GLIDE model(s) trained on human faces.
-
Trying to remember the name of an upscaler. I thought it was Glide XL or something.
OpenAI's GLIDE text2im https://github.com/openai/glide-text2im
-
It just struck me that text diffs do *not* require the image-generating prompt as a starting point, and my mind is blown to pieces.
If I can stop wasting my time playing video games for a while, I might work on getting DALL-E 2's open-source predecessor (GLIDE) to work. Also can't wait for this to be released, I have so many uses for it!
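For anyone picking that project up: the released repo ships small, safety-filtered checkpoints, and loading the 64x64 base model is short. A condensed sketch adapted from my reading of the repo's sample notebook (double-check names and arguments against the repo itself):

```python
import torch
from glide_text2im.download import load_checkpoint
from glide_text2im.model_creation import (
    create_model_and_diffusion,
    model_and_diffusion_defaults,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 64x64 base model; the repo also ships an upsampler for 64 -> 256.
options = model_and_diffusion_defaults()
options["use_fp16"] = device.type == "cuda"
options["timestep_respacing"] = "100"   # fewer diffusion steps, faster demo
model, diffusion = create_model_and_diffusion(**options)
model.eval()
if options["use_fp16"]:
    model.convert_to_fp16()
model.to(device)
model.load_state_dict(load_checkpoint("base", device))

# Tokenize a prompt the way the sample notebook does.
prompt = "an oil painting of a corgi"
tokens = model.tokenizer.encode(prompt)
tokens, mask = model.tokenizer.padded_tokens_and_mask(tokens, options["text_ctx"])
# From here the notebook runs diffusion.p_sample_loop with
# classifier-free guidance to produce the image tensor.
```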
- [D] Making text-to-image even better - GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models, a 5-minute paper summary by Casual GAN Papers
-
Dall-E 2
A few comments by someone who's spent way too much time in the AI-generated space:
* I recommend reading the System Card that came with it because it's very thorough: https://github.com/openai/dalle-2-preview/blob/main/system-c...
* Unlike GPT-3, my read of this announcement is that OpenAI does not intend to commercialize it, and that access via the waitlist is indeed more for testing its limits (and, as noted, commercializing it would make it much more likely to lead to interesting legal precedent). Per the docs, access is very explicitly limited: (https://github.com/openai/dalle-2-preview/blob/main/system-c... )
* A few months ago, OpenAI released GLIDE ( https://github.com/openai/glide-text2im ) which uses a similar approach to AI image generation, but suspiciously never received a fun blog post like this one. The reason for that in retrospect may be "because we made it obsolete."
* The images in the announcement are still cherry-picked, which is presumably why they tested DALL-E 1 vs. DALL-E 2 on non-cherry-picked images.
* Cherry-picking matters because AI image generation is still slow unless you pull real shenanigans that likely compromise image quality, although OpenAI likely has better infra for serving large models, as they have demonstrated with GPT-3.
- Glide-Text2Im
-
AI-generated photos of European flags
The flags were generated using GLIDE. You can try it out yourself in Google Colab.
- New AI technique that lets you generate images from text. Now better than ever!
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
dalle-2-preview
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
DALLE-mtf - Open-AI's DALL-E for large scale training in mesh-tensorflow.
glide-text2im-colab - Colab notebook for openai/glide-text2im.
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
pixray
v-diffusion-pytorch - v objective diffusion inference code for PyTorch.
improved-diffusion - Release for Improved Denoising Diffusion Probabilistic Models