DALLE-pytorch
DALLE-mtf
| | DALLE-pytorch | DALLE-mtf |
|---|---|---|
| Mentions | 20 | 41 |
| Stars | 5,492 | 435 |
| Growth | - | 0.0% |
| Activity | 2.5 | 0.0 |
| Latest Commit | 2 months ago | about 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DALLE-pytorch
-
The Eleuther AI Mafia
It all started on lucidrains/dalle-pytorch in the months following the release of DALL-E (1). The group started as `dalle-pytorch-replicate` but was never officially "blessed" by Phil Wang, who seems to enjoy being a free agent (can't blame him).
https://github.com/lucidrains/DALLE-pytorch/issues/116 is where the Discord originally got kicked off. There are a lot of other interactions between us in the GitHub issues there. You should be able to find when Phil was approached by Jenia Jitsev, Jan Ebert, and Mehdi Cherti (all founding LAION members), who graciously offered the chance to replicate the DALL-E paper using their available compute on the JUWELS and JUWELS Booster HPC systems. This all predates Emad's arrival. I believe he showed up around the time of guided diffusion and GLIDE, but it may have been a bit earlier.
Data work originally focused on amassing several of the bigger datasets of the time. Getting CC12M downloaded and trained on was something of an early milestone (robvanvolt's work). A lot of the early work was like that, though: shuffling through CC12M, COCO, etc. with the dalle-pytorch codebase until we got an avocado armchair.
Christoph Schuhmann was an early contributor as well, and great at organizing and rallying. He focused a lot on the early data-scraping work for what would become the "LAION-5B" dataset. I don't want to credit him with the coding, and I'm ashamed to admit I can't recall who did much of the work there, but a distributed scraping program was developed (the name was something@home... not scraping@home?).
The discord link on Phil Wang's readme at dalle-pytorch got a lot of traffic and a lot of people who wanted to pitch in with the scraping effort.
Eventually a lot of people from Eleuther and many other teams mingled with us, and some sort of non-profit org was created in Germany, I believe, for legal purposes. The dataset continued to grow, and the group moved from training DALL-Es to fine-tuning diffusion models.
The `CompVis` team was a great inspiration at the time, and much of their work on VQGAN and then latent diffusion models basically kept us motivated. As I mentioned, a personal motivation was Katherine Crowson's work on a variety of things like CLIP-guided VQGAN, diffusion, etc.
I believe Emad Mostaque showed up around the time GLIDE was coming out? I want to say he donated money for scrapers to be run on AWS to speed up data collection. I was largely hands-off for much of the data-scraping process and mostly enjoyed training new models on the data we had.
As with any online community, things got pretty ill-defined: roles changed hands, volunteers came and went, etc. I would hardly call this account definitive, and that's at least partially the reason the history is hard to trace as an outsider. That much of it is scattered across GitHub issues and PRs can't have helped, though.
-
Thoughts on AI image generators from text
Here you go: https://github.com/lucidrains/DALLE-pytorch
-
[P] DALL·E Mini & Mega demo and production API
Here are some other implementations of DALL-E clones in PyTorch by various authors in the ML and DL community: https://github.com/lucidrains/DALLE-pytorch
- New text-to-image network from Google beats DALL-E
-
[Project] DALL-3 - generate better images with fewer tokens through clip guided diffusion
If, in general, DDPM > GAN > VAE, why do transformer image generators all use a VQ-VAE to decode images? Wouldn't it be better to use a diffusion model? I was wondering about this and started experimenting with different ways to decode vector-quantized embeddings with a diffusion model - see the discussion here. After a lot of trial and error I got something that works pretty well.
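For a concrete picture of what "decoding vector-quantized embeddings with a diffusion model" can look like, here is a minimal PyTorch sketch. This is not the author's actual DALL-3 code; all module and function names are illustrative. It trains a toy denoiser, conditioned on a frozen VQ latent grid, with the standard DDPM noise-prediction objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQConditionedDenoiser(nn.Module):
    """Toy denoiser that predicts the noise added to an image, conditioned
    on a grid of vector-quantized embeddings (e.g. from a frozen VQGAN)."""
    def __init__(self, img_channels=3, vq_dim=256, hidden=64):
        super().__init__()
        self.cond_proj = nn.Conv2d(vq_dim, hidden, kernel_size=1)
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + hidden + 1, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, img_channels, 3, padding=1),
        )

    def forward(self, noisy_img, t, vq_latents):
        # Upsample the (B, vq_dim, h, w) latent grid to image resolution and
        # concatenate it (plus the timestep) as extra input channels.
        cond = self.cond_proj(vq_latents)
        cond = F.interpolate(cond, size=noisy_img.shape[-2:], mode="nearest")
        t_chan = (t.float() / 1000.0).view(-1, 1, 1, 1).expand(-1, 1, *noisy_img.shape[-2:])
        return self.net(torch.cat([noisy_img, cond, t_chan], dim=1))

# Standard DDPM noise schedule and training step: corrupt the image at a
# random timestep and regress the injected noise, given the VQ latents.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def train_step(model, opt, img, vq_latents):
    t = torch.randint(0, T, (img.shape[0],))
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(img)
    noisy = a.sqrt() * img + (1 - a).sqrt() * noise
    loss = F.mse_loss(model(noisy, t, vq_latents), noise)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

A real implementation would use a proper U-Net with learned timestep embeddings, but the conditioning pattern, feeding the VQ codes to the denoiser alongside the noisy image, is the core of the idea.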
- Still waiting for dall-e
-
Ask HN: Computer Vision Project Ideas?
- "Discrete VAE", used as the backbone for OpenAI's DALL-E, reimplimented here (and other places) https://github.com/lucidrains/DALLE-pytorch (code for training a discrete VAE)
-
Crawling@Home: Help Build The Worlds Largest Image-Text Pair Dataset!
Here's the DALLE-pytorch git repo.
-
(from the discord stream) I'm so hyped for this game. This generation is really good.
I am very excited. When AI Dungeon was released and I saw them filtering stuff, I thought that one day there would be an open-source version of this without filters, and the same goes for any future open-sourced GPT-X. Now if we could also train an open-source DALL-E and integrate it into NovelAI, wouldn't that be even more awesome?
-
When was the last time you were as delighted about something as a child?
Maybe with https://github.com/lucidrains/DALLE-pytorch and https://github.com/kobiso/DALLE-reproduction
DALLE-mtf
-
How Open is Generative AI? Part 2
This vision is in line with EleutherAI, a non-profit organization founded in July 2020 by a group of researchers. Driven by the perceived opacity and the challenge of reproducibility in AI, their goal was to create leading open-source language models.
- The open source learning curve for AI researchers
- EleutherAI: Empowering Open-Source Artificial Intelligence Research
-
Seeking advice on fine-tuning Pythia for semantic search in a non-English language
My current idea is to utilize EleutherAI's Pythia (the base of Databricks Dolly). I would like to know whether translating the Dolly-15k dataset into the desired language using state-of-the-art translation services like DeepL would be a viable approach to fine-tuning the Pythia base model. I want to use this model for semantic search, so perfection is not a necessity.
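As a rough sketch of what that pipeline could look like (the `deepl` client, the dataset, and the model names are real; the glue code and the choice of Pythia size are illustrative and untested):

```python
import deepl
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

translator = deepl.Translator("YOUR_DEEPL_API_KEY")  # placeholder key

def translate_record(rec, target_lang="DE"):
    """Translate the text fields of one Dolly-15k record (context may be empty)."""
    out = {}
    for field in ("instruction", "context", "response"):
        text = rec.get(field, "")
        out[field] = translator.translate_text(text, target_lang=target_lang).text if text else ""
    return out

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
translated = [translate_record(r) for r in dolly.select(range(100))]  # small sample first

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b")
# From here: format each record as "instruction\n\ncontext\n\nresponse",
# tokenize, and fine-tune with the usual causal-LM objective
# (e.g. transformers' Trainer or a manual training loop).
```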
-
Does anyone want to collaborate to make anti-capitalist AI?
There are open-source AI efforts, like EleutherAI. Needless to say, they are lagging behind the big players, but it's better than nothing.
-
ChatGPT is bonkers.
The new GPT-3.5 isn't aware of what GPT-3.5 or davinci-002 are (repeatable) and claimed that it was designed by EleutherAI and has only 6 billion parameters (I wasn't able to repeat that one, but didn't really try).
-
My teacher has falsely accused me of using ChatGPT to write an assignment.
Hi, my name is Stella Biderman and I run EleutherAI, one of the foremost non-profit research institutes in the world that train and study large language models. I have been involved with the majority of models to hold the title “largest open source GPT model in the world” and have dabbled in using plagiarism-detection tools to identify code written by GPT-J.
-
dolly-v2-12b
dolly-v2-12b is a 12-billion-parameter causal language model created by Databricks. It is derived from EleutherAI’s Pythia-12b, fine-tuned on a ~15K-record instruction corpus generated by Databricks employees, and released under a permissive license (CC-BY-SA).
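Trying it out is straightforward with Hugging Face transformers; this follows the loading pattern shown on the Databricks model card (the custom instruction pipeline is pulled in via `trust_remote_code`, and a 12B model needs a correspondingly large GPU):

```python
import torch
from transformers import pipeline

# Loads Databricks' custom instruction-following pipeline from the model repo.
generate_text = pipeline(
    "text-generation",
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

res = generate_text("Explain the difference between a base and an instruction-tuned language model.")
print(res[0]["generated_text"])
```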
-
Futurism: "The Company Behind Stable Diffusion Appears to Be At Risk of Going Under"
It is true that Emad needs to find an appropriate business model. The good news is that the hype is still ongoing. I'm sure that Emad can grab another round of liquidity injection; he has plenty of resources. Remember, he is also from the finance industry. He has https://www.eleuther.ai/, which can supply a secure, in-house custom LLM equivalent to BloombergGPT.
-
How can AI be used to protect against exploitative use of other AI?
By promoting fully open-source AI, i.e. making datasets, models, methodology and codebases freely available and transparent. What OpenAI claimed to be aiming for, basically.
What are some alternatives?
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
CLIP-Guided-Diffusion - Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.
deep-daze - Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique was originally created by https://twitter.com/advadnoun
dalle-mini - DALL·E Mini - Generate images from a text prompt
DALLE-datasets - This is a summary of easily available datasets for generalized DALLE-pytorch training.
big-sleep - A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
imagen-pytorch - Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch
gpt-3 - GPT-3: Language Models are Few-Shot Learners
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
dalle-2-preview