DALLE2-pytorch vs metaseq

| | DALLE2-pytorch | metaseq |
|---|---|---|
| Mentions | 65 | 53 |
| Stars | 10,826 | 6,386 |
| Growth | - | 0.3% |
| Activity | 6.8 | 6.2 |
| Latest commit | 3 months ago | 13 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DALLE2-pytorch
- One year ago I got access to closed beta DALL-E 2.
I was showing people DALL-E 2 last year and telling them how much of an impact an open source solution was going to have on, well, everything to do with art and design. (At the time Stable Diffusion had not been released, not even the leak, and all hopes were on https://github.com/lucidrains/DALLE2-pytorch)
- [MachineLearning] [D] Is anyone working on open-sourcing DALL-E 2?
- AMA (Emad here hello)
Stable Diffusion is the model, MJ will use a variant, and DALL-E is the old version (we have our own implementation from our distinguished fellow Lucidrains here: https://github.com/lucidrains/DALLE2-pytorch)
- An impressionist painting of a floating raccoon god, 4k, digital painting, trending on artstation
Sadly I don't think so. From what I understand, the architecture is fixed to 1024x1024 images.
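For context, in DALLE2-pytorch the output resolution is not a sampling-time option: it is fixed when the CLIP, prior, and decoder are constructed. Below is a minimal sketch loosely following the repository's README; the exact constructor arguments are assumptions and have shifted across versions.

```python
# Sketch only: argument names follow the DALLE2-pytorch README at one point
# in time and may differ in the version you install.
from dalle2_pytorch import DALLE2, DiffusionPriorNetwork, DiffusionPrior, Unet, Decoder, CLIP

# CLIP used by both prior and decoder; the visual input size is baked in here
clip = CLIP(
    dim_text = 512, dim_image = 512, dim_latent = 512,
    num_text_tokens = 49408, text_enc_depth = 6, text_seq_len = 256, text_heads = 8,
    visual_enc_depth = 6, visual_image_size = 256, visual_patch_size = 32, visual_heads = 8
)

# diffusion prior: maps CLIP text embeddings to CLIP image embeddings
prior_network = DiffusionPriorNetwork(dim = 512, depth = 6, dim_head = 64, heads = 8)
diffusion_prior = DiffusionPrior(net = prior_network, clip = clip,
                                 timesteps = 100, cond_drop_prob = 0.2)

# decoder: the generated image resolution is likewise fixed at construction time
unet = Unet(dim = 128, image_embed_dim = 512, cond_dim = 128,
            channels = 3, dim_mults = (1, 2, 4, 8))
decoder = Decoder(unet = unet, clip = clip, image_sizes = (256,), timesteps = 100)

dalle2 = DALLE2(prior = diffusion_prior, decoder = decoder)
# untrained weights, so this produces noise; shown only for the API shape
images = dalle2(['an impressionist painting of a floating raccoon god'], cond_scale = 2.)
```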
- I asked AI to turn P&R characters into muppets..
- Comparison of AI text-to-image generators
The code is open source; the model, I believe, is not. https://github.com/lucidrains/DALLE2-pytorch
- Protests erupt outside of DALL-E offices after pricing implementation, press photograph
- $15 for 115 “generation increments”: very expensive beta pricing announcement. Disappointed
Phil Wang has been fairly prolific at creating open source implementations of these text-to-image models. For example, here is the DALL-E 2 repo: https://github.com/lucidrains/DALLE2-pytorch
- DALL·E Now Available in Beta
There's already an open-source implementation of DALL-E 2 (https://github.com/lucidrains/DALLE2-pytorch) and a pretrained model for it should be released within this year.
Also true for Google's Imagen, which should be even better than DALL-E 2 (and faster): https://github.com/lucidrains/imagen-pytorch.
This is possible because the original research papers behind both DALLE-2 and Imagen were publicly released.
- would love to know what portion of this prompt is not allowed
The paper describing the model is public and has been implemented here, but that's not the hard part. The model likely requires months of compute and dozens of gigabytes of VRAM to train and run, at a cost of several hundred thousand dollars.
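Those numbers pass a quick back-of-envelope check. Everything below is an illustrative assumption (parameter count, GPU count, hourly price), not a figure from the repo:

```python
# Rough feasibility math for training a DALL-E-2-scale model.
# All constants are assumptions chosen for illustration.

params = 3.5e9          # assumed parameter count, roughly decoder-scale
bytes_per_param = 4     # fp32

# Adam-style training holds weights + gradients + two optimizer moments,
# i.e. roughly 4x the weight memory, before activations are even counted.
train_mem_gb = params * bytes_per_param * 4 / 1e9
print(f"~{train_mem_gb:.0f} GB for weights/grads/optimizer state alone")  # ~56 GB

# Assumed cluster: 256 GPUs for 2 months at $2 per GPU-hour.
gpu_hours = 256 * 2 * 30 * 24
print(f"~${gpu_hours * 2.0 / 1e3:.0f}k of GPU time")  # ~$737k
```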
metaseq
- Training great LLMs from ground zero in the wilderness as a startup
This is a super important issue that affects the pace and breadth of iteration of AI almost as much as the raw hardware improvements do. The blog is fun but somewhat shallow, and not technical or very surprising if you’ve worked with clusters of GPUs in any capacity over the years. (I liked the perspective of a former googler, but I’m not sure why past colleagues would recommend JAX over PyTorch for LLMs outside of Google.) I hope this newco eventually releases a more technical report about their training adventures, like the PDF file here: https://github.com/facebookresearch/metaseq/tree/main/projec...
- Chronicles of OPT Development
- See the pitch memo that raised €105M for four-week-old startup Mistral
The number of people who can actually pre-train a true LLM is very small.
It remains a major feat with many tweaks and tricks. Case in point: the 114-page OPT-175B logbook [1]
[1] https://github.com/facebookresearch/metaseq/blob/main/projec...
- Technology: “Austro-ChatGPT”, but no money for testing
- OPT (Open Pre-trained Transformers) is a family of NLP models trained on billions of tokens of text obtained from the internet (see the loading sketch after these mentions)
- Current state-of-the-art open source LLM
- Elon Musk Buys Ten Thousand GPUs for Secretive AI Project
Reliability at scale: take a look at the OPT training logbook for their 175B model run. It needed a lot of babysitting. In my experience, that scale of TPU training run requires a restart about once every 1-2 weeks, and they provide the middleware to monitor the health of the cluster and pick up on hardware failures.
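The usual defense against that failure rate is frequent checkpointing with automatic resume, which is what the OPT logbook describes doing again and again. A generic sketch of the pattern in plain PyTorch (not metaseq's actual middleware; every name here is hypothetical):

```python
import os
import torch

CKPT = "checkpoint.pt"  # hypothetical path; real runs shard this across hosts

def save_checkpoint(step, model, opt):
    # Write to a temp file, then rename atomically, so a crash mid-write
    # can never corrupt the last good resume point.
    tmp = CKPT + ".tmp"
    torch.save({"step": step,
                "model": model.state_dict(),
                "opt": opt.state_dict()}, tmp)
    os.replace(tmp, CKPT)

def load_checkpoint(model, opt):
    # On every (re)start, resume from the last good checkpoint if present.
    if not os.path.exists(CKPT):
        return 0
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    return state["step"] + 1

def train(model, opt, batches, total_steps, save_every=1000):
    # `batches` is an iterator of input tensors; restoring the data-loader
    # position is omitted here for brevity.
    step = load_checkpoint(model, opt)
    while step < total_steps:
        loss = model(next(batches)).mean()  # stand-in for the real loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % save_every == 0:
            save_checkpoint(step, model, opt)
        step += 1
    # a fresh process after a hardware failure picks up near where it died
```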
- Is AI Development more fun than Software Development?
I really appreciated this log of Facebook training a large language model; it shows how troublesome AI development can be: https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
- Visual ChatGPT
Stable Diffusion will run on any decent gaming GPU or a modern MacBook; meanwhile, LLMs comparable to GPT-3/ChatGPT have had pretty insane memory requirements. See, e.g., <https://github.com/facebookresearch/metaseq/issues/146>
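The gap is mostly simple arithmetic on parameter counts. A sketch (parameter counts are public figures; the rest is deliberately simplified, ignoring activations and KV cache):

```python
# Why GPT-3-class models don't fit on gaming GPUs: the weights alone
# exceed consumer VRAM by an order of magnitude.

def weight_gb(params, bytes_per_param):
    return params * bytes_per_param / 1e9

print(weight_gb(175e9, 2))  # OPT-175B in fp16: ~350 GB of weights
print(weight_gb(1e9, 4))    # Stable Diffusion (~1B params) in fp32: ~4 GB
```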
- Ask HN: Is There On-Call in ML?
It seems so; check this logbook from Meta: https://github.com/facebookresearch/metaseq/blob/main/projec...
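As noted in the OPT mention above, the smaller checkpoints in the family (125M up to 66B) are published on the Hugging Face Hub, so they can be tried without metaseq itself; the 175B weights were only available from Meta on request. A minimal loading sketch with the transformers library (pick a size that fits your GPU):

```python
# Run a small OPT checkpoint via Hugging Face transformers.
# "facebook/opt-1.3b" is one of the published checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Open Pre-trained Transformers are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```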
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
stable-diffusion - A latent text-to-image diffusion model
disco-diffusion
nlp-resume-parser - NLP-powered, GPT-3 enabled Resume Parser from PDF to JSON.
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
dalle-2-preview
manim - Animation engine for explanatory math videos
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
cupscale - Image Upscaling GUI based on ESRGAN