| | majesty-diffusion | metaseq |
|---|---|---|
| Mentions | 8 | 53 |
| Stars | 274 | 6,389 |
| Growth | 0.0% | 0.4% |
| Activity | 0.0 | 6.2 |
| Latest commit | almost 2 years ago | 7 days ago |
| Language | Jupyter Notebook | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
majesty-diffusion
- disco diffusion makes realistic portraits, Latent Majesty makes portraits + bewbs
- Protests erupt outside of DALL-E offices after pricing implementation, press photograph
You missed Majesty Diffusion. It's rather complicated to use because it runs latent-space diffusion and CLIP guidance at the same time, so you have to get many settings right, but once you do it can give amazing results; go see them on their Discord!
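To make the "many settings" point concrete, here is a toy sketch of what running latent diffusion and CLIP guidance at the same time amounts to: decode the current latent, score the decoded image against the prompt with CLIP, and nudge the latent along the gradient of that score. Every module and tensor below is a random stand-in (the real notebooks load LDM and CLIP checkpoints); only the control flow is meant to be illustrative.

```python
import torch

torch.manual_seed(0)
unet    = torch.nn.Conv2d(4, 4, 3, padding=1)  # stand-in for the LDM denoiser
decoder = torch.nn.Conv2d(4, 3, 3, padding=1)  # stand-in for the VAE decoder
clip_im = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.LazyLinear(512))
text_emb = torch.randn(1, 512)                 # stand-in for CLIP text features

latent = torch.randn(1, 4, 32, 32)
guidance_scale = 50.0                          # one of the many knobs to tune

for step in range(10):
    latent = latent.detach().requires_grad_(True)
    # CLIP guidance: decode the latent, embed the image, compare to the prompt.
    image   = decoder(latent)
    img_emb = clip_im(image)
    sim  = torch.cosine_similarity(img_emb, text_emb).sum()
    grad = torch.autograd.grad(sim, latent)[0]
    with torch.no_grad():
        # One placeholder denoising step, plus the CLIP nudge toward the prompt.
        latent = latent - 0.1 * unet(latent) + guidance_scale * grad
```

Getting the guidance scale, noise schedule, and CLIP model choices to cooperate is exactly where the fiddly settings come in.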
- DALL·E Now Available in Beta
Here are a couple I've used recently:
Majesty diffusion - https://github.com/multimodalart/majesty-diffusion
Centipede diffusion - https://colab.research.google.com/github/Zalring/Centipede_D...
- Judy Hopps as a real person (Latent Majesty Diffusion)
I suck with computers, so I hope these links mean something to you; it looks like devil witch magic to me. link1 link2 link3
- The inner works of AGI
There's also another model going around called Latent Majesty Diffusion that does the same thing.
- Lenin as a bust on Mars (Dall-E-Mini + Majesty Diffusion + Centipede Diffusion)
I found the github: https://github.com/multimodalart/majesty-diffusion
- New text-to-image network from Google beats DALL-E
Check https://github.com/multimodalart/majesty-diffusion
There is a Google Colab notebook that you can try and run for free :)
These are the image-text pairs behind it: https://laion.ai/laion-400-open-dataset/
- Colab notebooks "Latent Majesty Diffusion" (CLIP-guided latent diffusion; formerly known as Latent Princess Generator) and "V-Majesty Diffusion" (CLIP-guided V-objective diffusion; formerly known as Princess Generator Victoria)
GitHub repo.
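The two notebooks differ in the quantity the diffusion model is trained to predict. A hedged sketch of the distinction: an eps-objective model predicts the added noise, while a v-objective model (Salimans & Ho's progressive-distillation parameterization) predicts v = alpha*eps - sigma*x0; both recover the same clean sample. The toy check below assumes a variance-preserving schedule with alpha^2 + sigma^2 = 1.

```python
import torch

torch.manual_seed(0)
x0  = torch.randn(4)                       # clean data
eps = torch.randn(4)                       # noise
t   = torch.tensor(0.3)
alpha, sigma = torch.cos(t), torch.sin(t)  # alpha^2 + sigma^2 = 1

x_t = alpha * x0 + sigma * eps             # noised sample at time t
v   = alpha * eps - sigma * x0             # what a v-objective model predicts

x0_from_eps = (x_t - sigma * eps) / alpha  # eps-parameterization recovery
x0_from_v   = alpha * x_t - sigma * v      # v-parameterization recovery
assert torch.allclose(x0_from_eps, x0_from_v, atol=1e-6)
```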
metaseq
- Training great LLMs from ground zero in the wilderness as a startup
This is a super important issue that affects the pace and breadth of iteration of AI almost as much as the raw hardware improvements do. The blog is fun but somewhat shallow and not technical or very surprising if you’ve worked with clusters of GPUs in any capacity over the years. (I liked the perspective of a former googler, but I’m not sure why past colleagues would recommend Jax over pytorch for LLMs outside of Google.) I hope this newco eventually releases a more technical report about their training adventures, like the PDF file here: https://github.com/facebookresearch/metaseq/tree/main/projec...
- Chronicles of OPT Development
- See the pitch memo that raised €105M for four-week-old startup Mistral
The number of people who can actually pre-train a true LLM is very small.
It remains a major feat with many tweaks and tricks. Case in point: the 114 pages of OPT175B logbook [1]
[1] https://github.com/facebookresearch/metaseq/blob/main/projec...
- Technology: "Austro-ChatGPT", but no money for testing
- OPT (Open Pre-trained Transformers) is a family of NLP models trained on billions of tokens of text obtained from the internet
- Current state-of-the-art open source LLM
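Both one-line mentions above refer to the OPT family. For context, the smaller OPT checkpoints are published on the Hugging Face hub, so sampling from one takes only a few lines; this sketch uses the 125M variant, and the larger variants follow the same pattern, hardware permitting.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the smallest OPT checkpoint from the Hugging Face hub.
tok   = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tok("Open pre-trained transformers are", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```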
- Elon Musk Buys Ten Thousand GPUs for Secretive AI Project
Reliability at scale: take a look at the OPT training log book for their 175B model run. It needed a lot of babysitting. In my experience, a TPU training run at that scale requires a restart about once every 1-2 weeks, and they provide the middleware to monitor the health of the cluster and pick up on hardware failures.
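The shape of that babysitting, reduced to a sketch: checkpoint frequently and always resume from the latest checkpoint, so a hardware-triggered restart costs minutes of lost work rather than days. The paths, interval, and tiny model here are hypothetical stand-ins; the real middleware around a 175B run is far more involved.

```python
import glob, os, torch

CKPT_DIR = "checkpoints"                  # hypothetical location
os.makedirs(CKPT_DIR, exist_ok=True)

model = torch.nn.Linear(8, 8)             # stand-in for the real model
opt   = torch.optim.SGD(model.parameters(), lr=0.01)

def latest_checkpoint():
    # Zero-padded step numbers make lexicographic sort == numeric sort.
    ckpts = sorted(glob.glob(f"{CKPT_DIR}/step_*.pt"))
    return ckpts[-1] if ckpts else None

start = 0
if (ckpt := latest_checkpoint()):         # resume after a crash or restart
    state = torch.load(ckpt)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    start = state["step"] + 1

for step in range(start, 1000):
    loss = model(torch.randn(4, 8)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100 == 0:                   # checkpoint frequently
        torch.save({"model": model.state_dict(),
                    "opt": opt.state_dict(),
                    "step": step}, f"{CKPT_DIR}/step_{step:06d}.pt")
```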
- Is AI Development more fun than Software Development?
I really appreciated this log of Facebook training a large language model; it shows how troublesome AI development can be: https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
- Visual ChatGPT
Stable Diffusion will run on any decent gaming GPU or a modern MacBook; meanwhile, LLMs comparable to GPT-3/ChatGPT have had pretty insane memory requirements - e.g., <https://github.com/facebookresearch/metaseq/issues/146>
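The back-of-the-envelope arithmetic behind that gap: at fp16, weights alone cost 2 bytes per parameter, before any activations, KV cache, or optimizer state. Parameter counts below are approximate.

```python
# Rough fp16 memory footprint for model weights only (2 bytes per parameter).
def weights_gb(params: float, bytes_per_param: int = 2) -> float:
    return params * bytes_per_param / 1e9

print(f"Stable Diffusion (~1B params):  {weights_gb(1e9):.0f} GB")    # ~2 GB
print(f"GPT-3 / OPT-175B (175B params): {weights_gb(175e9):.0f} GB")  # ~350 GB
```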
- Ask HN: Is There On-Call in ML?
It seems so; check this log book from Meta: https://github.com/facebookresearch/metaseq/blob/main/projec...
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
stable-diffusion - A latent text-to-image diffusion model
text-to-text-transfer-transformer - Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
nlp-resume-parser - NLP-powered, GPT-3 enabled Resume Parser from PDF to JSON.
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
imagen-pytorch - Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
hent-AI - Automation of censor bar detection
manim - Animation engine for explanatory math videos
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
cupscale - Image Upscaling GUI based on ESRGAN