| | metaseq | minGPT |
|---|---|---|
| Mentions | 53 | 35 |
| Stars | 6,389 | 18,932 |
| Growth | 0.4% | - |
| Activity | 6.2 | 0.0 |
| Last commit | 11 days ago | 10 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
metaseq
- Training great LLMs from ground zero in the wilderness as a startup
This is a super important issue that affects the pace and breadth of iteration in AI almost as much as raw hardware improvements do. The blog is fun but somewhat shallow, and not technical or very surprising if you’ve worked with clusters of GPUs in any capacity over the years. (I liked the perspective of a former Googler, but I’m not sure why past colleagues would recommend JAX over PyTorch for LLMs outside of Google.) I hope this newco eventually releases a more technical report about their training adventures, like the PDF file here: https://github.com/facebookresearch/metaseq/tree/main/projec...
- Chronicles of OPT Development
- See the pitch memo that raised €105M for four-week-old startup Mistral
The number of people who can actually pre-train a true LLM is very small.
It remains a major feat with many tweaks and tricks. Case in point: the 114-page OPT-175B logbook [1]
[1] https://github.com/facebookresearch/metaseq/blob/main/projec...
- Technology: "Austro-ChatGPT", but no money to test it
- OPT (Open Pre-trained Transformers) is a family of NLP models trained on billions of tokens of text obtained from the internet
- Current state-of-the-art open-source LLM
- Elon Musk Buys Ten Thousand GPUs for Secretive AI Project
Reliability at scale: take a look at the OPT training logbook for their 175B model run. It needed a lot of babysitting. In my experience, a TPU training run at that scale requires a restart about once every 1-2 weeks, and Google provides the middleware to monitor the health of the cluster and pick up on hardware failures.
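The babysitting mostly amounts to detecting a dead run and restarting it from the last good checkpoint. A minimal sketch of that pattern in PyTorch; the checkpoint path, interval, and toy model here are illustrative, not taken from the OPT logbook:

```python
# Minimal fault-tolerant training sketch: checkpoint periodically and
# resume from the last good checkpoint after a crash or node restart.
import os
import torch
import torch.nn as nn

CKPT_PATH = "last_good.pt"  # hypothetical checkpoint location

model = nn.Linear(10, 1)    # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
start_step = 0

# Resume if a previous run left a checkpoint behind.
if os.path.exists(CKPT_PATH):
    ckpt = torch.load(CKPT_PATH)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    start_step = ckpt["step"] + 1

for step in range(start_step, 10_000):
    x = torch.randn(32, 10)           # stand-in for a real batch
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Checkpoint often enough that a restart loses little work.
    if step % 500 == 0:
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "step": step}, CKPT_PATH)
```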
- Is AI Development more fun than Software Development?
I really appreciated this log from Facebook's training of a large language model; it shows how troublesome AI development can be: https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
- Visual ChatGPT
Stable Diffusion will run on any decent gaming GPU or a modern MacBook; meanwhile, LLMs comparable to GPT-3/ChatGPT have had pretty insane memory requirements, e.g. <https://github.com/facebookresearch/metaseq/issues/146>
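To put numbers on that gap: the weights alone cost bytes-per-parameter times parameter count, before activations, the KV cache, or optimizer state. A back-of-the-envelope sketch, taking the commonly cited figure of 175B parameters for GPT-3/OPT-175B and treating Stable Diffusion as roughly 1B parameters:

```python
# Weight-only memory: parameters * bytes per parameter. Activations, the
# KV cache, and optimizer state all come on top of this.
models = {"GPT-3 / OPT-175B": 175e9, "Stable Diffusion (~1B)": 1e9}
dtypes = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

for name, params in models.items():
    sizes = ", ".join(f"{d}: {params * b / 2**30:,.0f} GiB"
                      for d, b in dtypes.items())
    print(f"{name} -> {sizes}")

# A 175B model at fp16 is ~326 GiB of weights alone, versus ~2 GiB for
# Stable Diffusion -- hence the difference the comment describes.
```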
- Ask HN: Is There On-Call in ML?
It seems so; check out this logbook from Meta: https://github.com/facebookresearch/metaseq/blob/main/projec...
minGPT
- FLaNK AI Weekly for 29 April 2024
- Ask HN: Daily practices for building AI/ML skills?
minGPT (Karpathy): https://github.com/karpathy/minGPT
Next, some foundational textbooks for general ML and deep learning:
- [D] What are some examples of being clever with batching for training efficiency?
Language Model novice here. I was going through the README section of minGPT and read this line.
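The excerpt doesn't say which README line is meant, but one trick minGPT-style training loops rely on is padding-free batching: concatenate the corpus into one long token stream and sample fixed-length windows from it, so every position in every batch carries a training signal and no compute is spent on pad tokens. A sketch with illustrative sizes:

```python
# Padding-free batching for language-model training: sample fixed-length
# windows from one contiguous token stream. Sizes here are illustrative.
import torch

data = torch.randint(0, 50257, (1_000_000,))  # stand-in for a tokenized corpus
block_size, batch_size = 128, 32

def get_batch():
    # Random offsets into the stream; x is the input, y is x shifted by one.
    ix = torch.randint(0, len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i : i + block_size] for i in ix])
    y = torch.stack([data[i + 1 : i + 1 + block_size] for i in ix])
    return x, y

x, y = get_batch()
print(x.shape, y.shape)  # torch.Size([32, 128]) torch.Size([32, 128])
```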
- LLM Visualization: 3D interactive model of a GPT-style LLM network running inference.
The first network displayed with working weights is a tiny such network, which sorts a small list of the letters A, B, and C. This is the demo example model from Andrej Karpathy's minGPT implementation.
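For context, that sort demo frames sorting as next-token prediction: the model reads the unsorted sequence followed by the sorted one, and the loss is masked everywhere except the sorted half (minGPT's loss uses ignore_index=-1, so masked positions contribute nothing). A simplified sketch of that data layout, in the style of minGPT's sort dataset:

```python
# Sorting as a language-modeling task: the sequence is unsorted ++ sorted,
# and targets for the unsorted prefix are masked out with -1.
import torch

vocab, length = 3, 6   # 3 token values standing in for the letters A, B, C

def make_example():
    inp = torch.randint(0, vocab, (length,))
    sol = torch.sort(inp)[0]
    seq = torch.cat([inp, sol])
    x = seq[:-1].clone()           # what the model sees
    y = seq[1:].clone()            # next-token targets
    y[: length - 1] = -1           # don't train on predicting the unsorted prefix
    return x, y

x, y = make_example()
print("x:", x.tolist())
print("y:", y.tolist())
```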
- LLM Visualization
- Learn Machine Learning
- Facebook Prophet: library for generating forecasts from any time series data
Tried it once. Its promise is to take the dataset's seasonal trend into account, which makes sense for Facebook's original use case.
We ran it on such a dataset and found that directly using https://github.com/karpathy/minGPT consistently gives a better result. So we ended up using the output of Prophet as an input feature to a neural network, but that did not improve the result in any significant way.
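A sketch of that "Prophet output as an input feature" setup, with toy data and made-up column names:

```python
# Feed Prophet's forecast to a downstream model as one more feature.
# The toy series and feature names are illustrative.
import numpy as np
import pandas as pd
from prophet import Prophet

# Toy daily series with weekly seasonality plus noise.
ds = pd.date_range("2022-01-01", periods=365, freq="D")
y = 10 + 3 * np.sin(np.arange(365) * 2 * np.pi / 7) + np.random.randn(365)
df = pd.DataFrame({"ds": ds, "y": y})

m = Prophet()
m.fit(df)
forecast = m.predict(df[["ds"]])

# Use Prophet's in-sample estimate as a feature alongside raw lags.
features = pd.DataFrame({
    "yhat": forecast["yhat"].values,   # Prophet's trend/seasonality estimate
    "lag1": df["y"].shift(1),
    "lag7": df["y"].shift(7),
}).dropna()
print(features.head())
```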
- Tokenization of numerical series
Sure, I'm trying to regenerate a bunch of complex numbers based on their absolute values. So I'm trying to embed these absolute values and then, using a GPT model (probably minGPT), recover the original complex numbers. There is a certain connection between these complex numbers and their order which I'm not yet able to explain. I'm hoping the model will be able to recognize certain sequences of these absolute values and match them with the desired complex counterparts (through training).
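One common way to feed real-valued series like these into a GPT-style model is to quantize each value into one of a fixed number of bins, turning the series into an ordinary token sequence; the bin count below is an arbitrary choice:

```python
# Tokenize a real-valued series by uniform binning so a GPT-style model
# can treat it as a token sequence. n_bins (the vocabulary size) is an
# arbitrary illustrative choice.
import numpy as np

n_bins = 256
values = np.abs(np.random.randn(1000))   # stand-in for |z| of complex numbers

lo, hi = values.min(), values.max()
tokens = np.clip(((values - lo) / (hi - lo) * n_bins).astype(int),
                 0, n_bins - 1)

# Inverse map: token -> bin center, for decoding model outputs to values.
bin_centers = lo + (np.arange(n_bins) + 0.5) * (hi - lo) / n_bins
reconstructed = bin_centers[tokens]
print(np.abs(values - reconstructed).max())  # worst-case quantization error
```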
- Anyone know of any articles on training an LLM from scratch on a single GPU?
minGPT (https://github.com/karpathy/minGPT)
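minGPT is small enough that the whole loop fits on one GPU; roughly the quick-start from its README, with a placeholder dataset added so the snippet is self-contained (hyperparameters are illustrative):

```python
# Roughly minGPT's README quick-start: build a small GPT and train it with
# the bundled Trainer. RandomTokens is a placeholder; swap in a real
# Dataset yielding (input, target) token pairs of length block_size.
import torch
from torch.utils.data import Dataset
from mingpt.model import GPT
from mingpt.trainer import Trainer

class RandomTokens(Dataset):
    def __init__(self, n=1000, block_size=128, vocab=50257):
        self.data = torch.randint(0, vocab, (n, block_size + 1))
    def __len__(self):
        return len(self.data)
    def __getitem__(self, i):
        return self.data[i, :-1], self.data[i, 1:]

model_config = GPT.get_default_config()
model_config.model_type = 'gpt-mini'      # one of the small presets
model_config.vocab_size = 50257
model_config.block_size = 128
model = GPT(model_config)

train_config = Trainer.get_default_config()
train_config.learning_rate = 5e-4
train_config.max_iters = 2000
trainer = Trainer(train_config, model, RandomTokens())
trainer.run()
```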
- Understanding LLMs (to the best of our knowledge)
Check out minGPT and nanoGPT from Karpathy; he puts out some of the best machine learning tutorials and teaching content.
What are some alternatives?
stable-diffusion - A latent text-to-image diffusion model
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
nlp-resume-parser - NLP-powered, GPT-3 enabled Resume Parser from PDF to JSON.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
simpletransformers - Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
Pytorch-Simple-Transformer - A simple transformer implementation without difficult syntax and extra bells and whistles.
manim - Animation engine for explanatory math videos
nn-zero-to-hero - Neural Networks: Zero to Hero
cupscale - Image Upscaling GUI based on ESRGAN
huggingface_hub - The official Python client for the Hugging Face Hub.