|  | metaseq | vectorflow |
|---|---|---|
| Mentions | 53 | 12 |
| Stars | 6,389 | 1,291 |
| Growth | 0.4% | 0.3% |
| Activity | 6.2 | 0.0 |
| Latest commit | 11 days ago | 6 days ago |
| Language | Python | D |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
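The exact weighting behind the activity number isn't published here; the sketch below shows one plausible recency-weighted scheme, assuming exponential decay with a 30-day half-life (both the decay form and the half-life are illustrative assumptions, not the site's actual formula).

```python
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    2 ** (-age_in_days / half_life_days), so recent commits
    count more than older ones (hypothetical weighting)."""
    now = time.time()
    return sum(2.0 ** (-((now - ts) / 86400.0) / half_life_days)
               for ts in commit_timestamps)

# Five commits aged 1-5 days score near 5; the same five commits
# a year old score near 0, matching the "relative number" idea.
recent = [time.time() - d * 86400 for d in range(1, 6)]
old = [time.time() - (365 + d) * 86400 for d in range(1, 6)]
print(round(activity_score(recent), 2), round(activity_score(old), 4))
```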
metaseq
-
Training great LLMs from ground zero in the wilderness as a startup
This is a super important issue that affects the pace and breadth of AI iteration almost as much as raw hardware improvements do. The blog is fun but somewhat shallow: neither very technical nor surprising if you’ve worked with clusters of GPUs in any capacity over the years. (I liked the perspective of a former Googler, but I’m not sure why past colleagues would recommend JAX over PyTorch for LLMs outside of Google.) I hope this newco eventually releases a more technical report about their training adventures, like the PDF file here: https://github.com/facebookresearch/metaseq/tree/main/projec...
- Chronicles of OPT Development
-
See the pitch memo that raised €105M for four-week-old startup Mistral
The number of people who can actually pre-train a true LLM is very small.
It remains a major feat with many tweaks and tricks. Case in point: the 114-page OPT-175B logbook [1]
[1] https://github.com/facebookresearch/metaseq/blob/main/projec...
- Technology: "Austro-ChatGPT" – but no money for testing
- OPT (Open Pre-trained Transformers) is a family of NLP models trained on billions of tokens of text obtained from the internet
- Current state-of-the-art open source LLM
-
Elon Musk Buys Ten Thousand GPUs for Secretive AI Project
Reliability at scale: take a look at the OPT training logbook for their 175B model run. It needed a lot of babysitting. In my experience, a TPU training run at that scale requires a restart about once every 1-2 weeks, and they provide the middleware to monitor the health of the cluster and pick up on hardware failures.
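A rough sketch of the babysitting pattern that comment describes: checkpoint periodically and roll back to the last good checkpoint when a failure surfaces. Every name here (save_checkpoint, babysit, the simulated failure rate) is a hypothetical placeholder, not metaseq's code or Google's actual middleware.

```python
import os
import pickle
import random

CKPT = "ckpt.pkl"  # hypothetical checkpoint path, not metaseq's layout

def save_checkpoint(state):
    with open(CKPT, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0}

def train_step(state):
    # Stand-in for one optimizer step; real runs die with NaN
    # losses, NCCL timeouts, or dead hosts at a similar rate.
    if random.random() < 0.01:
        raise RuntimeError("simulated hardware failure")
    state["step"] += 1

def babysit(total_steps=500, ckpt_every=50):
    state = load_checkpoint()
    while state["step"] < total_steps:
        try:
            train_step(state)
            if state["step"] % ckpt_every == 0:
                save_checkpoint(state)
        except RuntimeError:
            # Roll back to the last good checkpoint and resume,
            # the step the OPT logbook shows being done by hand.
            state = load_checkpoint()

babysit()
print("finished at step", load_checkpoint()["step"])
```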
-
Is AI Development more fun than Software Development?
I really appreciated this log of Facebook training a large language model; it shows how troublesome AI development can be: https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
-
Visual ChatGPT
Stable Diffusion will run on any decent gaming GPU or a modern MacBook; meanwhile, LLMs comparable to GPT-3/ChatGPT have had pretty insane memory requirements - e.g., <https://github.com/facebookresearch/metaseq/issues/146>
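The back-of-the-envelope arithmetic behind that contrast, as a sketch (parameter counts are approximate, fp16 at 2 bytes per parameter is assumed, and real serving needs extra room for activations and KV cache):

```python
def weights_gib(n_params, bytes_per_param=2):
    """GiB needed just to hold the weights (fp16 by default)."""
    return n_params * bytes_per_param / 2**30

# Stable Diffusion v1 is roughly 1B params -> ~2 GiB in fp16,
# which fits a gaming GPU with room to spare for activations.
print(round(weights_gib(1.07e9), 1))   # ~2.0

# OPT-175B -> ~326 GiB in fp16 for the weights alone, before
# activations or KV cache, hence the multi-GPU requirement.
print(round(weights_gib(175e9), 1))    # ~326.0
```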
-
Ask HN: Is There On-Call in ML?
It seems so, check this log book from Meta: https://github.com/facebookresearch/metaseq/blob/main/projec...
vectorflow
-
Programming languages endorsed for server-side use at Meta
>> Mozilla (of course)
Mozilla is a C++ and JavaScript shop. What do they ship in Rust? How much of Firefox is written in Rust, for example?
>> Microsoft, Meta, Google/Alphabet, Amazon
Large firms have lots of devs and consequently lots of toy projects. Is their usage of Rust more significant than their use of D? I mean, Meta was churning out projects in D a while back (warp, flint, etc.) and looked like it might be going all in at one point (they even hired one of the leads on the D language).
>> That's practically all of FAANG
Who were we missing? Netflix? They’ve dabbled with D too: https://github.com/Netflix/vectorflow
Don’t misunderstand my point: it’s not that D is more popular than Rust, it’s that Rust is not yet used for real work in any significant capacity.
Where’s the big project written in Rust? Servo and the Rust compiler are the only two large Rust projects on GitHub.
-
Cloud TPU VMs are generally available
Thanks Zak, already applied.
Just wondering, does the TPU VM support Vectorflow?
https://github.com/Netflix/vectorflow
- Vectorflow is a minimalist neural network library optimized for sparse data and single machine environments open sourced by Netflix (r/MachineLearning)
- Vectorflow: Minimalist neural network library faster than TensorFlow in D
-
Small Neural networks in Julia 5x faster than PyTorch
A library I designed a few years ago (https://github.com/Netflix/vectorflow) is also much faster than pytorch/tensorflow in these cases.
In "small" or "very sparse" setups, you're memory bound, not compute bound. TF and Pytorch are bad at that because they assume memory movements are worth it and do very little in-place operations.
Different tools for different jobs.
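A toy NumPy illustration of that memory-bound point (not vectorflow's actual code): chaining out-of-place ops allocates a fresh buffer at every step, while in-place updates reuse one.

```python
import numpy as np

x = np.random.rand(1_000_000).astype(np.float32)

# Out-of-place: each expression allocates and fills a new array,
# so the run is dominated by memory traffic, not arithmetic.
y = x * 2.0
y = y + 1.0
y = np.maximum(y, 0.0)

# In-place: the same math reusing x's buffer, one array total.
x *= 2.0
x += 1.0
np.maximum(x, 0.0, out=x)
```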
What are some alternatives?
stable-diffusion - A latent text-to-image diffusion model
tiny-cuda-nn - Lightning fast C++/CUDA neural network framework
nlp-resume-parser - NLP-powered, GPT-3 enabled Resume Parser from PDF to JSON.
dcompute - DCompute: Native execution of D on GPUs and other Accelerators
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
LeNetTorch - PyTorch implementation of LeNet for fitting MNIST for benchmarking.
manim - Animation engine for explanatory math videos
juliaup - Julia installer and version multiplexer
cupscale - Image Upscaling GUI based on ESRGAN
blis - BLAS-like Library Instantiation Software Framework