galai vs metaseq

| | galai | metaseq |
| --- | --- | --- |
| Mentions | 8 | 53 |
| Stars | 2,628 | 6,388 |
| Growth (stars, month over month) | 0.0% | 0.4% |
| Activity | 0.0 | 1.0 |
| Latest commit | about 1 year ago | 12 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
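The exact formula behind the activity score isn't published here. As an illustration only, a recency-weighted commit score might look like the following Python sketch (the exponential-decay weighting and the half-life constant are made up for this example):

import time

def activity_score(commit_timestamps, half_life_days=30.0):
    # Hypothetical recency weighting: a commit made right now contributes 1.0,
    # and each contribution halves every `half_life_days`. This is NOT the
    # site's actual (unpublished) formula.
    now = time.time()
    half_life_secs = half_life_days * 86400
    return sum(0.5 ** ((now - ts) / half_life_secs) for ts in commit_timestamps)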
Mentions of galai
-
Meta’s powerful AI language model has leaked online — what happens now?
Official website: https://galactica.org/
Community-driven API provided via GitHub: https://github.com/paperswithcode/galai
-
What is this subreddit about? I can't tell if it's waifus or locally run LLMs
More important is the language model. Galactica AI has been trained on scientific papers: https://github.com/paperswithcode/galai
-
I asked ChatGPT to find me some papers; all the papers it gave me did not exist. What gives?
Check out "galactica" (https://github.com/paperswithcode/galai), the language model for making papers
-
I wrote an Emacs package for ChatGPT
https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/README.md https://github.com/paperswithcode/galai https://github.com/yandex/YaLM-100B
-
Convincing ChatGPT to Write a Python Program to Eradicate Humanity
Isn't it still available? They just aren't running an instance for public use anymore, but I thought you could run your own:
https://github.com/paperswithcode/galai
-
Galactica: an AI trained on humanity's scientific knowledge
You can run Galactica (the "base" model) for free on Colab (https://colab.research.google.com/). It takes about 4 minutes to start up. Just specify a "GPU" runtime on Colab and follow the simple instructions from their GitHub (https://github.com/paperswithcode/galai):
import galai as gal

model = gal.load_model("base")  # downloads the 1.3B "base" checkpoint on first run
model.generate("Scaled dot product attention:\n\n\\[")  # continue a LaTeX-flavored prompt
-
Over the past several months I've put together a spreadsheet of 470 categorized SD resources and apps. Put it up online in case it helps someone (should be the biggest public list so far)
I think you should probably add https://github.com/paperswithcode/galai to NLP text models too.
Mentions of metaseq
-
Training great LLMs from ground zero in the wilderness as a startup
This is a super important issue that affects the pace and breadth of iteration of AI almost as much as the raw hardware improvements do. The blog is fun but somewhat shallow and not technical or very surprising if you’ve worked with clusters of GPUs in any capacity over the years. (I liked the perspective of a former googler, but I’m not sure why past colleagues would recommend Jax over pytorch for LLMs outside of Google.) I hope this newco eventually releases a more technical report about their training adventures, like the PDF file here: https://github.com/facebookresearch/metaseq/tree/main/projec...
- Chronicles of OPT Development
-
See the pitch memo that raised €105M for four-week-old startup Mistral
The number of people who can actually pre-train a true LLM is very small.
It remains a major feat with many tweaks and tricks. Case in point: the 114 pages of the OPT-175B logbook [1]
[1] https://github.com/facebookresearch/metaseq/blob/main/projec...
- Technology: "Austro-ChatGPT" – but no money to test it
- OPT (Open Pre-trained Transformers) is a family of NLP models trained on billions of tokens of text obtained from the internet
- Current state-of-the-art open source LLM
-
Elon Musk Buys Ten Thousand GPUs for Secretive AI Project
Reliability at scale: take a look at the OPT training logbook for their 175B model run. It needed a lot of babysitting. In my experience, a training run at that scale on TPUs requires a restart about once every 1-2 weeks, and they provide the middleware to monitor the health of the cluster and pick up on hardware failures.
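To make that babysitting concrete: the usual defense is to checkpoint frequently and resume from the last good checkpoint after each failure. A minimal sketch of such a loop, with placeholder helpers rather than metaseq's actual API:

import glob
import os
import torch

CKPT_DIR = "checkpoints"

def latest_checkpoint():
    # Lexicographic sort works because steps are zero-padded below.
    ckpts = sorted(glob.glob(os.path.join(CKPT_DIR, "step_*.pt")))
    return ckpts[-1] if ckpts else None

def train(model, optimizer, data_loader, save_every=1000):
    os.makedirs(CKPT_DIR, exist_ok=True)
    step = 0
    ckpt = latest_checkpoint()
    if ckpt is not None:  # resume after a crash or a manual restart
        state = torch.load(ckpt)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        step = state["step"]
    for batch in data_loader:
        ...  # forward/backward/optimizer step elided
        step += 1
        if step % save_every == 0:
            torch.save(
                {"model": model.state_dict(),
                 "optimizer": optimizer.state_dict(),
                 "step": step},
                os.path.join(CKPT_DIR, f"step_{step:08d}.pt"),
            )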
-
Is AI Development more fun than Software Development?
I really appreciated this log of Facebook training a large language model; it shows how troublesome AI development can be: https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
-
Visual ChatGPT
Stable Diffusion will run on any decent gaming GPU or a modern MacBook; meanwhile, LLMs comparable to GPT-3/ChatGPT have had pretty insane memory requirements - e.g., <https://github.com/facebookresearch/metaseq/issues/146>
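The back-of-envelope arithmetic behind those requirements: the weights alone cost bytes-per-parameter times parameter count, before any activations or KV cache. A quick sketch:

def weight_gb(n_params, bytes_per_param=2):
    # 2 bytes/param = fp16; use 4 for fp32, 1 for int8 quantization.
    return n_params * bytes_per_param / 1e9

print(weight_gb(175e9))  # ~350 GB: OPT-175B in fp16, far beyond any gaming GPU
print(weight_gb(6.7e9))  # ~13 GB: OPT-6.7B just fits on a 16 GB card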
-
Ask HN: Is There On-Call in ML?
It seems so, check this log book from Meta: https://github.com/facebookresearch/metaseq/blob/main/projec...
What are some alternatives?
scibert - A BERT model for scientific text.
stable-diffusion - A latent text-to-image diffusion model
stylegan2-projecting-images - Projecting images to latent space with StyleGAN2.
nlp-resume-parser - NLP-powered, GPT-3 enabled Resume Parser from PDF to JSON.
YaLM-100B - Pretrained language model with 100B parameters
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
openplayground - An LLM playground you can run on your laptop
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
pen.el - Pen.el stands for Prompt Engineering in emacs. It facilitates the creation, discovery and usage of prompts to language models. Pen supports OpenAI, EleutherAI, Aleph-Alpha, HuggingFace and others. It's the engine for the LookingGlass imaginary web browser.
manim - Animation engine for explanatory math videos
awesome-generative-ai - A curated list of modern Generative Artificial Intelligence projects and services
cupscale - Image Upscaling GUI based on ESRGAN