the-pile vs mesh-transformer-jax

| | the-pile | mesh-transformer-jax |
|---|---|---|
| Mentions | 15 | 52 |
| Stars | 1,403 | 6,213 |
| Growth | 1.6% | - |
| Activity | 0.0 | 0.0 |
| Last Commit | about 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
the-pile
-
The Pile
[2] https://github.com/EleutherAI/the-pile/issues/56
-
The Pile: a dataset for language modeling [pdf]
I came so close to getting my dataset DebateSum (https://huggingface.co/datasets/Hellisotherpeople/DebateSum) into the pile, but they decided at the last minute not to add it: https://github.com/EleutherAI/the-pile/issues/56
I'm still a tiny bit salty about that.
-
Sarah Silverman is suing OpenAI and Meta for copyright infringement
Anyone want to check if the book in question is in The Pile dataset?:
https://github.com/EleutherAI/the-pile/blob/master/the_pile/...
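The Pile ships as zstandard-compressed JSONL shards whose records carry `text` and `meta` fields, so a grep-style scan is straightforward. A minimal Python sketch, assuming a locally downloaded shard (the shard filename and search string are illustrative):

```python
import io
import json
import zstandard  # pip install zstandard

def search_pile_shard(path: str, needle: str, limit: int = 5) -> None:
    """Stream one Pile shard (.jsonl.zst) and print documents mentioning `needle`."""
    hits = 0
    with open(path, "rb") as fh:
        reader = zstandard.ZstdDecompressor().stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            doc = json.loads(line)  # each line: {"text": ..., "meta": {...}}
            if needle.lower() in doc["text"].lower():
                print(doc.get("meta"), doc["text"][:200].replace("\n", " "))
                hits += 1
                if hits >= limit:
                    return

# Shard name is illustrative; the released shards are numbered 00-29.
search_pile_shard("00.jsonl.zst", "The Bedwetter")
```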
-
What Types Of Websites Are Typically Scraped To Train LLMs?
All of it, it's quite diverse. Especially the commoncrawl bit, https://github.com/EleutherAI/the-pile.
-
Can anyone answer some questions on how GPT-NeoX-20B was developed, and future models?
For example, before this I didn't realize one of the sources of data The Pile uses is a massive number of emails gathered during the Enron lawsuits. Weird, but cool I guess.
-
How do I add AI modules?
NovelAI's Krake and Euterpe, and the rest, are finetuned versions of existing models. The original models were trained on a mass of text. Krake is a finetune of Neo-X 20b, which was trained on The Pile. NovelAI's finetunes involve further training but on various works of fiction rather than more text trawled from the internet. The statistical rules in the existing models are thus shifted in a (slightly) new direction. Modules refine those statistical rules, or weights, just a little bit more.
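For anyone curious what such a finetune looks like mechanically, it is just continued training of a pretrained causal LM on new text. A minimal sketch with Hugging Face transformers, using a small stand-in model instead of Neo-X 20B (the model name and fiction.txt are illustrative; NovelAI's actual pipeline is not public):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/gpt-neo-125m"  # small stand-in for Neo-X 20B
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# fiction.txt is a hypothetical corpus: one passage of fiction per line.
raw = load_dataset("text", data_files={"train": "fiction.txt"})["train"]
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="fiction-finetune",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```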
- GitHub - EleutherAI/the-pile
-
Sounds about right /s
Literally The Pile.
-
What is the difference between OpenAI and the gpt3 algorithm?
The parameters are learned by training on large datasets like The Pile.
-
Official Beta AMA @ June 14th, 12pm EST
We use GPT-Neo as our base model, which was trained on The Pile; you can see its contents in their GitHub repo: https://github.com/EleutherAI/the-pile
mesh-transformer-jax
-
Large Language Models: Comparing Gen2/Gen3 Models (GPT-3, GPT-J, MT5 and More)
GPT-J is an LLM case study with two goals: training an LLM on a data source containing unique material, and using the training framework Mesh Transformer JAX to achieve high training efficiency through parallelization. There is no research paper about GPT-J, but its GitHub pages provide the model, different checkpoints, and the complete source code for training.
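The parallelization claim is the interesting part. This is not how mesh-transformer-jax is implemented internally (it shards the model itself across TPU cores), but a toy JAX sketch of the basic primitive such training builds on: run a step on every device and all-reduce the gradients:

```python
from functools import partial

import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    # toy linear model; stands in for a transformer forward pass
    return jnp.mean((x @ w - y) ** 2)

@partial(jax.pmap, axis_name="batch")
def train_step(w, x, y):
    grads = jax.grad(loss_fn)(w, x, y)
    grads = jax.lax.pmean(grads, axis_name="batch")  # all-reduce across devices
    return w - 0.01 * grads

n = jax.local_device_count()
w = jnp.zeros((n, 4))                                    # weights replicated per device
x = jax.random.normal(jax.random.PRNGKey(0), (n, 8, 4))  # one batch per device
y = jnp.ones((n, 8))
w = train_step(w, x, y)
```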
-
[R] Parallel Attention and Feed-Forward Net Design for Pre-training and Inference on Transformers
This idea has already been proposed in ViT-22B and GPT-J-6B.
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
-
[D] An Instruct Version Of GPT-J Using Stanford Alpaca's Dataset
Sure. Here's the repo I used for the fine-tuning: https://github.com/kingoflolz/mesh-transformer-jax. I used 5 epochs, and apart from that I kept the default parameters in the repo.
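For context, the Stanford Alpaca dataset is a JSON list of `instruction`/`input`/`output` records rendered into a fixed prompt template before fine-tuning. A small sketch of that preprocessing step (the template text follows the Alpaca release; the local file path is an assumption):

```python
import json

# Prompt templates from the Stanford Alpaca release.
WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n"
    "### Response:\n{output}"
)
NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n### Instruction:\n{instruction}"
    "\n\n### Response:\n{output}"
)

def format_example(rec: dict) -> str:
    """Render one Alpaca record into the fine-tuning prompt format."""
    template = WITH_INPUT if rec.get("input") else NO_INPUT
    return template.format(**rec)

with open("alpaca_data.json") as f:  # file name as in the Alpaca repo
    records = json.load(f)
print(format_example(records[0]))
```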
- Boss wants me to use ChatGPT for work, but I refuse to input my personal phone number. Any advice?
-
Let's build GPT: from scratch, in code, spelled out by Andrej Karpathy
You can skip to step 4 using something like GPT-J as far as I understand: https://github.com/kingoflolz/mesh-transformer-jax#links
The pretrained model is already available.
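Concretely, "skipping to step 4" can be as simple as loading the published weights and sampling. A minimal inference sketch, assuming the Hugging Face mirror of GPT-J rather than the raw JAX checkpoints linked from the repo (fp16 needs a GPU with roughly 16 GB of memory):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6b", torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("The Pile is", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```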
-
Best coding model?
The GitHub repo suggests it's possible to change the number of checkpoints to make it run on a GPU.
- Ask HN: What language models can I fine-tune at home?
-
selfhosted/ open-source ChatGPT alternative?
GPT-J, which uses mesh-transformer-jax: https://github.com/kingoflolz/mesh-transformer-jax
-
GPT-J, an open-source alternative to GPT-3
They hinted at it in the screenshot, but the goods are linked from the https://6b.eleuther.ai page: https://github.com/kingoflolz/mesh-transformer-jax#gpt-j-6b (Apache 2)
What are some alternatives?
datasets - 🤗 The largest hub of ready-to-use NLP datasets for ML models with fast, easy-to-use and efficient data manipulation tools
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
opendyslexic - OpenDyslexic, a typeface that uses typeface shapes & features to help offset some visual symptoms of Dyslexia. Now in SIL-OFL.
tensorflow - An Open Source Machine Learning Framework for Everyone
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
DALLE-mtf - Open-AI's DALL-E for large scale training in mesh-tensorflow.
KoboldAI-Client
alpaca-lora - Instruct-tune LLaMA on consumer hardware
Finetune_LLMs - Repo for fine-tuning Causal LLMs