mesh-transformer-jax vs Finetune_LLMs

| | mesh-transformer-jax | Finetune_LLMs |
| --- | --- | --- |
| Mentions | 52 | 2 |
| Stars | 6,213 | 438 |
| Growth | - | - |
| Activity | 0.0 | 8.5 |
| Last commit | over 1 year ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
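The exact formula behind the activity number isn't published, but "recent commits have higher weight" describes a classic recency-decay scheme. A minimal sketch of that idea, assuming an exponential decay with a 30-day half-life; `activity_score` and its parameters are illustrative, not the site's actual metric:

```python
import math
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Recency-weighted commit count: a commit made right now counts
    as 1.0, one made a half-life ago counts as 0.5, and so on.
    Illustrative only -- not the tracker's real formula."""
    now = time.time()
    half_life_s = half_life_days * 86400.0
    return sum(
        math.exp(-math.log(2) * (now - t) / half_life_s)
        for t in commit_timestamps
    )

# Example: ten commits, one per week over the last ten weeks.
commits = [time.time() - w * 7 * 86400 for w in range(10)]
print(round(activity_score(commits), 2))
```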
mesh-transformer-jax
- Large Language Models: Comparing Gen2/Gen3 Models (GPT-3, GPT-J, MT5 and More)
GPT-J is an LLM case study with two goals: training an LLM on a data source containing unique material, and using the training framework Mesh Transformer JAX to achieve high training efficiency through parallelization. There is no research paper about GPT-J, but its GitHub pages provide the model, various checkpoints, and the complete source code for training.
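Mesh Transformer JAX gets its efficiency by sharding the model and the batch across TPU cores (the repo uses `xmap`/`pjit` for the model-parallel part). The sketch below shows only the simpler data-parallel half of the idea with `jax.pmap` on a toy linear model; every name here is illustrative rather than the repo's actual API:

```python
from functools import partial

import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@partial(jax.pmap, axis_name="batch")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # All-reduce so every device applies the same averaged gradient.
    grads = jax.lax.pmean(grads, axis_name="batch")
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)

n_dev = jax.local_device_count()
params = {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}
# Replicate parameters onto every device; shard the batch along axis 0.
params = jax.device_put_replicated(params, jax.local_devices())
x = jnp.ones((n_dev, 8, 4))
y = jnp.ones((n_dev, 8, 1))
params = train_step(params, x, y)
```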
- [R] Parallel Attention and Feed-Forward Net Design for Pre-training and Inference on Transformers
This idea has already been proposed in ViT-22B and GPT-J-6B.
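For context, the "parallel" design means the attention and feed-forward sublayers read the same normalized input and their outputs are summed into a single residual, instead of running one after the other. A toy sketch of the two layouts, where `attn`, `mlp`, and the LayerNorm arguments are stand-ins for real sublayers:

```python
def sequential_block(x, attn, mlp, ln1, ln2):
    # Standard pre-LN transformer block: two residual stages in series.
    x = x + attn(ln1(x))
    return x + mlp(ln2(x))

def parallel_block(x, attn, mlp, ln):
    # GPT-J-style block: one LayerNorm feeds both branches, whose
    # outputs are added in a single residual step, so attention and
    # the feed-forward net can be computed concurrently.
    h = ln(x)
    return x + attn(h) + mlp(h)
```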
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
- [D] An Instruct Version Of GPT-J Using Stanford Alpaca's Dataset
Sure. Here's the repo I used for the fine-tuning: https://github.com/kingoflolz/mesh-transformer-jax. I used 5 epochs, and apart from that I kept the default parameters in the repo.
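mesh-transformer-jax measures a run in steps rather than epochs, so "5 epochs" has to be translated through the dataset size and global batch size. A back-of-the-envelope with made-up numbers (the `total_steps` and `per_replica_batch` key names follow the repo's JSON configs, but verify against your checkout):

```python
# Illustrative numbers only.
num_sequences = 20_000   # tokenized 2048-token sequences in the dataset
global_batch = 16        # per_replica_batch * number of replicas
epochs = 5

total_steps = epochs * num_sequences // global_batch
print(total_steps)       # -> 6250; the value to put in the config's "total_steps"
```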
- Boss wants me to use ChatGPT for work, but I refuse to input my personal phone number. Any advice?
- Let's build GPT: from scratch, in code, spelled out by Andrej Karpathy
You can skip to step 4 using something like GPT-J as far as I understand: https://github.com/kingoflolz/mesh-transformer-jax#links
The pretrained model is already available.
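If all you need is the pretrained model, the weights are also published on the Hugging Face Hub, which sidesteps the JAX checkpoint handling entirely. A minimal sketch using `transformers` (note the full fp32 model needs roughly 24 GB of memory):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer("To build a language model from scratch,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```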
- Best coding model?
The GitHub repo suggests you can change the number of checkpoints to make it run on a GPU.
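The "checkpoints" here most likely refer to gradient (activation) checkpointing: recomputing activations during the backward pass instead of storing them, which trades compute for memory. In JAX that is `jax.checkpoint` (alias `jax.remat`); a toy sketch of the technique, not the repo's actual code:

```python
import jax
import jax.numpy as jnp

def block(x, w):
    return jnp.tanh(x @ w)

def forward(x, weights):
    for w in weights:
        # Each wrapped block discards its intermediates after the
        # forward pass and recomputes them during backprop, so peak
        # memory no longer grows with the number of layers.
        x = jax.checkpoint(block)(x, w)
    return jnp.sum(x)

weights = [jnp.eye(128) * 0.1 for _ in range(24)]
x = jnp.ones((8, 128))
grads = jax.grad(forward, argnums=1)(x, weights)
```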
- Ask HN: What language models can I fine-tune at home?
- selfhosted/open-source ChatGPT alternative?
GPT-J, which uses mesh-transformer-jax: https://github.com/kingoflolz/mesh-transformer-jax
- GPT-J, an open-source alternative to GPT-3
They hinted at it in the screenshot, but the goods are linked from the https://6b.eleuther.ai page: https://github.com/kingoflolz/mesh-transformer-jax#gpt-j-6b (Apache 2)
Finetune_LLMs
- Prepare Dataset
Regarding this: if you have the resources (at least Colab Pro), you would be much better off training GPT-J (aka GPT-J-6B). Not only is it 4x larger than the largest GPT-2, but its architecture, AFAIK, is based on GPT-3's. You can use this repo as a good example for GPT-J fine-tuning.
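Under the hood, Finetune_LLMs wraps the Hugging Face stack (with DeepSpeed for memory). A stripped-down sketch of that route; the dataset file, sequence length, and batch settings are placeholders, and a 6B model realistically needs DeepSpeed or offloading on consumer hardware:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "train.txt" is a placeholder: one training example per line.
data = load_dataset("text", data_files={"train": "train.txt"})
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gptj-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        fp16=True,
    ),
    train_dataset=data["train"],
    # mlm=False yields plain causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```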
- [D] Fine-tuning GPT-J: lessons learned
And this: https://github.com/mallorbc/Finetune_GPTNEO_GPTJ6B
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
tensorflow - An Open Source Machine Learning Framework for Everyone
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
AnglE - Angle-optimized Text Embeddings | 🔥 SOTA on STS and MTEB Leaderboard
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
GoLLIE - Guideline following Large Language Model for Information Extraction
KoboldAI-Client
replicate-llama2-sms-chatbot
alpaca-lora - Instruct-tune LLaMA on consumer hardware
go-llama2 - Llama 2 inference in one file of pure Go