finetune-gpt2xl vs DeepSpeed
| | finetune-gpt2xl | DeepSpeed |
|---|---|---|
| Mentions | 9 | 51 |
| Stars | 421 | 32,739 |
| Growth | - | 1.6% |
| Activity | 0.0 | 9.8 |
| Latest commit | 11 months ago | 3 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
finetune-gpt2xl
- Fine-tuning?
git clone the finetuning repo (https://github.com/Xirider/finetune-gpt2xl), go into the finetuning repo, and install the rest of the requirements: pip install -r requirements.txt
- Training text-generating models locally
- Dataset For GPT Fine-Tuning
I would like to understand a little better how to organize texts for fine-tuning, especially for GPT-Neo. I plan to use this repo's procedure, which includes the following notice:
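Setting that notice aside, here is a minimal, hypothetical sketch of one common way to organize raw text for this kind of fine-tuning: a train.csv/validation.csv pair with a single "text" column, the layout accepted by Hugging Face run_clm.py-style scripts. The file names, column name, and helper below are assumptions, not taken from the repo.

```python
import csv

def text_to_csv(txt_path: str, csv_path: str) -> None:
    """Turn a plain-text file into a one-column CSV usable as a --train_file."""
    with open(txt_path, encoding="utf-8") as src, \
         open(csv_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(["text"])        # single column named "text"
        for line in src:
            line = line.strip()
            if line:                     # drop blank lines
                writer.writerow([line])

# One document (or paragraph) per line in the source files.
text_to_csv("train.txt", "train.csv")
text_to_csv("validation.txt", "validation.csv")
```

Whether each row should be a paragraph, a whole document, or a fixed-length chunk depends on the model's context window and on how the training script groups texts.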
- How to share the finetuned model
In the code suggested in the video (and in the repo), the flag --fp16 is used. But the "DeepSpeed Integration" article says that:
- [D] I made a script that does all the work to deploy GPT-NEO on Windows 10. (Please Test)
- [Project] Estimating fine-tuning cost
Fine-tuning GPT-Neo 2.7B on WikiText (180 MB) took me about 45 minutes on one preemptible V100 instance on Google Cloud. It costs $1.30 per hour, so the total came to around $1. Here are the steps: https://github.com/Xirider/finetune-gpt2xl
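For reference, the arithmetic behind that estimate, using only the figures quoted above (45 minutes of training at $1.30/hour):

```python
# Back-of-the-envelope fine-tuning cost from the numbers in the post above.
runtime_hours = 45 / 60        # ~45 minutes of training
hourly_rate_usd = 1.30         # preemptible V100 on Google Cloud
cost_usd = runtime_hours * hourly_rate_usd
print(f"Estimated cost: ${cost_usd:.2f}")   # roughly $0.98
```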
- [P] Guide: Finetune GPT2-XL (1.5 Billion Parameters, the biggest model) on a single 16 GB VRAM V100 Google Cloud instance with Huggingface Transformers using DeepSpeed
Here I explain the setup and commands to get it running: https://github.com/Xirider/finetune-gpt2xl
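As a rough illustration of what such a setup looks like, here is a minimal sketch of fine-tuning GPT-2 XL with DeepSpeed through the Hugging Face Trainer. The dataset files, hyperparameters, and the ds_config.json path are assumptions; the actual commands and scripts in the repo may differ.

```python
# Minimal sketch: fine-tune GPT-2 XL with DeepSpeed via the Hugging Face Trainer.
# Dataset files, sequence length, and batch settings are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
tokenizer.pad_token = tokenizer.eos_token            # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

data = load_dataset("csv", data_files={"train": "train.csv",
                                       "validation": "validation.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt2xl-finetuned",
    per_device_train_batch_size=1,        # small per-GPU batch; ZeRO offload does the rest
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    fp16=True,                            # the --fp16 flag discussed in the posts above
    deepspeed="ds_config.json",           # path to a ZeRO config (assumed name)
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=data["train"],
    eval_dataset=data["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

With the deepspeed argument pointing at a ZeRO config (for example stage 2 or 3 with CPU offload), a script like this is typically launched with the DeepSpeed launcher rather than plain python, e.g. deepspeed --num_gpus=1 followed by the script name.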
- Guide: Finetune GPT2-XL (1.5 Billion Parameters, the biggest model) on a single 16 GB VRAM V100 Google Cloud instance with Huggingface Transformers using DeepSpeed
DeepSpeed
- Can we discuss MLOps, Deployment, Optimizations, and Speed?
DeepSpeed can handle parallelism concerns, and even offload data/model to RAM, or even NVMe (!?). I'm surprised I don't see this project used more.
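For illustration, here is a minimal sketch of what that offloading looks like in a DeepSpeed ZeRO config; the choice of stage 3, the devices, and the paths are placeholder assumptions, not recommendations.

```python
import json

# ZeRO stage 3 with optimizer state offloaded to CPU RAM and parameters
# offloaded to NVMe; the nvme_path and pin_memory settings are placeholders.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "nvme", "nvme_path": "/local_nvme",
                          "pin_memory": True},
    },
    "fp16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",   # let the HF integration fill this in
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)           # pass this file to the Trainer / launcher
```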
- [P][D] A100 is much slower than expected at low batch size for text generation
- DeepSpeed-FastGen: High-Throughput for LLMs via MII and DeepSpeed-Inference
- DeepSpeed-FastGen: High-Throughput Text Generation for LLMs
- Why async gradient update doesn't get popular in LLM community?
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models (r/MachineLearning)
- [P] DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
- A comprehensive guide to running Llama 2 locally
While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:
* I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - while sure this might be fixed in the future, it's been an issue since Metal support was added, and is a significant problem if you are actually trying to use it for inference. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experiences. Note that at that point, the limited memory bandwidth will be a big factor.
* If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year-long open bugs in PyTorch[1], and most major LLM libs like deepspeed, bitsandbytes, etc. don't have Apple Silicon support[2][3].
You can see similar patterns w/ Stable Diffusion support[4][5] - support lagging by months, lots of problems and poor performance with inference, much less fine-tuning. You can apply this to basically any ML application you want (srt, tts, video, etc.).
Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac for anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)
[1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...
[2] https://github.com/microsoft/DeepSpeed/issues/1580
[3] https://github.com/TimDettmers/bitsandbytes/issues/485
[4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...
[5] https://forums.macrumors.com/threads/ai-generated-art-stable...
- Microsoft Research proposes new framework, LongMem, allowing for unlimited context length along with reduced GPU memory usage and faster inference speed. Code will be open-sourced
And https://github.com/microsoft/deepspeed
- April 2023
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
What are some alternatives?
detoxify - Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at [email protected].
ColossalAI - Making large AI models cheaper, faster and more accessible
Extracting-Training-Data-from-Large-Langauge-Models - A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020
Megatron-LM - Ongoing research training transformer models at scale
fairscale - PyTorch extensions for high performance and large scale training.
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
llama - Inference code for Llama models
flash-attention - Fast and memory-efficient exact attention
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.