LLaMA_MPS vs DeepSpeedExamples

| | LLaMA_MPS | DeepSpeedExamples |
|---|---|---|
| Mentions | 4 | 5 |
| Stars | 566 | 5,688 |
| Growth | - | 2.2% |
| Activity | 10.0 | 8.7 |
| Latest commit | about 1 year ago | 2 days ago |
| Language | Python | Python |
| License | GPL-3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLaMA_MPS
-
A brief history of LLaMA models
Most places that recommend llama.cpp for Mac fail to mention https://github.com/jankais3r/LLaMA_MPS, which runs unquantized 7B and 13B models directly on the M1/M2 GPU. It's slightly slower (not by much) and uses significantly less energy. To me, not having to quantize is a huge win; I wish more people knew about it.
-
Databricks Releases 15K Record Training Corpus for Instruction Tuning LLMs
I saw this: https://github.com/jankais3r/LLaMA_MPS
It runs slightly slower on the GPU than under llama.cpp, but uses much less power doing so.
I would guess the slowness is due to the immaturity of the PyTorch MPS backend; the asitop graphs show a fair amount of CPU activity alongside the GPU, so it might be inefficiently falling back to the CPU for some ops and swapping layers back and forth (I have no idea, just guessing).
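For context on the "falling back to CPU" point, here is a minimal sketch (not the LLaMA_MPS code itself; the toy Linear layer is a stand-in for a real model) of running a PyTorch module on the Apple Silicon GPU via the MPS backend, with PyTorch's CPU-fallback flag enabled for ops the backend doesn't implement yet:

```python
# Sketch: run a PyTorch module on the Apple Silicon GPU (MPS backend).
# PYTORCH_ENABLE_MPS_FALLBACK must be set before torch is imported; it makes
# unsupported ops silently fall back to the CPU instead of raising an error.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# Use the MPS device if it is available (macOS 12.3+, Apple Silicon), else CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Toy stand-in for a transformer layer; a real LLaMA checkpoint would be
# loaded and moved to the device the same way with .to(device).
model = torch.nn.Linear(4096, 4096).to(device).eval()

with torch.no_grad():
    x = torch.randn(1, 4096, device=device)
    y = model(x)

print(y.shape, y.device)  # expect: torch.Size([1, 4096]) mps:0
```

The fallback flag is exactly the kind of CPU/GPU ping-ponging the comment above speculates about: convenient for getting a model running, but each fallback op costs a transfer between CPU and GPU memory.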
-
Apple's effort on developing ChatGPT-like functions?
Not ChatGPT, but also nothing to sneeze at: https://github.com/jankais3r/LLaMA_MPS runs the 7B LLM on a 32 GB M1 Pro.
-
llama VS LLaMA_MPS - a user-suggested alternative
2 projects | 10 Mar 2023
DeepSpeedExamples
-
[R] 🚀🧠Introducing 3 New LoRA Models Trained with LLaMA on the OASST Dataset at 2048 seq length! 📊🔥
Microsoft recently launched something called DeepSpeed Chat, which should speed up the RLHF process a good bit. So hopefully we will start seeing those soon. We are working on some now that we will open-source on completion!
-
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
Also see the example repo README: https://github.com/microsoft/DeepSpeedExamples/tree/master/a...
> With just one click, you can train, generate and serve a 1.3 billion parameter ChatGPT model within 1.36 hours on a single consumer-grade NVIDIA A6000 GPU with 48GB memory. On a single DGX node with 8 NVIDIA A100-40G GPUs, DeepSpeed-Chat enables training for a 13 billion parameter ChatGPT model in 13.6 hours. On multi-GPU multi-node systems (cloud scenarios), i.e., 8 DGX nodes with 8 NVIDIA A100 GPUs/node, DeepSpeed-Chat can train a 66 billion parameter ChatGPT model in under 9 hours. Finally, it enables 15X faster training over existing RLHF systems.
> The following are some of the open-source examples that are powered by DeepSpeed: Databricks Dolly, LMFlow, CarperAI-TRLX, Huggingface-PEFT
(disclaimer: MSFT/GH employee, not affiliated with this project)
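The speedups quoted above come from DeepSpeed's engine wrapping the model for ZeRO-partitioned, mixed-precision training, which is also what DeepSpeed-Chat builds its RLHF pipeline on. A minimal sketch, assuming DeepSpeed is installed and a CUDA GPU is available; the toy model and config values are illustrative, not the DeepSpeed-Chat defaults:

```python
# Sketch: wrap a PyTorch model in a DeepSpeed engine with ZeRO stage 2 and fp16.
import torch
import deepspeed

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer state + gradients
}

# The engine handles loss scaling, gradient partitioning, and the optimizer step.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# One illustrative training step with a dummy objective.
inputs = torch.randn(4, 1024, device=engine.device, dtype=torch.half)
loss = engine(inputs).float().pow(2).mean()
engine.backward(loss)
engine.step()
```

This would be launched with the `deepspeed` launcher (e.g. `deepspeed train_sketch.py`), which initializes the distributed environment and spawns one process per GPU; the actual DeepSpeed-Chat scripts live in the DeepSpeedExamples repo linked above.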
-
Databricks Releases 15K Record Training Corpus for Instruction Tuning LLMs
Can you compare your Dolly offering with https://github.com/microsoft/DeepSpeedExamples/blob/master/a... ?
- DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
-
Microsoft DeepSpeed
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales https://github.com/microsoft/DeepSpeedExamples/tree/master/a...
What are some alternatives?
llama-mps - Experimental fork of Facebook's LLaMA model that runs with GPU acceleration on Apple Silicon M1/M2
ggml - Tensor library for machine learning
m1xxx - Unofficial native Mixxx builds for macOS (Apple Silicon/Intel) and Linux
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
dolly - Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform
RedPajama-Data - The RedPajama-Data repository contains code for preparing large datasets for training large language models.
vanilla-llama - Plain pytorch implementation of LLaMA
Multi-Modality-Arena - Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
llama-dfdx - LLaMa 7b with CUDA acceleration implemented in rust. Minimal GPU memory needed!
peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.