LLaMA_MPS vs vanilla-llama

| | LLaMA_MPS | vanilla-llama |
|---|---|---|
| Mentions | 4 | 3 |
| Stars | 566 | 178 |
| Growth | - | - |
| Activity | 10.0 | 4.8 |
| Last commit | about 1 year ago | 12 months ago |
| Language | Python | Python |
| License | GPL-3.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLaMA_MPS
-
A brief history of LLaMA models
Most places that recommend llama.cpp for Mac fail to mention https://github.com/jankais3r/LLaMA_MPS, which runs unquantized 7B and 13B models directly on the M1/M2 GPU. It's slightly slower (not by a lot) and uses significantly less energy. To me, not having to quantize is a huge win; I wish more people knew about it.
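As a rough illustration of what "unquantized on the M1/M2 GPU" looks like in plain PyTorch (a minimal sketch with a placeholder module, not LLaMA_MPS's actual code), the MPS backend is selected like any other device:

```python
# Minimal sketch: unquantized fp16 weights on the Apple Silicon GPU via
# PyTorch's MPS backend. The Linear layer is a placeholder standing in for
# a real LLaMA checkpoint loaded with plain PyTorch.
import torch

device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
dtype = torch.float16 if device.type == "mps" else torch.float32

layer = torch.nn.Linear(4096, 4096).to(device=device, dtype=dtype)  # unquantized weights
hidden = torch.randn(1, 16, 4096, dtype=dtype, device=device)

with torch.no_grad():
    out = layer(hidden)  # executes on the M1/M2 GPU when MPS is available

print(out.device, out.dtype)
```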
-
Databricks Releases 15K Record Training Corpus for Instruction Tuning LLMs
I saw this: https://github.com/jankais3r/LLaMA_MPS
It runs slightly slower on the GPU than under llama.cpp, but uses much less power doing so.
I would guess the slowness is due to the immaturity of the PyTorch MPS backend: the asitop graphs show a lot of CPU activity alongside the GPU, so it might be inefficiently falling back to the CPU for some ops and swapping layers back and forth (I have no idea, just guessing).
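That guess is checkable: PyTorch's MPS backend either raises an error for ops it doesn't support or, with the PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable set, runs them on the CPU and emits a warning. A small sketch (the specific op is only an example; MPS op coverage varies by PyTorch version):

```python
# Sketch of the CPU-fallback theory above: with PYTORCH_ENABLE_MPS_FALLBACK=1,
# ops the MPS backend can't run are executed on the CPU with a UserWarning.
# The variable must be set before torch is imported.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

if torch.backends.mps.is_available():
    x = torch.randn(8, 8, device="mps")
    # If this op (or any op a model uses) isn't implemented for MPS, it runs on
    # the CPU here instead of erroring; watching for the fallback warnings while
    # the model runs is one way to confirm work is bouncing between devices.
    y = torch.linalg.matrix_norm(x)  # example op; support varies by version
    print(y.item())
```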
-
Apple's effort on developing ChatGPT-like functions?
Not ChatGPT, but also nothing to sneeze at: https://github.com/jankais3r/LLaMA_MPS runs the 7B LLM on a 32 GB M1 Pro.
-
llama VS LLaMA_MPS - a user suggested alternative
2 projects | 10 Mar 2023
vanilla-llama
-
How to extract vector embeddings from passages analyzed with LLaMA
I shouldn't have any trouble with the second step, but I'm not sure how to get started on the first one. I found a Python package for interfacing with LLaMA, but its examples focus on just generating text, and I'm not sure how I would actually get embedding vectors or anything beyond text generation. Ideally, I would like to go beyond creating embedding vectors and directly hook up new layers to LLaMA for supervised learning.
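One common way to get passage embeddings from a LLaMA-style model is to run a forward pass with hidden states enabled and pool the last layer's token vectors into a single vector per passage; the pooled tensor can then feed new trainable layers for supervised learning. The sketch below uses the Hugging Face transformers API and an illustrative checkpoint name as assumptions, since it is not necessarily the package the poster found:

```python
# Hedged sketch: extracting a passage embedding from a LLaMA-style model by
# mean-pooling the final layer's hidden states (Hugging Face transformers API;
# the checkpoint name is illustrative).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "huggyllama/llama-7b"  # illustrative checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

def embed(passage: str) -> torch.Tensor:
    inputs = tokenizer(passage, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    last_hidden = outputs.hidden_states[-1]     # (1, seq_len, hidden_dim)
    return last_hidden.mean(dim=1).squeeze(0)   # mean-pool to (hidden_dim,)

vector = embed("LLaMA is a family of large language models.")
print(vector.shape)
```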
- Has anyone used LLaMA with a TPU instead of GPU?
- [P] vanilla-llama, a hackable plain-PyTorch implementation of LLaMA that can be run on any system (if you have enough resources)
What are some alternatives?
llama-mps - Experimental fork of Facebook's LLaMA model which runs it with GPU acceleration on Apple Silicon M1/M2
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
m1xxx - Unofficial native Mixxx builds for macOS (Apple Silicon/Intel) and Linux
chat-llama-discord-bot - A Discord Bot for chatting with LLaMA, Vicuna, Alpaca, MPT, or any other Large Language Model (LLM) supported by text-generation-webui or llama.cpp.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
RedPajama-Data - The RedPajama-Data repository contains code for preparing large datasets for training large language models.
xTuring - Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
Multi-Modality-Arena - Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
coral-pi-rest-server - Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick
llama-dfdx - LLaMa 7b with CUDA acceleration implemented in rust. Minimal GPU memory needed!
dolly - Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform