LLaMA_MPS VS peft

Compare LLaMA_MPS vs peft and see how they differ.

LLaMA_MPS

Run LLaMA inference on Apple Silicon GPUs. (by jankais3r)
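A minimal, hypothetical sketch of what running inference on an Apple Silicon GPU looks like with PyTorch's Metal Performance Shaders (MPS) backend (a placeholder layer stands in for an actual LLaMA checkpoint):

```python
import torch

# Select the Metal Performance Shaders (MPS) backend when available,
# which is how Apple Silicon GPUs are targeted; otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Placeholder module standing in for a loaded LLaMA model (hypothetical).
model = torch.nn.Linear(4096, 4096).to(device)

# Inputs must live on the same device as the model.
x = torch.randn(1, 4096, device=device)
with torch.no_grad():
    y = model(x)

print(y.shape, y.device)
```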

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (by huggingface)
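A minimal sketch of a typical peft workflow, wrapping a base model with LoRA adapters so that only a small fraction of parameters is trained (the checkpoint name and hyperparameters below are illustrative, not taken from either repository):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base causal language model (example checkpoint).
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Configure low-rank adapters on the attention projections.
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # modules to adapt (model-specific)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # reports the small trainable fraction
```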
                 LLaMA_MPS            peft
Mentions         4                    26
Stars            566                  13,877
Growth           -                    4.1%
Activity         10.0                 9.7
Latest commit    about 1 year ago     2 days ago
Language         Python               Python
License          GPL-3.0              Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

LLaMA_MPS

Posts with mentions or reviews of LLaMA_MPS. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-29.

peft

Posts with mentions or reviews of peft. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-05.

What are some alternatives?

When comparing LLaMA_MPS and peft you can also consider the following projects:

llama-mps - Experimental fork of Facebook's LLaMA model that runs with GPU acceleration on Apple Silicon M1/M2

lora - Using Low-rank adaptation to quickly fine-tune diffusion models.

m1xxx - Unofficial native Mixxx builds for macOS (Apple Silicon/Intel) and Linux

LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (a sketch of the underlying technique follows this list)

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

alpaca-lora - Instruct-tune LLaMA on consumer hardware

RedPajama-Data - The RedPajama-Data repository contains code for preparing large datasets for training large language models.

dalai - The simplest way to run LLaMA on your local machine

vanilla-llama - Plain pytorch implementation of LLaMA

Multi-Modality-Arena - Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!

minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.
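
Several of the projects above (lora, LoRA, alpaca-lora, minLoRA) build on the same low-rank adaptation idea. A library-agnostic PyTorch sketch of that technique, not taken from any of these repositories:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = Wx + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only A and B are trainable
```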