llama-mps
Experimental fork of Facebook's LLaMA model that runs with GPU acceleration on Apple Silicon (M1/M2) (by remixer-dec)
llama-cpu
Fork of Facebook's LLaMA model to run on CPU (by markasoftware)
| | llama-mps | llama-cpu |
|---|---|---|
| Mentions | 4 | 9 |
| Stars | 83 | 775 |
| Growth | - | - |
| Activity | 3.8 | 3.1 |
| Last commit | 9 months ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 only |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
llama-mps
Posts with mentions or reviews of llama-mps.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-13.
- llama.cpp now officially supports GPU acceleration.

  There are currently at least three ways to run LLaMA on an M1 Mac with GPU acceleration:

  - mlc-llm (pre-built, but only one model has been ported so far)
  - tinygrad (very memory-efficient, but not that easy to integrate into other projects)
  - llama-mps (the original LLaMA codebase plus LLaMA-Adapter support)

  (A minimal PyTorch MPS sketch appears at the end of this list.)
- LLaMA-7B in Pure C++ with full Apple Silicon support

  There is also a GPU-accelerated fork of the original repo:
  https://github.com/remixer-dec/llama-mps
- Llama-CPU: Fork of Facebook's LLaMA model to run on CPU
- [D] Tutorial: Run LLaMA on 8 GB VRAM on Windows (thanks to bitsandbytes 8-bit quantization)

  I tried to port the llama-cpu version to a GPU-accelerated MPS version for Macs. It runs, but the outputs are not as good as expected, and it often emits "-1" tokens. Any help and contributions on fixing it are welcome!
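Both the multi-backend roundup and the MPS port described above lean on PyTorch's Metal Performance Shaders backend, which is what llama-mps uses for Apple Silicon GPU acceleration. A minimal sketch, assuming a recent PyTorch build with MPS support; the tensor shapes are illustrative, not taken from llama-mps:

```python
import torch

# Prefer the Apple GPU via the Metal Performance Shaders backend,
# falling back to CPU where MPS is unavailable (e.g. Intel Macs).
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.randn(1, 4096, device=device)     # dummy activation
w = torch.randn(4096, 4096, device=device)  # dummy weight matrix
y = x @ w                                   # matmul executes on the M1/M2 GPU
print(y.device)                             # -> mps:0 on Apple Silicon
```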
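The 8-bit tutorial mentioned above relies on bitsandbytes, which transformers exposes directly at load time. A hedged sketch of that loading path, assuming a CUDA GPU (bitsandbytes does not target MPS) and a LLaMA checkpoint already converted to Hugging Face format; the checkpoint path is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-hf"  # placeholder: a local HF-format LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,   # int8 weights via bitsandbytes, roughly half of fp16 VRAM
    device_map="auto",   # let accelerate place layers on the available GPU
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```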
llama-cpu
Posts with mentions or reviews of llama-cpu.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-03-08.
- Why is the ChatGPT 3.5 API 10x cheaper than GPT-3?

  You've probably heard, but LLaMA was just released, and its 13B-parameter model outperforms GPT-3 on most metrics (because it was trained on a lot more data). Someone has already quantized it to 4 and 3 bits, and it performs virtually the same. It also apparently performs well on CPUs (several words per second on a 7900X). Running something equivalent to GPT-3.5 on a phone is not that far out.

  (A 4-bit loading sketch follows this list.)
- Fork of Facebook's LLaMA model to run on CPU
- Llama-CPU: Fork of Facebook's LLaMA model to run on CPU
- [D] Tutorial: Run LLaMA on 8 GB VRAM on Windows (thanks to bitsandbytes 8-bit quantization)

  I tried to port the llama-cpu version to a GPU-accelerated MPS version for Macs. It runs, but the outputs are not as good as expected, and it often emits "-1" tokens. Any help and contributions on fixing it are welcome!
- Facebook LLaMA is being openly distributed via torrents | Hacker News

  You can run it with only a CPU and 32 GB of RAM: https://github.com/markasoftware/llama-cpu

  (A CPU-inference sketch follows this list.)
- [D] Is it possible to run Meta's LLaMA 65B model on consumer-grade hardware?
- Facebook LLaMA is being openly distributed via torrents

  I was able to run 7B on a CPU, inferring several words per second: https://github.com/markasoftware/llama-cpu
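The 4-bit quantization the GPT-3 comparison above refers to was GPTQ-based at the time; as one concrete illustration of the same idea, transformers can also load LLaMA weights in 4-bit through bitsandbytes. A sketch under those assumptions, with a placeholder checkpoint path and a CUDA GPU assumed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # 4-bit NormalFloat weight format
    bnb_4bit_compute_dtype=torch.float16,  # dequantize to fp16 for matmuls
)
model = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-13b-hf",                # placeholder: HF-format LLaMA checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
# Back-of-the-envelope: 13B params * 0.5 bytes/param ≈ 6.5 GB of weights at 4-bit,
# which is why the quantized 13B model fits on consumer GPUs.
```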
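llama-cpu itself runs Facebook's original example scripts with the GPU requirements stripped out; a rough transformers equivalent shows why 32 GB of RAM is the quoted floor for 7B on CPU. The checkpoint path is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-hf"  # placeholder: HF-format LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,  # fp32 on CPU: ~7B params * 4 bytes ≈ 28 GB of
)                               # weights, hence the "32 gigs of RAM" figure above

inputs = tokenizer("Once upon a time", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)  # several words/s on a fast desktop CPU
print(tokenizer.decode(out[0], skip_special_tokens=True))
```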
What are some alternatives?
When comparing llama-mps and llama-cpu you can also consider the following projects:
llama - Inference code for Llama models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
awesome-ml - Curated list of useful LLM / Analytics / Datascience resources
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
wrapyfi-examples_llama - Inference code for Facebook LLaMA models with Wrapyfi support
LLaMA_MPS - Run LLaMA inference on Apple Silicon GPUs.
bitsandbytes-win-prebuilt
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.