llama-cpu vs bitsandbytes-win-prebuilt

| | llama-cpu | bitsandbytes-win-prebuilt |
|---|---|---|
| Mentions | 9 | 4 |
| Stars | 775 | 75 |
| Growth | - | - |
| Activity | 3.1 | 10.0 |
| Last Commit | about 1 year ago | over 1 year ago |
| Language | Python | - |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
llama-cpu
- Why is ChatGPT 3.5 API 10x cheaper than GPT3?
  You've probably heard, but LLaMA was just released, and its 13B-parameter model outperforms GPT-3 on most benchmarks (because it was trained on a lot more data). Someone has already quantized it to 4 and 3 bits, and it performs virtually the same. It also apparently performs well on CPUs (several words per second on a 7900X). Running something equivalent to GPT-3.5 on a phone is not that far out.
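The 4-bit and 3-bit results mentioned in that excerpt likely refer to GPTQ-style post-training quantization (see GPTQ-for-LLaMa under the alternatives below). As a hedged illustration of the storage-size idea only, here is a minimal round-to-nearest quantizer in PyTorch; real GPTQ uses per-group scales and calibration data, so treat this as a sketch, not the actual method:

```python
import torch

def quantize_rtn(w: torch.Tensor, bits: int = 4):
    """Symmetric round-to-nearest quantization with one scale per tensor."""
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit, 3 for 3-bit
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), min=-qmax - 1, max=qmax)
    return q.to(torch.int8), scale        # 4-bit values stored in int8 here

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4096, 4096)               # stand-in for one LLaMA weight matrix
q, scale = quantize_rtn(w, bits=4)
print("mean abs error:", (w - dequantize(q, scale)).abs().mean().item())
```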
- Fork of Facebook’s LLaMa model to run on CPU
- Llama-CPU: Fork of Facebook's LLaMa model to run on CPU
- [D] Tutorial: Run LLaMA on 8gb vram on windows (thanks to bitsandbytes 8bit quantization)
  I tried to port the llama-cpu version to a GPU-accelerated MPS version for Macs. It runs, but the outputs are not as good as expected, and it often emits "-1" tokens. Any help and contributions on fixing it are welcome!
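For reference, the device plumbing such a port needs is small. Below is a minimal sketch using PyTorch's MPS availability check, with a stand-in module rather than the actual LLaMA model; a common first thing to try for garbage tokens on MPS at the time was keeping the model in float32 rather than half precision:

```python
import torch

# Use the Metal (MPS) backend when available, otherwise fall back to CPU,
# which is what llama-cpu itself does.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(512, 512).to(device)   # stand-in for the real model
x = torch.randn(1, 512, device=device)
print(model(x).device)
```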
- Facebook LLAMA is being openly distributed via torrents | Hacker News
  You can run it with only a CPU and 32 gigs of RAM: https://github.com/markasoftware/llama-cpu
- [D] Is it possible to run Meta's LLaMA 65B model on consumer-grade hardware?
- Facebook LLAMA is being openly distributed via torrents
  I was able to run 7B on a CPU, inferring several words per second: https://github.com/markasoftware/llama-cpu
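The 32 GB figure quoted above is easy to sanity-check: at fp32 the 7B weights alone come to roughly 26 GiB, which fits (barely) in 32 GB with some headroom for activations. A quick back-of-the-envelope calculation:

```python
# Rough memory footprint of the 7B weights at various precisions; this
# ignores activations and the KV cache, which add a few GiB on top.
params = 7e9
for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 2**30:5.1f} GiB")
# fp32: 26.1 GiB, fp16: 13.0 GiB, int8: 6.5 GiB, int4: 3.3 GiB
```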
bitsandbytes-win-prebuilt
- bitsandbytes now for Windows (8-bit CUDA functions for PyTorch)
  There used to be a compiled version from https://github.com/DeXtmL/bitsandbytes-win-prebuilt, but now there is a new version (from last week) at https://github.com/acpopescu/bitsandbytes/releases, which looks like it may become the start of Windows support in the official repo.
- [D] Tutorial: Run LLaMA on 8gb vram on windows (thanks to bitsandbytes 8bit quantization)
  Put libbitsandbytes_cuda116.dll in C:\Users\xxx\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\
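If you prefer to script that copy step, here is a hedged sketch: it locates the installed bitsandbytes package without importing it (importing can fail on Windows before the DLL is in place) and drops the DLL next to its Python files. The download path below is hypothetical; substitute wherever you saved the file:

```python
import shutil
from importlib import util
from pathlib import Path

# Find the installed bitsandbytes package directory without importing it.
spec = util.find_spec("bitsandbytes")
pkg_dir = Path(spec.origin).parent                       # .../site-packages/bitsandbytes
dll = Path(r"C:\Downloads\libbitsandbytes_cuda116.dll")  # hypothetical download location
shutil.copy(dll, pkg_dir)
print(f"copied {dll.name} -> {pkg_dir}")
```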
- Running Pygmalion 6b with 8GB of VRAM
  Download these 2 DLL files from here, then move them into "installer_files\env\lib\site-packages\bitsandbytes\" under your oobabooga root folder (where you extracted the one-click installer).
- Has anyone gotten the models to load via 8-bit for windows?!?!?
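As for that last question: once the DLL is in place, the 8-bit path on the transformers side is a single flag. A minimal sketch, assuming transformers plus accelerate are installed and the model path points at locally converted LLaMA weights (the path below is a placeholder, not a real hub id):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-hf"   # placeholder: your converted local weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,   # bitsandbytes quantizes the weights to int8 on load
    device_map="auto",   # accelerate places layers within the available VRAM
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```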
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama - Inference code for Llama models
bitsandbytes - 8-bit CUDA functions for PyTorch
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
awesome-ml - Curated list of useful LLM / Analytics / Datascience resources
wrapyfi-examples_llama - Inference code for facebook LLaMA models with Wrapyfi support
one-click-installers - Simplified installers for oobabooga/text-generation-webui.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
KoboldAI-Client
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.