llama-int8 vs egghead
| | llama-int8 | egghead |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 1,044 | 3 |
| Growth | - | - |
| Activity | 3.6 | 9.2 |
| Last commit | about 1 year ago | 7 days ago |
| Language | Python | Rust |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama-int8
- My new home server. :)
-
Show HN: Llama-dl – high-speed download of LLaMA, Facebook's 65B GPT model
If anyone is interested in running this at home, please follow the llama-int8 project [1]. LLM.int8() is a recent development allowing LLMs to run in half the memory without loss of performance [2]. Note that at the end of [2]'s abstract, the authors state "This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our software." I'm very thankful we have researchers like this further democratizing access to this data and prying it out of the hands of the gatekeepers who wish to monetize it.
[1] https://github.com/tloen/llama-int8
[2] https://arxiv.org/abs/2208.07339
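The llama-int8 repo patches Meta's reference code directly; as a hedged illustration of the same LLM.int8() technique, the sketch below uses the Hugging Face transformers + bitsandbytes integration instead (the `load_in_8bit=True` path). The model name and prompt are illustrative stand-ins, not taken from the repo.

```python
# Sketch: 8-bit inference via LLM.int8() through the transformers/bitsandbytes
# integration -- not llama-int8's own code, but the same underlying idea.
# Requires: pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # illustrative; any causal LM on the Hub works

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # let accelerate place weights on available GPUs
    load_in_8bit=True,   # quantize linear layers to int8 at load time
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```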
-
[D] First glance at LLaMA
To add a bit more context, the code other people linked (https://github.com/tloen/llama-int8) assumes a single GPU. So if you want to run it on 2x3090, you'll need to modify it a bit.
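The comment's actual modification isn't quoted here. As a rough sketch of one common way to spread a checkpoint across two 24 GB cards (again via transformers/accelerate rather than the llama-int8 code itself), the `device_map`/`max_memory` options look like this; the checkpoint name and memory caps are illustrative:

```python
# Sketch: splitting one model across 2x RTX 3090 with accelerate's device map.
# Not the modification from the quoted comment -- just one common approach.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-13b",                   # illustrative checkpoint
    device_map="auto",                    # shard layers across visible GPUs
    max_memory={0: "20GiB", 1: "20GiB"},  # leave headroom on each 24 GB card
    load_in_8bit=True,                    # combine with LLM.int8() to halve memory
)
print(model.hf_device_map)                # shows which layers landed on which GPU
```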
- [D] Is it possible to run Meta's LLaMA 65B model on consumer-grade hardware?
egghead
-
Show HN: Llama-dl – high-speed download of LLaMA, Facebook's 65B GPT model
It's a toy I threw together as a weekend project, but you're welcome to give it a whirl: https://github.com/toasterrepairman/egghead
Here's the rundown:
- You need libtorch, openssl, and cargo installed on your system before compiling
- You have to put the variables from the README in your ~/.bashrc, along with a valid Discord bot token
Once you do that, it should "just work". It's using a super pruned model with high-temperature tuning, so the results should be... dicey. I assume no responsibility for the vast amount of misinformation this will produce.
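egghead itself is Rust, but purely to illustrate why high-temperature sampling makes output "dicey", here is a minimal Python sketch of temperature scaling; the function name and toy logits are illustrative, not egghead's actual sampler:

```python
import torch

def sample_next_token(logits: torch.Tensor, temperature: float) -> int:
    """Sample one token id; illustrative, not egghead's actual code."""
    # Dividing logits by a temperature > 1 flattens the distribution,
    # so unlikely (often nonsensical) tokens get picked more frequently.
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

logits = torch.tensor([4.0, 2.0, 0.5, 0.1])        # toy vocabulary of 4 tokens
print(sample_next_token(logits, temperature=0.7))  # mostly picks token 0
print(sample_next_token(logits, temperature=2.5))  # far more random
```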
Commands include "e.help" for help, "e.ask" for traditional ChatGPT-style questions, "e.news" to grab a Fox headline and generate the rest, "e.wiki" to look up a Wikipedia article and use it as a prompt, and "e.hn"... a feature I will build Soon™.
Let me know if you run into any issues!
What are some alternatives?
llama - Inference code for Llama models
llama-dl - High-speed download of LLaMA, Facebook's 65B parameter GPT model [UnavailableForLegalReasons - Repository access blocked]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
test - Measuring Massive Multitask Language Understanding | ICLR 2021
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
llama-cpu - Fork of Facebook's LLaMA model to run on CPU