FlexGen vs llama-cpu

| | FlexGen | llama-cpu |
|---|---|---|
| Mentions | 39 | 9 |
| Stars | 9,007 | 775 |
| Growth | 0.8% | - |
| Activity | 3.0 | 3.1 |
| Latest commit | 15 days ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FlexGen
- Run 70B LLM Inference on a Single 4GB GPU with This New Technique
- Colorful Custom RTX 4060 Ti GPU Clocks Outed, 8 GB VRAM Confirmed
- Local Alternatives of ChatGPT and Midjourney
  LLaMA, Pythia, RWKV, Flan-T5 (self-hosted), FlexGen
- FlexGen: Running large language models on a single GPU
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
  > With no real knowledge of LLMs, and having only recently started to understand what LLM terms such as 'model, inference, LLM model, instruction set, fine tuning' mean, what else do you think is required to make a tool like yours?
  This was me a few weeks ago. I got interested in all this when FlexGen (https://github.com/FMInference/FlexGen) was announced, which made it possible to run inference with the OPT models on consumer hardware (a rough sketch of the offloading idea appears after this list). I'm an avid user of Stable Diffusion, and I wanted to see whether I could have an SD equivalent of ChatGPT.
  Not understanding the details of the hyperparameters or the terminology, I basically asked ChatGPT to explain these things to me: "Explain to someone who is a software engineer with limited knowledge of ML terms or linear algebra what 'feed forward' and 'self-attention' are in the context of ML and large language models. Provide examples when possible." (A minimal code illustration of both terms follows this list.)
- Could this new FlexGen be used in place of GPTQ, or is this different?
- OpenAI is expensive
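FlexGen's headline capability (running large OPT models on a single consumer GPU) comes from aggressively offloading weights and the KV cache between GPU, CPU RAM, and disk. Below is a minimal sketch of the layer-by-layer offloading idea in plain PyTorch; it is not FlexGen's actual API, and the toy linear layers stand in for real transformer blocks.

```python
import torch
import torch.nn as nn

# Hypothetical toy model: 8 linear layers standing in for transformer
# blocks. All weights live in CPU RAM by default.
layers = nn.ModuleList([nn.Linear(1024, 1024) for _ in range(8)])

def offloaded_forward(x: torch.Tensor) -> torch.Tensor:
    """Run each layer on the accelerator while keeping only one
    layer's weights resident there at a time (the offloading idea)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = x.to(device)
    for layer in layers:
        layer.to(device)   # stream this layer's weights in
        x = layer(x)       # compute on the accelerator
        layer.to("cpu")    # evict weights to free memory for the next layer
    return x.cpu()

print(offloaded_forward(torch.randn(1, 1024)).shape)  # torch.Size([1, 1024])
```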
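For the 'feed forward' and 'self-attention' question quoted above, here is a minimal self-contained illustration of both building blocks. This is the generic textbook formulation, not code from FlexGen or any project on this page; real models add learned Q/K/V projections, multiple heads, and residual connections.

```python
import torch
import torch.nn.functional as F

def self_attention(x: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product self-attention: every token attends to every
    token. x has shape (seq_len, dim)."""
    d = x.shape[-1]
    scores = x @ x.transpose(0, 1) / d ** 0.5  # (seq_len, seq_len) similarities
    weights = F.softmax(scores, dim=-1)        # each row sums to 1
    return weights @ x                         # weighted mix of token vectors

def feed_forward(x: torch.Tensor, w1: torch.Tensor, w2: torch.Tensor) -> torch.Tensor:
    """Position-wise feed-forward: the same 2-layer MLP applied to each token."""
    return F.relu(x @ w1) @ w2

seq_len, dim, hidden = 5, 8, 32
x = torch.randn(seq_len, dim)
mixed = self_attention(x)  # tokens exchange information
out = feed_forward(mixed, torch.randn(dim, hidden), torch.randn(hidden, dim))
print(out.shape)  # torch.Size([5, 8])
```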
llama-cpu
- Why is the ChatGPT 3.5 API 10x cheaper than GPT-3?
  You've probably heard, but LLaMA was just released, and its 13B-parameter model outperforms GPT-3 on most metrics (because it was trained on a lot more data). Someone has already quantized it to 4 and 3 bits, and it performs virtually the same (a sketch of the round-to-nearest quantization idea follows this list). It also apparently performs well on CPUs (several words per second on a 7900X). Running something equivalent to GPT-3.5 on a phone is not that far out.
- Fork of Facebook’s LLaMa model to run on CPU
- Llama-CPU: Fork of Facebook's LLaMa model to run on CPU
- [D] Tutorial: Run LLaMA on 8 GB VRAM on Windows (thanks to bitsandbytes 8-bit quantization)
  I tried to port the llama-cpu version to a GPU-accelerated MPS version for Macs. It runs, but the outputs are not as good as expected, and it often emits "-1" tokens (a minimal device-selection sketch follows this list). Any help and contributions toward fixing it are welcome!
- Facebook LLAMA is being openly distributed via torrents
  You can run it with only a CPU and 32 GB of RAM: https://github.com/markasoftware/llama-cpu
- [D] Is it possible to run Meta's LLaMA 65B model on consumer-grade hardware?
- Facebook LLAMA is being openly distributed via torrents
  I was able to run 7B on a CPU, inferring several words per second: https://github.com/markasoftware/llama-cpu
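On the 4-bit and 3-bit claim above: the technique is post-training quantization. The sketch below shows round-to-nearest 4-bit quantization with a single per-tensor scale, just to make the arithmetic concrete; real schemes such as GPTQ use per-group scales and calibration data to keep the quality loss small.

```python
import torch

def quantize_4bit(w: torch.Tensor):
    """Round-to-nearest 4-bit quantization with one per-tensor scale.
    Real schemes (e.g. GPTQ) use per-group scales and error correction."""
    qmax = 7  # signed 4-bit integer range is [-8, 7]
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(1024, 1024)        # stand-in for a weight matrix
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)
# The reconstruction error is small relative to the weights themselves,
# which is why low-bit models can "perform virtually the same".
print((w - w_hat).abs().mean() / w.abs().mean())
```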
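On the MPS port mentioned above: the mechanical part of such a port is selecting the right torch device, as in the generic PyTorch pattern sketched below (this is not code from the llama-cpu repository). The "-1" tokens the poster saw would be a separate numerical issue in the backend, which device selection alone does not fix.

```python
import torch

# Pick the best available device: CUDA, then Apple's Metal (MPS), then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model = torch.nn.Linear(10, 10).to(device)  # placeholder for the real model
x = torch.randn(2, 10, device=device)
print(model(x).device)
```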
What are some alternatives?
llama - Inference code for Llama models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
text-generation-inference - Large Language Model Text Generation Inference
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
whisper.cpp - Port of OpenAI's Whisper model in C/C++
wrapyfi-examples_llama - Inference code for facebook LLaMA models with Wrapyfi support
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
bitsandbytes-win-prebuilt
audiolm-pytorch - Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.