FlexGen

| | FlexGen | FlexGen |
|---|---|---|
| Mentions | 39 | 19 |
| Stars | 9,007 | 5,350 |
| Growth | 0.8% | - |
| Activity | 3.0 | 10.0 |
| Latest commit | 15 days ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
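The site doesn't publish its activity formula; below is a minimal sketch of one way such a recency-weighted score could be computed, assuming simple exponential decay (the half-life and scaling are invented for illustration, not the site's actual method):

```python
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes a
    weight that halves every `half_life_days`, so recent commits count
    more than old ones. Illustrative only; not the site's formula."""
    now = time.time()
    half_life_s = half_life_days * 86400.0
    return sum(0.5 ** ((now - ts) / half_life_s) for ts in commit_timestamps)

# Example: three commits, 1, 10, and 200 days old.
day = 86400.0
print(activity_score([time.time() - d * day for d in (1, 10, 200)]))
```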
FlexGen
- Run 70B LLM Inference on a Single 4GB GPU with This New Technique
- Colorful Custom RTX 4060 Ti GPU Clocks Outed, 8 GB VRAM Confirmed
- Local Alternatives of ChatGPT and Midjourney
LLaMA, Pythia, RWKV, Flan-T5 (self-hosted), FlexGen
- FlexGen: Running large language models on a single GPU
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
> With no real knowledge of LLMs, having only recently started to understand what LLM terms mean, such as 'model, inference, LLM model, instruction set, fine tuning', what else do you think is required to make a tool like yours?
This was me a few weeks ago. I got interested in all this when FlexGen (https://github.com/FMInference/FlexGen) was announced, which made it possible to run inference with the OPT models on consumer hardware. I'm an avid user of Stable Diffusion, and I wanted to see if I could have an SD equivalent of ChatGPT.
Not understanding the details of hyperparameters or terminology, I basically asked ChatGPT to explain to me what these things are:
Explain to someone who is a software engineer with limited knowledge of ML terms or linear algebra, what is "feed forward" and "self-attention" in the context of ML and large language models. Provide examples when possible.
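The two concepts that prompt asks about are compact enough to show in code. Below is a minimal single-head sketch in NumPy, with random weights, no masking, and no multi-head or layer-norm machinery, purely to make the shapes and data flow concrete:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention, single head.
    x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_head).
    Each position's output is a weighted mix of every position's
    value vector, with weights from query-key similarity."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])        # (seq_len, seq_len)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax over keys
    return w @ v                                   # (seq_len, d_head)

def feed_forward(x, w1, b1, w2, b2):
    """Position-wise feed-forward: the same two-layer MLP applied
    independently to every position."""
    return np.maximum(0.0, x @ w1 + b1) @ w2 + b2  # ReLU in between

rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 4, 8, 32
x = rng.normal(size=(seq_len, d_model))
wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
attn = self_attention(x, wq, wk, wv)
out = feed_forward(attn,
                   rng.normal(size=(d_model, d_ff)), np.zeros(d_ff),
                   rng.normal(size=(d_ff, d_model)), np.zeros(d_model))
print(attn.shape, out.shape)  # (4, 8) (4, 8)
```

In a real transformer block these two pieces alternate, with residual connections and normalization around each.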
- Could this new FlexGen be used in place of GPTQ, or is this different?
- OpenAI is expensive
FlexGen
- Training LLaMA-65B with Stanford Code
#1: Progress Update | 4 comments
#2: the default UI on the pinned Google Colab is buggy so I made my own frontend - YAFFOA. | 18 comments
#3: Paper reduces resource requirement of a 175B model down to 16GB GPU | 19 comments
- Replika users fell in love with their AI chatbot companions. Then they lost them
It's really just a GPU VRAM limitation: affordable GPUs are rather memory starved.
Fortunately, people have started writing implementations for pipelining across multiple GPUs; a sketch of the related offloading idea follows this comment.
https://github.com/Ying1123/FlexGen
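FlexGen's core trick, per the linked repo, is offloading rather than pure multi-GPU pipelining: weights sit in CPU RAM (or on disk) and are streamed to the GPU one layer at a time. A minimal PyTorch sketch of that idea, not FlexGen's actual scheduler:

```python
import torch
import torch.nn as nn

def offloaded_forward(layers, x, device="cuda"):
    """Run a layer stack too big for GPU memory by keeping all weights
    on the CPU and moving one layer at a time to the GPU for its
    forward pass. Illustrative sketch of offloading, not FlexGen's
    actual implementation (which also overlaps transfer and compute)."""
    x = x.to(device)
    for layer in layers:
        layer.to(device)          # stream this layer's weights in
        with torch.no_grad():
            x = layer(x)
        layer.to("cpu")           # evict to free VRAM for the next one
    return x

# Toy model: 24 large linear layers, moved to the GPU one by one.
layers = [nn.Linear(4096, 4096) for _ in range(24)]
out = offloaded_forward(layers, torch.randn(1, 4096),
                        device="cuda" if torch.cuda.is_available() else "cpu")
print(out.shape)  # torch.Size([1, 4096])
```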
- As with Stable Diffusion, new LAION-based AI projects are coming up slowly but surely: Paper reduces resource requirement of a 175B model down to 16GB GPU
- And Here..We..Go: Running large language models like ChatGPT on a single GPU. Up to 100x faster than other offloading systems
- When, how and why will this Stable Diffusion spring stop?
Actually there's a solution: read this paper https://github.com/Ying1123/FlexGen/blob/main/docs/paper.pdf
- Exciting new shit.
FlexGen - Run big models on your small GPU https://github.com/Ying1123/FlexGen
- Paper reduces resource requirement of a 175B model down to 16GB GPU (see the back-of-the-envelope sketch after this list)
- FlexGen - Run 175B Parameter Models on consumer hardware
- Running large language models like ChatGPT on a single GPU
- FlexGen: Running large language models like ChatGPT/GPT-3/OPT-175B on a single GPU
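The 16GB figure in these titles is easy to sanity-check. A back-of-the-envelope in Python, assuming an fp16 baseline and the 4-bit compression described in the FlexGen paper (order-of-magnitude numbers, ignoring overheads):

```python
# Rough memory math for OPT-175B (illustrative; ignores activations,
# KV cache, and per-group quantization overhead).
params = 175e9

fp16_gb = params * 2 / 1e9    # 16 bits per weight -> ~350 GB
int4_gb = params * 0.5 / 1e9  # 4 bits per weight  -> ~88 GB
print(f"fp16 weights: {fp16_gb:.0f} GB, 4-bit weights: {int4_gb:.0f} GB")

# Even at 4 bits the full model exceeds a 16 GB card, so the paper's
# other ingredient is offloading: keep weights in CPU RAM/disk and
# stream one layer at a time, so the GPU only holds the working set.
```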
What are some alternatives?
llama - Inference code for Llama models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
CTranslate2 - Fast inference engine for Transformer models
text-generation-inference - Large Language Model Text Generation Inference
ggml - Tensor library for machine learning
whisper.cpp - Port of OpenAI's Whisper model in C/C++
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
rust-bert - Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
audiolm-pytorch - Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.