| | sparsegpt | llm |
|---|---|---|
| Mentions | 16 | 41 |
| Stars | 634 | 5,931 |
| Growth | 5.0% | 2.7% |
| Activity | 2.4 | 9.4 |
| Last commit | about 1 month ago | about 2 months ago |
| Language | Python | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sparsegpt
-
(1/2) May 2023
SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot (https://arxiv.org/abs/2301.00774)
- Why Falcon going Apache 2.0 is a BIG deal for all of us.
-
New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
There is this : https://github.com/IST-DASLab/sparsegpt
-
Webinar: Running LLMs performantly on CPUs Utilizing Pruning and Quantization
Check the paper here, it's interesting: https://arxiv.org/abs/2301.00774
-
OpenAI chief goes before US Congress to propose licenses for building AI
There's no chance that we've peaked in a bang-for-buck sense - we still haven't adequately investigated sparse networks.
Relevantish: https://arxiv.org/abs/2301.00774
The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.
Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.
If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
-
How to run Llama 13B with a 6GB graphics card
Training uses gradient descent, so you want to have good precision during that process. But once you have the overall structure of the network, https://arxiv.org/abs/2210.17323 (GPTQ) showed that you can cut down the precision quite a bit without losing a lot of accuracy. It seems you can cut down further for larger models. For the 13B Llama-based ones, going below 5 bit per parameter is noticeably worse, but for 30B models you can do 4 bits.
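The precision trade-off described above can be sketched with naive round-to-nearest quantization. Note this is not GPTQ itself (which quantizes column-by-column using second-order information to minimize output error); it is just the simple baseline that shows what "cutting down the precision" means:

```python
import numpy as np

def quantize(w, bits):
    """Naive round-to-nearest quantization of a weight tensor to `bits` bits.
    Returns the dequantized weights so the reconstruction error is visible."""
    levels = 2 ** bits - 1
    scale = (w.max() - w.min()) / levels          # one scale for the whole tensor
    q = np.round((w - w.min()) / scale)           # integer codes in [0, levels]
    return q * scale + w.min()                    # map codes back to floats

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

for bits in (8, 5, 4, 3):
    err = np.abs(quantize(w, bits) - w).mean()
    print(f"{bits}-bit mean abs error: {err:.4f}")  # fewer bits -> larger error
```

The per-weight error grows as bits are removed, which is why the comment observes a quality cliff below 4-5 bits; GPTQ's contribution is pushing that cliff lower by choosing the rounding more carefully.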
The same group did another paper https://arxiv.org/abs/2301.00774 which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models.
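"Pruning out a bunch of parameters entirely" can be illustrated with simple magnitude pruning. SparseGPT itself uses a more sophisticated one-shot, second-order method, so treat this only as a sketch of the idea:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(w.size * sparsity)
    threshold = np.partition(np.abs(w).ravel(), k)[k]  # k-th smallest magnitude
    mask = np.abs(w) >= threshold                       # keep only larger weights
    return w * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128)).astype(np.float32)
pruned, mask = magnitude_prune(w, 0.5)
print(f"nonzero fraction: {mask.mean():.2f}")  # ~0.50
```

As the comment notes, the zeros only help if the runtime stores and multiplies the weights sparsely; a dense array of mostly zeros takes the same RAM and compute as before.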
- SparseGPT: Language Models Can Be Accurately Pruned in One-Shot
llm
-
Open-sourcing a simple automation/agent workflow builder
We're open-sourcing a project that lets you build simple automations/agent workflows that use LLMs for different tasks. Kinda like Zapier or IFTTT, but focused on using natural language to accomplish your tasks. It's super early, but we'd love to start getting feedback to steer it in the right direction. It currently supports OpenAI and local models through llm.
-
Meta's Segment Anything written with C++ / GGML
> Tensorflow is a C++ framework that has Python bindings and a Python library, but when the models are served they are running on C++
Sure, and it's only a simple 20 step process that involves building Tensorflow from source. Yeay!
https://medium.com/@hamedmp/exporting-trained-tensorflow-mod...
Let me see what the process for compiling a LLM written in Rust is....
https://github.com/rustformers/llm
cargo install llm-cli
-
Announcing Floneum (A open source graph editor for local AI workflows written in rust)
Floneum is a graph editor for local AI workflows. It uses llm to run large language models locally, egui and dioxus for the frontend, and wasmtime for the plugin system. If you are interested in the project, consider joining the Discord or building a plugin for Floneum in Rust using WASI.
- are there any tools or frameworks similar to "langchain" or "llamaindex" but implemented or designed in a language other than Python?
-
(1/2) May 2023
Run inference for Large Language Models on CPU, with Rust (https://github.com/rustformers/llm)
-
I built a multi-platform desktop app to easily download and run models, open source btw
On the rustformers GitHub page I see that one of the commands to generate an answer is `llm llama infer -m ggml-gpt4all-j-v1.3-groovy.bin -p "Rust is a cool programming language because"`. My basic idea for now is to change the Tauri app so that it runs with `-p prompt`, where the prompt is received from my code through the link, or through a shared variable (if I don't use the link and instead launch your app separately).
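The plan above (shelling out to the llm CLI and passing the prompt via `-p`) can be sketched roughly as follows. The subcommand and model filename come from the comment itself; everything else is a hypothetical illustration, not the actual Tauri app's code:

```python
import subprocess

MODEL = "ggml-gpt4all-j-v1.3-groovy.bin"

def build_command(prompt, model=MODEL):
    # llm's CLI takes the model via -m and the prompt via -p
    return ["llm", "llama", "infer", "-m", model, "-p", prompt]

def infer(prompt):
    # Run the CLI and capture whatever it prints as the completion
    result = subprocess.run(build_command(prompt), capture_output=True, text=True)
    return result.stdout

print(build_command("Rust is a cool programming language because"))
```

Passing the prompt as a separate argv element (rather than interpolating it into a shell string) avoids quoting problems when the prompt contains spaces or quotes.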
- Weekly Megathread - 14 May 2023
-
rustformers/llm: Run inference for Large Language Models on CPU, with Rust 🦀🚀🦙
wonnx has done some fantastic work in this regard, so that's where we plan to start once we get there. In terms of general discussion of alternate backends, see this issue.
- llm: a Rust crate/CLI for CPU inference of LLMs, including LLaMA, GPT-NeoX, GPT-J and more
What are some alternatives?
StableLM - StableLM: Stability AI Language Models
llama.cpp - LLM inference in C/C++
github-copilot-product-specific-terms
ggml - Tensor library for machine learning
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
chat-ui - Open source codebase powering the HuggingChat app
alpaca-lora - Instruct-tune LLaMA on consumer hardware
intel-extension-for-pytorch - A Python package extending official PyTorch to easily obtain better performance on Intel platforms
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
geov - The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
SD-CN-Animation - This script allows you to automate video stylization tasks using StableDiffusion and ControlNet.