| | mlc-llm | sparsegpt |
|---|---|---|
| Mentions | 89 | 16 |
| Stars | 17,053 | 626 |
| Growth | 3.7% | 3.8% |
| Activity | 9.9 | 2.4 |
| Latest commit | 4 days ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mlc-llm
- FLaNK 04 March 2024
-
AI on an Android phone?
This one uses the GPU; it doesn't support Mistral yet: https://github.com/mlc-ai/mlc-llm
-
MLC vs llama.cpp
I have tried running Mistral 7B with MLC on my M1 (Metal), and it kept crashing (GitHub issue filed with a description). Memory-inefficiency problems.
-
[Project] Scaling Llama 2 70B with Multi NVIDIA and AMD GPUs under a $3k budget
Project: https://github.com/mlc-ai/mlc-llm
- Scaling Llama 2 70B with Multi Nvidia/AMD GPU
-
AMD May Get Across the CUDA Moat
For LLM inference, a shoutout to MLC LLM, which runs LLMs on basically any API that's widely available: https://github.com/mlc-ai/mlc-llm
-
ROCm Is AMD's #1 Priority, Executive Says
One of your problems might be that gfx1032 is not supported by AMD's ROCm packages, which have a laughably short list of supported hardware: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...
The usual workaround is to assign the closest supported architecture, e.g. gfx1030, so `HSA_OVERRIDE_GFX_VERSION=10.3.0` might help.
Also, it looks like some of your tested projects are OpenCL? I do something like `yay -S rocm-hip-sdk rocm-ml-sdk rocm-opencl-sdk` to cover all the bases.
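A sketch of that workaround as shell commands; the gfx target string and Arch package set come from the comment above, so verify them against your own hardware and distro:

```shell
# Workaround from the comment above: gfx1032 isn't on ROCm's support list,
# so spoof the closest supported target (gfx1030 maps to "10.3.0").
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Package set the commenter installs on Arch (run manually; needs yay):
#   yay -S rocm-hip-sdk rocm-ml-sdk rocm-opencl-sdk

# Sanity check that the override is visible to child processes,
# which is where the actual workload would run.
sh -c 'echo "override=$HSA_OVERRIDE_GFX_VERSION"'
```

The override tells the HSA runtime to load kernels compiled for the named ISA, which works here because gfx1030 and gfx1032 share the RDNA2 instruction set.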
My recent interest has been LLMs, and here's my general step-by-step guide (llama.cpp, exllama) for anyone interested: https://llm-tracker.info/books/howto-guides/page/amd-gpus
I didn't port the docs back in, but also here's a step-by-step w/ my adventures getting TVM/MLC working w/ an APU: https://github.com/mlc-ai/mlc-llm/issues/787
From my experience, ROCm is improving, but there's a good reason that Nvidia has 90% market share even at big price premiums.
-
Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
Maybe they're talking about https://github.com/mlc-ai/mlc-llm which is used for web-llm (https://github.com/mlc-ai/web-llm)? Seems to be using TVM.
-
Show HN: Fine-tune your own Llama 2 to replace GPT-3.5/4
you already have TVM for the cross platform stuff
see https://tvm.apache.org/docs/how_to/deploy/android.html
or https://octoml.ai/blog/using-swift-and-apache-tvm-to-develop...
or https://github.com/mlc-ai/mlc-llm
- Ask HN: Are you training and running custom LLMs and how are you doing it?
sparsegpt
-
(1/2) May 2023
SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot (https://arxiv.org/abs/2301.00774)
- Why Falcon going Apache 2.0 is a BIG deal for all of us.
-
New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
There is this : https://github.com/IST-DASLab/sparsegpt
-
Webinar: Running LLMs performantly on CPUs Utilizing Pruning and Quantization
Check the paper here; it's interesting: https://arxiv.org/abs/2301.00774
-
OpenAI chief goes before US Congress to propose licenses for building AI
There's no chance that we've peaked in a bang-for-buck sense; we still haven't adequately investigated sparse networks.
Relevantish: https://arxiv.org/abs/2301.00774
The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.
Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.
If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
-
How to run Llama 13B with a 6GB graphics card
Training uses gradient descent, so you want good numerical precision during that process. But once you have the overall structure of the network, https://arxiv.org/abs/2210.17323 (GPTQ) showed that you can cut the precision down quite a bit without losing much accuracy. It seems you can cut down further for larger models: for the 13B Llama-based ones, going below 5 bits per parameter is noticeably worse, but for 30B models you can do 4 bits.
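A toy round-to-nearest sketch of what "cutting down the precision" means. This is not GPTQ itself (GPTQ corrects quantization error layer by layer using second-order information); the function name and the 4-bit setting are illustrative:

```python
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Naive symmetric round-to-nearest quantization of a weight matrix.

    Maps each float weight onto one of 2**bits integer levels, then
    dequantizes back to float. Returns the lossy reconstruction.
    """
    levels = 2 ** (bits - 1) - 1            # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / levels        # one scale per tensor
    q = np.round(w / scale).clip(-levels - 1, levels)
    return (q * scale).astype(w.dtype)      # dequantized weights

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
w4 = quantize_rtn(w, bits=4)
# Worst-case per-weight error is bounded by half a quantization step.
max_err = np.abs(w - w4).max()
```

The point of GPTQ (and its per-group scales) is that this naive per-tensor rounding loses noticeably more accuracy than an error-compensating scheme at the same bit width.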
The same group did another paper https://arxiv.org/abs/2301.00774 which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models.
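A naive magnitude-pruning sketch to illustrate what "pruning out parameters entirely" means. SparseGPT's actual criterion is smarter (it compensates for the error each removed weight introduces); the names and 50% sparsity here are illustrative:

```python
import numpy as np

def prune_magnitude(w: np.ndarray, sparsity: float = 0.5):
    """Zero out the smallest-magnitude fraction of weights.

    Returns the pruned matrix and the boolean keep-mask. Storing only
    the surviving weights is what makes sparse formats save memory.
    """
    k = int(w.size * sparsity)                       # number to drop
    thresh = np.partition(np.abs(w).ravel(), k)[k]   # k-th smallest |w|
    mask = np.abs(w) >= thresh                       # True = keep
    return w * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))
wp, mask = prune_magnitude(w, sparsity=0.5)
```

As the comment notes, the catch is that dense RAM layouts don't benefit from the zeros; you need a sparse storage format or sparse kernels to actually reclaim the space and compute.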
- SparseGPT: Language Models Can Be Accurately Pruned in One-Shot
What are some alternatives?
llama.cpp - LLM inference in C/C++
StableLM - StableLM: Stability AI Language Models
ggml - Tensor library for machine learning
github-copilot-product-specific-terms
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
chat-ui - Open source codebase powering the HuggingChat app
llama-cpp-python - Python bindings for llama.cpp
intel-extension-for-pytorch - A Python package for extending the official PyTorch that can easily obtain performance on Intel platform
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
geov - The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.