|  | shady.ai | llama.cpp |
| --- | --- | --- |
| Mentions | 1 | 777 |
| Stars | 107 | 57,463 |
| Growth | - | - |
| Activity | 7.6 | 10.0 |
| Latest commit | 3 months ago | 7 days ago |
| Language | Dart | C++ |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
shady.ai
- The Coming of Local LLMs
I’ve got some of their smaller Raven models running locally on my M1 (only 16GB of RAM).
I’m also in the middle of making it user-friendly to run these models on all platforms (built with Flutter). The first macOS release will be out before this weekend: https://github.com/BrutalCoding/shady.ai
llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile things yourself, then looking at llama.cpp (what Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face.
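For context, the llama.cpp server speaks plain HTTP. A minimal sketch of querying it, assuming a server is already running on its default port 8080 with a GGUF model loaded (prompt and parameters here are illustrative):

```python
import json
import urllib.request

# Minimal sketch: query a locally running llama.cpp server.
# Assumes something like `llama-server -m model.gguf` is already
# listening on the default http://localhost:8080.
payload = {
    "prompt": "Explain what a GGUF file is in one sentence.",
    "n_predict": 64,  # maximum number of tokens to generate
}
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```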
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
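As a rough worked example (my numbers, not from the thread): on a 16 GB machine those fractions come out to about 8 GB for a single process and roughly 12 GB of GPU-addressable memory in total, which is why a quantized model that nominally fits in RAM can still fail to load onto Metal.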
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
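As a rough illustration of the idea (a toy sketch, not the llama.cpp implementation): extra prediction heads draft several future tokens at once, the main head verifies them, and the longest agreeing prefix is kept, so each forward pass can accept more than one token. Both model calls below are stand-in functions:

```python
from typing import Callable, List

def multi_token_step(
    context: List[int],
    propose: Callable[[List[int]], List[int]],  # drafts k future tokens at once
    verify: Callable[[List[int]], int],         # main head: next token for a context
) -> List[int]:
    """One decode step: accept the longest draft prefix the main head agrees with."""
    draft = propose(context)
    accepted: List[int] = []
    for tok in draft:
        expected = verify(context + accepted)
        if tok != expected:
            accepted.append(expected)  # take the verified token instead and stop
            break
        accepted.append(tok)
    return accepted

# Tiny demo with deterministic stand-in "models"
if __name__ == "__main__":
    seq = [1, 2, 3]
    drafts = lambda ctx: [ctx[-1] + 1, ctx[-1] + 2, ctx[-1] + 99]  # last draft is wrong
    main = lambda ctx: ctx[-1] + 1
    print(multi_token_step(seq, drafts, main))  # -> [4, 5, 6]: two drafts accepted, one corrected
```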
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
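The LoRA step itself is compact. A minimal sketch using Hugging Face's peft library (an assumption on my part; the linked tutorial manages the workflow through KitOps, and the base model id below is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Placeholder model id; any small causal LM works for a first experiment.
base_id = "your-org/your-base-model"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA: freeze the base weights and train small low-rank adapter
# matrices injected into the attention projections.
lora_cfg = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of total weights

# ...train with your usual loop or transformers.Trainer, then merge and
# export (e.g. convert to GGUF) for inference with llama.cpp.
```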
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
Well, see https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
StudentAI - StudentAI is a prompt-less AI chatbot app that uses OpenAI's large language model to help students learn more effectively. StudentAI can answer questions, provide explanations, and even generate creative content. This makes it a powerful tool for students of all ages and levels of learning.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
flutter_ci_cd - CI/CD & branching template for flutter apps
gpt4all - gpt4all: run open-source LLMs anywhere
more-ane-transformers - Run transformers (incl. LLMs) on the Apple Neural Engine.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
Flutter-AssetsAudioPlayer - Play music/audio simultaneously from assets, network, or files directly from Flutter; compatible with Android, iOS, web, and macOS; displays notifications
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
rwkv.cpp - INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
ggml - Tensor library for machine learning
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.