gpu_poor vs LLamaStack

| | gpu_poor | LLamaStack |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 650 | 32 |
| Growth | - | - |
| Activity | 8.3 | 10.0 |
| Last commit | 7 months ago | 6 months ago |
| Language | JavaScript | C# |
| License | - | - |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
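The exact formula behind the activity score is not published; the description above (recency-weighted commits plus a relative ranking) suggests something like the sketch below. The half-life, the exponential decay, and the function names are illustrative assumptions, not the tracker's actual implementation.

```typescript
// Hypothetical sketch of a commit-recency-weighted activity score.
// Assumes exponential decay by commit age and a percentile rank
// across all tracked projects; both choices are guesses.

function weightedCommitScore(commitAgesInDays: number[], halfLifeDays = 90): number {
  // Each commit contributes 2^(-age / halfLife), so recent commits count more.
  return commitAgesInDays.reduce(
    (sum, age) => sum + Math.pow(2, -age / halfLifeDays),
    0,
  );
}

function activity(projectScore: number, allScores: number[]): number {
  // Percentile rank scaled to 0..10: an activity of 9.0 means the project
  // scores higher than 90% of tracked projects.
  const below = allScores.filter((s) => s < projectScore).length;
  return (10 * below) / allScores.length;
}
```

Under this reading, gpu_poor's 8.3 would mean its recency-weighted commit score beats roughly 83% of tracked projects.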
gpu_poor

Posts mentioning gpu_poor:

- Ask HN: Cheapest way to run local LLMs? ("Here's a simple calculator for LLM inference requirements: https://rahulschand.github.io/gpu_poor/")
- How many token/s can I get? A simple GitHub tool to see the token/s you can get for an LLM
- Show HN: Can your LLM run this?
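gpu_poor's actual methodology lives in its repository; the sketch below is a generic back-of-the-envelope estimate of the kind of number such a calculator produces (model weights plus KV cache plus overhead). The constants, field names, and the 10% overhead factor are assumptions for illustration.

```typescript
// Rough VRAM estimate for LLM inference, in the spirit of what a
// calculator like gpu_poor computes. Illustrative, not gpu_poor's code.

interface ModelSpec {
  nParams: number;       // total parameters, e.g. 7e9 for a 7B model
  nLayers: number;       // transformer layers, e.g. 32
  hiddenSize: number;    // model dimension, e.g. 4096
  bytesPerParam: number; // 2 for fp16, 1 for int8, 0.5 for 4-bit quant
}

function estimateInferenceVramGiB(m: ModelSpec, seqLen: number): number {
  const weights = m.nParams * m.bytesPerParam;
  // KV cache: 2 tensors (K and V) per layer, fp16 (2 bytes) per element.
  const kvCache = 2 * m.nLayers * m.hiddenSize * seqLen * 2;
  const overheadFactor = 1.1; // assumed ~10% for activations/runtime buffers
  return ((weights + kvCache) * overheadFactor) / 1024 ** 3;
}

// Example: LLaMA-7B at 4-bit quantization with a 2048-token context.
const llama7b: ModelSpec = { nParams: 7e9, nLayers: 32, hiddenSize: 4096, bytesPerParam: 0.5 };
console.log(estimateInferenceVramGiB(llama7b, 2048).toFixed(1), "GiB");
```

With these assumed numbers, a 7B model at 4-bit with a 2048-token context comes out to roughly 6 GiB, which is why such models fit on consumer GPUs.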
LLamaStack
What are some alternatives?
- chatd - Chat with your documents using local AI
- LLamaSharp - A C#/.NET library to run LLM models (🦙LLaMA/LLaVA) on your local device efficiently.
- llama.net - .NET wrapper for llama.cpp for LLaMA language model inference on CPU. 🦙
- llama.go - llama.go is like llama.cpp in pure Golang!
- chitchat - A simple LLM chat front-end that makes it easy to find, download, and experiment with models on your local machine.
- langchain-alpaca - Run the Alpaca LLM in LangChain
- Pacha - A TUI (text user interface) frontend for llama.cpp, written in JavaScript on the "blessed" library, that provides a convenient and straightforward way to run inference with local language models.
- code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
- PerroPastor - Run Llama-based LLMs in Unity entirely in compute shaders, with no dependencies.