gpu_poor vs Pacha

|  | gpu_poor | Pacha |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 646 | 31 |
| Growth | - | - |
| Activity | 8.3 | 6.1 |
| Last commit | 6 months ago | 10 months ago |
| Language | JavaScript | JavaScript |
| License | - | Apache License 2.0 |
- Stars - the number of stars that a project has on GitHub.
- Growth - month-over-month growth in stars.
- Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
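The page does not publish the exact activity formula, so the following is only an illustrative sketch of a recency-weighted commit score in the spirit of the description above; the half-life and the lack of normalization to a relative scale are assumptions, not the tracker's actual method.

```typescript
// Illustrative only: a recency-weighted commit score, NOT the tracker's real formula.
// The 30-day half-life is an assumed constant.

/** commitTimesMs: commit timestamps in milliseconds since the Unix epoch. */
function activityScore(commitTimesMs: number[], halfLifeDays = 30): number {
  const now = Date.now();
  const msPerDay = 24 * 60 * 60 * 1000;
  // Each commit contributes 2^(-age / halfLife), so recent commits weigh more.
  return commitTimesMs.reduce((score, t) => {
    const ageDays = (now - t) / msPerDay;
    return score + Math.pow(2, -ageDays / halfLifeDays);
  }, 0);
}

// Example: three commits, roughly 1, 40, and 200 days old.
const day = 24 * 60 * 60 * 1000;
console.log(
  activityScore([Date.now() - 1 * day, Date.now() - 40 * day, Date.now() - 200 * day])
);
```

A real relative score like the 8.3 and 6.1 above would additionally rank this raw value against all tracked projects; that step is omitted here.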
gpu_poor mentions:
- Ask HN: Cheapest way to run local LLMs? "Here's a simple calculator for LLM inference requirements: https://rahulschand.github.io/gpu_poor/" (a rough back-of-envelope version of this kind of estimate follows this list)
- How many tokens/s can I get? A simple GitHub tool to see how many tokens/s you can get for an LLM
- Show HN: Can your LLM run this?
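gpu_poor implements its own calculator; as a hedged back-of-envelope sketch only (not gpu_poor's code), inference VRAM is roughly parameter count times bytes per weight plus some overhead for the KV cache, and decode speed on a memory-bandwidth-bound GPU is roughly bandwidth divided by the bytes read per generated token. All constants and helper names below are illustrative assumptions.

```typescript
// Back-of-envelope estimates in the spirit of gpu_poor; NOT its actual code.
// Bytes per weight, the 20% overhead, and the bandwidth figure are assumptions.

interface ModelSpec {
  paramsBillion: number;   // e.g. 7 for a 7B model
  bytesPerWeight: number;  // 2 for fp16, ~0.5 for 4-bit quantization
}

/** Rough VRAM needed for inference, in GiB (weights + ~20% for KV cache etc.). */
function estimateVramGiB(model: ModelSpec): number {
  const weightBytes = model.paramsBillion * 1e9 * model.bytesPerWeight;
  return (weightBytes * 1.2) / 2 ** 30;
}

/** Rough decode speed, assuming generation is memory-bandwidth bound:
 *  every generated token reads all weights once. */
function estimateTokensPerSec(model: ModelSpec, bandwidthGBps: number): number {
  const weightBytes = model.paramsBillion * 1e9 * model.bytesPerWeight;
  return (bandwidthGBps * 1e9) / weightBytes;
}

// Example: a 7B model with 4-bit weights on a GPU with ~450 GB/s of bandwidth.
const model: ModelSpec = { paramsBillion: 7, bytesPerWeight: 0.5 };
console.log(estimateVramGiB(model).toFixed(1), "GiB");             // ~3.9 GiB
console.log(estimateTokensPerSec(model, 450).toFixed(0), "tok/s"); // ~129 tok/s
```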
Pacha mentions:
- Pacha - A Frontend for llama.cpp
- pacha-windows
What are some alternatives?
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
llamero - A GUI application to easily try out Facebook's LLaMA models.
chatd - Chat with your documents using local AI
Eucalyptus-Chat - A frontend for large language models like 🐨 Koala or 🦙 Vicuna running on CPU with llama.cpp, using the API server library provided by llama-cpp-python. NOTE: I had to discontinue this project because its maintenance takes more time than I can or want to invest. Feel free to fork :)
llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
json-like-parse - JavaScript npm module that finds JSON-like text within a string and then parses it on a best-effort basis
chitchat - A simple LLM chat front-end that makes it easy to find, download, and mess around with models on your local machine.
SillyTavern - LLM Frontend for Power Users.
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
InfinityArcade - Create any Text Game with AI