gpu_poor vs chatd

| | gpu_poor | chatd |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 646 | 805 |
| Growth | - | - |
| Activity | 8.3 | 8.7 |
| Latest commit | 6 months ago | 2 months ago |
| Language | JavaScript | JavaScript |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpu_poor
Ask HN: Cheapest way to run local LLMs?
Here's a simple calculator for LLM inference requirements: https://rahulschand.github.io/gpu_poor/
- How many tokens/s can I get? A simple GitHub tool to see how many tokens/s you can get for an LLM
- Show HN: Can your LLM run this?
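The calculator linked above estimates whether a model fits on a given GPU. The napkin math behind such an estimate can be sketched in a few lines of JavaScript; this is an illustrative approximation, not gpu_poor's actual formula, and all names and default values here are assumptions:

```javascript
// Rough VRAM estimate for LLM inference (a sketch, not gpu_poor's code).
// Memory ≈ weights + KV cache + runtime overhead.
function estimateVramGB({
  paramsB,             // model size in billions of parameters
  bytesPerParam = 2,   // fp16; quantized models use ~0.5-1 byte/param
  contextLen = 2048,
  layers = 32,         // illustrative defaults for a 7B-class model
  hiddenSize = 4096,
  batch = 1,
}) {
  const weights = paramsB * 1e9 * bytesPerParam;
  // KV cache: 2 tensors (K and V) per layer, one vector per token
  const kvCache = 2 * layers * contextLen * hiddenSize * bytesPerParam * batch;
  const overhead = 0.1 * weights; // crude allowance for activations/runtime
  return (weights + kvCache + overhead) / 1e9;
}

// A 7B model in fp16 comes out around 16 GB, which is why quantization
// matters for consumer GPUs.
console.log(estimateVramGB({ paramsB: 7 }).toFixed(1), "GB");
```

This kind of estimate explains the gap between a model's download size and the VRAM it actually needs at inference time: the KV cache grows linearly with context length and batch size on top of the fixed weight footprint.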
chatd
Feed PDF files into an LLM for question-answering tasks
IYH use chatd
AI — weekly megathread!
Chatd: a desktop application that lets you chat with your documents using a local large language model (Mistral-7B). It ships with the local LLM runner packaged in [Link].
What are some alternatives?
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
irccloud-desktop - IRCCloud Desktop App
llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
distil-whisper - Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.
chitchat - A simple LLM chat front-end that makes it easy to find, download, and mess around with models on your local machine.
igdm - Desktop application for Instagram DMs
Pacha - "Pacha" TUI (Text User Interface) is a JavaScript application that utilizes the "blessed" library. It serves as a frontend for llama.cpp and provides a convenient and straightforward way to perform inference using local language models.
cabal-desktop - Desktop client for Cabal, the p2p/decentralized/local-first chat platform.
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
SillyTavern - LLM Frontend for Power Users.
langchain - 🦜🔗 Build context-aware reasoning applications