chitchat
A simple LLM chat front-end that makes it easy to find, download, and mess around with models on your local machine. (by clarkmcc)
gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization (by RahulSChand)
| | chitchat | gpu_poor |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 64 | 1,164 |
| Growth | - | - |
| Activity | 7.7 | 5.0 |
| Last commit | over 1 year ago | about 1 month ago |
| Language | JavaScript | JavaScript |
| License | MIT License | - |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chitchat
Posts with mentions or reviews of chitchat.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-10-25.
-
Jina AI Launches First Open-Source 8K Text Embedding, Rivaling OpenAI
Pretty much! Right now it only supports md, pdf, txt, and html, but supporting additional formats is trivial: https://github.com/clarkmcc/chitchat/blob/main/src-tauri/src....
-
Show HN: Chie – a cross-platform, native, and extensible desktop client for LLMs
Shameless plug for my version of this: https://github.com/clarkmcc/chitchat
gpu_poor
Posts with mentions or reviews of gpu_poor.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-11-26.
-
Ask HN: Cheapest way to run local LLMs?
Here's a simple calculator for LLM inference requirements: https://rahulschand.github.io/gpu_poor/
- How many token/s can I get? A simple GitHub tool to see the token/s you can get for an LLM
- Show HN: Can your LLM run this?
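The core estimate behind a calculator like gpu_poor can be sketched in a few lines. This is a rough back-of-the-envelope formula, not gpu_poor's actual implementation: memory is dominated by the model weights (parameter count times bytes per parameter, which quantization shrinks), plus an overhead factor assumed here to cover the KV cache and activations.

```python
def estimate_inference_memory_gb(num_params_billions: float,
                                 bits: int = 16,
                                 overhead_factor: float = 1.2) -> float:
    """Rough GPU memory estimate for LLM inference.

    Weights take (params * bits/8) bytes; the overhead_factor is an
    assumed ~20% margin for KV cache and activations.
    """
    weight_bytes = num_params_billions * 1e9 * (bits / 8)
    return weight_bytes * overhead_factor / 1e9


# Example: a 7B model quantized to 4 bits needs roughly 4.2 GB,
# while the same model at fp16 needs roughly 16.8 GB.
print(round(estimate_inference_memory_gb(7, bits=4), 1))
print(round(estimate_inference_memory_gb(7, bits=16), 1))
```

Real requirements vary with context length and batch size, which is why a tool like gpu_poor that accounts for the specific quantization scheme (ggml, bnb, QLoRA) gives more useful numbers than this sketch.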
What are some alternatives?
When comparing chitchat and gpu_poor you can also consider the following projects:
llamero - A GUI application to easily try out Facebook's LLaMA models.
chatd - Chat with your documents using local AI