| | chitchat | gpu_poor |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 60 | 650 |
| Growth | - | - |
| Activity | 7.7 | 8.3 |
| Latest Commit | 9 months ago | 7 months ago |
| Language | JavaScript | JavaScript |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
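The exact scoring formula isn't given here. As a purely illustrative sketch of the idea that recent commits count more than older ones, a recency-weighted score could be computed like this (the exponential half-life and function name are assumptions, not the site's actual metric):

```typescript
// Hypothetical recency-weighted activity score (assumption: the real
// metric is unspecified here; this only illustrates weighting recent
// commits more heavily than old ones).
function activityScore(commitAgesInDays: number[], halfLifeDays = 30): number {
  // Each commit contributes 2^(-age / halfLife): a commit from today
  // counts as 1.0, one from `halfLifeDays` ago counts as 0.5.
  return commitAgesInDays.reduce(
    (sum, age) => sum + Math.pow(2, -age / halfLifeDays),
    0,
  );
}

// Example: three recent commits outweigh five old ones.
console.log(activityScore([1, 3, 7]));               // ≈ 2.8
console.log(activityScore([90, 95, 100, 110, 120])); // ≈ 0.5
```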
chitchat

- Jina AI Launches First Open-Source 8K Text Embedding, Rivaling OpenAI
  Pretty much! Right now it only supports md, pdf, txt, and html, but supporting additional formats is trivial: https://github.com/clarkmcc/chitchat/blob/main/src-tauri/src.... (see the sketch after this list)
- Show HN: Chie – a cross-platform, native, and extensible desktop client for LLMs
  Shameless plug for my version of this: https://github.com/clarkmcc/chitchat
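To illustrate what "supporting additional formats" typically involves, here is a minimal, hypothetical sketch of extension-based loader dispatch. The function and format handlers are assumptions for illustration, not chitchat's actual code (which lives in its Rust src-tauri backend):

```typescript
// Hypothetical extension-based document loader dispatch (assumption:
// chitchat's real loader is Rust code under src-tauri; this only
// sketches the general pattern of mapping file extensions to parsers).
type Loader = (raw: string) => string;

const loaders: Record<string, Loader> = {
  txt: (raw) => raw,                          // plain text passes through
  md: (raw) => raw,                           // markdown kept as-is here
  html: (raw) => raw.replace(/<[^>]+>/g, ""), // crude tag removal
};

// Adding a new format is one more entry in the map above.
function loadDocument(path: string, raw: string): string {
  const ext = path.split(".").pop()?.toLowerCase() ?? "";
  const loader = loaders[ext];
  if (!loader) throw new Error(`Unsupported format: ${ext}`);
  return loader(raw);
}
```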
gpu_poor

- Ask HN: Cheapest way to run local LLMs?
  Here's a simple calculator for LLM inference requirements: https://rahulschand.github.io/gpu_poor/ (a back-of-the-envelope version of this estimate is sketched below)
- How many token/s can I get? A simple GitHub tool to see how many token/s you can get for an LLM
- Show HN: Can your LLM run this?
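For context, what such a calculator estimates is roughly: model weights plus KV cache plus runtime overhead. The formula and the 10% overhead factor below are common rules of thumb, not gpu_poor's actual implementation:

```typescript
// Rough LLM inference VRAM estimate (assumption: this mirrors common
// rules of thumb, not gpu_poor's actual formula).
interface ModelSpec {
  paramsB: number;       // parameters, in billions
  layers: number;        // transformer layers
  hiddenSize: number;    // model (embedding) dimension
  bytesPerParam: number; // 2 for fp16, ~0.5 for 4-bit quantization
}

function estimateVramGB(m: ModelSpec, seqLen: number, batch = 1): number {
  // Weights: one copy of every parameter.
  const weights = m.paramsB * 1e9 * m.bytesPerParam;
  // KV cache: 2 tensors (K and V) per layer, each seqLen x hiddenSize,
  // stored in fp16 (2 bytes) here.
  const kvCache = 2 * m.layers * seqLen * m.hiddenSize * 2 * batch;
  // Runtime overhead (activations, framework buffers): assumed ~10%.
  return ((weights + kvCache) * 1.1) / 1e9;
}

// Example: a Llama-2-7B-shaped model at fp16 with a 4096-token context.
const llama7b = { paramsB: 7, layers: 32, hiddenSize: 4096, bytesPerParam: 2 };
console.log(estimateVramGB(llama7b, 4096).toFixed(1), "GB"); // ≈ 17.8 GB
```

Quantization changes the picture dramatically: setting bytesPerParam to 0.5 (4-bit) drops the weights term from 14 GB to 3.5 GB, which is why 7B models fit on consumer GPUs.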
What are some alternatives?
llm-embed-jina - Embedding models from Jina AI
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
calculator-rust-react - A calculator that performs basic arithmetic operations; the operations run in Rust and the UI is built with ReactJS, using Tauri as the bridge between Rust and ReactJS
chatd - Chat with your documents using local AI
chie - An extensive desktop app for ChatGPT and other LLMs.
llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
llamero - A GUI application to easily try out Facebook's LLaMA models.
Pacha - "Pacha" TUI (Text User Interface) is a JavaScript application that utilizes the "blessed" library. It serves as a frontend for llama.cpp and provides a convenient and straightforward way to perform inference using local language models.
petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
Advanced-PassGen - Advanced Password Generator