gpu_poor VS Pacha

Compare gpu_poor vs Pacha and see what their differences are.

gpu_poor

Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization (by RahulSChand)
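gpu_poor's exact formula is not reproduced on this page; as an illustration of the kind of back-of-envelope calculation such a tool performs, here is a hypothetical sketch (the function name, parameters, and fixed overhead value are assumptions, not gpu_poor's actual implementation):

```python
def estimate_gpu_memory_gb(num_params_b, bits_per_param=16, overhead_gb=1.0):
    """Rough estimate of GPU memory needed to load an LLM's weights.

    num_params_b: model size in billions of parameters (e.g. 7 for a 7B model)
    bits_per_param: 16 for fp16, 8 for int8, 4 for 4-bit quantization
    overhead_gb: fixed allowance for runtime context, activations, etc. (assumed)
    """
    weight_gb = num_params_b * 1e9 * bits_per_param / 8 / 1e9
    return weight_gb + overhead_gb

# A 7B model in fp16 needs roughly 14 GB for weights alone,
# while 4-bit quantization brings that down to about 3.5 GB.
print(estimate_gpu_memory_gb(7, 16))  # 15.0
print(estimate_gpu_memory_gb(7, 4))   # 4.5
```

A real estimator like gpu_poor also accounts for KV-cache size (which grows with context length and batch size) and quantization-scheme overheads (llama.cpp/ggml, bitsandbytes, QLoRA), which this sketch omits.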

Pacha

Pacha is a TUI (text user interface) JavaScript application built on the "blessed" library. It serves as a frontend for llama.cpp and provides a convenient, straightforward way to run inference with local language models. (by mounta11n)
                 gpu_poor        Pacha
Mentions         3               1
Stars            646             31
Growth           -               -
Activity         8.3             6.1
Last commit      6 months ago    10 months ago
Language         JavaScript      JavaScript
License          -               Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

gpu_poor

Posts with mentions or reviews of gpu_poor. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-26.

Pacha

Posts with mentions or reviews of Pacha. We have used some of these posts to build our list of alternatives and similar projects.

What are some alternatives?

When comparing gpu_poor and Pacha you can also consider the following projects:

LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp

llamero - A GUI application to easily try out Facebook's LLaMA models.

chatd - Chat with your documents using local AI

Eucalyptus-Chat - A frontend for large language models like 🐨 Koala or 🦙 Vicuna running on CPU with llama.cpp, using the API server library provided by llama-cpp-python. NOTE: I had to discontinue this project because its maintenance takes more time than I can and want to invest. Feel free to fork :)

llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙

json-like-parse - JavaScript npm module that finds JSON-like text within a string and then parses it on best effort basis

chitchat - A simple LLM chat front-end that makes it easy to find, download, and mess around with models on your local machine.

SillyTavern - LLM Frontend for Power Users.

code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.

InfinityArcade - Create any Text Game with AI