llama3-tokenizer-js
JS tokenizer for LLaMA 3 (by belladoreai)
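The one-line blurb undersells the library; here is a minimal usage sketch. The default-export `encode`/`decode` API shown below follows the project README, but treat it as an assumption and verify against the current docs:

```javascript
// Minimal usage sketch for llama3-tokenizer-js, per the project README
// (verify the API against the current docs before relying on it).
import llama3Tokenizer from 'llama3-tokenizer-js'

const ids = llama3Tokenizer.encode('Hello, LLaMA 3!')
console.log(ids.length) // token count, e.g. for prompt-budget checks

const text = llama3Tokenizer.decode(ids)
console.log(text) // decodes token ids back to text
```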
gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization (by RahulSChand)
| | llama3-tokenizer-js | gpu_poor |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 37 | 650 |
| Growth | - | - |
| Activity | 7.5 | 8.3 |
| Last commit | 10 days ago | 7 months ago |
| Language | JavaScript | JavaScript |
| License | MIT License | - |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
llama3-tokenizer-js
Posts with mentions or reviews of llama3-tokenizer-js. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2024-04-21.
gpu_poor
Posts with mentions or reviews of gpu_poor. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-11-26.
- Ask HN: Cheapest way to run local LLMs?
  Here's a simple calculator for LLM inference requirements: https://rahulschand.github.io/gpu_poor/ (a back-of-the-envelope version of the same estimate is sketched after this list)
- How many token/s can I get? A simple GitHub tool to see how many tokens per second you can get for an LLM
- Show HN: Can your LLM run this?
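gpu_poor itself is a web calculator, but the core memory estimate it performs can be approximated with a common rule of thumb: weight memory is roughly parameter count times bytes per parameter, plus overhead for the KV cache, activations, and framework buffers. The sketch below is an illustration under that assumption, not the tool's exact formula; the ~20% overhead factor in particular is a placeholder:

```javascript
// Rough VRAM estimate for LLM inference: weights + overhead.
// NOT gpu_poor's exact method - a simplified rule of thumb for illustration.
const BYTES_PER_PARAM = {
  fp16: 2.0, // 16-bit weights
  int8: 1.0, // 8-bit quantization (e.g. bitsandbytes)
  int4: 0.5, // 4-bit quantization (e.g. QLoRA's NF4, GGML Q4)
};

function estimateVramGB(paramsBillions, quant = 'fp16', overheadFactor = 1.2) {
  // Weights: params * bytes/param; overheadFactor (assumed ~20%) loosely
  // stands in for KV cache, activations, and runtime buffers.
  const weightsGB = paramsBillions * BYTES_PER_PARAM[quant];
  return weightsGB * overheadFactor;
}

// Example: a 7B model in 4-bit needs on the order of ~4 GB of VRAM.
console.log(estimateVramGB(7, 'int4').toFixed(1)); // "4.2"
```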
What are some alternatives?
When comparing llama3-tokenizer-js and gpu_poor, you can also consider the following projects:
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
chatd - Chat with your documents using local AI
llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
chitchat - A simple LLM chat front-end that makes it easy to find, download, and mess around with models on your local machine.
Pacha - a TUI (text user interface) JavaScript application built on the "blessed" library. It serves as a frontend for llama.cpp, providing a straightforward way to run inference with local language models.
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.