alpaca-electron vs llama.cpp

|  | alpaca-electron | llama.cpp |
|---|---|---|
| Mentions | 8 | 780 |
| Stars | 1,261 | 58,425 |
| Growth | - | - |
| Activity | 5.9 | 10.0 |
| Latest commit | about 2 months ago | 3 days ago |
| Language | JavaScript | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
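To make the recency weighting described above concrete, here is a minimal sketch in Python. The half-life decay and the `activity_score` helper are illustrative assumptions only, not the site's actual Activity formula (which is not published here).

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted commit score.

    Each commit contributes 2 ** (-age_in_days / half_life_days),
    so recent commits count more than older ones. This only
    illustrates the idea above; it is not the site's real formula.
    """
    now = datetime.now(timezone.utc)
    return sum(
        2.0 ** (-((now - d).total_seconds() / 86400.0) / half_life_days)
        for d in commit_dates
    )

# Example: the most recent commit dominates the score.
dates = [
    datetime(2024, 5, 1, tzinfo=timezone.utc),
    datetime(2024, 2, 1, tzinfo=timezone.utc),
    datetime(2023, 6, 1, tzinfo=timezone.utc),
]
print(round(activity_score(dates), 3))
```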
Mentions of alpaca-electron
- Are you sure you are focusing on the right things? (venting)
I sympathize. There are some efforts here and there but it's not something that resonates with the enthusiast crowd much. An abandoned example here: ItsPi3141/alpaca-electron
- Guess I am kinda famous now
- one-click install LLM desktop apps
Look up TroubleChute on YouTube, or Alpaca Electron.
- What's the most basic NVIDIA graphics card that will work with mainstream 7B GPU models?
- Locally Hosted ChatGPT3 or Higher
I recently tried Alpaca Electron with the 7B model. I am surprised how well this runs on my own hardware with very little CPU and RAM consumption.
- Running oobabooga with Alpaca on Apple Silicon (M1/M2)
- Optimization Of Computational Power & Data Transfer For Elly (Global AI)
- Cerebras-GPT: A Family of Open, Compute-Efficient, Large Language Models
Here's Alpaca running in Electron. Not exactly one click, but close.
https://github.com/ItsPi3141/alpaca-electron
Mentions of llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (what Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face. (A minimal sketch of querying the server's HTTP API appears after this list.)
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps. (A generic LoRA fine-tuning sketch appears after this list.)
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
Well, see https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
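As noted in the IBM Granite mention above, llama.cpp ships an example HTTP server. Below is a minimal query sketch, assuming a server has already been built from the examples/server directory and started locally with some GGUF model; the 127.0.0.1:8080 address, the prompt text, and the n_predict budget are illustrative assumptions, not a definitive setup.

```python
import json
import urllib.request

# Assumes a llama.cpp example server is already running locally,
# e.g. started with a GGUF model on port 8080 (placeholder values).
url = "http://127.0.0.1:8080/completion"
payload = {
    "prompt": "Explain what a GGUF file is in one sentence.",
    "n_predict": 64,  # cap on the number of tokens to generate
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

print(result.get("content", ""))
```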
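As noted in the fine-tuning mention above, here is a generic LoRA sketch using the Hugging Face transformers, peft, and datasets libraries. It is not the tutorial's exact llama.cpp/KitOps workflow; the base model, dataset slice, and hyperparameters are placeholder assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama-family tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters; only these small matrices are trained,
# which keeps memory and compute requirements modest.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

# Tiny slice of a public dataset, purely for illustration.
data = load_dataset("imdb", split="train[:200]")
data = data.map(tokenize, batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the LoRA adapter weights
```

The saved adapter would typically still need to be merged into the base weights and converted to GGUF before llama.cpp could serve the result.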
What are some alternatives?
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
gpt4all - gpt4all: run open-source LLMs anywhere
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
codealpaca
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
catai - Run AI ✨ assistant locally! with simple API for Node.js 🚀
ggml - Tensor library for machine learning
flan-alpaca - This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM