| | alpaca.cpp | llm |
|---|---|---|
| Mentions | 94 | 41 |
| Stars | 9,878 | 5,911 |
| Growth | - | 2.4% |
| Activity | 9.4 | 9.4 |
| Latest commit | about 1 year ago | about 1 month ago |
| Language | C | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
alpaca.cpp
- LLaMA Now Goes Faster on CPUs
Where's the 30B-in-6GB claim? Ctrl-F "GB" in your GH link finds [0], which is neither by jart nor by ggerganov but by another user, who is promptly told to look at [1], where Justine denies that claim.
[0] https://github.com/antimatter15/alpaca.cpp/issues/182
- Is there potential to short NVDA?
You can just download the language model, dude!!! Not everyone needs to make their own, and the open-source models literally get better every day.
- [Oobabooga] Alpaca.cpp is extremely simple to work with.
- Hollywood’s Screenwriters Are Right to Fear AI
Alpaca
- Square Enix’s AI Tech Demo Is a Staggering Failure
Square could also have trained their NLP on a more specific data source, much like Alpaca. Alpaca was trained on interactions drawn from a larger dataset, so while it isn't as smart, it can still understand instructions and act on them.
- [Singularity] I am Alpaca 13B - Ask me anything
- Alpaca Vs. Final Jeopardy
The model I found was in 8 parts. The alpaca.cpp chat client (chat.cpp) needs to be modified to run the 8-part model, as documented here: https://github.com/antimatter15/alpaca.cpp/issues/149
- LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
Try the instructions in this GitHub repo: https://github.com/antimatter15/alpaca.cpp. It's not the best one, but I was able to run this model on my Linux machine with 16 GB of memory; I think it's a good starting point.
- What educational materials do you think would be most useful during/after collapse?
Doesn't run offline. If you're running something without a beefy-ish GPU, there's https://github.com/antimatter15/alpaca.cpp .
- ChatGPT Reignited My Passion For Coding
Yeah, at the moment I'm toying with Alpaca 7B/13B in a local install.
llm
- Open-sourcing a simple automation/agent workflow builder
We're open-sourcing a project that lets you build simple automations/agent workflows that use LLMs for different tasks. Kinda like Zapier or IFTTT, but focused on using natural language to accomplish your tasks. It's super early, but we'd love to start getting feedback to steer it in the right direction. It currently supports OpenAI and local models through llm.
- Meta's Segment Anything written with C++ / GGML
> Tensorflow is a C++ framework that has Python bindings and a Python library, but when the models are served they are running on C++

Sure, and it's only a simple 20-step process that involves building Tensorflow from source. Yay!
https://medium.com/@hamedmp/exporting-trained-tensorflow-mod...
Let me see what the process for compiling an LLM written in Rust is....
https://github.com/rustformers/llm
`cargo install llm-cli`
- Announcing Floneum (an open-source graph editor for local AI workflows written in Rust)
Floneum is a graph editor for local AI workflows. It uses llm to run large language models locally, egui and Dioxus for the frontend, and wasmtime for the plugin system. If you are interested in the project, consider joining the Discord, or building a plugin for Floneum in Rust using WASI.
- Are there any tools or frameworks similar to "langchain" or "llamaindex" but implemented or designed in a language other than Python?
- (1/2) May 2023
Run inference for Large Language Models on CPU, with Rust (https://github.com/rustformers/llm)
- I built a multi-platform desktop app to easily download and run models, open source btw
On the rustformers GitHub page I see that one of the commands to generate an answer is `llm llama infer -m ggml-gpt4all-j-v1.3-groovy.bin -p "Rust is a cool programming language because"`. My basic idea for now is to change the Tauri app so it passes `-p prompt`, with the prompt received from my code through the link or through a shared variable (if I don't use the link and start your app separately).
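A minimal sketch of that idea, assuming the `llm` CLI is on `PATH`. The binary name, subcommand, and flags are copied from the command quoted in the comment above; the helper function name and the way it would be wired into a Tauri command are hypothetical:

```rust
use std::process::Command;

/// Build the invocation `llm llama infer -m <model> -p <prompt>`
/// (shape taken from the rustformers example quoted above).
/// A Tauri command handler could call this with the prompt it
/// receives from the frontend.
fn build_infer_command(model_path: &str, prompt: &str) -> Command {
    let mut cmd = Command::new("llm");
    cmd.args(["llama", "infer", "-m", model_path, "-p", prompt]);
    cmd
}

fn main() {
    let mut cmd = build_infer_command(
        "ggml-gpt4all-j-v1.3-groovy.bin",
        "Rust is a cool programming language because",
    );
    // Run the CLI and capture its stdout; a failure to launch
    // (e.g. `llm` not installed) surfaces here instead of panicking.
    match cmd.output() {
        Ok(out) => println!("{}", String::from_utf8_lossy(&out.stdout)),
        Err(e) => eprintln!("failed to launch llm: {e}"),
    }
}
```

Shelling out keeps the app decoupled from the crate's API; the alternative would be linking the `llm` crate directly and streaming tokens in-process.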
- Weekly Megathread - 14 May 2023
- rustformers/llm: Run inference for Large Language Models on CPU, with Rust 🦀🚀🦙
wonnx has done some fantastic work in this regard, so that's where we plan to start once we get there. In terms of general discussion of alternate backends, see this issue.
- llm: a Rust crate/CLI for CPU inference of LLMs, including LLaMA, GPT-NeoX, GPT-J and more
What are some alternatives?
gpt4all - gpt4all: run open-source LLMs anywhere
llama.cpp - LLM inference in C/C++
ggml - Tensor library for machine learning
coral-pi-rest-server - Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
alpaca-lora - Instruct-tune LLaMA on consumer hardware
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
SD-CN-Animation - This script allows you to automate video stylization tasks using StableDiffusion and ControlNet.
character-editor - Create, edit and convert AI character files for CharacterAI, Pygmalion, Text Generation, KoboldAI and TavernAI