alpaca.cpp
flume
|  | alpaca.cpp | flume |
|---|---|---|
| Mentions | 94 | 1 |
| Stars | 9,878 | 1,325 |
| Growth | - | - |
| Activity | 9.4 | 4.6 |
| Latest commit | about 1 year ago | 6 months ago |
| Language | C | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
alpaca.cpp
-
LLaMA Now Goes Faster on CPUs
Where's the 30B-in-6GB claim? Searching ^FGB in your GitHub link finds [0], which is neither by jart nor by ggerganov but by another user, who is promptly told to look at [1], where Justine denies that claim.
[0] https://github.com/antimatter15/alpaca.cpp/issues/182
-
Is there potential to short NVDA?
You can just download the language model, dude! Not everyone needs to make their own, and the open-source models literally get better every day.
- [Oobabooga] Alpaca.cpp is extremely simple to work with.
-
Hollywood’s Screenwriters Are Right to Fear AI
Alpaca
-
Square Enix’s AI Tech Demo Is a Staggering Failure
Square could have also trained their NLP on a more specific data source, much like Alpaca. Alpaca was trained on interactions drawn from a larger dataset, so while it isn't as smart, it is still able to understand instructions and act on them.
- [Singularity] I am Alpaca 13B - Ask me anything
-
Alpaca Vs. Final Jeopardy
The model I found was split into 8 parts. The alpaca.cpp chat client (chat.cpp) needs to be modified to run the 8-part model, as documented here: https://github.com/antimatter15/alpaca.cpp/issues/149
-
LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
Try the instructions in this GitHub repo: https://github.com/antimatter15/alpaca.cpp. It's not the best one, but I was able to run this model on my Linux machine with 16 GB of memory; I think it's a good starting point.
-
What educational materials do you think would be most useful during/after collapse?
Doesn't run offline. If you're running something without a beefy-ish GPU, there's https://github.com/antimatter15/alpaca.cpp .
-
ChatGPT Reignited My Passion For Coding
Yeah, at the moment I'm toying with Alpaca 7B/13B in a local install.
flume
-
Flow-Based Programming, a way for AI and humans to develop together
I am very bullish on this approach. It's already showing promise with Stable Diffusion (https://github.com/comfyanonymous/ComfyUI).
I have built human-in-the-loop feedback models before, and they are always very targeted to a specific task. This approach is modular and intuitive, though I think the scope is too small.
I spent the weekend starting a project that uses this approach with a GPT model and FlumeJs (https://github.com/chrisjpatty/flume), but now that I've seen NoFlo I am excited to try it.
What are some alternatives?
gpt4all - gpt4all: run open-source LLMs anywhere
ts_injector - Simple and lightweight injector for typescript projects.
llama.cpp - LLM inference in C/C++
hackernews-react-graphql - Hacker News clone rewritten with universal JavaScript, using React and GraphQL.
coral-pi-rest-server - Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick
sentry-javascript - Official Sentry SDKs for JavaScript
ggml - Tensor library for machine learning
website - Personal website built using react, nextjs, tailwind, and graphql deployed to vercel.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI