-
simpleAI
An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.
-
Open-Instructions
Open-Instructions: A Pavilion of recent Open Source GPT Projects for decentralized AI.
As for llama.cpp specifically, you can indeed add any model; it's just a matter of writing a bit of glue code and declaring it in your models.toml config. It's quite straightforward thanks to some provided tools for Python (see here for instance). For any other language, it's a matter of integrating it through the gRPC interface (which shouldn't be too hard for llama.cpp if you're comfortable in C++). I'm planning to add REST support for backend models at some point too.
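To make that concrete, a models.toml entry for a gRPC-backed model might look something like the sketch below. The key names are illustrative guesses, not simpleAI's documented schema, so check the project's repository for the real format:

```toml
# Hypothetical models.toml entry for a llama.cpp model served over gRPC.
# All key names below are illustrative assumptions, not the actual
# simpleAI schema.
[llama-7b]

  [llama-7b.metadata]
    owned_by   = "Meta"
    permission = []

  [llama-7b.network]
    # The glue code would expose the model behind this gRPC endpoint.
    type    = "gRPC"
    address = "localhost:50051"
```

The idea is that the API server stays model-agnostic: it only needs the endpoint declared here, and the backend process (your glue code around llama.cpp) handles the actual inference.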
I know, right? All of these Alpaca and LLaMA variants have been popping up nonstop, and sometimes I find it really puzzling to figure out where to get started; I believe you feel the same way! This is exactly why I've just released a new open-source project on GitHub named Open-Instructions (https://github.com/langbridgeai/Open-Instructions) to help people like us find a starting point!
As a last option if you cannot find any GPU: I've had an overall good experience using llama.cpp on CPU, but you would still need a fairly powerful machine and a few hundred gigabytes of disk space. I am not sure 32 GB of RAM will be enough for the larger models, which are, as expected, quite slow on CPU.
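For a rough sense of whether a model fits in RAM, the weights alone take roughly parameter count times bytes per weight; everything else (KV cache, runtime overhead) comes on top. A quick back-of-the-envelope sketch, using generic numbers rather than any specific llama.cpp quantization format:

```python
def model_ram_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough RAM needed just for the model weights, in GiB.

    Ignores KV cache and runtime overhead, so treat the result as a
    lower bound.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# A 65B-parameter model in fp16 vs. 4-bit quantized:
print(round(model_ram_gib(65, 16), 1))  # ~121.1 GiB: far beyond 32 GB RAM
print(round(model_ram_gib(65, 4), 1))   # ~30.3 GiB: borderline even on 32 GB
```

This is why the larger models are out of reach for a 32 GB machine at full precision, and why aggressive quantization is what makes CPU inference plausible at all.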
The 13B Alpaca Cleaned model (trained on the cleaned dataset) is very impressive and works well as an instruct model w/o any censorship.
Related posts
-
IBM Granite: A Family of Open Foundation Models for Code Intelligence
-
More Agents Is All You Need: LLMs performance scales with the number of agents
-
Show HN: macOS GUI for running LLMs locally
-
Ask HN: What are the capabilities of consumer grade hardware to work with LLMs?
-
Meta to release open-source commercial AI model