Alpaca, LLaMA, Vicuna [D]

This page summarizes the projects mentioned and recommended in the original post on /r/MachineLearning

  • simpleAI

    An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.

  • Regarding llama.cpp specifically, you can indeed add any model; it's just a matter of writing a bit of glue code and declaring it in your models.toml config. It's quite straightforward thanks to the provided tools for Python (see here for instance). For any other language, it's a matter of integrating it through the gRPC interface (which shouldn't be too hard for llama.cpp if you're comfortable in C++). I'm also planning to add REST support for backend models at some point.

  • Open-Instructions

    Open-Instructions: A Pavilion of recent Open Source GPT Projects for decentralized AI.

  • I know, right? All of these Alpaca and LLaMA variants have been appearing at a frantic pace, and it sometimes leaves me genuinely puzzled about where to get started; I believe you feel the same way! This is exactly why I've just released a new open-source project on GitHub named Open-Instructions (https://github.com/langbridgeai/Open-Instructions) to give people like us a starting point!

  • dalai

    The simplest way to run LLaMA on your local machine

  • llama.cpp

    LLM inference in C/C++

  • As a last option, if you cannot find any GPU, I've had an overall good experience running llama.cpp on CPU, but you would still need quite a powerful machine and a few hundred gigabytes of disk space. I am not sure 32GB of RAM will be enough for the larger models, which, as expected, are quite slow on CPU.
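To get a feel for why 32GB of RAM is borderline, here is back-of-the-envelope arithmetic for quantized weight sizes (ignoring the KV cache and runtime overhead, so real usage is somewhat higher):

```python
# Rough RAM needed to hold quantized model weights:
# bytes ≈ parameter_count × bits_per_weight / 8
def approx_weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in gigabytes (10^9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for size in (7, 13, 30, 65):
    print(f"{size}B @ 4-bit ≈ {approx_weights_gb(size, 4):.1f} GB")
# The 65B model at 4-bit needs roughly 32.5 GB for the weights alone,
# before any per-token cache or OS overhead, hence the doubt about 32GB.
```

The same formula explains why the 7B and 13B variants are the usual CPU targets: at 4-bit they fit comfortably in 8 to 16 GB of RAM.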

  • AlpacaDataCleaned

    Alpaca dataset from Stanford, cleaned and curated
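For context, entries in the Alpaca dataset (and this cleaned variant) are JSON records with `instruction`, `input` (possibly empty), and `output` fields. A minimal sketch of the kind of validity filtering the cleaning effort automates, over invented sample records:

```python
import json

# Invented Alpaca-style records for illustration; the real dataset has ~52k.
records = json.loads("""[
  {"instruction": "Name three primary colors.", "input": "",
   "output": "Red, blue, and yellow."},
  {"instruction": "Summarize the text.", "input": "LLaMA is a family of LLMs.",
   "output": "LLaMA: a family of language models."},
  {"instruction": "Broken entry with no answer.", "input": "", "output": ""}
]""")

# Keep only records with all three fields present and a non-empty output.
clean = [r for r in records
         if all(k in r for k in ("instruction", "input", "output"))
         and r["output"].strip()]

print(len(clean))  # the empty-output record is dropped
```

Real cleaning goes further (fixing merged instructions, hallucinated outputs, and formatting artifacts), but the record shape above is the common denominator.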

  • 13B Alpaca Cleaned (trained on the cleaned dataset) is very impressive and works well as an instruct model without any censorship.
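When using such a model as an instruct model, prompting matters: Alpaca fine-tunes generally expect the Stanford Alpaca prompt template (minor variations exist between forks). A sketch of building it:

```python
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Build the prompt format used by Stanford Alpaca and most derivatives."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("List three primary colors."))
```

Feeding raw questions without this template usually still works, but responses tend to be noticeably better when the fine-tuning format is respected.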

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives. Hence, a higher number generally means a more popular project.

Suggest a related project

Related posts

  • IBM Granite: A Family of Open Foundation Models for Code Intelligence

    3 projects | news.ycombinator.com | 7 May 2024
  • More Agents Is All You Need: LLMs performance scales with the number of agents

    2 projects | news.ycombinator.com | 6 Apr 2024
  • Show HN: macOS GUI for running LLMs locally

    1 project | news.ycombinator.com | 18 Sep 2023
  • Ask HN: What are the capabilities of consumer grade hardware to work with LLMs?

    1 project | news.ycombinator.com | 3 Aug 2023
  • Meta to release open-source commercial AI model

    3 projects | news.ycombinator.com | 14 Jul 2023