I recently tested the "MPT 1b RedPajama + dolly" model and was pleasantly surprised by its overall quality despite its small size. Could someone help convert it to a llama.cpp-compatible CPU GGML q4 format?
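For context on what a "q4" conversion means, here is a minimal sketch of ggml-style 4-bit block quantization. This is a simplified illustration only, not the exact llama.cpp/ggml file format: the real q4_0 layout packs two 4-bit values per byte and stores each block's scale as fp16, and the block size and rounding rules are assumptions here.

```python
import numpy as np

def quantize_q4_block(block: np.ndarray):
    """Quantize one block of 32 fp32 weights to 4-bit signed ints
    plus a single per-block scale (simplified q4-style scheme)."""
    amax = float(np.max(np.abs(block)))
    scale = amax / 7.0 if amax > 0 else 1.0
    # Round to nearest integer step and clamp into the 4-bit signed range.
    q = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
    return scale, q

def dequantize_q4_block(scale: float, q: np.ndarray) -> np.ndarray:
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

# Quantize one 32-weight block and measure the reconstruction error.
rng = np.random.default_rng(0)
weights = rng.standard_normal(32).astype(np.float32)
scale, q = quantize_q4_block(weights)
approx = dequantize_q4_block(scale, q)
max_err = float(np.max(np.abs(weights - approx)))
```

Because each weight is rounded to the nearest multiple of the block scale, the per-weight error is bounded by half the scale, which is why 4-bit models stay usable: the scale adapts per block to the local weight magnitudes.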

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • llm-jeopardy

    Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts

  • Colab to try the model (GPU mode)

  • Test Questions Source

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Related posts

  • Show HN: Beta9 – Open-source, serverless GPU container runtime

    2 projects | news.ycombinator.com | 13 May 2024
  • How to Package Dependency for AWS Lambda with Docker

    1 project | dev.to | 13 May 2024
  • Flatcar: OS Innovation with Systemd-Sysext

    4 projects | news.ycombinator.com | 12 May 2024
  • (re)enabling basic security scans on GitHub

    1 project | news.ycombinator.com | 12 May 2024
  • Creating custom VPC on AWS using OpenTofu

    1 project | dev.to | 12 May 2024