open_llama VS ggml

Compare open_llama vs ggml and see how they differ.

open_llama

OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset (by openlm-research)
                   open_llama            ggml
    Mentions       52                    69
    Stars          7,193                 9,642
    Growth         1.3%                  -
    Activity       5.3                   9.8
    Latest commit  10 months ago         6 days ago
    Language       -                     C
    License        Apache License 2.0    MIT License
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

open_llama

Posts with mentions or reviews of open_llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    The RedPajama dataset was adapted by the OpenLLaMA project at UC Berkeley, creating an open-source LLaMA equivalent without Meta’s restrictions. The model's later version also included data from Falcon and StarCoder. This highlights the importance of open-source models and datasets, enabling free repurposing and innovation.
  • GPT-4 API general availability
    15 projects | news.ycombinator.com | 6 Jul 2023
    OpenLLaMA is though. https://github.com/openlm-research/open_llama

    All of these are surmountable problems.

    We can beat OpenAI.

    We can drain their moat.

  • Recommend me a computer for local a.i for 500 $
    2 projects | /r/ArtificialInteligence | 1 Jul 2023
    #1: 🌞 Open-source Reproduction of Meta AI’s LLaMA OpenLLaMA-13B released. (trained for 1T tokens) | 0 comments
    #2: 🎉 #1 on HuggingFace.co's Leaderboard Model Falcon 40B is now Free (Apache 2.0 License) | 0 comments
    #3: 😍 Have you seen this repo? "running LLMs on consumer-grade hardware. compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!" | 0 comments
  • Who is openllama from?
    1 project | /r/LocalLLaMA | 30 Jun 2023
    Trained OpenLLaMA models are from the OpenLM Research team in collaboration with Stability AI: https://github.com/openlm-research/open_llama
  • Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
    14 projects | /r/ChatGPT | 30 Jun 2023
    I can't use Llama or any model from the Llama family, due to license restrictions. Although now there's also the OpenLlama family of models, which have the same architecture but were trained on an open dataset (RedPajama, the same dataset the base model in my app was trained on). I'd love to pursue the direction of extended context lengths for on-device LLMs. Likely in a month or so, when I've implemented all the product features that I currently have on my backlog.
  • XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
    3 projects | news.ycombinator.com | 28 Jun 2023
    https://github.com/openlm-research/open_llama#update-0615202...).

    XGen-7B is probably the superior 7B model: it's trained on more tokens and with a longer default sequence length (although both presumably can adopt SuperHOT (Position Interpolation) to extend context), but larger models still probably perform better on an absolute basis.
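
    Position Interpolation works by compressing positions rather than retraining: the token position fed into RoPE is divided by the context-stretch factor, so positions beyond the trained range map back into it. A minimal sketch of the idea in C (function and parameter names are illustrative, not from any of these repos):

    #include <math.h>

    /* Fill out[0..d/2) with RoPE rotation angles for one token position.
       scale = target_ctx / train_ctx (e.g. 2.0 stretches 2048 -> 4096);
       scale = 1.0 recovers vanilla RoPE. base is typically 10000. */
    void rope_angles(float *out, int d, int pos, float base, float scale) {
        float p = (float)pos / scale;                 /* the interpolation step */
        for (int i = 0; i < d / 2; i++) {
            float freq = powf(base, -2.0f * (float)i / (float)d);
            out[i] = p * freq;                        /* angle used for cos/sin */
        }
    }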

  • MosaicML Agrees to Join Databricks to Power Generative AI for All
    3 projects | /r/LocalLLaMA | 26 Jun 2023
    Compare it to openllama. Its GitHub doesn't have a single script showing how to do anything.
  • Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
    4 projects | news.ycombinator.com | 26 Jun 2023
    OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:

    https://github.com/openlm-research/open_llama

  • Containerized AI before Apocalypse 🐳🤖
    4 projects | dev.to | 25 Jun 2023
    The deployed LLM binary, orca mini, has 3 billion parameters. Orca mini is based on the OpenLLaMA project.
  • AI - weekly megathread!
    2 projects | /r/artificial | 23 Jun 2023
    OpenLM Research released its 1T token version of OpenLLaMA 13B - the permissively licensed open source reproduction of Meta AI's LLaMA large language model. [Details].

ggml

Posts with mentions or reviews of ggml. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-11.
  • LLMs on your local Computer (Part 1)
    7 projects | dev.to | 11 Mar 2024
    git clone https://github.com/ggerganov/ggml
    cd ggml
    mkdir build
    cd build
    cmake ..
    make -j4 gpt-j
    ../examples/gpt-j/download-ggml-model.sh 6B
  • GGUF, the Long Way Around
    2 projects | news.ycombinator.com | 29 Feb 2024
    Cool. I was just learning about GGUF by creating my own parser for it based on the spec https://github.com/ggerganov/ggml/blob/master/docs/gguf.md (for educational purposes)
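
    For anyone wanting to try the same exercise: the GGUF header described in that spec is small enough to parse in a few lines. A hedged sketch based on a reading of docs/gguf.md for GGUF v2+, where the counts are 64-bit (assumes a little-endian host; this is not code from ggml itself):

    #include <stdint.h>
    #include <stdio.h>

    /* Read just the GGUF header: magic "GGUF", version (uint32), then
       tensor count and metadata key/value count (both uint64). */
    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s model.gguf\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        char magic[4];
        uint32_t version;
        uint64_t n_tensors, n_kv;
        if (fread(magic, 1, 4, f) != 4 || magic[0] != 'G' || magic[1] != 'G' ||
            magic[2] != 'U' || magic[3] != 'F' ||
            fread(&version, 4, 1, f) != 1 ||
            fread(&n_tensors, 8, 1, f) != 1 ||
            fread(&n_kv, 8, 1, f) != 1) {
            fprintf(stderr, "not a GGUF v2+ file\n");
            fclose(f);
            return 1;
        }
        printf("GGUF v%u: %llu tensors, %llu metadata pairs\n",
               version, (unsigned long long)n_tensors, (unsigned long long)n_kv);
        fclose(f);
        return 0;
    }
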
  • Ask HN: People who switched from GPT to their own models. How was it?
    3 projects | news.ycombinator.com | 26 Feb 2024
    If you don't care about the details of how those model servers work, then something that abstracts out the whole process like LM Studio or Ollama is all you need.

    However, if you want to get into the weeds of how this actually works, I recommend you look up model quantization and some libraries like ggml[1] that actually do that for you.

    [1] https://github.com/ggerganov/ggml
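
    To make "quantization" concrete: ggml-style formats store weights in small blocks with a per-block scale. Below is an illustrative 8-bit block quantizer in the spirit of ggml's Q8_0 layout (32 floats per block, scale = absmax/127); it is a sketch of the idea, not ggml's actual code:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    #define QK 32                      /* block size, as in ggml's Q8_0 */

    typedef struct {
        float  scale;                  /* per-block scale factor */
        int8_t q[QK];                  /* weights quantized to [-127, 127] */
    } block_q8;

    static void quantize_block(const float *x, block_q8 *b) {
        float amax = 0.0f;             /* largest magnitude in the block */
        for (int i = 0; i < QK; i++) {
            float a = fabsf(x[i]);
            if (a > amax) amax = a;
        }
        b->scale = amax / 127.0f;
        float inv = b->scale != 0.0f ? 1.0f / b->scale : 0.0f;
        for (int i = 0; i < QK; i++) {
            b->q[i] = (int8_t)roundf(x[i] * inv);
        }
    }

    int main(void) {
        float x[QK], y[QK];
        for (int i = 0; i < QK; i++) x[i] = sinf((float)i);   /* fake weights */
        block_q8 b;
        quantize_block(x, &b);
        for (int i = 0; i < QK; i++) y[i] = b.q[i] * b.scale; /* dequantize */
        printf("x[3] = %f, roundtrip = %f\n", x[3], y[3]);
        return 0;
    }

    The payoff is memory: 32 FP32 weights take 128 bytes, while a block in this layout takes 36, and lower bit-widths shrink that further at some cost in precision.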

  • GGUF File Format
    1 project | news.ycombinator.com | 31 Dec 2023
  • Google just shipped libggml from llama-cpp into its Android AICore
    2 projects | /r/LocalLLaMA | 9 Dec 2023
    Because the library is called ggml, but it supports gguf.
  • Q-Transformer
    2 projects | news.ycombinator.com | 30 Nov 2023
    Apparently this guy, like a bunch of others (e.g. https://github.com/ggerganov/ggml), is implementing transformers from papers for people that want them. Pretty cool.
  • [P] Inference Vision Transformer (ViT) in plain C/C++ with ggml
    2 projects | /r/MachineLearning | 26 Nov 2023
    You can access it here: https://github.com/staghado/vit.cpp It has been added to the ggml library on GitHub: https://github.com/ggerganov/ggml
  • Falcon 180B Released
    1 project | news.ycombinator.com | 6 Sep 2023
    https://github.com/ggerganov/ggml

    One note is that prompt ingestion is extremely slow on CPU compared to GPU. So short prompts are fine (as tokens can be streamed once the prompt is ingested), but long prompts feel extremely sluggish.

  • Stable Diffusion in pure C/C++
    8 projects | news.ycombinator.com | 19 Aug 2023
    I did a quick run under a profiler, and on my AVX2 laptop the slowest part (>50%) was matrix multiplication (sgemm).

    In the current version of GGML, if OpenBLAS is enabled, they convert matrices to FP32 before running sgemm.

    If OpenBLAS is disabled, on an AVX2 platform they convert FP16 to FP32 on every FMA operation, which is even worse (due to repetition). After that, both ggml_vec_dot_f16 and ggml_vec_dot_f32 took first place in the profiler.

    Source: https://github.com/ggerganov/ggml/blob/master/src/ggml.c#L10...
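
    The difference between the two paths comes down to where the FP16-to-FP32 conversion happens. A simplified sketch of both (original code, not ggml's; fp16_to_fp32() stands in for what ggml's GGML_FP16_TO_FP32 does):

    #include <math.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Scalar IEEE half -> float conversion (illustrative stand-in). */
    static float fp16_to_fp32(uint16_t h) {
        int sign = (h >> 15) & 1;
        int exp  = (h >> 10) & 0x1F;
        int mant =  h        & 0x3FF;
        float v;
        if      (exp == 0)  v = ldexpf((float)mant, -24);                /* subnormal */
        else if (exp == 31) v = mant ? NAN : INFINITY;                   /* inf / NaN */
        else                v = ldexpf((float)(mant | 0x400), exp - 25); /* normal */
        return sign ? -v : v;
    }

    /* No-OpenBLAS path: convert inside the dot product, so every element
       is re-converted each time its row or column is reused. */
    float vec_dot_f16(size_t n, const uint16_t *x, const uint16_t *y) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += fp16_to_fp32(x[i]) * fp16_to_fp32(y[i]);   /* convert per FMA */
        return sum;
    }

    /* OpenBLAS path: convert each matrix to FP32 once up front, then hand
       the FP32 buffers to sgemm. */
    void convert_f16_to_f32(size_t n, const uint16_t *src, float *dst) {
        for (size_t i = 0; i < n; i++) dst[i] = fp16_to_fp32(src[i]);
    }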

  • Accessing Llama 2 from the command-line with the LLM-replicate plugin
    16 projects | news.ycombinator.com | 18 Jul 2023
    For those getting started, the easiest one click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/

    This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend, supports GPU acceleration, and handles LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.

    I just saw a slick new tool https://ollama.ai/ that will let you install a llama2-7b with a single `ollama run llama2` command that has a very simple 1-click installer for Apple Silicon Mac (but need to build from source for anything else atm). It looks like it only supports llamas OOTB but it also seems to use llama.cpp (via Go adapter) on the backend - it seemed to be CPU-only on my MBA, but I didn't poke too much and it's brand new, so we'll see.

    For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama

    For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/

    I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/

What are some alternatives?

When comparing open_llama and ggml you can also consider the following projects:

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

llama.cpp - LLM inference in C/C++

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

alpaca-lora - Instruct-tune LLaMA on consumer hardware

gpt4all - gpt4all: run open-source LLMs anywhere

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

gorilla - Gorilla: An API store for LLMs

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

gpt-json - Structured and typehinted GPT responses in Python

llm - An ecosystem of Rust libraries for working with large language models