open_llama VS llm-foundry

Compare open_llama vs llm-foundry and see what their differences are.

open_llama

OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset (by openlm-research)

llm-foundry

LLM training code for Databricks foundation models (by mosaicml)
             open_llama            llm-foundry
Mentions     52                    37
Stars        7,193                 3,710
Growth       1.3%                  8.2%
Last commit  10 months ago         2 days ago
Activity     5.3                   9.7
Language     -                     Python
License      Apache License 2.0    Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

open_llama

Posts with mentions or reviews of open_llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    The RedPajama dataset was adapted by the OpenLLaMA project at UC Berkeley, creating an open-source LLaMA equivalent without Meta’s restrictions. The model's later version also included data from Falcon and StarCoder. This highlights the importance of open-source models and datasets, enabling free repurposing and innovation.
  • GPT-4 API general availability
    15 projects | news.ycombinator.com | 6 Jul 2023
    OpenLLaMA is though. https://github.com/openlm-research/open_llama

    All of these are surmountable problems.

    We can beat OpenAI.

    We can drain their moat.

  • Recommend me a computer for local a.i for 500 $
    2 projects | /r/ArtificialInteligence | 1 Jul 2023
    #1: 🌞 Open-source Reproduction of Meta AI’s LLaMA OpenLLaMA-13B released. (trained for 1T tokens) | 0 comments
    #2: 🎉 #1 on HuggingFace.co's Leaderboard Model Falcon 40B is now Free (Apache 2.0 License) | 0 comments
    #3: 😍 Have you seen this repo? "running LLMs on consumer-grade hardware. compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!" | 0 comments
  • Who is openllama from?
    1 project | /r/LocalLLaMA | 30 Jun 2023
    Trained OpenLLaMA models are from the OpenLM Research team in collaboration with Stability AI: https://github.com/openlm-research/open_llama
  • Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
    14 projects | /r/ChatGPT | 30 Jun 2023
    I can't use Llama or any model from the Llama family, due to license restrictions. Although now there's also the OpenLlama family of models, which have the same architecture but were trained on an open dataset (RedPajama, the same dataset the base model in my app was trained on). I'd love to pursue the direction of extended context lengths for on-device LLMs. Likely in a month or so, when I've implemented all the product features that I currently have on my backlog.
  • XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
    3 projects | news.ycombinator.com | 28 Jun 2023
    https://github.com/openlm-research/open_llama#update-0615202...).

    XGen-7B is probably the superior 7B model: it's trained on more tokens and has a longer default sequence length (although both presumably can adopt SuperHOT (Position Interpolation) to extend context), but larger models still probably perform better on an absolute basis.

  • MosaicML Agrees to Join Databricks to Power Generative AI for All
    3 projects | /r/LocalLLaMA | 26 Jun 2023
    Compare it to openllama. Its GitHub doesn't have a single script on how to do anything.
  • Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
    4 projects | news.ycombinator.com | 26 Jun 2023
    OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:

    https://github.com/openlm-research/open_llama

  • Containerized AI before Apocalypse 🐳🤖
    4 projects | dev.to | 25 Jun 2023
    The deployed LLM binary, orca mini, has 3 billion parameters. Orca mini is based on the OpenLLaMA project.
  • AI - weekly megathread!
    2 projects | /r/artificial | 23 Jun 2023
    OpenLM Research released its 1T token version of OpenLLaMA 13B - the permissively licensed open source reproduction of Meta AI's LLaMA large language model. [Details].
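
The OpenLLaMA checkpoints discussed in the posts above are published on the Hugging Face Hub as standard LLaMA-architecture weights, so they can be loaded with the stock LLaMA classes in transformers. Below is a minimal sketch of that workflow, assuming the transformers, torch, and sentencepiece packages are installed; the choice of the 3B checkpoint, the dtype, and the generation settings are illustrative, not prescriptive.

```python
# Minimal sketch: loading an OpenLLaMA checkpoint with Hugging Face Transformers.
# Assumes `transformers`, `torch`, and `sentencepiece` are installed; the model id
# below is one of the checkpoints published by openlm-research on the Hub.
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_id = "openlm-research/open_llama_3b"  # 7B and 13B follow the same pattern

# Use the slow (sentencepiece) tokenizer, which is what LlamaTokenizer provides.
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fits a 3B model on a consumer GPU
    device_map="auto",          # requires `accelerate`; or use .to("cuda")
)

prompt = "Q: What is the largest animal?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```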

llm-foundry

Posts with mentions or reviews of llm-foundry. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-05.
  • Fine Tuning Mistral 7B on Magic the Gathering Draft
    4 projects | news.ycombinator.com | 5 Dec 2023
    Related comment from gwern: https://news.ycombinator.com/item?id=38438859

    Also - why qlora rather than a full finetune? Using LambdaLabs, it'd cost roughly the same as your quote. Cheaper, I think, if you're willing to gamble with fp8: https://github.com/mosaicml/llm-foundry/tree/main/scripts/tr.... And fewer hyperparameters to tune as well.

  • Consortium launched to build the largest open LLM
    1 project | news.ycombinator.com | 18 Oct 2023
    Traditionally, training runs can "explode" and fail, but there are methods to incrementally back them up and resume when that happens, see https://www.mosaicml.com/blog/mpt-7b
  • Applying All Recent Innovations To Train a Code Model
    2 projects | dev.to | 11 Aug 2023
    MosaicML released the MPT-7B model, whose StoryWriter variant supports a context of up to 65k tokens thanks to the ALiBi position encoding.
  • Fine Tuning Language Models
    1 project | news.ycombinator.com | 3 Jul 2023
    Most AI runners just ignore licensing and run LLaMA finetunes.

    But if you want to avoid the non commercial LLaMA license, you have 3 good options for a base model.

    - OpenLlama 13B

    - MPT 30B

    - Falcon 40B

    Of these, Falcon 40B is very difficult to run (slow in 4 bit, basically requires a professional GPU, no good cpu offloading yet).

    OpenLLaMA 13B only supports a context size of 2048 as of today... But that could change soon.

    So you probably want MPT instruct 30B, specifically this one:

    https://huggingface.co/TheBloke/mpt-30B-instruct-GGML

    As the page says, you can try it out on a decent PC of your own with the OpenCL build of KoboldCPP. Change it to "instruct" mode, use the template on the page, and offload as many layers as you can to your PC's dGPU. It may already work for your summarization needs.

    If not, you can finetune it with MPT's code and summarization data:

    https://github.com/mosaicml/llm-foundry

    Or train OpenLLaMA 13B with SuperHOT + summarization data using QLORA.

  • Finetune MPT-30B using QLORA
    2 projects | /r/LocalLLaMA | 3 Jul 2023
    BTW, they finally merged an MPT patch to work with LoRA: https://github.com/mosaicml/llm-foundry/issues/304
  • [N] Meet MPT-30B: A Fully OpenSouce LLM that Outperforms GPT-3 - Dr. Mandar Karhade, MD. PhD.
    2 projects | /r/MachineLearning | 1 Jul 2023
  • MPT-30B QLoRA on 24 GB VRAM
    2 projects | /r/LocalLLaMA | 30 Jun 2023
    Did you run into this error while using qlora on MPT30b?: https://github.com/mosaicml/llm-foundry/issues/413
  • MosaicML Agrees to Join Databricks to Power Generative AI for All
    3 projects | /r/LocalLLaMA | 26 Jun 2023
    Yes? Their GitHub is under Apache, their base model is under Apache, the training data is not theirs, and they provide scripts on how to convert it for the pretraining step. They have scripts for pretraining and finetuning as well, basically for everything.
  • Best model for commercial use?
    1 project | /r/LocalLLaMA | 26 Jun 2023
    mosaicml/llm-foundry: LLM training code for MosaicML foundation models (github.com)
  • MosaicML launches MPT-30B: A new open-source model that outperforms GPT-3
    1 project | /r/mlwires | 25 Jun 2023
    MosaicML, a company that provides a platform for training and deploying large language models (LLMs), has recently released its second open-source foundation model called MPT-30B. The model is part of the MosaicML Foundation Series and comes after the smaller MPT-7B model that was launched in May 2023.
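
Several posts above weigh QLoRA against a full finetune for MPT models. Below is a minimal sketch of the QLoRA setup using the transformers, peft, and bitsandbytes packages; this is the generic Hugging Face route rather than llm-foundry's own configuration-driven training scripts, and the model id, LoRA target module names, and hyperparameters are illustrative assumptions.

```python
# Minimal QLoRA setup sketch for an MPT checkpoint (not llm-foundry's own trainer).
# Assumes `transformers`, `peft`, `bitsandbytes`, and `accelerate` are installed;
# model id, target modules, and hyperparameters are illustrative, not prescriptive.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mosaicml/mpt-7b"  # MPT-30B follows the same pattern, given enough VRAM

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # MPT ships custom modeling code
)

# Prepare the quantized model for training and attach small LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["Wqkv", "out_proj"],  # attention projections in MPT blocks (assumed names)
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# From here, train with any causal-LM trainer (e.g. transformers.Trainer or TRL's
# SFTTrainer) on your instruction or summarization dataset, then merge or serve
# the adapters with peft.
```

For full finetunes, llm-foundry's own repository ships configuration-driven pretraining and finetuning scripts, which is the route the truncated scripts link quoted above refers to.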

What are some alternatives?

When comparing open_llama and llm-foundry you can also consider the following projects:

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

qlora - QLoRA: Efficient Finetuning of Quantized LLMs

llama.cpp - LLM inference in C/C++

basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

RasaGPT - πŸ’¬ RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram

gpt4all - gpt4all: run open-source LLMs anywhere

LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.

gorilla - Gorilla: An API store for LLMs

prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai

ggml - Tensor library for machine learning

llm-numbers - Numbers every LLM developer should know