llm-foundry VS laion.ai

Compare llm-foundry vs laion.ai and see how they differ.

                llm-foundry          laion.ai
Mentions        37                   25
Stars           3,730                105
Growth          4.0%                 5.7%
Activity        9.7                  8.5
Latest commit   4 days ago           17 days ago
Language        Python               HTML
License         Apache License 2.0   MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.

llm-foundry

Posts with mentions or reviews of llm-foundry. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-12-05.
  • Fine Tuning Mistral 7B on Magic the Gathering Draft
    4 projects | news.ycombinator.com | 5 Dec 2023
    Related comment from gwern: https://news.ycombinator.com/item?id=38438859

    Also - why QLoRA rather than a full finetune? Using Lambda Labs, it'd cost roughly the same as your quote - cheaper, I think, if you're willing to gamble on fp8: https://github.com/mosaicml/llm-foundry/tree/main/scripts/tr.... And there are fewer hyperparameters to tune as well.
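
    As a rough illustration of the QLoRA route being discussed, here is a minimal sketch of a 4-bit LoRA finetune using Hugging Face Transformers, PEFT, and bitsandbytes; the base model name and LoRA hyperparameters are placeholders, not the commenter's actual setup.

```python
# Minimal QLoRA-style finetune sketch (requires transformers, peft, bitsandbytes).
# Model name and LoRA hyperparameters below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # quantize base weights to 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)   # cast norms / enable input grads for k-bit training

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which modules get adapters varies by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the small adapter weights are trained
```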

  • Consortium launched to build the largest open LLM
    1 project | news.ycombinator.com | 18 Oct 2023
    Traditionally, training runs can "explode" and fail, but there are methods to incrementally back them up and resume when that happens; see https://www.mosaicml.com/blog/mpt-7b
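    A generic sketch of the back-up-and-resume pattern described above, in plain PyTorch; llm-foundry itself does this through Composer's checkpointing, so this only illustrates the idea, not their implementation.

```python
# Sketch of periodic checkpointing with resume-on-restart, in plain PyTorch.
# Real frameworks (e.g. Composer) handle this more robustly; this only shows the pattern.
import os
import torch

CKPT = "checkpoint.pt"

def save_checkpoint(step, model, optimizer):
    torch.save({"step": step,
                "model": model.state_dict(),
                "optim": optimizer.state_dict()}, CKPT)

def load_checkpoint(model, optimizer):
    if not os.path.exists(CKPT):
        return 0                                    # fresh run
    state = torch.load(CKPT, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optim"])
    return state["step"] + 1                        # resume after the last saved step

model = torch.nn.Linear(16, 16)                     # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
start = load_checkpoint(model, optimizer)

for step in range(start, 1000):
    loss = model(torch.randn(8, 16)).pow(2).mean()  # dummy loss
    loss.backward(); optimizer.step(); optimizer.zero_grad()
    if step % 100 == 0:
        save_checkpoint(step, model, optimizer)     # a crash now only loses recent steps
```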
  • Applying All Recent Innovations To Train a Code Model
    2 projects | dev.to | 11 Aug 2023
    MosaicML released the MPT-7B model, whose StoryWriter variant handles a context of 65k tokens thanks to the ALiBi position encoding.
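    Since ALiBi is what makes that long context possible, here is a small sketch of the idea: instead of position embeddings, a fixed, head-specific linear penalty is added to the attention logits. The head count and sequence length below are arbitrary.

```python
# Sketch of ALiBi (Attention with Linear Biases): penalize attention logits in
# proportion to query-key distance instead of using position embeddings.
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Per-head slopes form a geometric sequence, e.g. 1/2, 1/4, ... for 8 heads.
    slopes = torch.tensor([2 ** (-8 * (h + 1) / n_heads) for h in range(n_heads)])
    pos = torch.arange(seq_len)
    # Relative distance j - i; 0 on the diagonal, increasingly negative for older keys.
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)
    return slopes[:, None, None] * distance[None, :, :]   # shape: (heads, seq, seq)

# The bias is simply added to the attention logits before the softmax, e.g.:
# scores = q @ k.transpose(-1, -2) / d**0.5 + alibi_bias(n_heads, seq_len) + causal_mask
print(alibi_bias(n_heads=8, seq_len=4)[0])
```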
  • Fine Tuning Language Models
    1 project | news.ycombinator.com | 3 Jul 2023
    Most AI runners just ignore licensing and run LLaMA finetunes.

    But if you want to avoid the non commercial LLaMA license, you have 3 good options for a base model.

    - OpenLLaMA 13B

    - MPT 30B

    - Falcon 40B

    Of these, Falcon 40B is very difficult to run (slow in 4-bit, basically requires a professional GPU, no good CPU offloading yet).

    OpenLLaMA 13B only supports a context size of 2048 as of today... But that could change soon.

    So you probably want MPT instruct 30B, specifically this one:

    https://huggingface.co/TheBloke/mpt-30B-instruct-GGML

    As the page says, you can try it out on a decent PC of your own with the OpenCL build of KoboldCPP. Switch it to "instruct" mode, use the prompt template from the model page, and offload as many layers as you can to your PC's dGPU. It may already work for your summarization needs.

    If not, you can finetune it with MPT's code and summarization data:

    https://github.com/mosaicml/llm-foundry

    Or train OpenLLaMA 13B with SuperHOT + summarization data using QLORA.
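
    For reference, the prompt template mentioned above is a Dolly/Alpaca-style instruction format; here is a rough sketch of wrapping a summarization request in it. Treat the model card linked above as the source of truth for the exact wording.

```python
# Rough sketch of the Dolly/Alpaca-style instruction format that MPT-*-Instruct
# models were tuned on. The exact wording may differ from the model card, which
# should be treated as authoritative.
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    return TEMPLATE.format(instruction=instruction)

article = "..."  # the document you want summarized
print(build_prompt(f"Summarize the following text:\n{article}"))
```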

  • Finetune MPT-30B using QLORA
    2 projects | /r/LocalLLaMA | 3 Jul 2023
    BTW, they finally merged an MPT patch to make it work with LoRA: https://github.com/mosaicml/llm-foundry/issues/304
  • [N] Meet MPT-30B: A Fully Open-Source LLM that Outperforms GPT-3 - Dr. Mandar Karhade, MD. PhD.
    2 projects | /r/MachineLearning | 1 Jul 2023
  • MPT-30B QLoRA on 24 GB VRAM
    2 projects | /r/LocalLLaMA | 30 Jun 2023
    Did you run into this error while using QLoRA on MPT-30B?: https://github.com/mosaicml/llm-foundry/issues/413
  • MosaicML Agrees to Join Databricks to Power Generative AI for All
    3 projects | /r/LocalLLaMA | 26 Jun 2023
    Yes? Their GitHub repo is under Apache, their base model is under Apache, the training data is not theirs, and they provide scripts for converting it for the pretraining step. They have scripts for pretraining and finetuning as well - basically for everything.
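    For a sense of what those conversion scripts do, here is a heavily simplified sketch of turning raw text into the MosaicML streaming (MDS) shards that the pretraining scripts consume; the column layout and tokenization are placeholders, and the repo's own data prep scripts handle concatenation, splits, and wrapping properly.

```python
# Sketch of converting raw text into MosaicML streaming (MDS) shards, the format
# llm-foundry's pretraining scripts read. Column layout and tokenization here are
# simplified placeholders relative to the repo's real data prep scripts.
import numpy as np
from streaming import MDSWriter
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")  # tokenizer used by MPT
texts = ["first training document...", "second training document..."]  # stand-in corpus

with MDSWriter(out="./my-mds-data", columns={"tokens": "bytes"}, compression="zstd") as writer:
    for text in texts:
        ids = tokenizer(text)["input_ids"]
        writer.write({"tokens": np.asarray(ids, dtype=np.uint32).tobytes()})
```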
  • Best model for commercial use?
    1 project | /r/LocalLLaMA | 26 Jun 2023
    mosaicml/llm-foundry: LLM training code for MosaicML foundation models (github.com)
  • MosaicML launches MPT-30B: A new open-source model that outperforms GPT-3
    1 project | /r/mlwires | 25 Jun 2023
    MosaicML, a company that provides a platform for training and deploying large language models (LLMs), has recently released its second open-source foundation model called MPT-30B. The model is part of the MosaicML Foundation Series and comes after the smaller MPT-7B model that was launched in May 2023.

laion.ai

Posts with mentions or reviews of laion.ai. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    LAION (Large-scale Artificial Intelligence Open Network), a German non-profit established in 2020, is dedicated to advancing open-source models and datasets (primarily under Apache 2 and MIT licenses) to foster open research and the evolution of benevolent AI. Their datasets, encompassing both images and text, have been pivotal in the training of renowned text-to-image models like Stable Diffusion.
  • How artists are sabotaging AI to take revenge on image generators
    1 project | news.ycombinator.com | 18 Dec 2023
    > there is going to be a "pre-GPT" internet training set from 2022

    Well, yeah, there are several here, and I think all the major image generators are using some combination of them as their starting points: https://laion.ai/

    > As AI increases as an overall % of all online posts and activity it will death spiral on model quality.

    Nope, it will just mean it becomes more expensive to source additional training data on top of the massive trove of existing data that is "clean" of intentional poisoning. Much of that data isn't perfectly captioned, and human work on improving captions can increase its utility in model training, as can more advanced models with better text encoders.

    If poisoning was widespread, it wouldn't impact "big model" quality much -- they aren't grabbing new random data on the internet for continuous training. It might drive up the expense of community fine tuning, which often does depend on sourcing representative imagery for target styles or concepts from, among other places, the internet.

  • [D] Why is most Open Source AI happening outside the USA?
    2 projects | /r/MachineLearning | 6 Dec 2023
    Also don't forget https://laion.ai/ from Germany. They focus more on datasets, but still.
  • OpenAI is too cheap to beat
    4 projects | news.ycombinator.com | 12 Oct 2023
    I think the weird thing about this is that it's completely true right now but in X months it may be totally outdated advice.

    For example, efforts like OpenMOE https://github.com/XueFuzhao/OpenMoE or similar will probably eventually lead to very competitive performance and cost-effectiveness for open source models. At least in terms of competing with GPT-3.5 for many applications.

    Also see https://laion.ai/

    I also believe that within say 1-3 years there will be a different type of training approach that does not require such large datasets or manual human feedback.

  • MJ images sources?
    1 project | /r/midjourney | 9 Jul 2023
    Billions. MJ's initial training dataset was from LAION: https://laion.ai/ . Not sure which version, and I am pretty sure additional data has been added since MJ v1, but MJ doesn't release anything more exact. However my guess is: more billions, lol.
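    As a concrete illustration of what "training on LAION" means, the datasets ship as parquet metadata files of image URLs and captions rather than the images themselves; below is a small sketch of inspecting one shard. The file name is hypothetical, and the URL/TEXT column names follow the LAION-400M/5B releases but should be verified per dataset.

```python
# Sketch of inspecting a LAION metadata shard: parquet files of image URLs and
# captions, not images. File name is a placeholder; column names (URL, TEXT)
# follow the LAION-400M/5B releases but should be checked per dataset.
import pandas as pd

df = pd.read_parquet("laion-metadata-shard-00000.parquet")  # hypothetical local shard
print(len(df), "image-text pairs in this shard")
print(df[["URL", "TEXT"]].head())

# Tools like img2dataset are typically used to fetch the actual images at scale.
```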
  • AI tools apps in one place sorted by category
    5 projects | /r/ChatGPT | 29 May 2023
    Missing LAION and OpenAssistant: https://laion.ai/
  • GPT detectors are biased against non-native English writers
    2 projects | news.ycombinator.com | 22 May 2023
  • Model Suggestions
    1 project | /r/LargeLanguageModels | 9 May 2023
    As far as I know, the LLaMA weights are not allowed for commercial use, but if you are willing to do a full training run and replace all of its weights, it would probably be fine. There was a discussion on this topic on forums and no one was sure; you can research it. Also, you can take a look at laion.ai and Dolly from Databricks - they are open source and allowed for commercial use, if they meet your needs.
  • HuggingChat, the first open source alternative to ChatGPT
    2 projects | news.ycombinator.com | 25 Apr 2023
  • Hugging Face releases its own version of ChatGPT
    4 projects | /r/singularity | 25 Apr 2023
    That's OpenAssistant's / LAION's model; Hugging Face provided the infrastructure.

What are some alternatives?

When comparing llm-foundry and laion.ai you can also consider the following projects:

qlora - QLoRA: Efficient Finetuning of Quantized LLMs

llama - Inference code for Llama models

basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.

stable-diffusion-webui - Stable Diffusion web UI

RasaGPT - 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram

LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.

gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai

Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

llm-numbers - Numbers every LLM developer should know

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM