Awesome-LLM VS open_llama

Compare Awesome-LLM vs open_llama and see how they differ.

Awesome-LLM

Awesome-LLM: a curated list of Large Language Model resources (by Hannibal046)

open_llama

OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset (by openlm-research)
               Awesome-LLM                           open_llama
Mentions       10                                    52
Stars          14,654                                7,211
Growth         -                                     0.9%
Activity       8.6                                   5.3
Latest commit  8 days ago                            10 months ago
License        Creative Commons Zero v1.0 Universal  Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

Awesome-LLM

Posts with mentions or reviews of Awesome-LLM. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-28.
  • XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
    3 projects | news.ycombinator.com | 28 Jun 2023
    Here are some high-level answers:

    "7B" refers to the number of parameters, or weights, in a model. Within a model family, the versions with more parameters take more compute to train and generally perform better.

    A foundational model is an ML model that is "pretrained" on a massive dataset (usually the bulk of the compute cost). It is considered the "raw" model, which is then fine-tuned for specific tasks (e.g., turned into a chatbot).

    "8K length" refers to the context window length (in tokens). This is basically an LLM's short-term memory - you can think of it as its attention span and the amount of text it can generate reasonable output for.

    "1.5T tokens" refers to the total number of tokens in the training corpus. A short code sketch after this post makes these three terms concrete.

    In general, Wikipedia (or, I suppose, ChatGPT 4/Bing Chat with Web Browsing) is a decent enough place to start reading and asking basic questions. I'd recommend starting here: https://en.wikipedia.org/wiki/Large_language_model and finding the related concepts.

    For those going deeper, there are a lot of general resource lists like https://github.com/Hannibal046/Awesome-LLM or https://github.com/Mooler0410/LLMsPracticalGuide or one I like, https://sebastianraschka.com/blog/2023/llm-reading-list.html (there are a bajillion of these, and you'll find more once you get a grasp on the terms you want to surf for). Almost everything is published on arXiv, and most of it is fairly readable even as a layman.

    For non-ML programmers looking to get up to speed, I feel like Karpathy's Zero to Hero/nanoGPT or Jay Mody's picoGPT https://jaykmody.com/blog/gpt-from-scratch/ are an alternative and maybe a better way to understand the basic concepts on a practical level. A minimal sketch of the attention step at the core of these models follows this list.
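    The sketch below makes "parameters", "context window", and "tokens" concrete by inspecting a model programmatically. It is illustrative only; it assumes the Hugging Face transformers library, and the small gpt2 checkpoint is a stand-in (not a model from the post) so it runs on modest hardware.

    ```python
    # Illustrative sketch: counting parameters and reading the context window.
    # Assumes `pip install transformers torch`; "gpt2" is a small stand-in model.
    from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # hypothetical choice; any causal LM checkpoint works
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    config = AutoConfig.from_pretrained(model_name)

    # "7B" would mean ~7 billion of these trainable weights (gpt2 has ~124M).
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params / 1e6:.0f}M")

    # "8K length" refers to this value: the maximum number of tokens
    # the model can attend to at once.
    print(f"context window: {config.max_position_embeddings} tokens")

    # "1.5T tokens" counts units like these, summed over the training corpus.
    print(tokenizer.encode("Hello world"))  # two token ids for gpt2
    ```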

  • Couple of questions about a.i that can be run locally
    1 project | /r/ArtificialInteligence | 26 Jun 2023
  • How to dive deeper into LLMs?
    1 project | /r/LocalLLaMA | 21 Jun 2023
  • [Hiring] Developer to build AI-powered chatbots with open source LLMs
    1 project | /r/forhire | 15 Jun 2023
  • Creating a Wiki for all things Local LLM. What do you want to know?
    2 projects | /r/LocalLLaMA | 14 Jun 2023
    Check out this repo, there should be some useful things worth noting https://github.com/Hannibal046/Awesome-LLM
  • Large Language Model (LLM) Resources
    3 projects | /r/learnmachinelearning | 11 Jun 2023
  • Curated list for LLMs: papers, training frameworks, tools to deploy, public APIs
    1 project | news.ycombinator.com | 1 Jun 2023
  • Performance of GPT-4 vs PaLM 2
    9 projects | /r/singularity | 17 May 2023
    First, this is a pretty good starting point as a resource for learning about and finding open-source models, and for the overall public history of LLM progress.
  • FreedomGPT: AI with no censorship
    3 projects | /r/KotakuInAction | 12 May 2023
    This seems fishy as fuck. The first red flag is a fishy installer instead of any Hugging Face link for the model. Upon further searching I found this: https://desuarchive.org/g/thread/92686632/#92692092

    There are posts in its own sub, r slash freedomgpt, raising concerns, and many new accounts with low karma replying to them (I don't think I can link other subs here, check them yourself) - 100% some botting/astroturfing going on. Not touching this.

    Even in the best-case scenario where this is legit with no funny business, it is supposed to be based on LLaMA, which is a substantially different, tiny model (hence why it can be run on your computer at all). This is no ChatGPT equivalent either way. I would recommend getting something more reputable from GitHub if you are interested in running LLMs yourself.
  • Ask HN: Foundational Papers in AI
    1 project | news.ycombinator.com | 4 May 2023
    https://github.com/Hannibal046/Awesome-LLM has a curated list of LLM-specific resources.

    Not the creator, just happened upon it when researching LLMs today.
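
For readers taking the Karpathy/picoGPT route mentioned above, the core building block is a single causal self-attention step. Below is a minimal NumPy sketch of the mechanism; it is an illustration, not picoGPT's actual code (which implements the full GPT-2 forward pass).

```python
# Minimal sketch of causal scaled dot-product attention, the core operation
# of GPT-style models. Illustrative only; see picoGPT for a complete model.
import numpy as np

def causal_attention(q, k, v):
    """q, k, v: (seq_len, head_dim) arrays for one attention head."""
    seq_len, head_dim = q.shape
    scores = q @ k.T / np.sqrt(head_dim)      # query/key similarity
    # Causal mask: a token may only attend to itself and earlier tokens.
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                        # weighted mix of value vectors

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
print(causal_attention(q, k, v).shape)  # (4, 8)
```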

open_llama

Posts with mentions or reviews of open_llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    The RedPajama dataset was adapted by the OpenLLaMA project at UC Berkeley, creating an open-source LLaMA equivalent without Meta’s restrictions. The model's later version also included data from Falcon and StarCoder. This highlights the importance of open-source models and datasets, enabling free repurposing and innovation.
  • GPT-4 API general availability
    15 projects | news.ycombinator.com | 6 Jul 2023
    OpenLLaMA is though. https://github.com/openlm-research/open_llama

    All of these are surmountable problems.

    We can beat OpenAI.

    We can drain their moat.

  • Recommend me a computer for local a.i for 500 $
    2 projects | /r/ArtificialInteligence | 1 Jul 2023
    #1: 🌞 Open-source Reproduction of Meta AI's LLaMA OpenLLaMA-13B released. (trained for 1T tokens) | 0 comments
    #2: 🎉 #1 on HuggingFace.co's Leaderboard Model Falcon 40B is now Free (Apache 2.0 License) | 0 comments
    #3: 😍 Have you seen this repo? "running LLMs on consumer-grade hardware. compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!" | 0 comments
  • Who is openllama from?
    1 project | /r/LocalLLaMA | 30 Jun 2023
    Trained OpenLLaMA models are from the OpenLM Research team in collaboration with Stability AI: https://github.com/openlm-research/open_llama (a minimal loading sketch follows below).
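    Because OpenLLaMA keeps the LLaMA architecture, it loads with the stock transformers LLaMA classes. A minimal sketch along the lines of the repo's own usage example, assuming transformers and torch are installed and the machine has enough memory for the 7B weights:

    ```python
    # Sketch of loading an OpenLLaMA checkpoint with Hugging Face transformers,
    # modeled on the open_llama repo's usage example. device_map="auto"
    # additionally requires the `accelerate` package.
    import torch
    from transformers import LlamaForCausalLM, LlamaTokenizer

    model_path = "openlm-research/open_llama_7b"
    tokenizer = LlamaTokenizer.from_pretrained(model_path)
    model = LlamaForCausalLM.from_pretrained(
        model_path, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = "Q: What is the largest animal?\nA:"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0]))
    ```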
  • Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
    14 projects | /r/ChatGPT | 30 Jun 2023
    I can't use Llama or any model from the Llama family, due to license restrictions. Although now there's also the OpenLlama family of models, which have the same architecture but were trained on an open dataset (RedPajama, the same dataset the base model in my app was trained on). I'd love to pursue the direction of extended context lengths for on-device LLMs. Likely in a month or so, when I've implemented all the product features that I currently have on my backlog.
  • XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
    3 projects | news.ycombinator.com | 28 Jun 2023
    https://github.com/openlm-research/open_llama#update-0615202...).

    XGen-7B is probably the superior 7B model: it's trained on more tokens and with a longer default sequence length (although both can presumably adopt SuperHOT-style Position Interpolation to extend context - a sketch of the idea follows this post), but larger models still probably perform better on an absolute basis.
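    For context, SuperHOT-style Position Interpolation extends a rotary-embedding model's context window by scaling positions down so they stay within the range seen during training. A minimal NumPy sketch of the idea; the head_dim and lengths are illustrative values, not taken from either model.

    ```python
    # Sketch of Position Interpolation for rotary embeddings (RoPE): to run a
    # model trained at 2,048 tokens out to 8,192, positions are compressed 4x
    # so they remain inside the trained range. Illustrative values throughout.
    import numpy as np

    def rope_angles(positions, head_dim, base=10000.0, scale=1.0):
        """Rotary embedding angles; scale < 1 interpolates positions."""
        inv_freq = 1.0 / base ** (np.arange(0, head_dim, 2) / head_dim)
        return np.outer(positions * scale, inv_freq)  # (seq_len, head_dim // 2)

    trained_len, target_len = 2048, 8192
    angles = rope_angles(np.arange(target_len), head_dim=128,
                         scale=trained_len / target_len)
    print(angles.shape)  # (8192, 64)
    ```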

  • MosaicML Agrees to Join Databricks to Power Generative AI for All
    3 projects | /r/LocalLLaMA | 26 Jun 2023
    Compare it to OpenLLaMA. Its GitHub doesn't have a single script showing how to do anything.
  • Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
    4 projects | news.ycombinator.com | 26 Jun 2023
    OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:

    https://github.com/openlm-research/open_llama

  • Containerized AI before Apocalypse 🐳🤖
    4 projects | dev.to | 25 Jun 2023
    The deployed LLM binary, orca mini, has 3 billion parameters. Orca mini is based on the OpenLLaMA project.
  • AI — weekly megathread!
    2 projects | /r/artificial | 23 Jun 2023
    OpenLM Research released its 1T token version of OpenLLaMA 13B - the permissively licensed open source reproduction of Meta AI's LLaMA large language model. [Details].

What are some alternatives?

When comparing Awesome-LLM and open_llama you can also consider the following projects:

langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

FreedomGPT - This codebase is for a React and Electron-based app that executes the FreedomGPT LLM locally (offline and private) on Mac and Windows using a chat-based interface

llama.cpp - LLM inference in C/C++

LLMZoo - ⚡LLM Zoo is a project that provides data, models, and evaluation benchmark for large language models.⚡

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

gpt4all - gpt4all: run open-source LLMs anywhere

dalai - The simplest way to run LLaMA on your local machine

gorilla - Gorilla: An API store for LLMs

langchain - 🦜🔗 Build context-aware reasoning applications

ggml - Tensor library for machine learning