open_llama VS sharegpt

Compare open_llama vs sharegpt and see what their differences are.

open_llama

OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset (by openlm-research)

sharegpt

Easily share permanent links to ChatGPT conversations with your friends (by domeccleston)
                open_llama              sharegpt
Mentions        52                      37
Stars           7,193                   1,674
Growth          1.3%                    -
Activity        5.3                     6.9
Latest commit   10 months ago           5 months ago
Language        -                       TypeScript
License         Apache License 2.0      MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

open_llama

Posts with mentions or reviews of open_llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    The RedPajama dataset was adapted by the OpenLLaMA project at UC Berkeley, creating an open-source LLaMA equivalent without Meta’s restrictions. The model's later version also included data from Falcon and StarCoder. This highlights the importance of open-source models and datasets, enabling free repurposing and innovation.
  • GPT-4 API general availability
    15 projects | news.ycombinator.com | 6 Jul 2023
    OpenLLaMA is though. https://github.com/openlm-research/open_llama

    All of these are surmountable problems.

    We can beat OpenAI.

    We can drain their moat.

  • Recommend me a computer for local a.i for 500 $
    2 projects | /r/ArtificialInteligence | 1 Jul 2023
    #1: 🌞 Open-source Reproduction of Meta AI’s LLaMA OpenLLaMA-13B released. (trained for 1T tokens) | 0 comments #2: 🎉 #1 on HuggingFace.co's Leaderboard Model Falcon 40B is now Free (Apache 2.0 License) | 0 comments #3: 😍 Have you seen this repo? "running LLMs on consumer-grade hardware. compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!" | 0 comments
  • Who is openllama from?
    1 project | /r/LocalLLaMA | 30 Jun 2023
    Trained OpenLLaMA models are from the OpenLM Research team in collaboration with Stability AI: https://github.com/openlm-research/open_llama
  • Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
    14 projects | /r/ChatGPT | 30 Jun 2023
    I can't use Llama or any model from the Llama family due to license restrictions. Although now there's also the OpenLlama family of models, which have the same architecture but were trained on an open dataset (RedPajama, the same dataset the base model in my app was trained on). I'd love to pursue the direction of extended context lengths for on-device LLMs. Likely in a month or so, when I've implemented all the product features that I currently have on my backlog.
  • XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
    3 projects | news.ycombinator.com | 28 Jun 2023
    https://github.com/openlm-research/open_llama#update-0615202...).

    XGen-7B is probably the superior 7B model: it's trained on more tokens and with a longer default sequence length (although both presumably can adopt SuperHOT (Position Interpolation) to extend context), but larger models still probably perform better on an absolute basis.

  • MosaicML Agrees to Join Databricks to Power Generative AI for All
    3 projects | /r/LocalLLaMA | 26 Jun 2023
    Compare it to openllama. Its GitHub doesn't have a single script on how to do anything.
  • Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
    4 projects | news.ycombinator.com | 26 Jun 2023
    OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:

    https://github.com/openlm-research/open_llama

  • Containerized AI before Apocalypse 🐳🤖
    4 projects | dev.to | 25 Jun 2023
    The deployed LLM binary, orca mini, has 3 billion parameters. Orca mini is based on the OpenLLaMA project.
  • AI — weekly megathread!
    2 projects | /r/artificial | 23 Jun 2023
    OpenLM Research released its 1T token version of OpenLLaMA 13B - the permissively licensed open source reproduction of Meta AI's LLaMA large language model. [Details].
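
Several of the posts above point to the openlm-research/open_llama repository and its released checkpoints. As a minimal sketch (not code from the repository or this page), the models use the standard LLaMA architecture, so they can typically be loaded with the Hugging Face transformers LLaMA classes; the model id openlm-research/open_llama_7b below is the commonly referenced 7B checkpoint and should be treated as an assumption:

    # Minimal sketch: loading an OpenLLaMA checkpoint with Hugging Face transformers.
    # Assumes the openlm-research/open_llama_7b repo on the Hub, a transformers
    # release with LLaMA support, and accelerate for device_map="auto".
    import torch
    from transformers import LlamaTokenizer, LlamaForCausalLM

    model_id = "openlm-research/open_llama_7b"  # assumed checkpoint name; 3B/13B variants exist

    tokenizer = LlamaTokenizer.from_pretrained(model_id)
    model = LlamaForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = "Q: What is the largest animal?\nA:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Greedy generation of a short continuation.
    output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))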

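The XGen-7B thread above also mentions SuperHOT-style Position Interpolation for extending context. As a rough illustration of the idea only (names and values below are assumptions, not code from either project), rotary position indices are divided by a scale factor so that a longer sequence maps back into the position range the model was trained on:

    # Rough sketch of the Position Interpolation idea behind SuperHOT-style context
    # extension: positions beyond the trained context are compressed back into the
    # original range by a constant factor before computing rotary embedding angles.
    import numpy as np

    def rope_angles(positions, dim=128, base=10000.0, scale=1.0):
        """Rotary-embedding angles for the given positions, optionally interpolated."""
        inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
        scaled_pos = np.asarray(positions, dtype=np.float64) / scale  # interpolation step
        return np.outer(scaled_pos, inv_freq)  # shape: (len(positions), dim // 2)

    trained_ctx, target_ctx = 2048, 8192
    angles = rope_angles(range(target_ctx), scale=target_ctx / trained_ctx)
    # Position 8191 now produces the same angles position 2047.75 would have originally.
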
sharegpt

Posts with mentions or reviews of sharegpt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    Vicuna is another instruction-focused LLM rooted in LLaMA, developed by researchers from UC Berkeley, Carnegie Mellon University, Stanford, and UC San Diego. They adapted Alpaca’s training code and incorporated 70,000 examples from ShareGPT, a platform for sharing ChatGPT interactions.
  • create the best coder open-source in the world?
    2 projects | /r/LocalLLaMA | 21 Jun 2023
    We can say that a 13B model per language is reasonable. Then it means we need a democratic way of teaching coding through examples, solutions, and algorithms that we create, curate, and use as open source. Much like sharegpt.com but for coding tasks, solutions, and ways of thinking. We should be wary of 'enforcing' principles rather than showing different approaches, as all approaches can have advantages and disadvantages.
  • Thank you ChatGPT
    1 project | /r/ChatGPT | 26 May 2023
    You can see the URL in the comment, https://sharegpt.com, and if you go there it gives you the option to install the Chrome extension; after that it shouldn't be hard to use.
  • The conversation started as what an AI would do if it became self-aware and humans tried to shut it down. Then we got into interdimensional beings. Most profound GPT conversation I have had.
    1 project | /r/ChatGPT | 14 May 2023
  • Overview of all useful links for ChatGPT prompt engineering
    20 projects | /r/ChatGPTPro_DE | 8 May 2023
    ShareGPT - Share your prompts and your entire conversations
  • (Reverse psychology FTW) Congratulations, you've played yourself.
    1 project | /r/ChatGPT | 29 Apr 2023
    Or used https://sharegpt.com
  • "Prompt engineering" is easy as shit and anybody who tells you otherwise is a fucking clown.
    6 projects | /r/ChatGPT | 23 Apr 2023
    you can get lots of ideas here > https://sharegpt.com/ (180,000+ prompts)
  • I built a ChatGPT Mac app in just 20 minutes with no coding experience - thanks ChatGPT!
    1 project | /r/OpenAI | 21 Apr 2023
    I would love to read the whole conversation: Check out this cool little GPT sharing extension: https://sharegpt.com - that way the code snippets can be copied easily
  • Teaching ChatGPT to Speak My Son’s Invented Language
    3 projects | news.ycombinator.com | 10 Apr 2023
    > Cool, that’s really the only point I’m making.

    To be clear, I'm saying that I don't know if they are, not that we know that it's not the same.

    It's not at all clear that humans do much more than "that basic token sequence prediction" for our reasoning itself. There are glaringly obvious auxiliary differences, such as memory, but we just don't know how human reasoning works, so writing off a predictive mechanism like this is just as unjustified as assuming it's the same. It's highly likely there are differences, but whether they are significant remains to be seen.

    > Not necessarily scaling limitations fundamental to the architecture as such, but limitations in our ability to develop sufficiently well developed training texts and strategies across so many problem domains.

    I think there are several big issues with that thinking. One is that this constraint is an issue now in large part because GPT doesn't have "memory" or an ability to continue learning. Those two need to be overcome to let it truly scale, but once they are, the game fundamentally changes.

    The second is that we're already at a stage where using LLMs to generate and validate training data works well for a whole lot of domains, and that will accelerate, especially when coupled with "plugins" and the ability to capture interactions with real-life users [1]

    E.g. a large part of the human ability to do maths with any kind of efficiency comes down to rote repetition, and generating large sets of simple quizzes for such areas is near trivial if you combine an LLM with tools for it to validate its answers. And unlike with humans, where we have to do this effort for billions of humans, once you have an ability to let these models continue learning you make this investment in training once (or once per major LLM effort).

    A third is that GPT hasn't even scratched the surface in what is available in digital collections alone. E.g. GPT3 was trained on "only" about 200 million Norwegian words (I don't have data for GPT4). Norwegian is a tiny language - this was 0.1% of GPT3's total corpus. But the Norwegian National Library has 8.5m items, which includes something like 10-20 billion words in books alone, and many tens of billions more in newspapers, magazines and other data. That's one tiny language. We're many generations of LLMs away from even approaching exhausting the already available digital collections alone, and that's before we look at having the models trained on that data generate and judge training data.

    [1] https://sharegpt.com/

  • Humans in Humans Out: GPT Converging Toward Common Sense in Both Success/Failure
    3 projects | news.ycombinator.com | 8 Apr 2023
    of that conversation. Perhaps something like shareGPT[1] can help?

    [1] https://sharegpt.com
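
The first sharegpt post above notes that Vicuna was fine-tuned on roughly 70,000 conversations collected through ShareGPT. A minimal sketch of how such exported conversations are commonly represented and flattened for training follows; the field names are the widely used convention, treated here as an assumption rather than an official schema of the sharegpt project:

    # Sketch of ShareGPT-style conversation records commonly used for instruction
    # fine-tuning (e.g. by Vicuna). Field names follow the widely used convention;
    # treat them as an assumption, not an official schema.
    import json

    records = [
        {
            "id": "example-1",
            "conversations": [
                {"from": "human", "value": "Explain what a linked list is."},
                {"from": "gpt", "value": "A linked list is a sequence of nodes..."},
            ],
        }
    ]

    def to_training_text(record):
        """Flatten one conversation into a single training string with role tags."""
        role_tag = {"human": "USER", "gpt": "ASSISTANT"}
        turns = [f"{role_tag[t['from']]}: {t['value']}" for t in record["conversations"]]
        return "\n".join(turns)

    # Write one JSONL line per conversation, ready for a fine-tuning pipeline.
    with open("sharegpt_sample.jsonl", "w") as f:
        for rec in records:
            f.write(json.dumps({"text": to_training_text(rec)}) + "\n")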

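The "Teaching ChatGPT to Speak My Son's Invented Language" comment above argues that quiz-style training data can be generated by an LLM and validated with tools rather than by humans. A toy sketch of that generate-and-validate loop, with a hypothetical generate_answer stand-in for the model call and exact arithmetic as the validating tool:

    # Toy sketch of the "generate training data with an LLM, validate with a tool"
    # idea from the discussion above. generate_answer is a hypothetical stand-in
    # for a real model call; the validator is exact arithmetic, so only correct
    # question/answer pairs are kept for the training set.
    import random

    def generate_answer(question: str) -> str:
        """Placeholder for an LLM call; here it just guesses, sometimes wrongly."""
        a, _, b = question.split()[:3]
        guess = int(a) + int(b) + random.choice([0, 0, 0, 1])  # occasional mistake
        return str(guess)

    def make_validated_dataset(n: int):
        dataset = []
        for _ in range(n):
            a, b = random.randint(0, 99), random.randint(0, 99)
            question = f"{a} + {b} ="
            answer = generate_answer(question)
            if int(answer) == a + b:  # tool-based validation step
                dataset.append({"question": question, "answer": answer})
        return dataset

    examples = make_validated_dataset(1000)
    print(f"kept {len(examples)} validated examples")
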
What are some alternatives?

When comparing open_llama and sharegpt you can also consider the following projects:

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

ChatGPT - Lightweight package for interacting with ChatGPT's API by OpenAI. Uses reverse engineered official API.

llama.cpp - LLM inference in C/C++

llm-workflow-engine - Power CLI and Workflow manager for LLMs (core package)

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

unofficial-chatgpt-api - This repo is an unofficial ChatGPT API. It is based on Daniel Gross's WhatsApp GPT

gpt4all - gpt4all: run open-source LLMs anywhere

openai-python - The official Python library for the OpenAI API

gorilla - Gorilla: An API store for LLMs

chatgpt-conversation - Have a conversation with ChatGPT using your voice, and have it talk back.

ggml - Tensor library for machine learning

langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]