ChatGPT vs sharegpt

Compare ChatGPT and sharegpt to see how they differ.

sharegpt

Easily share permanent links to ChatGPT conversations with your friends (by domeccleston)
              ChatGPT                                   sharegpt
Mentions      50                                        37
Stars         46,892                                    1,680
Growth        -                                         -
Activity      6.4                                       6.9
Last commit   20 days ago                               5 months ago
Language      Rust                                      TypeScript
License       GNU Affero General Public License v3.0    MIT License
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
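
The exact weighting scheme behind the activity number isn't spelled out here. As a rough illustration only, a recency-weighted activity score could look like the following Python sketch; the exponential decay and the 90-day half-life are assumptions for illustration, not the site's actual formula:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=90.0):
    """Recency-weighted commit count: each commit contributes
    2 ** (-age_days / half_life_days), so recent commits count
    for more than older ones. (Illustrative assumption, not the
    site's published formula.)"""
    now = datetime.now(timezone.utc)
    score = 0.0
    for dt in commit_dates:
        age_days = (now - dt).total_seconds() / 86400.0
        score += 2.0 ** (-age_days / half_life_days)
    return score

# Hypothetical commit history: the most recent commit dominates the score.
commits = [
    datetime(2024, 3, 1, tzinfo=timezone.utc),
    datetime(2023, 12, 1, tzinfo=timezone.utc),
    datetime(2023, 6, 1, tzinfo=timezone.utc),
]
print(round(activity_score(commits), 2))
```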

ChatGPT

Posts with mentions or reviews of ChatGPT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-01.

sharegpt

Posts with mentions or reviews of sharegpt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    Vicuna is another instruction-focused LLM rooted in LLaMA, developed by researchers from UC Berkeley, Carnegie Mellon University, Stanford, and UC San Diego. They adapted Alpaca's training code and incorporated 70,000 examples from ShareGPT, a platform for sharing ChatGPT interactions. (A minimal reader for the ShareGPT conversation format is sketched after this list.)
  • create the best coder open-source in the world?
    2 projects | /r/LocalLLaMA | 21 Jun 2023
    We can say that a 13B model per language is reasonable. Then it means we need to create a democratic way of teaching coding through examples, solutions, and algorithms that we create, curate, and use open-source. Much like sharegpt.com, but for coding tasks, solutions, and ways of thinking. We should be wary of 'enforcing' principles rather than showing different approaches, as all approaches can have advantages and disadvantages.
  • Thank you ChatGPT
    1 project | /r/ChatGPT | 26 May 2023
    You can see the URL in the comment, https://sharegpt.com. If you go there it gives you the option to install the Chrome extension; after that it shouldn't be hard to use.
  • The conversation started as what would AI do if it became self-aware and humans tried to shut it down. Then we got into interdimensional beings. Most profound GPT conversation I have had.
    1 project | /r/ChatGPT | 14 May 2023
  • Overview of all useful links for ChatGPT prompt engineering
    20 projects | /r/ChatGPTPro_DE | 8 May 2023
    ShareGPT - Share your prompts and your entire conversations
  • (Reverse psychology FTW) Congratulations, you've played yourself.
    1 project | /r/ChatGPT | 29 Apr 2023
    Or used https://sharegpt.com
  • "Prompt engineering" is easy as shit and anybody who tells you otherwise is a fucking clown.
    6 projects | /r/ChatGPT | 23 Apr 2023
    you can get lots of ideas here > https://sharegpt.com/ (180,000+ prompts)
  • I built a ChatGPT Mac app in just 20 minutes with no coding experience - thanks ChatGPT!
    1 project | /r/OpenAI | 21 Apr 2023
    I would love to read the whole conversation: Check out this cool little GPT sharing extension: https://sharegpt.com - that way the code snippets can be copied easily
  • Teaching ChatGPT to Speak My Son’s Invented Language
    3 projects | news.ycombinator.com | 10 Apr 2023
    > Cool, that’s really the only point I’m making.

    To be clear, I'm saying that I don't know if they are, not that we know that it's not the same.

    It's not at all clear that humans do much more than "that basic token sequence prediction" for our reasoning itself. There are glaringly obvious auxiliary differences, such as memory, but we just don't know how human reasoning works, so writing off a predictive mechanism like this is just as unjustified as assuming it's the same. It's highly likely there are differences, but whether they are significant remains to be seen.

    > Not necessarily scaling limitations fundamental to the architecture as such, but limitations in our ability to develop sufficiently well developed training texts and strategies across so many problem domains.

    I think there are several big issues with that thinking. One is that this constraint is an issue now in large part because GPT doesn't have "memory" or an ability to continue learning. Those two need to be overcome to let it truly scale, but once they are, the game fundamentally changes.

    The second is that we're already at a stage where using LLMs to generate and validate training data works well for a whole lot of domains, and that will accelerate, especially when coupled with "plugins" and the ability to capture interactions with real-life users [1].

    E.g. a large part of the human ability to do maths with any kind of efficiency comes down to rote repetition, and generating large sets of simple quizzes for such areas is near trivial if you combine an LLM with tools that let it validate its answers (see the toy generate-and-validate sketch after this list). And unlike with humans, where we have to repeat this effort for billions of individuals, once you have the ability to let these models continue learning you make this investment in training once (or once per major LLM effort).

    A third is that GPT hasn't even scratched the surface of what is available in digital collections alone. E.g. GPT-3 was trained on "only" about 200 million Norwegian words (I don't have data for GPT-4). Norwegian is a tiny language - this was 0.1% of GPT-3's total corpus. But the Norwegian National Library has 8.5m items, which include something like 10-20 billion words in books alone, and many tens of billions more in newspapers, magazines, and other data. That's one tiny language. We're many generations of LLMs away from even approaching exhausting the already available digital collections alone, and that's before we look at having the models trained on that data generate and judge training data.

    [1] https://sharegpt.com/

  • Humans in Humans Out: GPT Converging Toward Common Sense in Both Success/Failure
    3 projects | news.ycombinator.com | 8 Apr 2023
    of that conversation. Perhaps something like shareGPT[1] can help?

    [1] https://sharegpt.com
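
The Vicuna mention earlier in this list refers to fine-tuning on conversations exported from ShareGPT. As a minimal sketch, here is how such a dump might be read into (prompt, response) pairs; the "conversations"/"from"/"value" field names follow the JSON layout commonly seen in Vicuna-era ShareGPT dumps and are an assumption about any particular export, not a documented schema:

```python
import json

def sharegpt_to_pairs(path):
    """Read a ShareGPT-style JSON dump into (human, gpt) message pairs.

    Assumes each record looks like (an assumption, not a documented schema):
    {"conversations": [{"from": "human", "value": ...},
                       {"from": "gpt", "value": ...}, ...]}
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    pairs = []
    for record in records:
        turns = record.get("conversations", [])
        # Pair each human message with the assistant reply that
        # immediately follows it; skip any other turn ordering.
        for prev, cur in zip(turns, turns[1:]):
            if prev.get("from") == "human" and cur.get("from") == "gpt":
                pairs.append((prev["value"], cur["value"]))
    return pairs

# Hypothetical usage (no such file accompanies this page):
# pairs = sharegpt_to_pairs("sharegpt_dump.json")
```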

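The HN comment above argues that quiz-style training data is cheap to produce when answers can be machine-checked. Here is a toy illustration of that generate-and-validate loop; the LLM call itself is deliberately omitted, and grade() simply scores whatever free-text answer a model would return:

```python
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_quiz(n_items=5, seed=0):
    """Generate simple arithmetic quiz items with known answers."""
    rng = random.Random(seed)
    items = []
    for _ in range(n_items):
        a, b = rng.randint(2, 99), rng.randint(2, 99)
        op = rng.choice(list(OPS))
        items.append({"question": f"What is {a} {op} {b}?",
                      "answer": OPS[op](a, b)})
    return items

def grade(item, model_answer):
    """Check a model's free-text answer against the computed truth."""
    try:
        return int(model_answer.strip()) == item["answer"]
    except ValueError:
        return False

# Print the generated quiz; in a training pipeline, a model's answers
# would be passed to grade() and only validated pairs kept.
for item in make_quiz():
    print(item["question"], "->", item["answer"])
```
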
What are some alternatives?

When comparing ChatGPT and sharegpt you can also consider the following projects:

askai - Command Line Interface for OpenAi ChatGPT

ChatGPT - Lightweight package for interacting with ChatGPT's API by OpenAI. Uses reverse engineered official API.

BetterChatGPT - An amazing UI for OpenAI's ChatGPT (Website + Windows + MacOS + Linux)

llm-workflow-engine - Power CLI and Workflow manager for LLMs (core package)

nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.

unofficial-chatgpt-api - This repo is unofficial ChatGPT api. It is based on Daniel Gross's WhatsApp GPT

chatgpt-raycast - ChatGPT raycast extension

openai-python - The official Python library for the OpenAI API

PyChatGPT - ⚡️ Python client for the unofficial ChatGPT API with auto token regeneration, conversation tracking, proxy support and more.

chatgpt-conversation - Have a conversation with ChatGPT using your voice, and have it talk back.

chat-ai-desktop - Unofficial ChatGPT desktop app for Mac & Windows menubar using Tauri & Rust

langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]