Sharegpt Alternatives
Similar projects and alternatives to sharegpt
- InfluxDB: Purpose built for real-time analytics at any scale. InfluxDB Platform is powered by columnar analytics, optimized for cost-efficient storage, and built with open data standards.
- Learn_Prompting: Prompt Engineering, Generative AI, and LLM Guide by Learn Prompting | Join our Discord for the largest Prompt Engineering learning community
- SaaSHub: Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives.
- ChatGPT (discontinued): Lightweight package for interacting with OpenAI's ChatGPT API. Uses the reverse-engineered official API. (by acheong08)
- unofficial-chatgpt-api: An unofficial ChatGPT API, based on Daniel Gross's WhatsApp GPT.
sharegpt discussion
sharegpt reviews and mentions
- 5 GitHub profiles every developer must follow
he also created cool projects like https://oneword.domains/, https://sharegpt.com/, https://novel.sh/ and https://extrapolate.app/
- How Open is Generative AI? Part 2
Vicuna is another instruction-focused LLM rooted in LLaMA, developed by researchers from UC Berkeley, Carnegie Mellon University, Stanford, and UC San Diego. They adapted Alpaca’s training code and incorporated 70,000 examples from ShareGPT, a platform for sharing ChatGPT interactions.
- create the best coder open-source in the world?
We can say that a 13B model per language is reasonable. Then it means we need to create a democratic way of teaching coding by examples, solutions and algorithms that we create, curate and use open-source. Much like sharegpt.com, but for coding tasks, solutions and ways of thinking. We should be wary of 'enforcing' principles and instead show different approaches, as all approaches can have advantages and disadvantages.
- Thank you ChatGPT
You can see the URL in the comment, https://sharegpt.com, and if you go there it gives you the option to install the Chrome extension; after that it shouldn’t be hard to use it.
- The conversation started as: what would AI do if it became self-aware and humans tried to shut it down? Then we got into interdimensional beings. Most profound GPT conversation I have had.
- Overview of all useful links for ChatGPT Prompt Engineering
ShareGPT - Share your prompts and your entire conversations
- (Reverse psychology FTW) Congratulations, you've played yourself.
Or used https://sharegpt.com
- "Prompt engineering" is easy as shit and anybody who tells you otherwise is a fucking clown.
you can get lots of ideas here > https://sharegpt.com/ (180,000+ prompts)
- I built a ChatGPT Mac app in just 20 minutes with no coding experience - thanks ChatGPT!
I would love to read the whole conversation: Check out this cool little GPT sharing extension: https://sharegpt.com - that way the code snippets can be copied easily
- Teaching ChatGPT to Speak My Son’s Invented Language
> Cool, that’s really the only point I’m making.
To be clear, I'm saying that I don't know if they are, not that we know that it's not the same.
It's not at all clear that humans do much more than "that basic token sequence prediction" for our reasoning itself. There are glaringly obvious auxiliary differences, such as memory, but we just don't know how human reasoning works, so writing off a predictive mechanism like this is just as unjustified as assuming it's the same. It's highly likely there are differences, but whether they are significant remains to be seen.
> Not necessarily scaling limitations fundamental to the architecture as such, but limitations in our ability to develop sufficiently well developed training texts and strategies across so many problem domains.
I think there are several big issues with that thinking. One is that this constraint is an issue now in large part because GPT doesn't have "memory" or an ability to continue learning. Those two need to be overcome to let it truly scale, but once they are, the game fundamentally changes.
The second is that we're already at a stage where using LLMs to generate and validate training data works well for a whole lot of domains, and that will accelerate, especially when coupled with "plugins" and the ability to capture interactions with real-life users [1].
E.g. a large part of human ability to do maths with any kind of efficiency comes down to rote repetition, and generating large sets of simple quizzes for such areas is near trivial if you combine an LLM with tools for it to validate its answers. And unlike with humans, where we have to repeat this effort for billions of people, once you have the ability to let these models continue learning, you make this investment in training once (or once per major LLM effort).
A third is that GPT hasn't even scratched the surface of what is available in digital collections alone. E.g. GPT3 was trained on "only" about 200 million Norwegian words (I don't have data for GPT4). Norwegian is a tiny language - this was 0.1% of GPT3's total corpus. But the Norwegian National Library has 8.5m items, which includes something like 10-20 billion words in books alone, and many tens of billions more in newspapers, magazines and other data. That's one tiny language. We're many generations of LLMs away from even approaching exhausting the already available digital collections alone, and that's before we look at having the models trained on that data generate and judge training data.
[1] https://sharegpt.com/
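A minimal sketch of the generate-and-verify idea from the comment above, assuming nothing beyond the Python standard library; every name in it is hypothetical and not taken from ShareGPT or any project listed on this page. It generates simple arithmetic quizzes with known ground-truth answers and grades free-text replies mechanically, which is the kind of cheap validation loop the commenter describes for producing training data at scale.

```python
# Illustrative sketch only (hypothetical names, not from any project on this page):
# generate simple arithmetic quizzes with known answers, then grade free-text
# replies mechanically -- a cheap "generate and verify" loop for training data.
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_quiz(n: int, seed: int = 0) -> list[dict]:
    """Generate n arithmetic questions, each paired with its ground-truth answer."""
    rng = random.Random(seed)
    quiz = []
    for _ in range(n):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        op = rng.choice(list(OPS))
        quiz.append({"prompt": f"What is {a} {op} {b}?", "answer": OPS[op](a, b)})
    return quiz

def grade(item: dict, model_reply: str) -> bool:
    """Check a free-text reply against the computed ground truth."""
    try:
        return int(model_reply.strip()) == item["answer"]
    except ValueError:
        return False

if __name__ == "__main__":
    quiz = make_quiz(3)
    for item in quiz:
        print(item["prompt"], "->", item["answer"])
    # A model's reply (stubbed here) is graded with no human labelling involved.
    print(grade(quiz[0], str(quiz[0]["answer"])), grade(quiz[0], "not sure"))
```

The same pattern extends to any domain where answers can be checked programmatically, for example running unit tests against generated code.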
Stats
domeccleston/sharegpt is an open-source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of sharegpt is TypeScript.