Teaching ChatGPT to Speak My Son’s Invented Language

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • ChatGPT_DAN

    ChatGPT DAN jailbreak prompts

  • Is giving the system a 'goal' the reason why the DAN prompt with the tokens is effective?

    https://github.com/0xk1h0/ChatGPT_DAN

  • sharegpt

    Easily share permanent links to ChatGPT conversations with your friends

  • > Cool, that’s really the only point I’m making.

    To be clear, I'm saying that I don't know whether they are the same, not that we know they're different.

    It's not at all clear that humans do much more than "that basic token sequence prediction" for our reasoning itself. There are glaringly obvious auxiliary differences, such as memory, but we just don't know how human reasoning works, so writing off a predictive mechanism like this is just as unjustified as assuming it's the same. It's highly likely there are differences, but whether they are significant remains to be seen.

    > Not necessarily scaling limitations fundamental to the architecture as such, but limitations in our ability to develop sufficiently well developed training texts and strategies across so many problem domains.

    I think there are several big issues with that thinking. One is that this constraint is an issue now in large part because GPT doesn't have "memory" or an ability to continue learning. Those two need to be overcome to let it truly scale, but once they are, the game fundamentally changes.

    The second is that we're already at a stage where using LLMs to generate and validate training data works well for a whole lot of domains, and that will accelerate, especially when coupled with "plugins" and the ability to capture interactions with real-life users [1].

    For example, a large part of the human ability to do maths with any kind of efficiency comes down to rote repetition, and generating large sets of simple quizzes for such areas is near trivial if you combine an LLM with tools that let it validate its answers (a rough sketch of this idea appears after the project list below). And unlike with humans, where we have to repeat this effort for billions of people, once these models can continue learning you make the investment in training once (or once per major LLM effort).

    A third is that GPT hasn't even scratched the surface of what is available in digital collections alone. For example, GPT-3 was trained on "only" about 200 million Norwegian words (I don't have data for GPT-4). Norwegian is a tiny language - this was 0.1% of GPT-3's total corpus. But the Norwegian National Library holds 8.5m items, which include something like 10-20 billion words in books alone, and many tens of billions more in newspapers, magazines and other material. That's one tiny language. We're many generations of LLMs away from even approaching exhausting the already available digital collections, and that's before we look at having the models trained on that data generate and judge new training data.

    [1] https://sharegpt.com/

  • clownfish

    Constrained Decoding for LLMs against JSON Schema

  • It doesn't help with repetition, but when it comes to forcing structure on the output data, this approach looks interesting:

    https://github.com/newhouseb/clownfish

    TL;DR: it exploits the fact that the model returns probabilities for all the possible following tokens to enforce a JSON schema on the output as it is produced, backtracking as needed (a rough sketch of the general idea follows below).
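
The clownfish repository documents its own mechanics; purely as a hypothetical illustration of the general idea, and not of clownfish's actual API, the Python sketch below masks a faked model's next-token probabilities so that only tokens which keep the output a viable JSON prefix remain candidates, and drops back a character when it hits a dead end. The toy vocabulary, the fake_next_token_probs stand-in, and the is_valid_json_prefix check are all invented for this sketch.

```python
import json
import random

# Toy vocabulary; a real tokenizer has tens of thousands of sub-word tokens.
VOCAB = ['{', '}', '"name"', '"age"', ':', ',', '"Alice"', '42', ' ']


def is_valid_json_prefix(text: str) -> bool:
    """Return True if `text` is valid JSON or could still grow into valid JSON."""
    if not text.strip():
        return True
    try:
        json.loads(text)
        return True  # already complete, valid JSON
    except json.JSONDecodeError as err:
        # An error reported at the very end usually means "incomplete", not "wrong".
        return err.pos >= len(text.rstrip())


def fake_next_token_probs(prefix: str) -> dict:
    """Stand-in for a real LM call that would return P(token | prefix)."""
    weights = [random.random() for _ in VOCAB]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(VOCAB, weights)}


def constrained_decode(max_tokens: int = 15) -> str:
    """Greedy decoding where tokens that break the JSON constraint are masked out."""
    prefix = "{"  # stand-in for "the schema requires an object"
    for _ in range(max_tokens):
        probs = fake_next_token_probs(prefix)
        # Mask: keep only tokens whose addition leaves a viable JSON prefix.
        allowed = {t: p for t, p in probs.items() if is_valid_json_prefix(prefix + t)}
        if not allowed:
            # Dead end: drop the last character and try another branch.
            # (clownfish backtracks more carefully, guided by an actual JSON schema.)
            prefix = prefix[:-1]
            continue
        prefix += max(allowed, key=allowed.get)
        try:
            json.loads(prefix)
            return prefix  # a complete, valid JSON document was produced
        except json.JSONDecodeError:
            pass  # not finished yet; keep decoding
    return prefix


if __name__ == "__main__":
    print(constrained_decode())
```

A real implementation would drive the mask from a state machine derived from the target JSON schema rather than a try-parse heuristic, which is where the backtracking described in the clownfish README comes in.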
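
To make the earlier comment about arithmetic practice data concrete: a minimal sketch, assuming some LLM completion API sits behind the ask_llm placeholder, is to generate simple quiz questions programmatically, have the model answer them, and keep only the pairs whose answers plain code can verify. Everything here (make_quiz, collect_verified_pairs, fake_model) is hypothetical scaffolding, not an API from the thread.

```python
import random
from typing import Callable, Dict, List, Tuple


def make_quiz(n: int = 20) -> List[Tuple[str, int]]:
    """Generate simple arithmetic questions together with ground-truth answers."""
    quiz = []
    for _ in range(n):
        a, b = random.randint(2, 99), random.randint(2, 99)
        op = random.choice(["+", "-", "*"])
        question = f"What is {a} {op} {b}?"
        truth = {"+": a + b, "-": a - b, "*": a * b}[op]  # verifiable by plain code
        quiz.append((question, truth))
    return quiz


def collect_verified_pairs(ask_llm: Callable[[str], str]) -> List[Dict[str, str]]:
    """Ask a model each question and keep only the answers that check out.

    `ask_llm` is a placeholder for whatever completion API is in use; the point
    is that the validation step needs no human in the loop.
    """
    verified = []
    for question, truth in make_quiz():
        reply = ask_llm(question)
        # Naive extraction of a (possibly negative) integer from the reply.
        digits = "".join(ch for ch in reply if ch.isdigit() or ch == "-")
        try:
            if int(digits) == truth:
                verified.append({"prompt": question, "completion": reply})
        except ValueError:
            pass  # unparseable reply; discard it
    return verified


if __name__ == "__main__":
    # Stand-in "model" that is secretly a calculator, just to show the flow.
    def fake_model(q: str) -> str:
        a, op, b = q.removeprefix("What is ").rstrip("?").split()
        return str({"+": int(a) + int(b), "-": int(a) - int(b), "*": int(a) * int(b)}[op])

    pairs = collect_verified_pairs(fake_model)
    print(f"{len(pairs)} verified question/answer pairs")
```

The same loop applies to any domain where answers can be checked mechanically (unit tests for code, solvers for algebra), which is the commenter's point about letting models generate and validate their own training data.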

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • Tell HN: ChatGPT cut off date now Jan 2022

    1 project | news.ycombinator.com | 19 Sep 2023
  • Be honest, whats the dumbest thing you used Chat GPT for?

    1 project | /r/ChatGPT | 10 Jul 2023
  • I asked ChatGPT, "Is it really true that the astronauts left bags of shit on the moon?" with a developer mode prompt

    1 project | /r/ChatGPT | 30 May 2023
  • Gotta love the new Guanaco model (13b here).

    1 project | /r/Oobabooga | 29 May 2023
  • Looking for help, assistance, roadmap or tutorial to learn blockchain development

    4 projects | /r/ethdev | 14 May 2023