Constrained-Text-Generation-Studio
clownfish
| | Constrained-Text-Generation-Studio | clownfish |
|---|---|---|
| Mentions | 11 | 11 |
| Stars | - | 303 |
| Growth | - | - |
| Activity | - | 4.3 |
| Latest Commit | - | 12 months ago |
| Language | Python | |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Constrained-Text-Generation-Studio
-
Photoshop for Text (2022)
Oh my god. I wrote a whole library called "Constrained Text Generation Studio" where I mused that I wanted a "Photoshop for Text". I'm not even sure which work predates the other: https://github.com/Hellisotherpeople/Constrained-Text-Genera...
The core idea of a "Photoshop for text" - specifically, a word processor made for prosumers that supports GenAI as a first-class feature (i.e. oobabooga, but actually good) - is worth so much. If you're a VC reading this, chances are I want to talk to you about actually executing on the idea from the OP.
-
Ask HN: What have you built with LLMs?
I was working on this stuff before it was cool, so in the sense of precursors to LLMs (and sometimes still supporting LLMs), I've built many things:
1. Games you can play with word2vec or related models (which could be drop-in replaced with a sentence transformer). It's crazy that this is 5 years old now: https://github.com/Hellisotherpeople/Language-games
2. "Constrained Text Generation Studio" - A research project I wrote when I was trying to solve LLMs' inability to follow syntactic, phonetic, or semantic constraints: https://github.com/Hellisotherpeople/Constrained-Text-Genera...
3. DebateKG - A bunch of "Semantic Knowledge Graphs" built on my pet debate evidence dataset (LLM-backed embedding indexes synchronized with a graph DB and a SQL DB via txtai). It can create compelling policy debate cases: https://github.com/Hellisotherpeople/DebateKG
4. My failed attempt at a good extractive summarizer. My life's work is dedicated to one day solving the problems I tried to fix with this project: https://github.com/Hellisotherpeople/CX_DB8
-
You need a mental model of LLMs to build or use a LLM-based product
My mental model for LLMs was built by carefully studying the distribution over the output vocabulary at every time step.
There are tools that allow you to right-click and see all possible continuations for an LLM, like you would in a code IDE[1]. Seeing what this vocabulary is[2] and how trivial modifications to the prompt can impact the probabilities will do a lot to improve your mental model of how LLMs operate.
Shameless self-plug, but software that can do what I am describing is linked below, and it's worth noting that it ended up as peer-reviewed research.
[1] https://github.com/Hellisotherpeople/Constrained-Text-Genera...
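As a minimal sketch of what inspecting that per-step vocabulary looks like (this is illustrative code, not the linked tool's implementation; "gpt2" and the prompt are just stand-ins), you can dump the next-token distribution with Hugging Face transformers:
```python
# Illustrative sketch: print the model's top-10 candidate next tokens and their
# probabilities for a given prompt. Model choice ("gpt2") is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]      # scores over the whole vocabulary
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, k=10)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]):>12}  {p.item():.3f}")
```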
-
Ask HN: How training of LLM dedicated to code is different from LLM of “text”
Yeah, the LLM outputs a distribution of likely next tokens. It is up to the decoder to select one, and it can use a grammar to enforce certain rules on the output. https://github.com/Hellisotherpeople/Constrained-Text-Genera... or https://github.com/ggerganov/llama.cpp/blob/master/grammars/... for example.
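As a toy illustration of that decoder-side hook (my own hedged sketch, not code from either linked project), a LogitsProcessor can mask everything except tokens allowed by a simple rule, so the sampled text obeys the "grammar" regardless of what the model prefers:
```python
# Toy sketch: only digit/comma/space tokens survive the mask, so the decoder
# enforces the rule no matter what distribution the model outputs.
# Model choice ("gpt2") and the prompt are assumptions for illustration.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

class OnlyDigitsAndCommas(LogitsProcessor):
    def __init__(self, tokenizer):
        ok = lambda s: s != "" and all(c.isdigit() or c in ", " for c in s)
        self.allowed = torch.tensor(
            [i for i in range(len(tokenizer)) if ok(tokenizer.decode([i]))]
        )

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed] = scores[:, self.allowed]
        return mask

inputs = tokenizer("Fibonacci numbers:", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=15,
    logits_processor=LogitsProcessorList([OnlyDigitsAndCommas(tokenizer)]),
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0]))
```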
- Show HN: LLMs can generate valid JSON 100% of the time
-
Llama: Add Grammar-Based Sampling
I am in love with this. I tried my hand at building a Constrained Text Generation Studio (https://github.com/Hellisotherpeople/Constrained-Text-Genera...) and got published at COLING 2022 for my paper on it (https://paperswithcode.com/paper/most-language-models-can-be...), but I always knew that something like this, or the related idea enumerated in this paper (https://arxiv.org/abs/2306.03081), was the way to go.
-
Understanding GPT Tokenizers
I agree with you, and I'm SHOCKED at how little work there actually is in phonetics within the NLP community. Consider that most of the phonetic tools that I am using to enforce rhyming or similar syntactic constraints in Constrained Text Generation Studio (https://github.com/Hellisotherpeople/Constrained-Text-Genera...) were built circa 2014, such as the CMU rhyming dictionary. In most cases, I could not find better modern implementations of these tools.
I did learn an awful lot about phonetic representations and matching algorithms. Things like "soundex" and "double metaphone" now make sense to me and are fascinating to read about.
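For readers curious what such a phonetic filter looks like in practice, here is a small hedged sketch (mine, not the repo's code; it assumes the `pronouncing` package, which wraps the CMU pronouncing dictionary, plus a GPT-2 tokenizer):
```python
# Hedged sketch: keep only vocabulary tokens that rhyme with a target word,
# using the CMU pronouncing dictionary via the `pronouncing` package.
import pronouncing
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
rhymes = set(pronouncing.rhymes("moon"))  # e.g. "june", "soon", "balloon", ...

allowed_ids = [
    i for i in range(len(tokenizer))
    if tokenizer.decode([i]).strip().lower() in rhymes
]
print(f"{len(allowed_ids)} GPT-2 tokens rhyme with 'moon'")
```
A whitelist like `allowed_ids` is the kind of constraint a filter-assisted decoder can then apply at every generation step.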
-
Don Knuth Plays with ChatGPT
https://github.com/hellisotherpeople/constrained-text-genera...
Just ban the damn tokens and try again. I wish that folks had more intuition around tokenization, and why LLMs struggle to follow syntactic, lexical, or phonetic constraints.
-
GPT-3 Creative Fiction
My work on constrained text generation / filter-assisted decoding for LLMs is cited in this article! One of my proudest moments was being noticed by my senpai Gwern!
https://paperswithcode.com/paper/most-language-models-can-be...
I want to add that just because GPT-4 appears to be far better at following constraints doesn't mean it's anywhere near perfect at following them. It's better now at my easy example of "ban the letter e", but if you ask for several constraints, or mix lexical and phonetic constraints, it gets pretty awful pretty quickly. Filter-assisted decoding can make any LLM (no matter how awful it is) follow constraints perfectly.
I can't wait to get someone who's better at coding than me to implement these techniques in the major LLM frontends (oobabooga, llama.cpp, etc.), since my attempt at it was quite poopy research code: https://github.com/hellisotherpeople/constrained-text-genera...
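To make the "ban the letter e" example above concrete, here is a rough sketch of filter-assisted decoding with Hugging Face transformers (GPT-2 and the prompt are stand-ins; the studio itself does considerably more than this):
```python
# Rough sketch of filter-assisted decoding: forbid every token whose surface
# form contains the letter "e", so the sampled continuation cannot violate
# the constraint no matter how the model is prompted.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Ban every single token containing "e"/"E", but keep EOS usable.
banned = [
    [i] for i in range(len(tokenizer))
    if i != tokenizer.eos_token_id and "e" in tokenizer.decode([i]).lower()
]

inputs = tokenizer("My cat is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    bad_words_ids=banned,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```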
-
Photoshop for Text
The paper I wrote at COLING 2022, titled "Most Language Models can be Poets too", included a GUI constrained text generation studio that I market as being "like Photoshop, but for text":
https://github.com/Hellisotherpeople/Constrained-Text-Genera...
clownfish
-
Show HN: LLMs can generate valid JSON 100% of the time
I'm not sure how this is different than:
https://github.com/1rgs/jsonformer
or
https://github.com/newhouseb/clownfish
or
https://github.com/mkuchnik/relm
or
https://github.com/ggerganov/llama.cpp/pull/1773
or
https://github.com/Shopify/torch-grammar
Overall there are a ton of these logit-based guidance systems; the reason they don't get much traction is that the SOTA models are behind REST APIs that don't enable this fine-grained approach.
Those models perform so much better that people generally settle for just re-requesting until they get the correct format (and with GPT-4, an incorrect format ends up being a fairly rare occurrence in my experience).
- OpenAI Function calling and API updates
-
Adding GPT to a web app. The real experience.
I can see some specific problems there, like malformed JSON (or generated JSON not matching the intended schema). Approaches like https://github.com/1rgs/jsonformer and https://github.com/newhouseb/clownfish could be interesting there, as well as approaches to validating outputs like https://medium.com/@markherhold/validating-json-patch-requests-44ca5981a7fc (it references JSON Patch, which could be interesting as well, but the approach is somewhat agnostic to how the changes actually get applied while still allowing you to enforce structure around what changes and how).
-
When you lose the ability to write, you also lose some of your ability to think
https://github.com/newhouseb/clownfish
Structural Alignment: Modifying Transformers (like GPT) to Follow a JSON Schema
- Clownfish: Constrained Decoding for LLMs Against JSON Schema
-
Jsonformer: A bulletproof way to generate structured output from LLMs
Oh nice! I built a similar system a few weeks ago: https://github.com/newhouseb/clownfish
I think the main differentiating factor here is that this is better if you have a simpler JSON schema without enums or oneOf constraints. If you do have those constraints - let's say you wanted an array of different types that represented items on a menu, like { kind: pizza, toppings: [pepperoni] } or { kind: ice_cream, flavor: vanilla | strawberry } - then you would need something more sophisticated like clownfish, which can ask the LLM to pick specific properties.
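A hypothetical sketch of what "asking the LLM to pick specific properties" can look like at the logit level (the prompt, enum values, and model are illustrative assumptions, not clownfish's API): score each schema-allowed value by the log-probability of its tokens and keep the most likely one.
```python
# Hypothetical sketch: pick the schema-allowed enum value the model finds
# most plausible by summing token log-probabilities. "gpt2" is a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def score(prompt: str, continuation: str) -> float:
    """Sum of log-probs the model assigns to `continuation` after `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    total = 0.0
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += logprobs[pos - 1, full_ids[0, pos]].item()
    return total

prompt = '{"kind": "ice_cream", "flavor": "'
flavors = ["vanilla", "strawberry"]       # the values the schema allows
print(max(flavors, key=lambda f: score(prompt, f)))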
-
Prompt injection: what’s the worst that can happen?
And on the other end, there's https://github.com/newhouseb/clownfish to force the model to produce structured output.
-
Teaching ChatGPT to Speak My Son’s Invented Language
It doesn't help with repetition, but when it comes to forcing structure on the output data, this approach looks interesting:
https://github.com/newhouseb/clownfish
TL;DR: it exploits the fact that the model returns probabilities for all the possible following tokens to enforce a JSON schema on the output as it is produced, backtracking as needed.
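A deliberately simplified sketch of that idea (a pure whitelist with no backtracking, so not clownfish's actual algorithm; the prompt, the two "valid" strings, and GPT-2 are assumptions): pre-tokenize each schema-valid completion and, at every step, only allow tokens that keep the output a prefix of one of them.
```python
# Simplified sketch: constrain generation so the decoded text stays a prefix of
# one of a few schema-valid JSON strings, via prefix_allowed_tokens_fn.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Reply with JSON. Is the sky green?\n"
prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
# Stand-ins for "every string the schema allows"; a real schema needs a grammar.
valid = [tokenizer(s).input_ids for s in ['{"answer": true}', '{"answer": false}']]

def allowed(batch_id, input_ids):
    generated = input_ids[prompt_len:].tolist()
    nxt = {seq[len(generated)] for seq in valid
           if len(seq) > len(generated) and seq[:len(generated)] == generated}
    return list(nxt) or [tokenizer.eos_token_id]  # finished: only EOS is legal

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=10,
    prefix_allowed_tokens_fn=allowed,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0][prompt_len:], skip_special_tokens=True))
```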
- Structural Alignment: Modifying Transformers (Like GPT) to Follow a JSON Schema
- Structural Alignment of LLMs with ControLogits
What are some alternatives?
outlines - Structured Text Generation
jsonformer - A Bulletproof Way to Generate Structured JSON from Language Models
Constrained-Text-Generation-Studio - Code repo for "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio" at the (CAI2) workshop, jointly held at (COLING 2022)
lmql - A language for constraint-guided and efficient LLM programming.
agency - Agency: Robust LLM Agent Management with Go
tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer
evals - Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
torch-grammar
ChatGPT_DAN - ChatGPT DAN, jailbreak prompts
relm - ReLM is a Regular Expression engine for Language Models
kodumisto - GitHub Issue as ChatGPT Prompt; ChatGPT's Response as a Pull Request