Constrained-Text-Generation-Studio
Code repo for "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio" at the CAI2 workshop, jointly held at COLING 2022
My work on constrained text generation / filter-assisted decoding for LLMs is cited in this article! One of my proudest moments was being noticed by my senpai Gwern!
https://paperswithcode.com/paper/most-language-models-can-be...
I want to add that just because GPT-4 appears to be far better at following constraints doesn't mean it's anywhere near perfect at following them. It's better now at my easy example of "ban the letter e", but if you ask for several constraints at once, or mix lexical and phonetic constraints, it gets pretty awful pretty quickly. Filter-assisted decoding can make any LLM (no matter how awful it is) follow constraints perfectly.
I can't wait for someone who's better at coding than me to implement these techniques in the major LLM frontends (oobabooga, llama.cpp, etc.), since my attempt at it was quite poopy research code: https://github.com/hellisotherpeople/constrained-text-genera...
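The core idea behind filter-assisted decoding is simple: at each decoding step, mask out the logits of every vocabulary token whose surface form would violate the constraint, so a violating token can never be sampled. A minimal toy-vocabulary sketch of one greedy step (not the repo's actual code; the vocabulary, function name, and logit values are all illustrative):

```python
import math

def filtered_greedy_step(vocab, logits, banned_letter="e"):
    """Pick the highest-logit token that avoids the banned letter.
    Violating tokens get -inf, so they can never be selected."""
    masked = [
        logit if banned_letter not in token.lower() else -math.inf
        for token, logit in zip(vocab, logits)
    ]
    best = max(range(len(vocab)), key=lambda i: masked[i])
    return vocab[best]

# Toy example: the model "prefers" a token containing 'e',
# but the filter vetoes it, so the next-best legal token wins.
vocab = ["the", "a", "dog", "cat"]
logits = [9.0, 2.0, 5.0, 4.0]
print(filtered_greedy_step(vocab, logits))  # "dog"
```

Because the filter operates on the model's output distribution rather than on the prompt, the constraint is enforced exactly regardless of how well the underlying model "understands" it, which is why even weak models satisfy it perfectly.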