flashtext vs Constrained-Text-Generation-Studio

| | flashtext | Constrained-Text-Generation-Studio |
|---|---|---|
| Mentions | 8 | 25 |
| Stars | 5,535 | 197 |
| Growth | - | - |
| Activity | 0.0 | 4.1 |
| Last Commit | 7 months ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
flashtext
-
Show HN: LLMs can generate valid JSON 100% of the time
I have some other comment on this thread where I point out why I don’t think it’s superficial. Would love to get your feedback on that if you feel like spending more time on this thread.
But it’s not obscure? FlashText was a somewhat popular paper at the time (2017) with a popular repo (https://github.com/vi3k6i5/flashtext). Their paper was pretty derivative of Aho-Corasick, which they cited. If you think they genuinely fucked up, leave an issue on their repo (I’m, maybe to your surprise lol, not the author).
Anyway, I’m not a fan of the whataboutery here. I don’t think OG’s paper is up to snuff on its lit review - do you?
-
[P] what is the most efficient way to pattern matching word-to-word?
The library flashtext basically creates these tries based on keywords you give it.
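For illustration, here is a minimal example of that API (flashtext's actual `KeywordProcessor`; the keywords themselves are invented):

```python
# flashtext builds a trie from the keywords you register, then walks
# the input text once instead of scanning it once per keyword.
from flashtext import KeywordProcessor  # pip install flashtext

kp = KeywordProcessor()
kp.add_keyword("Big Apple", "New York")   # maps a surface form to a clean name
kp.add_keyword("machine learning")

print(kp.extract_keywords("I studied machine learning in the Big Apple"))
# ['machine learning', 'New York']
```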
-
What is the most efficient way to find substrings in strings?
Seems like https://github.com/vi3k6i5/flashtext would be better suited here.
-
[P] Library for end-to-end neural search pipelines
I started developing this tool after using Haystack. Pipelines are easier to build with cherche because of the operators. Also, cherche offers FlashText and Lunr.py retrievers that are not available in Haystack and that I needed for the problem I wanted to solve. Haystack is clearly more complete, but I think also more complex to use.
-
How can I speed up thousands of re.subs()?
For the text part not requiring regex, https://github.com/vi3k6i5/flashtext might help
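As a short sketch of what that swap looks like (the replacement pairs here are invented):

```python
# Replacing many literal re.sub() calls with a single pass of flashtext's
# replace_keywords(), which runs in time proportional to the text length
# rather than to the number of patterns.
from flashtext import KeywordProcessor

kp = KeywordProcessor()
kp.add_keyword("colour", "color")          # thousands of such pairs scale fine
kp.add_keyword("javascript", "JavaScript")

text = "I wrote javascript to pick a colour."
print(kp.replace_keywords(text))
# 'I wrote JavaScript to pick a color.'
```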
-
My first NLP pipeline using SpaCy: detect news headlines with company acquisitions
Using spaCy to parse the headlines, remove stop words, etc. might be OK, but I think the problem is quite narrow, so a set of fixed regex searches might work quite well. If regex is too slow, try: https://github.com/vi3k6i5/flashtext
-
What tech do I need to learn to programmatically parse ingredients from a recipe?
I would probably use something like [flashtext](https://github.com/vi3k6i5/flashtext), which should not be too hard to port to Kotlin.
-
Quickest way to check that 14000 strings aren't in an original string.
Constrained-Text-Generation-Studio
-
Photoshop for Text (2022)
Oh my god. I wrote a whole library called "Constrained Text Generation Studio" where I mused that I wanted a "Photoshop for Text". I'm not even sure which work predates the other: https://github.com/Hellisotherpeople/Constrained-Text-Genera...
The core idea of a "photoshop for text", specifically a word processor made for prosumers that supports GenAI as a first-class feature (i.e. oobabooga, but actually good), is worth so much. If you're a VC reading this, chances are I want to talk to you about actually executing on the idea from the OP.
-
Ask HN: What have you built with LLMs?
I was working on this stuff before it was cool, so in the sense of the precursor to LLMs (and sometimes supporting LLMs still) I've built many things:
1. Games you can play with word2vec or related models (could be drop in replaced with sentence transformer). It's crazy that this is 5 years old now: https://github.com/Hellisotherpeople/Language-games
2. "Constrained Text Generation Studio" - A research project I wrote when I was trying to solve LLM's inability to follow syntactic, phonetic, or semantic constraints: https://github.com/Hellisotherpeople/Constrained-Text-Genera...
3. DebateKG - A bunch of "Semantic Knowledge Graphs" built on my pet debate evidence dataset (LLM backed embeddings indexes synchronized with a graphDB and a sqlDB via txtai). Can create compelling policy debate cases https://github.com/Hellisotherpeople/DebateKG
4. My failed attempt at a good extractive summarizer. My life work is dedicated to one day solving the problems I tried to fix with this project: https://github.com/Hellisotherpeople/CX_DB8
-
You need a mental model of LLMs to build or use a LLM-based product
My mental model for LLMs was built by carefully studying the distribution over the output vocabulary at every time step.
There are tools that allow you to right-click and see all possible continuations for an LLM, like you would in a code IDE[1]. Seeing what this vocabulary is[2] and how trivial modifications to the prompt can impact the probabilities will do a lot to improve your mental model of how LLMs operate.
Shameless self-plug, but software which can do what I am describing is here, and it's worth noting that it ended up as peer-reviewed research.
[1] https://github.com/Hellisotherpeople/Constrained-Text-Genera...
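For the curious, a minimal sketch of that kind of inspection (my own illustration using Hugging Face transformers with GPT-2 as a stand-in model, not the CTGS code itself):

```python
# Inspect the next-token distribution of a causal LM at one time step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores over the whole vocabulary

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=10)                # ten most likely continuations
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  {p.item():.3f}")
```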
-
Ask HN: How training of LLM dedicated to code is different from LLM of “text”
Yeah, the LLM outputs a distribution of likely next tokens. It is up to the decoder to select one, and it can use a grammar to enforce certain rules on the output. https://github.com/Hellisotherpeople/Constrained-Text-Genera... or https://github.com/ggerganov/llama.cpp/blob/master/grammars/... for example.
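As a rough sketch of what decoder-side enforcement looks like (my own illustration, not CTGS or llama.cpp code; it uses the third-party `regex` package for prefix matching and GPT-2 as a stand-in model):

```python
# Constrained decoding: only accept tokens whose detokenized
# continuation can still match the target pattern.
import regex  # third-party package; supports partial (prefix) matching
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
pattern = regex.compile(r"[0-9]+\.[0-9]+")   # output must be a decimal number

ids = tokenizer("Pi is approximately ", return_tensors="pt").input_ids
out = ""
for _ in range(6):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    for tok in torch.topk(logits, 50).indices:            # most likely first
        piece = tokenizer.decode(int(tok))
        if pattern.fullmatch(out + piece, partial=True):  # still a valid prefix?
            out += piece
            ids = torch.cat([ids, tok.view(1, 1)], dim=1)
            break

print(out)  # e.g. "3.14159" (model- and sampling-dependent)
```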
-
Show HN: LLMs can generate valid JSON 100% of the time
-
Llama: Add Grammar-Based Sampling
I am in love with this, I tried my hand at building a Constrained Text Generation Studio (https://github.com/Hellisotherpeople/Constrained-Text-Genera...), and got published at COLING 2022 for my paper on it (https://paperswithcode.com/paper/most-language-models-can-be...), but I always knew that something like this or the related idea enumerated in this paper: https://arxiv.org/abs/2306.03081 was the way to go.
-
LLMs are too easy to automatically red team into toxicity
It's far too easy to destroy any type of RLHF done to try to prevent bad behavior from an LLM.
For example, if you want an LLM to generate things that look like social security numbers, you may try prompting it for social security numbers. It will of course give you "I'm sorry hal I can't do that..."
Then start using a technique like token filtering/filter-assisted decoding to make it so the LLM can only generate hyphens and numbers, and suddenly it does what you ask despite the RLHF.
I explored this a tiny bit in the later sections of my paper studying what happens when you restrict an LLMs vocabulary: https://aclanthology.org/2022.cai-1.pdf#page=17
You can even play with this with open source models using CTGS: https://github.com/Hellisotherpeople/Constrained-Text-Genera...
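A minimal sketch of that vocabulary-restriction idea (my own illustration with GPT-2 and greedy decoding, not the CTGS implementation; the prompt is made up):

```python
# Filter-assisted decoding: restrict the vocabulary so the model can
# only emit tokens made of digits and hyphens, whatever the RLHF says.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Token ids whose surface form contains only digits or hyphens.
allowed = torch.tensor([i for i in range(len(tokenizer))
                        if tokenizer.decode(i)
                        and set(tokenizer.decode(i)) <= set("0123456789-")])

ids = tokenizer("Here is a number that looks like an SSN:",
                return_tensors="pt").input_ids
for _ in range(8):                       # generate a few constrained tokens
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    masked = torch.full_like(logits, float("-inf"))
    masked[allowed] = logits[allowed]    # every other token is banned
    next_id = masked.argmax()            # greedy choice among survivors
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```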
-
Understanding GPT Tokenizers
I agree with you, and I'm SHOCKED at how little work there actually is on phonetics within the NLP community. Consider that most of the phonetic tools I am using to enforce rhyming or similar syntactic constraints in Constrained Text Generation Studio (https://github.com/Hellisotherpeople/Constrained-Text-Genera...) were built circa 2014, such as the CMU rhyming dictionary. In most cases, I could not find better modern implementations of these tools.
I did learn an awful lot about phonetic representations and matching algorithms. Things like "soundex" and "double metaphone" now make sense to me and are fascinating to read about.
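For a taste of that tooling, here is a small sketch using two Python packages (whether CTGS uses these exact packages is my assumption; `pronouncing` wraps the CMU Pronouncing Dictionary and `jellyfish` implements Soundex and Metaphone):

```python
# The kind of 2014-era phonetic tooling described above.
import pronouncing   # pip install pronouncing; CMU Pronouncing Dictionary
import jellyfish     # pip install jellyfish; Soundex, Metaphone, and friends

print(pronouncing.rhymes("token")[:5])   # CMU-dictionary rhymes for "token"
print(jellyfish.soundex("Robert"))       # 'R163'
print(jellyfish.metaphone("Thompson"))   # phonetic key used for fuzzy matching
```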
-
Don Knuth Plays with ChatGPT
https://github.com/hellisotherpeople/constrained-text-genera...
Just ban the damn tokens and try again. I wish that folks had more intuition around tokenization, and why LLMs struggle to follow syntactic, lexical, or phonetic constraints.
-
Constrained Text Generation Studio
What are some alternatives?
KeyBERT - Minimal keyword extraction with BERT
Constrained-Text-Generation-Studio
rake-nltk - Python implementation of the Rapid Automatic Keyword Extraction algorithm using NLTK.
guidance - A guidance language for controlling large language models.
magnitude - A fast, efficient universal vector embedding utility package.
torch-grammar
Optimus - Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
agency - Agency: Robust LLM Agent Management with Go
yake - Single-document unsupervised keyword extraction
llama-tokenizer-js - JS tokenizer for LLaMA and LLaMA 2
gensim - Topic Modelling for Humans
outlines - Structured Text Generation