flashtext vs ad-llama

| | flashtext | ad-llama |
|---|---|---|
| Mentions | 8 | 6 |
| Stars | 5,535 | 47 |
| Growth | - | - |
| Activity | 0.0 | 8.9 |
| Latest commit | 6 months ago | 27 days ago |
| Language | Python | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
flashtext
- Show HN: LLMs can generate valid JSON 100% of the time
I have another comment on this thread where I point out why I don’t think it’s superficial. Would love to get your feedback on that if you feel like spending more time on this thread.
But it’s not obscure? FlashText was a somewhat popular paper at the time (2017) with a popular repo (https://github.com/vi3k6i5/flashtext). Their paper was pretty derivative of Aho-Corasick, which they cited. If you think they genuinely fucked up, leave an issue on their repo (I’m, maybe to your surprise lol, not the author).
Anyway, I’m not a fan of the whataboutery here. I don’t think OG’s paper is up to snuff on its lit review - do you?
- [P] What is the most efficient way to pattern-match word-to-word?
The library flashtext basically creates these tries based on keywords you give it.
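For concreteness, a minimal sketch of that trie-based API using flashtext's KeywordProcessor (the keywords and canonical names below are invented):

```python
from flashtext import KeywordProcessor  # pip install flashtext

kp = KeywordProcessor(case_sensitive=False)
# Each keyword maps to an optional canonical name; they all live in one
# trie, so lookup cost doesn't grow with the number of keywords.
kp.add_keyword('NYC', 'New York')        # invented example keywords
kp.add_keyword('Big Apple', 'New York')
kp.add_keyword('SF', 'San Francisco')

# Single pass over the input, roughly O(len(text)):
print(kp.extract_keywords('Flying from the Big Apple to SF'))
# ['New York', 'San Francisco']
```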
- What is the most efficient way to find substrings in strings?
Seems like https://github.com/vi3k6i5/flashtext would be better suited here.
- [P] Library for end-to-end neural search pipelines
I started developing this tool after using Haystack. Pipelines are easier to build with cherche because of the operators. Also, cherche offers FlashText and Lunr.py retrievers, which aren't available in Haystack and which I needed for the problem I wanted to solve. Haystack is clearly more complete, but I think also more complex to use.
- How can I speed up thousands of re.subs()?
For the text part not requiring regex, https://github.com/vi3k6i5/flashtext might help
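Where the patterns are literal strings, the swap looks roughly like this (the replacement pairs are invented; note that flashtext only matches whole words, unlike arbitrary regex):

```python
from flashtext import KeywordProcessor  # pip install flashtext

kp = KeywordProcessor()
# One trie walk over the text instead of one regex scan per pattern,
# so runtime is linear in the text length, not the pattern count.
for old, new in {'colour': 'color', 'favourite': 'favorite',
                 'theatre': 'theater'}.items():  # invented pairs
    kp.add_keyword(old, new)

print(kp.replace_keywords('My favourite colour at the theatre'))
# 'My favorite color at the theater'
```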
- My first NLP pipeline using SpaCy: detect news headlines with company acquisitions
Using spaCy to parse the headlines, remove stop words, etc. might be OK, but I think the problem is quite narrow, so a set of fixed regex searches might work quite well. If regex is too slow, try: https://github.com/vi3k6i5/flashtext
- What tech do I need to learn to programmatically parse ingredients from a recipe?
I would probably use something like [flashtext](https://github.com/vi3k6i5/flashtext), which should not be too hard to port to Kotlin.
- Quickest way to check that 14000 strings aren't in an original string.
ad-llama
- Show HN: A murder mystery game built on an open-source gen-AI agent framework
- Guidance: A guidance language for controlling large language models
I took a stab at making something[1] like guidance - I'm not sure exactly how guidance does it (and I'm also really curious how it would work with chat APIs), but here's how my solution works.
Each expression becomes a new inference request, so it's not a single inference pass. Because each subsequent pass includes the previously inferenced text, the LLM ends up doing a lot of prefill and less decode. You only decode as much as you actually generate; the repeated passes only cost more in prefill (which tends to be much faster in tok/s).
To work with chat-tuned instruction models, you can basically still treat it as a completion model. I provide the previously completed inference text as a partially completed assistant response, e.g. with Llama 2 it goes after [/INST]. You can add a bit of instruction for each inference expression, which gets added to the [INST]. This approach lets you start off the inference with `{ "someField": "`, for example, to guarantee (at least the start of) a JSON response, and lets you add a little bit of instruction or context just for that field.
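A rough Python sketch of that loop (the prompt shape follows the Llama 2 chat template; `fake_generate` is a stand-in for a real decode call, and none of this is ad-llama's actual TypeScript API):

```python
def llama2_prompt(instruction: str, partial_response: str) -> str:
    # Everything after [/INST] reads as the assistant's own partially
    # completed response, which the model simply continues.
    return f"<s>[INST] {instruction} [/INST] {partial_response}"

def fake_generate(prompt: str, stop: str) -> str:
    # Stand-in for a real decode call so the sketch runs end to end.
    return "Ada"

completed = '{ "name": "'  # force the start of a JSON object
prompt = llama2_prompt("Describe the suspect as JSON.", completed)
completed += fake_generate(prompt, stop='"') + '", "age": '
# Each expression is one inference call: everything already in
# `completed` is (fast) prefill, only the new value is decoded.
print(completed)  # { "name": "Ada", "age":
```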
I didn't even try with the OpenAI APIs since, afaict, you can't provide a partial assistant response for them to continue from. Even if you were to request a single token at a time and use logit_bias for biased sampling, I don't see how you'd get it to continue a partially completed inference.
[1] https://github.com/gsuuon/ad-llama
- Simulating History with ChatGPT
Can you point me to some text-adventure engines? I'm hacking on an in-browser local LLM structured inference library[1] and am trying to put together a text game demo[2] for it. It didn't even occur to me that text-adventure game engines exist; I was apparently re-inventing the wheel.
[1] https://github.com/gsuuon/ad-llama
[2] https://ad-llama.vercel.app/murder/
- Ask HN: Which programming language to learn in AI era?
Yup, I'm building a library that runs LLMs in the browser with tagged template literals: https://github.com/gsuuon/ad-llama
I think it has fundamental DX benefits over Python for complex prompt chaining (or I wouldn't be building it!). Even still -- if their focus is purely on AI, Python is still the better choice starting from scratch. The Python AI ecosystem has many more libraries, Stack Overflow answers, tutorials, etc. available.
- Show HN: LLMs can generate valid JSON 100% of the time
Generating an FSM over the vocabulary is a really interesting approach to guided sampling! I'm hacking on a structured inference library (https://github.com/gsuuon/ad-llama) - I also tried to add a vocab preprocessing step to generate a valid-token mask (just with regex or static strings initially), but discovered that doing so would mask out the token representing the natural encoding given the already-sampled tokens, leaving only unlikely / unnatural tokens.
Given the stateful nature of tokenizers, I decided that trying to preprocess the individual token ids was a losing battle. Even in the simple case of whitespace, tokenizer merges can really screw up a static mask: we expect a space next, and a token decodes to 'foo' on its own, but it's actually a '_foo' and would have decoded with a leading whitespace if it followed a valid pair. When I go to construct the static vocab mask, it ends up matching against 'foo' instead of ' foo'.
How did you work around this for the FSM approach? Does it somehow include information about merges / whitespace / tokenizer statefulness?
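A toy illustration of that whitespace failure mode (made-up two-token vocab; real tokenizers hit the same thing through their merge rules, with '▁' as the SentencePiece leading-space marker):

```python
vocab = {0: 'foo', 1: '▁foo'}  # invented ids; '▁' marks a leading space

def decode_alone(token: str) -> str:
    # Mimics decoding a single token as if it were sequence-initial,
    # which drops the leading-space marker.
    return token.lstrip('▁')

print(decode_alone(vocab[0]))  # 'foo'
print(decode_alone(vocab[1]))  # 'foo' -- the ' foo' we wanted looks identical

# A static mask requiring "starts with a space" therefore can't separate
# the two ids, and which one the tokenizer would naturally emit depends
# on the tokens already sampled -- state a fixed mask can't see.
```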
What are some alternatives?
KeyBERT - Minimal keyword extraction with BERT
llm - Access large language models from the command-line
rake-nltk - Python implementation of the Rapid Automatic Keyword Extraction algorithm using NLTK.
grontown - A murder mystery featuring generative agents
magnitude - A fast, efficient universal vector embedding utility package.
llm-mlc - LLM plugin for running models using MLC
Optimus - Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
eastworld - Framework for Generative Agents in Games
yake - Single-document unsupervised keyword extraction
hof - Framework that joins data models, schemas, code generation, and a task engine. Language and technology agnostic.
gensim - Topic Modelling for Humans
outlines - Structured Text Generation