cria vs clownfish

| | cria | clownfish |
|---|---|---|
| Mentions | 4 | 11 |
| Stars | 77 | 303 |
| Growth | - | - |
| Activity | 2.5 | 4.3 |
| Last Commit | about 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
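The exact formula isn't published here, but a recency-weighted score of this general shape can be sketched as below; the exponential decay and the 90-day half-life are illustrative assumptions, not the site's actual parameters.

```python
def activity_score(commit_ages_days, half_life_days=90):
    """Recency-weighted commit count: a commit from today contributes 1,
    and each half_life_days of age halves a commit's contribution.
    The decay shape and half-life are assumptions for illustration."""
    return sum(2 ** (-age / half_life_days) for age in commit_ages_days)

# Three recent commits outweigh three old ones:
print(activity_score([1, 5, 10]))       # ~2.88
print(activity_score([300, 400, 500]))  # ~0.17
```

A relative number like the 9.0 in the example would then come from ranking every tracked project's raw score and reporting its percentile.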
cria
- Show HN: Speeding up LLM inference 2x times (possibly)
It originally started as a fork of Recmo's cria, a pure-NumPy Llama implementation :)
https://github.com/recmo/cria
Took a whole night to compute a few…
- Jsonformer: A bulletproof way to generate structured output from LLMs
Not OP, but I can share my approach - I went line by line through Recmo's Cria: https://github.com/recmo/cria - an implementation of Llama in NumPy, so very low level. It took me, I think, 3-4 days x 10 hours, plus 1-2 days of reading about Transformers, to understand what's going on - but from that you can see how models generate text and gain a deep understanding of what's happening.
- LLaMA for poor
clownfish
- Show HN: LLMs can generate valid JSON 100% of the time
I'm not sure how this is different from:
https://github.com/1rgs/jsonformer
or
https://github.com/newhouseb/clownfish
or
https://github.com/mkuchnik/relm
or
https://github.com/ggerganov/llama.cpp/pull/1773
or
https://github.com/Shopify/torch-grammar
Overall, there are a ton of these logit-based guidance systems; the reason they don't get much traction is that the SOTA models are behind REST APIs that don't enable this fine-grained approach.
Those models perform so much better that people generally settle for just re-requesting until they get the correct format (and with GPT-4 that ends up being a fairly rare occurrence, in my experience).
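All of these logit-based guidance systems share the same core move: before sampling each token, mask out every token that would extend the output into something no valid result can start with. A minimal sketch of that idea, where the toy vocabulary, the random stand-in logits, and the `is_valid_prefix` check are illustrative assumptions rather than any of the linked projects' APIs:

```python
import numpy as np

def constrained_sample(logits, vocab, partial, is_valid_prefix):
    """Set the logit of every invalid continuation to -inf,
    then pick the best remaining token greedily."""
    masked = logits.copy()
    for token_id, token_str in enumerate(vocab):
        if not is_valid_prefix(partial + token_str):
            masked[token_id] = -np.inf
    return int(np.argmax(masked))

# Toy example: only allow outputs that are prefixes of '{"a": 1}'.
vocab = ['{', '"a"', ':', ' 1', '}', 'hello']
target = '{"a": 1}'
is_valid_prefix = lambda s: target.startswith(s)

partial = ""
rng = np.random.default_rng(0)
while partial != target:
    logits = rng.normal(size=len(vocab))  # stand-in for model logits
    tok = constrained_sample(logits, vocab, partial, is_valid_prefix)
    partial += vocab[tok]
print(partial)  # {"a": 1}
```

The projects above differ mainly in how they compute `is_valid_prefix` efficiently (grammars, schema walkers, regex automata), not in this masking step itself.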
- OpenAI Function calling and API updates
- Adding GPT to a web app. The real experience.
I can see some specific problems there, like malformed JSON (or JSON not matching the intended schema). Approaches like https://github.com/1rgs/jsonformer and https://github.com/newhouseb/clownfish could be interesting there, as well as approaches to validating outputs like https://medium.com/@markherhold/validating-json-patch-requests-44ca5981a7fc (it references jsonpatch, which could be interesting as well, but the approach is somewhat agnostic to how the changes actually get applied while still letting you enforce structure around what changes and how).
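On the validation side of that comment, the widely used `jsonschema` package is enough to reject both failure modes: malformed JSON and JSON that doesn't match the intended schema. The schema below is just an example shaped after jsonpatch-style operations, not anything from the linked article:

```python
import json
from jsonschema import validate, ValidationError  # pip install jsonschema

# Example schema for a patch-like operation (illustrative only).
schema = {
    "type": "object",
    "properties": {
        "op": {"enum": ["add", "remove", "replace"]},
        "path": {"type": "string"},
    },
    "required": ["op", "path"],
}

raw = '{"op": "replace", "path": "/title", "value": "New title"}'
try:
    doc = json.loads(raw)  # catches malformed JSON
    validate(doc, schema)  # catches schema violations
except (json.JSONDecodeError, ValidationError) as err:
    print("rejecting model output:", err)
```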
- When you lose the ability to write, you also lose some of your ability to think
https://github.com/newhouseb/clownfish
- Structural Alignment: Modifying Transformers (like GPT) to Follow a JSON Schema
- Clownfish: Constrained Decoding for LLMs Against JSON Schema
- Jsonformer: A bulletproof way to generate structured output from LLMs
Oh nice! I built a similar system a few weeks ago: https://github.com/newhouseb/clownfish
I think the main differentiating factor here is that this is better if you have a simpler JSON schema without enums or oneOf constraints. If you do have those constraints - say you wanted an array of different types representing items on a menu, like { kind: pizza, toppings: [pepperoni] } or { kind: ice_cream, flavor: vanilla | strawberry } - then you would need something more sophisticated like clownfish that can ask the LLM to pick specific properties.
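A sketch of how that oneOf case can be handled: score each allowed `kind` value with the model, commit to the best one, then constrain the rest of the decode to that variant's schema. Here `logprob_of_continuation` is a hypothetical stand-in for summing the model's log-probs over a candidate's tokens, not clownfish's actual API:

```python
# Discriminated-union decoding sketch: pick the oneOf branch first,
# then decode the remaining fields against that branch's schema only.
variants = {
    "pizza": {"required": ["toppings"]},
    "ice_cream": {"required": ["flavor"]},
}

def logprob_of_continuation(prefix, candidate):
    # Stand-in: a real implementation would tokenize `candidate` and
    # sum the model's log-probabilities for those tokens after `prefix`.
    return -len(candidate)  # pretend shorter continuations are likelier

prefix = '{"kind": "'
kind = max(variants, key=lambda k: logprob_of_continuation(prefix, k))
schema_for_rest = variants[kind]
print(kind, schema_for_rest)  # decoding now only allows this variant
```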
- Prompt injection: what’s the worst that can happen?
And on the other end, there's https://github.com/newhouseb/clownfish to force the model to produce structured output.
- Teaching ChatGPT to Speak My Son’s Invented Language
It doesn't help with repetition, but when it comes to forcing structure onto the output data, this approach looks interesting:
https://github.com/newhouseb/clownfish
TL;DR: it exploits the fact that the model returns probabilities for all the possible following tokens to enforce a JSON schema on the output as it is produced, backtracking as needed.
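That backtracking step effectively turns decoding into a depth-first search over token choices, pruning any branch whose text can no longer be extended into schema-valid JSON. A minimal sketch, with the ranking function and validity checks as toy stand-ins rather than clownfish's real interfaces:

```python
def decode_with_backtracking(partial, rank_tokens, is_valid_prefix,
                             is_complete):
    """Try tokens in the model's preference order, skip any that break
    the schema, and backtrack when a branch dead-ends."""
    if is_complete(partial):
        return partial
    for token in rank_tokens(partial):
        if not is_valid_prefix(partial + token):
            continue  # schema forbids this continuation
        result = decode_with_backtracking(partial + token, rank_tokens,
                                          is_valid_prefix, is_complete)
        if result is not None:
            return result
    return None  # dead end: caller backtracks

# Toy example: the "model" prefers '"no"', but the schema only accepts
# "yes" or "maybe", so the search backtracks past it.
TOKENS = ['{"x": ', '"no"', '"yes"', '}']
valid = ['{"x": "yes"}', '{"x": "maybe"}']
rank = lambda partial: TOKENS  # stand-in for tokens sorted by logit
is_prefix = lambda s: any(v.startswith(s) for v in valid)
done = lambda s: s in valid
print(decode_with_backtracking("", rank, is_prefix, done))  # {"x": "yes"}
```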
- Structural Alignment: Modifying Transformers (Like GPT) to Follow a JSON Schema
- Structural Alignment of LLMs with ControLogits
What are some alternatives?
transmogrifier - Unstructured data goes in, structured data comes out. Sometimes comedically.
jsonformer - A Bulletproof Way to Generate Structured JSON from Language Models
magic - AI functions for Typescript
lmql - A language for constraint-guided and efficient LLM programming.
effort - An implementation of bucketMul LLM inference
outlines - Structured Text Generation
evals - Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
ChatGPT_DAN - ChatGPT DAN, Jailbreaks prompt
kodumisto - GitHub Issue as ChatGPT Prompt; ChatGPT's Response as a Pull Request
AICommand - ChatGPT integration with Unity Editor
sharegpt - Easily share permanent links to ChatGPT conversations with your friends
aider - aider is AI pair programming in your terminal