jsonformer vs frogmouth

| | jsonformer | frogmouth |
|---|---|---|
| Mentions | 25 | 14 |
| Stars | 3,868 | 2,272 |
| Growth | - | 2.4% |
| Activity | 5.4 | 6.7 |
| Latest commit | 3 months ago | about 2 months ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jsonformer
- Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
- Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
Tools like jsonformer (https://github.com/1rgs/jsonformer) are not possible with OpenAI's API.
- Show HN: LLMs can generate valid JSON 100% of the time
How does this compare in terms of latency, cost, and effectiveness to jsonformer? https://github.com/1rgs/jsonformer
- Ask HN: Explain how size of input changes ChatGPT performance
You're correct about how the model works with respect to returning tokens one at a time. The model returns one token, and the entire context window gets shifted right by one to account for it when generating the next one.
As for model performance at different context sizes, it seems a bit complicated. From what I understand, even if models are tweaked (for example using the SuperHOT RoPE hack or sparse attention) to be able to use longer contexts, they still have to be fine-tuned on input of this increased length to actually utilize it, and performance seems to degrade regardless as input length increases.
For your question about fine-tuning models to respond with only "yes" or "no", I recommend looking into how the jsonformer library works: https://github.com/1rgs/jsonformer . Essentially, you still let the model score candidate tokens for the next position, and only accept the ones that satisfy certain criteria (such as the token for "yes" and the token for "no").
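As a rough illustration of that filtering idea (not the jsonformer codebase itself), here is a minimal sketch with a local Hugging Face model; gpt2, the prompt, and the candidate strings are stand-in assumptions, and the leading spaces are a BPE-tokenizer detail:

```python
# Sketch: score the next token, but only consider the ids for " yes" / " no".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in model for illustration
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Question: Is water wet? Answer:"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]        # scores for the next token

candidates = [" yes", " no"]                 # leading space matters for BPE tokenizers
cand_ids = [tok.encode(c)[0] for c in candidates]
answer = candidates[int(torch.argmax(logits[cand_ids]))]
print(answer.strip())
```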
You can do this with the OpenAI API too, using tiktoken https://twitter.com/AAAzzam/status/1669753722828730378?t=d_W... . Be careful though, as results will differ depending on which tokens you select, since "YES", "Yes", "yes", etc. are all different tokens, to the best of my knowledge.
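And a hedged sketch of that logit_bias variant (the linked tweet isn't reproduced here; the model, prompt, and bias values are illustrative assumptions, not the tweet's exact code):

```python
# Sketch: look up the token ids for "Yes"/"No" with tiktoken and bias the
# completion toward them via logit_bias; max_tokens=1 keeps the answer to one token.
import tiktoken
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

allowed = {tok_id: 100 for word in ("Yes", "No") for tok_id in enc.encode(word)}

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Is the sky blue? Answer Yes or No."}],
    logit_bias=allowed,
    max_tokens=1,
)
print(resp.choices[0].message.content)
```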
- A framework to securely use LLMs in companies – Part 1: Overview of Risks
- LLMs for Schema Augmentation
From here, we just need to continue generating tokens until we reach a closing quote. This approach was borrowed from Jsonformer, which induces LLMs to generate structured output in a similar way. Continuing to do so for each property using Replit's code LLM gives the following output:
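The post's actual Replit-model output isn't reproduced in this excerpt, but here is a rough sketch of the "generate until a closing quote" loop; the model name and stopping logic are assumptions for illustration, not the article's code:

```python
# Sketch: sample tokens one at a time and stop once the decoded piece contains
# a closing quote, which terminates the JSON string value.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in; the post used Replit's code model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = '{"name": "'                      # we are inside a JSON string value
ids = tok(prompt, return_tensors="pt").input_ids

value = ""
for _ in range(64):                        # safety cap on generated tokens
    with torch.no_grad():
        logits = model(ids).logits[:, -1, :]
    next_id = torch.argmax(logits, dim=-1, keepdim=True)   # greedy decoding
    piece = tok.decode(next_id[0])
    if '"' in piece:                       # closing quote reached, stop here
        value += piece.split('"')[0]
        break
    value += piece
    ids = torch.cat([ids, next_id], dim=-1)

print(value)
```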
- Doesn't a 4090 massively overpower a 3090 for running local LLMs?
https://github.com/1rgs/jsonformer or https://github.com/microsoft/guidance may help get better results, but I ended up with a bit more of a custom solution.
- “Sam altman won't tell you that GPT-4 has 220B parameters and is 16-way mixture model with 8 sets of weights”
I think function calling is just JSONformer idk: https://github.com/1rgs/jsonformer
- Inference Speed vs. Quality Hacks?
- Best bet for parseable output?
jsonformer: https://github.com/1rgs/jsonformer
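For reference, a usage sketch in the spirit of the jsonformer README (the model name and schema follow the README's example and are not requirements):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from jsonformer import Jsonformer

model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")

json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "number"},
        "is_student": {"type": "boolean"},
        "courses": {"type": "array", "items": {"type": "string"}},
    },
}

prompt = "Generate a person's information based on the following schema:"
jsonformer = Jsonformer(model, tokenizer, json_schema, prompt)
generated_data = jsonformer()  # returns a dict that conforms to the schema
print(generated_data)
```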
frogmouth
- Show HN: Consol3 – A 3D engine in the terminal that executes on the CPU
Textual is not 3D either, but it is also great for TUIs.
Textualize/Frogmouth has a TUI tree control: https://github.com/Textualize/frogmouth
FWICS browsh
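For a sense of what that looks like, here is a minimal Textual sketch of a tree control (the labels are made up; this is not Frogmouth's actual code):

```python
from textual.app import App, ComposeResult
from textual.widgets import Tree

class DocsTreeApp(App):
    """Tiny TUI showing Textual's built-in Tree widget."""

    def compose(self) -> ComposeResult:
        tree: Tree[str] = Tree("Markdown files")
        tree.root.expand()
        tree.root.add_leaf("README.md")
        docs = tree.root.add("docs", expand=True)
        docs.add_leaf("guide.md")
        yield tree

if __name__ == "__main__":
    DocsTreeApp().run()
```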
- Live markdown preview?
No, since Vim runs in a TUI and Markdown isn't just plain text. You could use something like Obsidian, which can display a live preview of Markdown files side by side with the raw text and supports a subset of Vim keybindings. Or use a terminal multiplexer like tmux and open a split with a preview in something like Frogmouth (your preview will still be in a TUI, but it will look nicer than the source file). Emacs might also have something that does what you are looking for (combined with evil-mode if you want to preserve Vim keybindings), but I haven't looked into it.
- Frogmouth 0.5.0 - Markdown viewer / browser for your terminal
Instead of latest release notes, https://github.com/Textualize/frogmouth would've been a better submission link imo.
- FLiPN-FLaNK Stack Weekly May 8 2023
- GitHub - Textualize/frogmouth: A Markdown browser for your terminal
- Show HN: Frogmouth – A Markdown browser for your terminal
- Textualize/frogmouth: A Markdown browser for your terminal
What are some alternatives?
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
baca - TUI Ebook Reader
aider - aider is AI pair programming in your terminal
thinkgpt - Agent techniques to augment your LLM and push it beyond its limits
clownfish - Constrained Decoding for LLMs against JSON Schema
roadmapper - Roadmapper - A Roadmap as Code (RaC) Python library. Generate professional roadmap diagrams using Python code.
outlines - Structured Text Generation
AudioGPT - AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head
gpt-json - Structured and typehinted GPT responses in Python
FLaNK-TravelAdvisory - Travel Advisory - RSS Processing - Apache NiFi - Apache Kafka - Apache Flink - SQL
jikkou - The Open source Resource as Code framework for Apache Kafka
mason.nvim - Portable package manager for Neovim that runs everywhere Neovim runs. Easily install and manage LSP servers, DAP servers, linters, and formatters.