jsonformer vs llama.cpp

| | jsonformer | llama.cpp |
|---|---|---|
| Mentions | 25 | 780 |
| Stars | 3,868 | 57,984 |
| Growth | - | - |
| Activity | 5.4 | 10.0 |
| Latest commit | 3 months ago | 5 days ago |
| Language | Jupyter Notebook | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jsonformer
- Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
- Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
- Tools like jsonformer https://github.com/1rgs/jsonformer are not possible with OpenAI's API.
- Show HN: LLMs can generate valid JSON 100% of the time
How does this compare in terms of latency, cost, and effectiveness to jsonformer? https://github.com/1rgs/jsonformer
- Ask HN: Explain how size of input changes ChatGPT performance
You're correct in interpreting how the model works with respect to returning tokens one at a time. The model returns one token, and the entire context window gets shifted right by one to account for it when generating the next one.
As for model performance at different context sizes, it seems a bit complicated. From what I understand, even if models are tweaked (for example, using the SuperHOT RoPE hack or sparse attention) to be able to use longer contexts, they still have to be fine-tuned on input of this increased length to actually utilize it, and performance seems to degrade regardless as input length increases.
For your question about fine-tuning models to respond with only "yes" or "no", I recommend looking into how the jsonformer library works: https://github.com/1rgs/jsonformer . Essentially, you still let the model generate candidate tokens for the next position, and only accept the ones that satisfy certain criteria (such as the token for "yes" and the token for "no").
You can do this with the OpenAI API too, using tiktoken https://twitter.com/AAAzzam/status/1669753722828730378?t=d_W... . Be careful, though, as results will differ depending on the selection of tokens, since "YES", "Yes", "yes", etc. are all different tokens, to the best of my knowledge.
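A rough sketch of the logit-bias trick described above, using tiktoken to look up token IDs and the OpenAI Python client. The model name, prompt, and +100 bias value are illustrative assumptions, not values from the thread:

```python
import tiktoken
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
enc = tiktoken.encoding_for_model("gpt-4")

# "YES", "Yes", "yes", etc. are distinct token IDs, so every variant you
# want to allow must be biased explicitly (the caveat noted above).
allowed = {}
for word in ("yes", "Yes", "no", "No"):
    for token_id in enc.encode(word):
        allowed[str(token_id)] = 100  # +100 makes these tokens near-certain picks

resp = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Is the sky blue? Answer yes or no."}],
    logit_bias=allowed,
    max_tokens=1,  # a single constrained token is all we want back
)
print(resp.choices[0].message.content)
```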
- A framework to securely use LLMs in companies – Part 1: Overview of Risks
- LLMs for Schema Augmentation
From here, we just need to continue generating tokens until we get to a closing quote. This approach was borrowed from Jsonformer, which uses a similar technique to induce LLMs to generate structured output. Continuing to do so for each property using Replit's code LLM gives the full structured output.
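As a rough illustration of that token-by-token loop (not the post's actual code), here is a minimal sketch using Hugging Face transformers; gpt2 is a stand-in for Replit's code LLM, and a real implementation like jsonformer also handles escape sequences, length limits, and non-string types:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the post used Replit's code LLM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt the model mid-string: it must now complete the property value.
text = '{"name": "'
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(64):  # safety cap so a runaway string can't loop forever
        logits = model(input_ids=ids).logits[0, -1]
        next_id = int(torch.argmax(logits))  # greedy decoding for simplicity
        piece = tok.decode([next_id])
        if '"' in piece:  # closing quote reached: keep what precedes it, stop
            text += piece.split('"')[0] + '"'
            break
        text += piece
        ids = torch.cat([ids, torch.tensor([[next_id]])], dim=1)

print(text)  # e.g. {"name": "Alice"
```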
- Doesn't a 4090 massively overpower a 3090 for running local LLMs?
https://github.com/1rgs/jsonformer or https://github.com/microsoft/guidance may help get better results, but I ended up with a bit more of a custom solution.
- “Sam Altman won't tell you that GPT-4 has 220B parameters and is 16-way mixture model with 8 sets of weights”
I think function calling is just JSONformer idk: https://github.com/1rgs/jsonformer
- Inference Speed vs. Quality Hacks?
- Best bet for parseable output?
jsonformer: https://github.com/1rgs/jsonformer
llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (what ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face.
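Once the server is built and running, it exposes an HTTP completion endpoint. A minimal sketch of querying it from Python, assuming the default localhost:8080 address; the prompt is just an example:

```python
import json
import urllib.request

# Assumes a llama.cpp server started locally, e.g.:
#   ./llama-server -m some-model.gguf
payload = {
    "prompt": "Building a website can be done in 10 simple steps:",
    "n_predict": 128,  # number of tokens to generate
}
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
    print(result["content"])  # the generated completion text
```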
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial, we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
aider - aider is AI pair programming in your terminal
gpt4all - gpt4all: run open-source LLMs anywhere
clownfish - Constrained Decoding for LLMs against JSON Schema
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
outlines - Structured Text Generation
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
gpt-json - Structured and typehinted GPT responses in Python
ggml - Tensor library for machine learning
jikkou - The Open source Resource as Code framework for Apache Kafka
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM