guidance VS ad-llama

Compare guidance vs ad-llama and see what their differences are.

guidance

A guidance language for controlling large language models. (by guidance-ai)

ad-llama

Structured inference with Llama 2 in your browser (by gsuuon)
                 guidance           ad-llama
Mentions         23                 6
Stars            17,357             47
Growth           2.7%               -
Activity         9.8                8.9
Latest commit    6 days ago         28 days ago
Language         Jupyter Notebook   TypeScript
License          MIT License        MIT License
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

guidance

Posts with mentions or reviews of guidance. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
    5 projects | news.ycombinator.com | 8 Apr 2024
    [1]: https://github.com/guidance-ai/guidance/tree/main
  • Show HN: Prompts as (WASM) Programs
    9 projects | news.ycombinator.com | 11 Mar 2024
    > The most obvious usage of this is forcing a model to output valid JSON

    Isn't this something that Outlines [0], Guidance [1] and others [2] already solve much more elegantly?

    0. https://github.com/outlines-dev/outlines

    1. https://github.com/guidance-ai/guidance

    2. https://github.com/sgl-project/sglang

  • Show HN: Fructose, LLM calls as strongly typed functions
    10 projects | news.ycombinator.com | 6 Mar 2024
  • LiteLlama-460M-1T has 460M parameters trained with 1T tokens
    1 project | news.ycombinator.com | 7 Jan 2024
    Or combine it with something like llama.cpp's grammar support or Microsoft's guidance-ai[0] (which I prefer), which would allow adding some ReAct-style prompting and external tools. As others have mentioned, instruct tuning would help too.

    [0] https://github.com/guidance-ai/guidance

  • Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
    2 projects | /r/LocalLLaMA | 10 Dec 2023
  • Prompting LLMs to constrain output
    2 projects | /r/LocalLLaMA | 8 Dec 2023
    I have been experimenting with guidance and LMQL. It's a bit too early to give any well-formed opinions, but I really do like the idea of constraining LLM output.
  • Guidance is back 🥳
    1 project | /r/LocalLLaMA | 16 Nov 2023
  • New: LangChain templates – fastest way to build a production-ready LLM app
    6 projects | news.ycombinator.com | 1 Nov 2023
  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    Thanks for your comment.

    I did not know about "Betteridge's law of headlines", quite interesting. Thanks for sharing :)

    You raise some interesting points.

    1) Safety: It is true that LVMs and LLMs have unknown biases and could potentially create unsafe content. However, this is not necessarily unique to them; for example, Google had the same problem with their supervised learning model https://www.theverge.com/2018/1/12/16882408/google-racist-go.... It all depends on the original data. I believe we need systems on top of our models to ensure safety. It is also possible to restrict the output domain of our models (https://github.com/guidance-ai/guidance). Instead of allowing our LVMs to output any words, we could restrict them to only being able to answer "red, green, blue..." when giving the color of a car.

    2) Cost: You are right, right now LVMs are quite expensive to run. As you said, they are a great way to go to market faster, but they cannot run on low-cost hardware for the moment. However, they could help with training those smaller models. Indeed, we see in the NLP domain that a lot of smaller models are trained on data created with GPT models. You can still distill the knowledge of your LVMs into a custom smaller model that can run on embedded devices. The advantage is that you can use your LVMs to generate data when it is scarce and use them as a fallback when your smaller device is uncertain of the answer.

    3) Labeling data: I don't think labeling data is necessarily cheap. First, you have to collect the data, which, depending on the frequency of your events, could take months of monitoring if you want to build a large-scale dataset. Lastly, not all labeling is necessarily cheap: I worked at a semiconductor company, and labeled data was scarce, as it required expert knowledge and could only be done by experienced employees. Indeed, not all labeling can be done externally.

    However, both approaches are indeed complementary, and I think the systems that work best will rely on both.

    Thanks again for the thought-provoking discussion. I hope this answers some of the concerns you raised.
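
    The point in 1) about restricting a model's output domain is exactly what guidance's constrained decoding is for. As a rough, illustrative sketch using the post-rewrite (0.1-style) Python API (the model path is a placeholder and exact names may differ between versions):

        from guidance import models, select

        # Load a local model (the path is a placeholder).
        lm = models.LlamaCpp("path/to/llama-2-7b.gguf")

        # The model can only complete the sentence with one of the listed
        # options, so the output domain is restricted to red/green/blue.
        lm += "The color of the car is " + select(["red", "green", "blue"])
        print(str(lm))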

  • Show HN: Elelem – TypeScript LLMs with tracing, retries, and type safety
    2 projects | news.ycombinator.com | 12 Oct 2023
    I've had a bit of trouble getting function calling to work for cases that aren't just extracting some data from the input. The format was correct, but it was harder to get the correct data if it wasn't a simple extraction.

    Hopefully OpenAI and others will offer something like https://github.com/guidance-ai/guidance at some point to guarantee overall output structure.

    Failed validations will retry, but from what I've seen JSONSchema + generated JSON examples are decently reliable in practice for gpt-3.5-turbo and extremely reliable on gpt-4.
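
    The JSONSchema-plus-retry pattern described above can be sketched in a few lines of Python. This is illustrative rather than Elelem's implementation: it assumes the jsonschema package, the schema is made up for the example, and call_llm is a hypothetical stand-in for any chat-completion call.

        import json
        import jsonschema

        SCHEMA = {
            "type": "object",
            "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
            "required": ["name", "age"],
        }

        def extract(call_llm, text, retries=3):
            # Put the schema and an example in the prompt, then validate the reply.
            prompt = (
                "Extract the person as JSON matching this schema:\n"
                + json.dumps(SCHEMA)
                + '\nExample: {"name": "Ada", "age": 36}\n\nText: '
                + text
                + "\nJSON:"
            )
            for _ in range(retries):
                raw = call_llm(prompt)
                try:
                    obj = json.loads(raw)
                    jsonschema.validate(obj, SCHEMA)  # raises on schema violations
                    return obj
                except (json.JSONDecodeError, jsonschema.ValidationError):
                    continue  # failed validation, so retry as described above
            raise ValueError("model never produced schema-valid JSON")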

ad-llama

Posts with mentions or reviews of ad-llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-18.
  • Show HN: A murder mystery game built on an open-source gen-AI agent framework
    3 projects | news.ycombinator.com | 18 Sep 2023
  • Guidance: A guidance language for controlling large language models
    10 projects | news.ycombinator.com | 16 Sep 2023
    I took a stab at making something[1] like guidance - I'm not sure exactly how guidance does it (and I'm also really curious how it would work with chat APIs), but here's how my solution works.

    Each expression becomes a new inference request, so it's not a single inference pass. Because each subsequent pass includes the previously inferenced text, the LLM ends up doing a lot of prefill and less decode. You only decode as much as you actually inference; the repeated passes only end up costing more in prefill (which tends to be much faster in tok/s).

    To work with chat-tuned instruction models, you can basically still treat them as completion models. I provide the previously completed inference text as a partially completed assistant response, e.g. with Llama 2 it goes after [/INST]. You can add a bit of instruction for each inference expression, which gets added to the [INST]. This approach lets you start off the inference with `{ "someField": "` for example, to guarantee (at least the start of) a JSON response, and allows you to add a little bit of instruction or context just for that field.

    I didn't even try with the OpenAI APIs since, afaict, you can't provide a partial assistant response for them to continue from. Even if you were to request a single token at a time and use logit_bias for biased sampling, I don't see how you could get it to continue a partially completed inference.

    [1] https://github.com/gsuuon/ad-llama
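
    A rough Python sketch of the mechanism described above (illustrative only, not ad-llama's actual TypeScript source): everything generated so far is replayed after [/INST] as a partial assistant answer, so each expression only pays decode cost for its own tokens while the rest is prefill. The complete() callback, the helper names, and the field names are hypothetical.

        def llama2_prompt(instruction, partial_answer):
            # Llama 2 chat format: the text after [/INST] is treated as the
            # assistant's (possibly partial) reply, which the model continues.
            return f"<s>[INST] {instruction} [/INST] {partial_answer}"

        def fill_template(complete, instruction):
            # Force the start of a JSON object, then decode only each field's value;
            # the static text between fields is prefilled, never decoded.
            generated = '{ "someField": "'
            generated += complete(llama2_prompt(instruction, generated), stop='"')
            generated += '", "someOtherField": "'
            generated += complete(llama2_prompt(instruction, generated), stop='"')
            return generated + '" }'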

  • Simulating History with ChatGPT
    1 project | news.ycombinator.com | 12 Sep 2023
    Can you point me to some text-adventure engines? I'm hacking on an in-browser local LLM structured inference library[1] and am trying to put together a text game demo[2] for it. It didn't even occur to me that text-adventure game engines exist; I was apparently re-inventing the wheel.

    [1] https://github.com/gsuuon/ad-llama

    [2] https://ad-llama.vercel.app/murder/

  • Ask HN: Which programming language to learn in AI era?
    1 project | news.ycombinator.com | 30 Aug 2023
    Yup, I'm building a library that runs LLMs in the browser with tagged template literals: https://github.com/gsuuon/ad-llama

    I think it has fundamental DX benefits over Python for complex prompt chaining (or I wouldn't be building it!). Even so, if their focus is purely on AI, Python is still the better choice when starting from scratch. The Python AI ecosystem has many more libraries, Stack Overflow answers, tutorials, etc. available.

  • Show HN: LLMs can generate valid JSON 100% of the time
    25 projects | news.ycombinator.com | 14 Aug 2023
    Generating an FSM over the vocabulary is a really interesting approach to guided sampling! I'm hacking on a structured inference library (https://github.com/gsuuon/ad-llama) - I also tried to add a vocab preprocessing step to generate a valid tokens mask (just with regex or static strings initially) but discovered that doing so would cause unlikely / unnatural tokens to be masked rather than the token which represents the natural encoding given the existing sampled tokens.

    Given the stateful nature of tokenizers, I decided that trying to preprocess the individual token ids was a losing battle. Even in the simple case of whitespace, tokenizer merges can really screw up generating a static mask: e.g. we expect a space next, and a token decodes to 'foo' but is actually a '_foo' that would have decoded with a leading whitespace if it were following a valid pair. When I go to construct the static vocab mask, it then ends up matching against 'foo' instead of ' foo'.

    How did you work around this for the FSM approach? Does it somehow include information about merges / whitespace / tokenizer statefulness?
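
    For illustration, the whitespace issue described above can be reproduced with a small Python experiment. It assumes the transformers library and the public hf-internal-testing/llama-tokenizer repo (both assumptions); any SentencePiece-style tokenizer with the '▁' word-boundary marker shows the same effect.

        from transformers import AutoTokenizer

        tok = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

        # The natural encoding of "red" after existing text carries the
        # word-boundary marker: the piece is '▁red', not 'red'.
        print(tok.tokenize("the car is red"))  # e.g. ['▁the', '▁car', '▁is', '▁red']

        # Looking pieces up by their plain string misses that marker; the two
        # lookups give different ids (or the unk id if the bare piece is absent).
        for piece in ("▁red", "red"):
            print(piece, tok.convert_tokens_to_ids(piece))

        # A static mask built by matching decoded token strings against the
        # literal "red" would therefore miss '▁red', the token the model would
        # actually pick after "the car is", which is the mismatch described above.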

What are some alternatives?

When comparing guidance and ad-llama you can also consider the following projects:

lmql - A language for constraint-guided and efficient LLM programming.

llm - Access large language models from the command-line

semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps

grontown - A murder mystery featuring generative agents

langchain - 🦜🔗 Build context-aware reasoning applications

llm-mlc - LLM plugin for running models using MLC

NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

eastworld - Framework for Generative Agents in Games

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

hof - Framework that joins data models, schemas, code generation, and a task engine. Language and technology agnostic.

outlines - Structured Text Generation