guidance VS dspy

Compare guidance vs dspy and see what their differences are.

guidance

A guidance language for controlling large language models. (by guidance-ai)

dspy

DSPy: The framework for programming—not prompting—foundation models (by stanfordnlp)
                  guidance            dspy
Mentions          23                  22
Stars             17,357              10,820
Stars growth      2.7%                17.5%
Activity          9.8                 9.9
Latest commit     6 days ago          3 days ago
Language          Jupyter Notebook    Python
License           MIT License         MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

guidance

Posts with mentions or reviews of guidance. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
    5 projects | news.ycombinator.com | 8 Apr 2024
    [1]: https://github.com/guidance-ai/guidance/tree/main
  • Show HN: Prompts as (WASM) Programs
    9 projects | news.ycombinator.com | 11 Mar 2024
    > The most obvious usage of this is forcing a model to output valid JSON

    Isn't this something that Outlines [0], Guidance [1] and others [2] already solve much more elegantly?

    0. https://github.com/outlines-dev/outlines

    1. https://github.com/guidance-ai/guidance

    2. https://github.com/sgl-project/sglang
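
    To make the comparison concrete, here is a minimal sketch of the kind of constrained JSON generation guidance enables. The model name and field names are placeholders, and the exact guidance API may differ slightly between versions.

```python
from guidance import models, gen

# Placeholder backend; guidance also supports llama.cpp, OpenAI-style models, etc.
lm = models.Transformers("gpt2")

# The literal text is emitted verbatim; only the gen() slots are filled by the model,
# so the overall output is valid JSON by construction.
lm += 'Return a JSON object describing a person.\n{\n  "name": "'
lm += gen("name", stop='"')
lm += '",\n  "age": '
lm += gen("age", regex=r"[0-9]+")
lm += "\n}"

print(lm["name"], lm["age"])
```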

  • Show HN: Fructose, LLM calls as strongly typed functions
    10 projects | news.ycombinator.com | 6 Mar 2024
  • LiteLlama-460M-1T has 460M parameters trained with 1T tokens
    1 project | news.ycombinator.com | 7 Jan 2024
    Or combine it with something like llama.cpp's grammar support or Microsoft's guidance-ai[0] (which I prefer), which would allow adding some ReAct-style prompting and external tools. As others have mentioned, instruct tuning would help too.

    [0] https://github.com/guidance-ai/guidance
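
    As a rough illustration of that idea (not taken from either project's docs): a small model can be forced to choose from a fixed set of tools at each step. The model path and tool names below are hypothetical, and the guidance API may vary by version.

```python
from guidance import models, gen, select

# Hypothetical local model path; any guidance-supported backend would do.
lm = models.LlamaCpp("models/litellama-460m.Q8_0.gguf")

lm += "Question: What is the population of France divided by two?\nThought: "
lm += gen("thought", stop="\n")
lm += "\nAction: "
# The model cannot invent a tool name; it must pick one of these strings.
lm += select(["search", "calculator", "finish"], name="tool")
lm += "\nAction input: "
lm += gen("tool_input", stop="\n")

print(lm["tool"], lm["tool_input"])
```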

  • Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
    2 projects | /r/LocalLLaMA | 10 Dec 2023
  • Prompting LLMs to constrain output
    2 projects | /r/LocalLLaMA | 8 Dec 2023
    I have been experimenting with guidance and LMQL. It is a bit too early to give any well-formed opinions, but I really do like the idea of constraining LLM output.
  • Guidance is back 🥳
    1 project | /r/LocalLLaMA | 16 Nov 2023
  • New: LangChain templates – fastest way to build a production-ready LLM app
    6 projects | news.ycombinator.com | 1 Nov 2023
  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    Thanks for your comment.

    I did not know about "Betteridge's law of headlines", quite interesting. Thanks for sharing :)

    You raise some interesting points.

    1) Safety: It is true that LVMs and LLMs have unknown biases and could potentially create unsafe content. However, this is not unique to them; for example, Google had the same problem with their supervised learning model https://www.theverge.com/2018/1/12/16882408/google-racist-go.... It all depends on the original data. I believe we need systems on top of our models to ensure safety. It is also possible to restrict the output domain of our models (https://github.com/guidance-ai/guidance): instead of allowing our LVMs to output any words, we could restrict them to answering only "red, green, blue..." when giving the color of a car.
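
A minimal sketch of that kind of output-domain restriction with guidance might look like the following; the model and color labels are placeholders, and the exact API may differ by version.

```python
from guidance import models, select

lm = models.Transformers("gpt2")  # placeholder backend

lm += "The color of the car in the image is: "
# Output is restricted to this fixed vocabulary; the model cannot emit anything else.
lm += select(["red", "green", "blue", "black", "white"], name="color")
print(lm["color"])
```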

    2) Cost: You are right, right now LVMs are quite expensive to run. As you said, they are a great way to go to market faster, but they cannot run on low-cost hardware for the moment. However, they could help with training those smaller models. Indeed, we see in the NLP domain that a lot of smaller models are trained on data created with GPT models. You can still distill the knowledge of your LVMs into a custom smaller model that can run on embedded devices. The advantage is that you can use your LVMs to generate data when it is scarce and use them as a fallback when your smaller device is uncertain of the answer.

    3) Labeling data: I don't think labeling data is necessarily cheap. First, you have to collect the data, which, depending on the frequency of your events, could take months of monitoring if you want to build a large-scale dataset. Second, not all labeling is cheap: I worked at a semiconductor company where labeled data was scarce, as it required expert knowledge and could only be done by experienced employees. Indeed, not all labeling can be done externally.

    However, both approaches are indeed complementary and I think systems that will work the best will rely on both.

    Thanks again for the thought-provoking discussion. I hope this answers some of the concerns you raised.

  • Show HN: Elelem – TypeScript LLMs with tracing, retries, and type safety
    2 projects | news.ycombinator.com | 12 Oct 2023
    I've had a bit of trouble getting function calling to work for cases that aren't just extracting some data from the input. The format is correct, but it was harder to get the correct data when it wasn't a simple extraction.

    Hopefully OpenAI and others will offer something like https://github.com/guidance-ai/guidance at some point to guarantee overall output structure.

    Failed validations will retry, but from what I've seen JSONSchema + generated JSON examples are decently reliable in practice for gpt-3.5-turbo and extremely reliable on gpt-4.
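
Until such guarantees exist server-side, a common pattern is to validate against a schema client-side and retry on failure. Here is a minimal sketch with Pydantic; call_llm is a hypothetical stand-in for whatever client actually sends the prompt to the model.

```python
from pydantic import BaseModel, ValidationError

class Invoice(BaseModel):
    vendor: str
    total_usd: float

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real chat-completion call (OpenAI, local model, etc.).
    return '{"vendor": "Acme Corp", "total_usd": 1234.56}'

def extract_invoice(text: str, max_retries: int = 3) -> Invoice:
    prompt = (
        "Return ONLY JSON matching this schema:\n"
        f"{Invoice.model_json_schema()}\n\nInvoice text:\n{text}"
    )
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            return Invoice.model_validate_json(raw)
        except ValidationError as err:
            # Feed the validation error back so the next attempt can self-correct.
            prompt += f"\n\nYour previous output failed validation:\n{err}"
    raise RuntimeError("no schema-valid JSON after retries")
```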

dspy

Posts with mentions or reviews of dspy. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-02.
  • Computer Vision Meetup: Develop a Legal Search Application from Scratch using Milvus and DSPy!
    2 projects | dev.to | 2 May 2024
    Legal practitioners often need to find specific cases and clauses across thousands of dense documents. While traditional keyword-based search techniques are useful, they fail to fully capture the semantic content of queries and case files. Vector search engines and large language models provide an intriguing alternative. In this talk, I will show you how to build a legal search application using the DSPy framework and the Milvus vector search engine.
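
A sketch of the kind of DSPy program described here might look like the following. The names are illustrative rather than taken from the talk, and the retrieval backend (e.g. a Milvus-backed retriever) is assumed to be configured separately via dspy.settings.

```python
import dspy

class LegalSearch(dspy.Module):
    """Retrieve candidate passages, then answer grounded in them."""

    def __init__(self, k: int = 5):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=k)  # uses whatever retrieval model is configured
        self.answer = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question: str):
        passages = self.retrieve(question).passages
        return self.answer(context=passages, question=question)

# Assumes an LM and a retrieval model were configured beforehand, e.g.:
# dspy.settings.configure(lm=..., rm=...)
program = LegalSearch()
```
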
  • Pydantic Logfire
    7 projects | news.ycombinator.com | 30 Apr 2024
    I’ve observed that Pydantic - which we’ve used for years in our API stack - has become very popular in LLM applications, for its type-adjacent features. It serves as a foundational technology for prompting libraries like [DSPy](https://github.com/stanfordnlp/dspy) which are abstracting “up the stack” of LLM apps. (some opinions there)

    Operating AI apps reveals a big challenge, in that debugging probabilistic code paths requires more than the usual introspective abilities, and in an environment where function calls can have very real monetary impact we have to be able to see what’s happening in the runtime. See LangChain’s hosted solution (can’t recall the name) that allows an operator to see prompts and responses “on the wire”. (It just occurred to me that Langchain and Pydantic have a lot in common here, in approach.)

    Having a coupling between Pydantic - which is *just about* the data layer itself - and an observability tool seems very interesting to me, and having this come from the folks who built it does not seem unreasonable. WRT open source and monetization, I would be lying if I said I wasn’t a little worried - given the recent few months - but I am choosing to see this in a positive light, given this team’s “believability weight” (to overuse Dalio) and history of delivering solid and really useful tooling.

  • Ask HN: Most efficient way to fine-tune an LLM in 2024?
    6 projects | news.ycombinator.com | 4 Apr 2024
  • Princeton group open sources "SWE-agent", with 12.3% fix rate for GitHub issues
    3 projects | news.ycombinator.com | 2 Apr 2024
    DSPy is the best tool for optimizing prompts [0]: https://github.com/stanfordnlp/dspy

    Think of it as a meta-prompt optimizer: it uses an LLM to optimize your prompts, which in turn get more out of your LLM.
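
To illustrate the workflow (a sketch, not the SWE-agent setup): you write a module, define a metric, and let one of DSPy's optimizers compile better prompts and demonstrations from a small training set. The model name and training data are placeholders, and the optimizer API has shifted between DSPy releases.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Placeholder LM configuration; any backend DSPy supports would work.
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

qa = dspy.ChainOfThought("question -> answer")

def answer_match(example, pred, trace=None):
    # Crude metric: the gold answer must appear in the prediction.
    return example.answer.lower() in pred.answer.lower()

trainset = [
    dspy.Example(question="Which repo hosts DSPy?", answer="stanfordnlp/dspy").with_inputs("question"),
]

optimizer = BootstrapFewShot(metric=answer_match)
compiled_qa = optimizer.compile(qa, trainset=trainset)
print(compiled_qa(question="Which repo hosts guidance?").answer)
```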

  • Winner of the SF Mistral AI Hackathon: Automated Test Driven Prompting
    2 projects | news.ycombinator.com | 27 Mar 2024
    Isn’t this just a very naive implementation of what DSPy does?

    https://github.com/stanfordnlp/dspy

    I don’t understand what is exceptional here.

  • Show HN: Fructose, LLM calls as strongly typed functions
    10 projects | news.ycombinator.com | 6 Mar 2024
    Have you done any comparison with DSPy ? (https://github.com/stanfordnlp/dspy)

    Feels very similar to DSPy, except you don't have optimizations yet. But I like your API and the programming model you are enforcing through this.

  • AI Prompt Engineering Is Dead
    1 project | news.ycombinator.com | 6 Mar 2024
    I'm interested in hearing if anyone has used DSPy (https://github.com/stanfordnlp/dspy) just for prompt optimization for GPT-3.5 or GPT-4. Was it worth the effort and much better than manual prompt iteration? Was the optimized prompt some weird incantation? Any other insights?
  • Ask HN: Are you using a GPT to prompt-engineer another GPT?
    2 projects | news.ycombinator.com | 29 Jan 2024
    You should check out x.com/lateinteraction's DSPy — which is like an optimizer for prompts — https://github.com/stanfordnlp/dspy
  • SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
    7 projects | /r/LocalLLaMA | 9 Dec 2023
  • FLaNK Stack Weekly for 12 September 2023
    26 projects | dev.to | 12 Sep 2023

What are some alternatives?

When comparing guidance and dspy, you can also consider the following projects:

lmql - A language for constraint-guided and efficient LLM programming.

semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps

open-interpreter - A natural language interface for computers

langchain - 🦜🔗 Build context-aware reasoning applications

playground - Play with neural networks!

NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

FastMJPG - FastMJPG is a command line tool for capturing, sending, receiving, rendering, piping, and recording MJPG video with extremely low latency. It is optimized for running on constrained hardware and battery powered devices.

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

MLflow - Open source platform for the machine learning lifecycle

outlines - Structured Text Generation

prompt-engine-py - A utility library for creating and maintaining prompts for Large Language Models