Guidance Alternatives
Similar projects and alternatives to guidance
- Microsoft-Activation-Scripts: Discontinued. A collection of scripts for activating Microsoft products using HWID / KMS38 / Online KMS activation methods, with a focus on open-source code, fewer antivirus detections, and user-friendliness.
- text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
- tree-of-thought-llm: [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models.
- WizardLM: Discontinued. A family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder, and WizardMath.
- AGiXT: A dynamic AI agent automation platform that orchestrates instruction management and complex task execution across diverse AI providers, combining adaptive memory and a versatile plugin system.
- NeMo-Guardrails: An open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
- simpleaichat: A Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
guidance reviews and mentions
- Universal Personal Assistant with LLMs
  On the other hand, LLMs can be tricked by sophisticated prompts into revealing their training data or generating inappropriate text. This danger, especially harmful when access to an LLM is public, underscores the importance of carefully moderating both prompts and LLM answers. Guardrails and Guidance are libraries that tackle this challenge, and LLM invocation frameworks likewise add functions to manage prompts more effectively.
- Guidance: A guidance language for controlling large language models
  This IS Microsoft Guidance; they seem to have spun it off into a separate GitHub organization. https://github.com/microsoft/guidance now redirects to https://github.com/guidance-ai/guidance.
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
- Llama: Add Grammar-Based Sampling
  ...and it sets the value of "armor" to "leather" so that you can use that value later in your code if you wish. Guidance is pretty powerful, but I find the grammar hard to work with. I think the idea of being able to upload a bit of code or a context-free grammar to guide the model is super smart.
  https://github.com/microsoft/guidance/blob/d2c5e3cbb730e337b...
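The trick this commenter describes, constraining a field like "armor" so it can only take one of a few grammar-allowed values, can be sketched in a few lines of plain Python. This is an illustrative mock, not Guidance's actual API: the scores dict stands in for real model logits, and the function simply picks the best-scoring completion from the allowed set.

```python
# Hypothetical sketch of grammar-constrained selection: given model scores
# over candidate completions, pick the best-scoring value from the set the
# grammar allows for a slot (e.g. the "armor" field).

def constrained_select(scores, allowed):
    """Return the allowed value with the highest model score.

    scores  -- dict mapping candidate string -> score (higher is better)
    allowed -- values the grammar permits for this slot
    """
    return max(allowed, key=lambda v: scores.get(v, float("-inf")))

# Toy scores standing in for real model logits (made up for illustration).
scores = {"leather": 2.1, "plate": 1.4, "dragon": 3.0, "chain": 0.9}

# Even though "dragon" scores highest overall, the grammar only permits
# three values, so the output is guaranteed to be one of them.
armor = constrained_select(scores, ["leather", "plate", "chain"])
print(armor)  # -> leather
```

The guarantee comes from the argmax being taken over the allowed set rather than the whole vocabulary; the model can never emit a value the grammar forbids.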
- Introducing TypeChat from Microsoft
  Here's one thing I don't get. Why all the rigmarole of hoping you get a valid response, adding last-mile validators to detect invalid responses, and begging the model to please give you the syntax you're asking for, when you can guarantee valid JSON syntax by only sampling tokens that are valid? Instead of greedily picking the highest-scoring token every time, you select the highest-scoring token that conforms to the requested format.
  This is what Guidance, also from Microsoft, does already: https://github.com/microsoft/guidance
  But OpenAI apparently does not expose the full scores of all tokens, only the highest-scoring one, which is odd: if you run models locally, using Guidance is trivial, and you can guarantee your JSON is correct every time. It's faster to generate, too!
- Accessing Llama 2 from the command line with the LLM-replicate plugin
  Perhaps something as simple as stating it was first built around OpenAI models and later expanded to local models via plugins?
  I've been meaning to ask: have you seen or used MS Guidance[0] at all? I don't know if it's the right abstraction to interface as a plugin with what you've got in the llm CLI, but there's a lot about Guidance that seems incredibly useful for local inference (token healing and acceleration especially).
  [0] https://github.com/microsoft/guidance
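Token healing, which the comment singles out, addresses prompts that end mid-token: rather than locking the model into an awkward tokenization, the decoder backs up over the trailing fragment and restricts the next choice to vocabulary entries that begin with it. The sketch below is a deliberately simplified, word-level illustration with a toy vocabulary, not Guidance's actual implementation, which operates on real tokenizer boundaries.

```python
# Hypothetical, simplified sketch of token healing: back up over the
# prompt's trailing fragment and allow only tokens that extend it.

def heal_candidates(prompt, vocab):
    """Return (trimmed prompt, tokens allowed as the next generation)."""
    head, _, tail = prompt.rpartition(" ")
    trimmed = head + " " if head else ""
    # Only tokens that begin with the trailing fragment may come next,
    # so the fragment is "healed" into a naturally tokenized continuation.
    return trimmed, [t for t in vocab if t.startswith(tail)]

vocab = ["http", "https://", "hty", "banana"]
trimmed, cands = heal_candidates("The link is ht", vocab)
print(cands)  # -> ['http', 'https://', 'hty']
```

Without healing, the stranded "ht" would force the model to continue from an unnatural token boundary; with it, high-probability continuations like "https://" stay reachable.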
- AutoChain, a lightweight and testable alternative to LangChain
  LangChain is just too much. Personal solutions are great until you need to compare metrics or methodologies of prompt generation; then the onus is on the n parties sharing their resources to ensure they all used the same templates, generated the same way, with the only difference being the models the prompts were run on.
  So maybe a simpler library like Microsoft's Guidance (https://github.com/microsoft/guidance)? It does this really well.
- Structured Output from LLMs (Without Reprompting!)
  I am unclear on the status of the project, but here is the conversation that seems to be tracking it: https://github.com/microsoft/guidance/discussions/201
- /r/guidance is now a subreddit for Guidance, Microsoft's template language for controlling language models!
  Let's have a subreddit about Guidance!
- Is there a UI that can limit LLM tokens to a preset list?
Stats
microsoft/guidance is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of guidance is Jupyter Notebook.