| | guidance | TypeChat |
|---|---|---|
| Mentions | 89 | 12 |
| Stars | 12,248 | 7,875 |
| Growth | - | 2.6% |
| Activity | 9.5 | 9.1 |
| Latest commit | 9 months ago | 4 days ago |
| Language | Jupyter Notebook | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
guidance
- Guidance: A guidance language for controlling large language models
This IS Microsoft Guidance; they seem to have spun it off into a separate GitHub organization.
https://github.com/microsoft/guidance redirects to https://github.com/guidance-ai/guidance now.
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
- Llama: Add Grammar-Based Sampling
... and it sets the value of "armor" to "leather" so that you can use that value later in your code if you wish to. Guidance is pretty powerful, but I find the grammar hard to work with. I think the idea of being able to upload a bit of code or a context-free grammar to guide the model is super smart.
https://github.com/microsoft/guidance/blob/d2c5e3cbb730e337b...
- Introducing TypeChat from Microsoft
Here's one thing I don't get.
Why all the rigamarole of hoping you get a valid response, adding last-mile validators to detect invalid responses, trying to beg the model to pretty please give me the syntax I'm asking for...
...when you can guarantee a valid JSON syntax by only sampling tokens that are valid? Instead of greedily picking the highest-scoring token every time, you select the highest-scoring token that conforms to the requested format.
This is what Guidance does already, also from Microsoft: https://github.com/microsoft/guidance
But OpenAI apparently does not expose the full scores of all tokens, it only exposes the highest-scoring token. Which is so odd, because if you run models locally, using Guidance is trivial, and you can guarantee your json is correct every time. It's faster to generate, too!
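The selection rule described above can be sketched in a few lines. This is a toy illustration of constrained decoding, not Guidance's actual implementation: the `(token, score)` pairs stand in for model logits, and the `allowed` predicate stands in for a real format checker.

```python
def constrained_pick(candidates, prefix, allowed):
    """Pick the highest-scoring token that keeps the output format-legal.

    candidates: list of (token, score) pairs, a stand-in for model logits.
    allowed:    predicate saying whether a partial output can still be
                extended into the requested format.
    """
    legal = [(tok, score) for tok, score in candidates if allowed(prefix + tok)]
    if not legal:
        raise ValueError("no token keeps the output valid")
    return max(legal, key=lambda pair: pair[1])[0]

# Toy example: we are midway through '{"armor": ' and the format demands
# a quoted JSON string next.
prefix = '{"armor": '
candidates = [("Sure,", 0.9), ('"leather"', 0.7), ("leather", 0.8)]
allowed = lambda text: text.startswith(prefix + '"')  # value must be quoted

print(constrained_pick(candidates, prefix, allowed))  # prints "leather" (with quotes)
```

Greedy decoding would pick `"Sure,"` (score 0.9) and ruin the JSON; the constrained pick takes the best token among the legal ones instead.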
- Accessing Llama 2 from the command-line with the LLM-replicate plugin
Perhaps something as simple as stating it was first built around OpenAI models and later expanded to local via plugins?
I've been meaning to ask you, have you seen/used MS Guidance[0] 'language' at all? I don't know if it's the right abstraction to interface as a plugin with what you've got in llm cli but there's a lot about Guidance that seems incredibly useful to local inference [token healing and acceleration especially].
[0]https://github.com/microsoft/guidance
- AutoChain, lightweight and testable alternative to LangChain
LangChain is just too much. Personal solutions are great until you need to compare metrics or methodologies of prompt generation. Then the onus is on the n parties sharing their resources to ensure that they all used the same templates, generated the same way, with the only diff being the models the prompts were run on.
So maybe a simpler library like Microsoft's Guidance (https://github.com/microsoft/guidance)? It does this really well.
- Structured Output from LLMs (Without Reprompting!)
I am unclear on the status of the project, but here is the conversation that seems to be tracking it: https://github.com/microsoft/guidance/discussions/201
- /r/guidance is now a subreddit for Guidance, Microsoft's template language for controlling language models!
Let's have a subreddit about Guidance!
- Is there a UI that can limit LLM tokens to a preset list?
- Any suggestions for an open source model for parsing real estate listings?
You should look at guidance for an LLM to fill out a template. Define the output data structure and provide the real estate listing in the context (see the JSON template example here https://github.com/microsoft/guidance)
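The template idea can be sketched without the library: the JSON skeleton is fixed text, and the model only fills the holes. This is an illustration of the approach, not Guidance's actual API; the field names and the `{{gen …}}` hole syntax here are made up for the example, and a stub dictionary stands in for the per-field LLM call.

```python
import json
import re

# Fixed JSON skeleton for a listing; only the {{gen name}} holes are generated.
TEMPLATE = """{
  "address": "{{gen address}}",
  "price": "{{gen price}}",
  "bedrooms": "{{gen bedrooms}}"
}"""

def fill(template, generate):
    """Replace each {{gen name}} hole with the model's output for that field."""
    return re.sub(r"\{\{gen (\w+)\}\}", lambda m: generate(m.group(1)), template)

# Stub standing in for a constrained LLM call per field.
canned = {"address": "12 Oak St", "price": "450000", "bedrooms": "3"}
listing = json.loads(fill(TEMPLATE, canned.get))

print(listing["address"])  # the fixed skeleton guarantees the output parses as JSON
```

Because the braces, quotes, and keys come from the template rather than the model, the result is structurally valid by construction.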
TypeChat
- Fuck You, Show Me the Prompt
Not sure it's related to function calling. GPT4 can do function calling without using the specific function-calling API just by injecting the schema you want into the prompt with directions and asking it to return JSON. It works like >99% of the time. Same with 3.5-turbo.
The problem is these libraries convert pydantic models into json schemas and inject them into the prompt, which uses up like 80% more tokens than just describing the schema using typescript type syntax for example. See https://microsoft.github.io/TypeChat/, where they prompt using typescript type descriptions to get json data from LLMs. It's similar to what we built but with more boilerplate.
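To illustrate the size difference, here is one made-up shape expressed both ways; the TypeScript type is what a TypeChat-style prompt would embed, and it is much shorter than the equivalent JSON Schema, so it costs fewer prompt tokens.

```python
# The same record shape, described two ways (both are just prompt text).
ts_type = """interface Listing {
  address: string;
  price: number;
  bedrooms: number;
}"""

json_schema = """{
  "type": "object",
  "properties": {
    "address": {"type": "string"},
    "price": {"type": "number"},
    "bedrooms": {"type": "number"}
  },
  "required": ["address", "price", "bedrooms"]
}"""

prompt = f"Answer with JSON matching this TypeScript type:\n{ts_type}"
print(len(ts_type), len(json_schema))  # the type description is far shorter
```

The gap grows with nesting, since JSON Schema repeats `"type"`, `"properties"`, and `"required"` scaffolding at every level.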
- Semantic Kernel
Semantic Memory (renamed to Kernel Memory - https://github.com/microsoft/kernel-memory) complements SK. Guidance's features are being absorbed into SK, following the departure of that team from Microsoft. Additionally, we have TypeChat (https://github.com/microsoft/TypeChat), which aims to ensure type-safe responses from LLMs. Most features of Autogen are also being integrated into SK, along with Assistants. SK serves as the orchestration engine powering Microsoft Copilots.
- Good LLM Validation Is Just Good Validation
- Show HN: Symphony – Make functions invokable by GPT-4
I tried TypeChat for my use case and ended up defining functions as typescript data types. This approach sounds much better, and leverages the newer OpenAI function calling, which should be more reliable I would think. Thanks for creating+sharing.
https://microsoft.github.io/TypeChat/
- Show HN: LLMs can generate valid JSON 100% of the time
That re-prompting on error is what this new Microsoft library does, too: https://github.com/microsoft/TypeChat
Here's their prompt for that: https://github.com/microsoft/TypeChat/blob/c45460f4030938da3...
I think the approach using grammars (seen here, but also in things like https://github.com/ggerganov/llama.cpp/pull/1773 ) is a much more elegant solution.
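The reprompt-on-error loop being contrasted here can be sketched as follows. This is a rough imitation of the strategy, not TypeChat's actual code; `ask` stands in for a model call.

```python
import json

def get_validated(ask, max_retries=2):
    """Ask for JSON; on a parse failure, feed the error back and retry."""
    prompt = "Return the data as JSON."
    for _ in range(max_retries + 1):
        reply = ask(prompt)
        try:
            return json.loads(reply)
        except ValueError as err:
            # The failure mode: burn another round trip explaining the error.
            prompt = f"That was not valid JSON ({err}). Try again, JSON only."
    raise RuntimeError("model never produced valid JSON")

# Stub model that fails once, then complies.
replies = iter(["Sure! Here you go: {...}", '{"ok": true}'])
result = get_validated(lambda prompt: next(replies))
print(result)  # {'ok': True}
```

Each retry costs a full extra model call, which is exactly what grammar-constrained sampling avoids by making invalid output impossible in the first place.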
- TypeChat replaces prompt engineering with schema engineering
- Introducing TypeChat from Microsoft
I'm very surprised that they're not using `guidance` [0] here.
It would not only let them guarantee that required fields are completed (avoiding the need for validation [1]) but would probably also save them GPU time in the end.
There must be a reason and I'm dying to know what it is! :)
[0] https://github.com/microsoft/guidance
[1] https://github.com/microsoft/TypeChat/blob/main/src/typechat...
What are some alternatives?
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
guidance - A guidance language for controlling large language models.
lmql - A language for constraint-guided and efficient LLM programming.
outlines - Structured Text Generation
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
jsonformer - A Bulletproof Way to Generate Structured JSON from Language Models
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
ts-patch - Augment the TypeScript compiler to support extended functionality
llama-cpp-python - Python bindings for llama.cpp
ai-agents-laravel - Build AI Agents for popular LLMs quick and easy in Laravel
langchainrb - Build LLM-powered applications in Ruby
shelby_as_a_service - Production-ready LLM Agents. Just add API keys