openai-cookbook vs guidance

| | openai-cookbook | guidance |
|---|---|---|
| Mentions | 215 | 89 |
| Stars | 55,954 | 12,248 |
| Growth | 1.0% | - |
| Activity | 9.5 | 9.5 |
| Latest commit | 6 days ago | 9 months ago |
| Language | MDX | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
openai-cookbook
-
Question-Answer System Architectures using LLMs
A pretrained LLM is a closed-book system: it can only access information it was trained on. With domain fine-tuning, the system can surface additional material. An early prototype of this technique was shown in the OpenAI cookbook: text for the target domain was embedded using an API, and at query time the most semantically similar embeddings were retrieved and handed to the LLM to formulate an answer. Although this approach evolved into retrieval-augmented generation, it's still a viable technique for adapting a Gen2 (2020) or Gen3 (2022) LLM into a question-answering system.
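A minimal sketch of that embed-retrieve-answer loop, using the current openai Python client; the model names, sample documents, and prompt wording below are illustrative assumptions, not the cookbook's exact code:

```python
# Embed a small corpus, retrieve by cosine similarity, answer from context.
# Sketch in the spirit of the cookbook's Question_answering_using_embeddings
# notebook; model names and documents are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

docs = [
    "Widget X ships with a two-year warranty.",
    "Our refund policy allows returns within 30 days.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question, top_k=1):
    q = embed([question])[0]
    # OpenAI embeddings are unit-normalized, so a dot product is cosine similarity
    best = np.argsort(doc_vecs @ q)[::-1][:top_k]
    context = "\n".join(docs[i] for i in best)
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
    )
    return chat.choices[0].message.content
```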
-
Ask HN: High quality Python scripts or small libraries to learn from
https://github.com/openai/openai-cookbook/blob/main/examples...
- Collection of notebooks showcasing some fun and effective ways of using Claude
- OpenAI Cookbook: Techniques to improve reliability
- OpenAI Cookbooks
-
How to fine tune vit/convnet to focus on the layout of the input room image and ignore other things ?
It sounds like you are trying to tweak embeddings for similarity search. Rather than fine-tuning the model's layers, you may want to try training a linear transformation on top of the existing model's output embeddings. OpenAI has a cookbook on how to do that. You will need some data, but I think you can try it with ~20 pieces of synthetically generated data.
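A rough sketch of that idea (not the cookbook's exact code): learn a linear map W over frozen embeddings so cosine similarity better matches your labels. The embedding dimension and the random tensors below are stand-ins for ~20 real embedded pairs:

```python
# Train a linear transformation over frozen embeddings for similarity search.
import torch
import torch.nn.functional as F

emb_dim = 1536                                        # e.g. an OpenAI embedding size (assumption)
emb_a = torch.randn(20, emb_dim)                      # embeddings of item A in each pair (stand-ins)
emb_b = torch.randn(20, emb_dim)                      # embeddings of item B in each pair (stand-ins)
labels = torch.randint(0, 2, (20,)).float() * 2 - 1   # 1 = similar, -1 = dissimilar

W = torch.eye(emb_dim, requires_grad=True)            # start from the identity map
opt = torch.optim.Adam([W], lr=1e-3)

for step in range(200):
    opt.zero_grad()
    sim = F.cosine_similarity(emb_a @ W, emb_b @ W)   # similarity after the learned map
    loss = ((sim - labels) ** 2).mean()               # push similarity toward the label
    loss.backward()
    opt.step()

# At search time, transform both query and corpus embeddings with W.
```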
-
Best base model 1B or 7B for full finetuning
tutorial from OpenAI https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb
-
Resources to learn ChatGPT and the OpenAI API
OpenAI Cookbook
- OpenAI Cookbook
-
Another Major Outage Across ChatGPT and API
OpenAI community repo with lots of examples: https://github.com/openai/openai-cookbook
guidance
-
Guidance: A guidance language for controlling large language models
This IS Microsoft Guidance; they seem to have spun it off into a separate GitHub organization.
https://github.com/microsoft/guidance redirects to https://github.com/guidance-ai/guidance now.
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
-
Llama: Add Grammar-Based Sampling
... and it sets the value of "armor" to "leather" so that you can use that value later in your code if you wish to. Guidance is pretty powerful, but I find the grammar hard to work with. I think the idea of being able to upload a bit of code or a context-free grammar to guide the model is super smart.
https://github.com/microsoft/guidance/blob/d2c5e3cbb730e337b...
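A small sketch of what capturing a constrained value looks like in the current guidance-ai API; the model path, prompt, and option list here are illustrative assumptions:

```python
# Constrain a JSON field to fixed options and reuse the captured value later.
from guidance import models, select

lm = models.LlamaCpp("path/to/model.gguf")  # placeholder model path
lm += 'The generated character, as JSON: {"armor": "'
lm += select(["leather", "chainmail", "plate"], name="armor")
lm += '"}'

print(lm["armor"])  # the chosen value is captured, e.g. "leather"
```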
-
Introducing TypeChat from Microsoft
Here's one thing I don't get.
Why all the rigamarole of hoping you get a valid response, adding last-mile validators to detect invalid responses, trying to beg the model to pretty please give me the syntax I'm asking for...
...when you can guarantee a valid JSON syntax by only sampling tokens that are valid? Instead of greedily picking the highest-scoring token every time, you select the highest-scoring token that conforms to the requested format.
This is what Guidance does already, also from Microsoft: https://github.com/microsoft/guidance
But OpenAI apparently does not expose the full scores of all tokens; it only exposes the highest-scoring token. Which is so odd, because if you run models locally, using Guidance is trivial, and you can guarantee your JSON is correct every time. It's faster to generate, too!
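A toy illustration of that constrained-decoding idea: mask out every token the grammar forbids and take the argmax over what remains. The machinery that computes which token ids keep the JSON valid at the current state is assumed here, not shown:

```python
# Greedy decoding restricted to grammar-valid tokens.
import numpy as np

def constrained_next_token(logits: np.ndarray, valid_ids: set) -> int:
    """Pick the highest-scoring token among only those that keep the
    output syntactically valid (valid_ids is assumed to come from a
    JSON/grammar state machine)."""
    masked = np.full_like(logits, -np.inf)   # forbid everything by default
    ids = np.fromiter(valid_ids, dtype=int)
    masked[ids] = logits[ids]                # re-allow the valid tokens
    return int(np.argmax(masked))
```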
-
Accessing Llama 2 from the command-line with the LLM-replicate plugin
Perhaps something as simple as stating it was first built around OpenAI models and later expanded to local via plugins?
I've been meaning to ask you, have you seen/used MS Guidance[0] 'language' at all? I don't know if it's the right abstraction to interface as a plugin with what you've got in llm cli but there's a lot about Guidance that seems incredibly useful to local inference [token healing and acceleration especially].
[0] https://github.com/microsoft/guidance
-
AutoChain, lightweight and testable alternative to LangChain
LangChain is just too much. Personal solutions are great until you need to compare metrics or methodologies of prompt generation; then the onus is on the n parties sharing their resources to ensure that all of them used the same templates, generated the same way, with the only diff being the models the prompts were run on.
So maybe a simpler library like Microsoft's Guidance (https://github.com/microsoft/guidance)? It does this really well.
-
Structured Output from LLMs (Without Reprompting!)
I am unclear on the status of the project, but here is the conversation that seems to be tracking it: https://github.com/microsoft/guidance/discussions/201
-
/r/guidance is now a subreddit for Guidance, Microsoft's template language for controlling language models!
Let's have a subreddit about Guidance!
- Is there a UI that can limit LLM tokens to a preset list?
-
Any suggestions for an open source model for parsing real estate listings?
You should look at Guidance for having an LLM fill out a template. Define the output data structure and provide the real estate listing in the context (see the JSON template example here: https://github.com/microsoft/guidance).
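A hedged sketch of that template-filling approach with the current guidance-ai API; the model path, sample listing, field names, and regex constraints are assumptions, not taken from the linked README:

```python
# Fill a fixed JSON template from a real estate listing with guidance.
from guidance import models, gen

lm = models.LlamaCpp("path/to/model.gguf")  # placeholder model path
listing = "Charming 3BR/2BA bungalow, 1,450 sqft, built 1952. $485,000."

lm += f"""\
Listing: {listing}
Extract the listing as JSON:
{{
    "bedrooms": {gen('bedrooms', regex='[0-9]+', stop=',')},
    "bathrooms": {gen('bathrooms', regex='[0-9]+', stop=',')},
    "price_usd": {gen('price', regex='[0-9]+', stop='\n')}
}}"""

print(lm["bedrooms"], lm["bathrooms"], lm["price"])
```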
What are some alternatives?
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
lmql - A language for constraint-guided and efficient LLM programming.
chatgpt-retrieval-plugin - The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
askai - Command Line Interface for OpenAI ChatGPT
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data. [Moved to: https://github.com/jerryjliu/llama_index]
llama-cpp-python - Python bindings for llama.cpp
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
langchainrb - Build LLM-powered applications in Ruby