| | tree-of-thought-llm | guidance |
|---|---|---|
| Mentions | 41 | 89 |
| Stars | 4,228 | 12,248 |
| Growth | 4.3% | - |
| Activity | 7.2 | 9.5 |
| Last commit | 3 months ago | 9 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tree-of-thought-llm
-
AI Chat Applications with the Metacognition Approach: Tree of Thoughts (ToT)
[2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arxiv.org)
-
Last night /u/alesneolith posted a very serious writeup claiming to have worked on one of the projects. The writeup is more elaborate than expected and got surprisingly little attention. His account has since been deleted.
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, “Tree of Thoughts” (ToT), which generalizes over the popular “Chain of Thought” approach to prompting language models, and enables exploration over coherent units of text (“thoughts”) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models’ problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.
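The search loop the abstract describes is small enough to sketch. Below is a rough breadth-first version with toy stand-ins for the "propose" and "evaluate" prompts; it is an illustration of the idea, not the repo's actual API.

```python
def generate_thoughts(state: str, k: int) -> list[str]:
    # Toy stand-in: a real implementation would prompt the LLM for k
    # candidate next steps given the partial solution in `state`.
    return [f"{state} -> step {i}" for i in range(k)]

def score_thought(state: str) -> float:
    # Toy stand-in: a real implementation would ask the LLM to rate how
    # promising a partial solution is (e.g. "sure / maybe / impossible").
    return -float(len(state))

def tot_bfs(problem: str, depth: int = 3, breadth: int = 5, keep: int = 3) -> str:
    # At each level, expand every kept state into `breadth` new thoughts,
    # self-evaluate the candidates, and keep only the `keep` best (a beam).
    frontier = [problem]
    for _ in range(depth):
        candidates = [c for s in frontier for c in generate_thoughts(s, breadth)]
        frontier = sorted(candidates, key=score_thought, reverse=True)[:keep]
    return max(frontier, key=score_thought)

print(tot_bfs("Use 4, 9, 10, 13 to make 24"))
```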
-
Ultra Fast Bert
GPU utilization should go down when using this technique. I'm hoping this could allow for more efficient batch inference on GPUs. If you can predict 10 tokens for the price of 1, it should allow you to do tree of thought much more efficiently.
https://github.com/princeton-nlp/tree-of-thought-llm
-
Is it best to not pay attention to AI news and/or find ways to delude ourselves into believing better outcomes?
For those familiar with Daniel Kahneman's Thinking, Fast and Slow, the current LLMs (such as GPT-4 via ChatGPT) seem to resemble System 1 thinking (near-instantaneous, automatic, intuitive processes like next-word prediction). However, they lack System 2 thinking (slow, effortful, logical planning and reasoning). What I learned today is that Google's Gemini (an LLM in training now) not only has more modalities (I think all YouTube video and audio?), more compute, and almost twice the training data, but they're building in AlphaGo-type learning, which resembles tree of thoughts and looks a LOT like the missing puzzle piece of System 2 thinking. Will it be AGI? Maybe, and it's coming this winter.
-
Langchain Is Pointless
Tree of thoughts: https://arxiv.org/abs/2305.10601
Good video on "Tree of thoughts" which also reviews / puts it in the context of other methods: https://www.youtube.com/watch?v=ut5kp56wW_4
Completion vs conversational interface is something you can read about in the OpenAI API documentation.
For the remaining things I don't have single specific pointer at hand.
-
To all skeptics with a background in AI/CS: what is your realistic timeline for AGI/ASI?
What do you think about the combination of Tree of Thoughts: Deliberate Problem Solving with Large Language Models; LongNet: Scaling Transformers to 1,000,000,000 Tokens; Textbooks Are All You Need; Attention Is All You Need; and Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation?
-
Why do language models appear to work left-to-right?
You are right. Tree of Thoughts: Deliberate Problem Solving with Large Language Models proposes to solve this via MCTS-style generation (similar to how AlphaGo worked, and how a lot of planning & control problems are solved).
-
Munk Debate on Artificial Intelligence
The transformer was developed in 2017 and it powers all modern LLMs. If you're familiar with Daniel Kahneman's work from Thinking, Fast and Slow, you could easily summarize LLMs as excellent System 1 thinking: our fast, automatic, unconscious responses (e.g. autocomplete). I'd argue that we're one development (similar to the transformer) away from creating System 2 thinking: deliberate, strategic thinking. In fact, with merely GPT-4 and some clever architectures, researchers have developed chain-of-thought prompting and, more recently, tree-of-thoughts reasoning. While these sit outside the LLM architecture itself, embedding them into an LLM could very likely produce System 2 thinking and the first real AGI. Adding more modalities (e.g. audio, images, video, topography, etc.) will simply add more nuance to the weights and biases of a complete system.
-
Question regarding model compatibility for Alpaca Turbo
There are a bunch of other methods to improve quality and performance, like tree-of-thought-llm, connecting an LLM to a database, or having it review its own output.
- Tree of thoughts built into an open-source model
guidance
-
Guidance: A guidance language for controlling large language models
This IS Microsoft Guidance; they seem to have spun off a separate GitHub organization for it.
https://github.com/microsoft/guidance redirects to https://github.com/guidance-ai/guidance now.
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
-
Llama: Add Grammar-Based Sampling
... and it sets the value of "armor" to "leather" so that you can use that value later in your code if you wish to. Guidance is pretty powerful, but I find the grammar hard to work with. I think the idea of being able to upload a bit of code or a context-free grammar to guide the model is super smart.
https://github.com/microsoft/guidance/blob/d2c5e3cbb730e337b...
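For context, here is roughly what the handlebars-style Guidance template the comment refers to looks like. The model setup line and field values are placeholders, and the exact syntax has changed between Guidance versions, so treat this as a sketch rather than the current API.

```python
import guidance

# Point Guidance at a model; the exact setup call differs by version and
# backend (OpenAI, Transformers, llama.cpp, ...), so this line is illustrative.
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# The {{#select}} block constrains generation to one of the listed options
# and stores the choice under the name 'armor'.
program = guidance("""A character sheet in JSON:
{
    "name": "{{gen 'name'}}",
    "armor": "{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}"
}""")

out = program()
print(out["armor"])  # the chosen value, e.g. "leather", usable later as plain Python
```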
-
Introducing TypeChat from Microsoft
Here's one thing I don't get.
Why all the rigamarole of hoping you get a valid response, adding last-mile validators to detect invalid responses, trying to beg the model to pretty please give me the syntax I'm asking for...
...when you can guarantee a valid JSON syntax by only sampling tokens that are valid? Instead of greedily picking the highest-scoring token every time, you select the highest-scoring token that conforms to the requested format.
This is what Guidance does already, also from Microsoft: https://github.com/microsoft/guidance
But OpenAI apparently does not expose the full scores of all tokens; it only exposes the highest-scoring token. Which is so odd, because if you run models locally, using Guidance is trivial, and you can guarantee your JSON is correct every time. It's faster to generate, too!
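The idea in that comment can be sketched without any library: drop every candidate token that would push the output outside the target format, then take the highest-scoring token that remains. This is a toy illustration, not Guidance's or OpenAI's actual sampler.

```python
def pick_token(scores: dict[str, float], generated: str, is_valid_prefix) -> str:
    # Keep only tokens that leave the output as a valid prefix of the format,
    # then pick the best of what's left instead of the global argmax.
    allowed = {tok: s for tok, s in scores.items()
               if is_valid_prefix(generated + tok)}
    if not allowed:
        raise ValueError("no candidate token keeps the output valid")
    return max(allowed, key=allowed.get)

# Example: force the output to be (a prefix of) a fixed JSON string.
target = '{"armor": "leather"}'
is_prefix = lambda s: target.startswith(s)

scores = {"Sure": 2.5, "{": 0.9, "[": 0.4}
print(pick_token(scores, "", is_prefix))  # -> "{" even though "Sure" scores higher
```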
-
Accessing Llama 2 from the command-line with the LLM-replicate plugin
Perhaps something as simple as stating it was first built around OpenAI models and later expanded to local via plugins?
I've been meaning to ask you, have you seen/used MS Guidance[0] 'language' at all? I don't know if it's the right abstraction to interface as a plugin with what you've got in llm cli, but there's a lot about Guidance that seems incredibly useful for local inference [token healing and acceleration especially].
[0]https://github.com/microsoft/guidance
-
AutoChain, lightweight and testable alternative to LangChain
LangChain is just too much. Personal solutions are great until you need to compare metrics or methodologies of prompt generation. Then the onus is on the n parties sharing their resources to ensure that all of them used the same templates, generated the same way, with the only difference being the models the prompts were run on.
So maybe a simpler library like Microsoft's Guidance (https://github.com/microsoft/guidance)? It does this really well.
-
Structured Output from LLMs (Without Reprompting!)
I am unclear on the status of the project, but here is the conversation that seems to be tracking it: https://github.com/microsoft/guidance/discussions/201
-
/r/guidance is now a subreddit for Guidance, Microsoft's template language for controlling language models!
Let's have a subreddit about Guidance!
- Is there a UI that can limit LLM tokens to a preset list?
-
Any suggestions for an open source model for parsing real estate listings?
You should look at guidance for an LLM to fill out a template. Define the output data structure and provide the real estate listing in the context (see the JSON template example here https://github.com/microsoft/guidance)
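A rough sketch of that template-filling pattern, again in the older handlebars-style Guidance syntax; the field names and the setup line are made up for illustration (the linked README has the canonical JSON example), and the exact syntax varies between Guidance versions.

```python
import guidance

guidance.llm = guidance.llms.OpenAI("text-davinci-003")  # setup varies by version

# Define the output data structure in the template and pass the listing in
# as context; 'price_usd', 'bedrooms', and 'parking' are hypothetical fields.
extract = guidance("""Listing: {{listing}}

Details as JSON:
{
    "price_usd": {{gen 'price_usd' pattern='[0-9]+' stop=','}},
    "bedrooms": {{gen 'bedrooms' pattern='[0-9]+' stop=','}},
    "parking": "{{#select 'parking'}}garage{{or}}street{{or}}none{{/select}}"
}""")

out = extract(listing="Sunny 2-bed flat near the park, $450,000, street parking only.")
print(out["price_usd"], out["bedrooms"], out["parking"])
```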
What are some alternatives?
Voyager - An Open-Ended Embodied Agent with Large Language Models
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
tree-of-thoughts - Plug in and Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70%
lmql - A language for constraint-guided and efficient LLM programming.
Neurite - Fractal Graph Desktop for Ai-Agents, Web-Browsing, Note-Taking, and Code.
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows, that encode lineage and metadata. Runs and scales everywhere python does.
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
Mr.-Ranedeer-AI-Tutor - A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.
llama-cpp-python - Python bindings for llama.cpp
SillyTavern - LLM Frontend for Power Users.
langchainrb - Build LLM-powered applications in Ruby