| | tree-of-thought-llm | langchain |
|---|---|---|
| Mentions | 41 | 152 |
| Stars | 4,228 | 56,526 |
| Growth | 4.3% | - |
| Activity | 7.2 | 10.0 |
| Last commit | 3 months ago | 10 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
tree-of-thought-llm
-
AI Chat Applications with the Metacognition Approach: Tree of Thoughts (ToT)
[2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arxiv.org)
-
Last night /u/alesneolith posted a very serious writeup claiming to have worked on one of the projects. The writeup is more elaborate than expected and got surprisingly little attention. His account has since been deleted.
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, “Tree of Thoughts” (ToT), which generalizes over the popular “Chain of Thought” approach to prompting language models, and enables exploration over coherent units of text (“thoughts”) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models’ problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.
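As a rough sketch of the idea (not the paper's actual code): `llm(prompt)` below is a placeholder for any text-completion call, and the prompts and scoring scheme are purely illustrative.

```python
# Minimal breadth-first Tree-of-Thoughts sketch. `llm(prompt)` stands in for
# any completion API; prompts and scoring are illustrative placeholders.

def propose_thoughts(llm, problem, partial, k=5):
    """Ask the model for k candidate next steps ("thoughts")."""
    prompt = f"Problem: {problem}\nSteps so far: {partial}\nPropose a next step:"
    return [llm(prompt) for _ in range(k)]

def score_thought(llm, problem, partial, thought):
    """Self-evaluate a candidate: 1.0 = promising, 0.0 = dead end."""
    prompt = (f"Problem: {problem}\nSteps: {partial + [thought]}\n"
              "Rate the chance these steps lead to a solution, 0 to 1:")
    try:
        return float(llm(prompt))
    except ValueError:
        return 0.0

def tree_of_thoughts(llm, problem, depth=3, breadth=3, k=5):
    """BFS over thoughts: expand, self-evaluate, keep the best `breadth` states."""
    frontier = [[]]  # each state is a list of thoughts so far
    for _ in range(depth):
        candidates = [
            (score_thought(llm, problem, state, t), state + [t])
            for state in frontier
            for t in propose_thoughts(llm, problem, state, k)
        ]
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = [state for _, state in candidates[:breadth]]  # prune
    return frontier[0]  # best-scoring chain of thoughts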
-
Ultra Fast BERT
GPU utilization should go down when using this technique. I'm hoping this could allow for more efficient batch inference on GPUs: if you can predict 10 tokens for the price of 1, it should let you do tree of thought much more efficiently.
https://github.com/princeton-nlp/tree-of-thought-llm
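To make the batching point concrete: the win for tree-of-thought search is that many candidate thoughts can be scored in a single forward pass. A minimal sketch with Hugging Face transformers (the model choice and candidate strings are illustrative, not from the linked repo):

```python
# Score several candidate thoughts in one batched forward pass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

candidates = ["4 + 6 = 10, then 10 + 14 = 24",
              "4 * 6 = 24, done",
              "14 - 10 = 4, then 4 * 6 = 24"]
batch = tok(candidates, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**batch).logits  # one forward pass for all candidates

# Average log-probability of each candidate under the model (higher = likelier).
logp = torch.log_softmax(logits[:, :-1], dim=-1)
labels = batch.input_ids[:, 1:]
token_lp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
mask = batch.attention_mask[:, 1:]
scores = (token_lp * mask).sum(-1) / mask.sum(-1)
print(scores)  # rank candidates without generating token by token
```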
-
Is it best to not pay attention to AI news and/or find ways to delude ourselves into believing better outcomes?
For those familiar with Daniel Kahneman's Thinking, Fast and Slow, the current LLMs (such as GPT-4 via ChatGPT) seem to resemble System 1 thinking (near-instantaneous, automatic, intuitive processes like next-word prediction). However, they lack System 2 thinking (slow, effortful, logical planning and reasoning). What I learned today is that Google's Gemini (an LLM in training now) not only has more modalities (I think all YouTube video and audio??), more compute, and almost twice the training data, but they're building in AlphaGo-type learning, which resembles tree of thoughts and looks a LOT like the missing puzzle piece of System 2 thinking. Will it be AGI? Maybe, and it's coming this winter.
-
Langchain Is Pointless
Tree of thoughts: https://arxiv.org/abs/2305.10601
Good video on "Tree of thoughts" which also reviews / puts it in the context of other methods: https://www.youtube.com/watch?v=ut5kp56wW_4
Completion vs conversational interface is something you can read about in the OpenAI API documentation.
For the remaining things I don't have single specific pointer at hand.
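For reference, the completion-vs-conversational distinction shows up directly in the two call styles of the 2023-era `openai` Python SDK (model names are illustrative):

```python
import openai  # 0.x-era SDK, matching this discussion

# Completion interface: one raw prompt string; the model just continues it.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Q: What is 4 * 6?\nA:",
)

# Conversational (chat) interface: a list of role-tagged messages.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 4 * 6?"},
    ],
)
```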
-
To all skeptics with a background in AI/CS: what is your realistic timeline for AGI/ASI?
What do you think about the combination of:
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models
- LongNet: Scaling Transformers to 1,000,000,000 Tokens
- Textbooks Are All You Need
- Attention Is All You Need
- Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
-
Why do language models appear to work left-to-right?
You are right. Tree of Thoughts: Deliberate Problem Solving with Large Language Models proposes to solve this via MCTS-style generation (similar to how AlphaGo worked, and to how many planning & control problems are solved).
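The lookahead idea can be caricatured in a few lines: rather than committing to the next step greedily left-to-right, sample several candidates, roll each one forward, and keep the best. A toy sketch, with `llm` and `evaluate` as placeholders (this captures the flavour of the approach, not the paper's exact algorithm):

```python
# Toy MCTS-flavoured lookahead: pick the next thought whose rollouts score best.
def best_next_thought(llm, evaluate, state, k=4, rollouts=3):
    candidates = [llm(f"{state}\nNext step:") for _ in range(k)]
    scored = []
    for cand in candidates:
        # Cheap stand-in for an MCTS simulation phase: sample a few
        # completions from this candidate and average their quality.
        values = [evaluate(llm(f"{state}\n{cand}\nFinish the solution:"))
                  for _ in range(rollouts)]
        scored.append((sum(values) / rollouts, cand))
    return max(scored)[1]  # candidate with the best lookahead value
```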
-
Munk Debate on Artificial Intelligence
The transformer was developed in 2017 and it powers all modern LLMs. If you're familiar with Daniel Kahneman's work from Thinking, Fast and Slow, you could easily summarize LLMs as excellent System 1 thinking: our fast, automatic, unconscious responses (e.g. autocomplete). I'd argue that we're one development (similar to the transformer) away from creating System 2 thinking: deliberate and strategic thinking. In fact, with merely GPT-4 and some clever architectures, researchers have developed chain-of-thought prompting and, more recently, tree-of-thoughts reasoning. While external to the LLM architecture, embedding these concepts into an LLM could very likely yield System 2 thinking and produce the first real AGI. Adding more modalities (e.g. audio, images, video, topography, etc.) will simply add more nuance to the weights and biases of a complete system.
-
Question regarding model compatibility for Alpaca Turbo
There are a bunch of other methods to improve quality and performance, like tree-of-thought-llm, connecting an LLM to a database, or having it review its own output (a sketch of that last idea below).
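A minimal sketch of the self-review idea, with `llm(prompt)` as a placeholder for any completion call and the prompts purely illustrative:

```python
# Draft, critique, revise: have the model review its own output.
def answer_with_self_review(llm, question, rounds=2):
    answer = llm(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        critique = llm(f"Question: {question}\nAnswer: {answer}\n"
                       "List any mistakes or omissions in this answer:")
        answer = llm(f"Question: {question}\nAnswer: {answer}\n"
                     f"Critique: {critique}\nImproved answer:")
    return answer
```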
- Tree of thoughts built into an open-source model
langchain
-
🗣️🤖 Ask your Neo4J knowledge base questions in natural language & get KPIs
Langchain with its Custom Tools is also a great (and very efficient) way to set up a dedicated Q&A agent (for example, for chat purposes).
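A minimal sketch of what that can look like, using the 2023-era LangChain agent API; the Cypher query, connection details, and tool name are illustrative:

```python
# A LangChain custom tool backed by a Neo4j knowledge base (illustrative).
from langchain.agents import Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def count_customers(_: str) -> str:
    """Run a fixed Cypher query and return a KPI as text."""
    with driver.session() as session:
        record = session.run("MATCH (c:Customer) RETURN count(c) AS n").single()
        return f"{record['n']} customers"

tools = [Tool(
    name="customer_count",
    func=count_customers,
    description="Returns the number of Customer nodes in the knowledge base.",
)]

agent = initialize_agent(tools, ChatOpenAI(temperature=0),
                         agent="zero-shot-react-description", verbose=True)
print(agent.run("How many customers do we have?"))
```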
- LangChain – Some quick, high level thoughts on improvements/changes
-
Claude 2 Internal API Client and CLI
We're using it via langchain talking to Amazon Bedrock, which is hosting Claude 1.x. It's comparable to GPT-3.x, not bad. The integration doesn't seem to be fully there yet, though: I think langchain expects "Human:" and "AI:" turns, but Claude uses "Human:" and "Assistant:".
https://github.com/hwchase17/langchain/issues/2638
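For context, Anthropic's completion-style models expect alternating "\n\nHuman:" / "\n\nAssistant:" turns. A small illustrative helper (not LangChain's actual fix) that builds that format from a generic chat history:

```python
# Re-map generic chat turns into Claude's Human/Assistant prompt format.
def to_claude_prompt(history):
    """history: list of (speaker, text) with speaker in {"user", "assistant"}."""
    parts = []
    for speaker, text in history:
        role = "Human" if speaker == "user" else "Assistant"
        parts.append(f"\n\n{role}: {text}")
    parts.append("\n\nAssistant:")  # Claude completes from here
    return "".join(parts)

print(to_claude_prompt([("user", "Summarize tree-of-thought in one line.")]))
```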
-
Any better alternatives to fine-tuning GPT-3 yet to create a custom chatbot persona based on provided knowledge for others to use?
Depending on how much work you want to put into it, you can get started at HuggingFace with their models and datasets, but you'd need compute power, MLOps tooling, etc. I was introduced to the concept in this video, since Google has their Vertex AI tools on Google Cloud, and there's always LangChain, but I'm not sure about anything recent.
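Getting started at HuggingFace can be as small as this sketch with the `transformers` and `datasets` libraries (the dataset and task are chosen purely for illustration):

```python
# Load a hosted dataset and run a hosted model on it.
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("imdb", split="train[:3]")
classifier = pipeline("sentiment-analysis")  # downloads a default hosted model

for example in dataset:
    print(classifier(example["text"][:200]))
```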
-
langchain VS griptape - a user-suggested alternative
2 projects | 11 Jul 2023
2 projects | 9 Jul 2023
-
Vector storage is coming to Meilisearch to empower search through AI
a documentation chatbot proof of concept using GPT3.5 and LangChain
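The skeleton of such a proof of concept is small. A rough sketch with the 2023-era LangChain API (the file path, chunk sizes, and question are illustrative, not from the linked project):

```python
# Docs chatbot PoC: embed documentation, retrieve, answer with GPT-3.5.
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

docs = open("docs.md").read()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50).split_text(docs)
store = FAISS.from_texts(chunks, OpenAIEmbeddings())  # embed + index

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=store.as_retriever(),
)
print(qa.run("How do I configure vector search?"))
```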
-
ChatPDF: What ChatGPT Can't Do, This Can!
I encourage everyone to pay attention to the Langchain open-source project and leverage it to achieve tasks that ChatGPT cannot handle.
- LangChain Arbitrary Command Execution - CVE-2023-34541
-
Langchain Is Pointless
Yeah, I never know where memory goes exactly in langchain; it's not exactly clear all the time. But sure, the main insight I remember is this: take a look at their MULTI_PROMPT_ROUTER_TEMPLATE: https://github.com/hwchase17/langchain/blob/560c4dfc98287da1...
It's a lot of instructions for an LLM. They seem to forget an LLM is an auto-completion machine, and what data it was trained on. Using <<>> for sections is not a normal thing; it's not markdown, which is probably the format read most often on the internet. Instead of open JSON comments, why not type signatures? Instead of so many rules, why not give it examples? It is an autocomplete machine!
They rely too much on the LLM being smart, probably because they only test things on GPT-4 and 3.5, but with GPT4All models this prompt was not working at all, so I had to rewrite it. For simple routing we don't even need JSON; carrying the `next_inputs` here is weird if you don't need it.
So this is my version of it: https://gist.github.com/rogeriochaves/b67676977eebb1936b9b5c...
It's so basic it's dumb, yet it's more powerful, as it does not rely on GPT-4-level intelligence; it's just what I needed.
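In that spirit, a routing prompt built on examples rather than rules might look like this sketch (the destination names and examples are invented for illustration, not taken from the linked gist):

```python
# Few-shot routing: show the model examples instead of pages of rules.
ROUTER_PROMPT = """Pick the best expert for each question.

Question: How do I fix a TypeError in my script?
Expert: python

Question: Why won't my sourdough rise?
Expert: cooking

Question: {question}
Expert:"""

def route(llm, question, destinations=("python", "cooking", "default")):
    choice = llm(ROUTER_PROMPT.format(question=question)).strip().lower()
    return choice if choice in destinations else "default"  # fall back safely
```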
What are some alternatives?
Voyager - An Open-Ended Embodied Agent with Large Language Models
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
llama_index - LlamaIndex is a data framework for your LLM applications
tree-of-thoughts - Plug in and Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70%
llama - Inference code for Llama models
Neurite - Fractal Graph Desktop for AI Agents, Web Browsing, Note-Taking, and Code.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows, that encode lineage and metadata. Runs and scales everywhere python does.
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data. [Moved to: https://github.com/jerryjliu/llama_index]
Mr.-Ranedeer-AI-Tutor - A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.