tree-of-thought-llm vs transynthetical-engine

| | tree-of-thought-llm | transynthetical-engine |
|---|---|---|
| Mentions | 41 | 6 |
| Stars | 4,228 | 26 |
| Growth | 4.3% | - |
| Activity | 7.2 | 6.2 |
| Latest commit | 3 months ago | about 1 year ago |
| Language | Python | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
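The site does not publish the exact formula behind this metric; purely as an illustration, a recency-weighted, percentile-ranked score along those lines could look like the sketch below (the half-life constant and function names are assumptions, not the site's actual computation).

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Recency-weighted commit activity: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits
    count more than older ones."""
    now = datetime.now(timezone.utc)
    return sum(
        0.5 ** ((now - d).days / half_life_days)
        for d in commit_dates
    )

def relative_activity(project_score, all_scores):
    """Map a raw score to a 0-10 scale by percentile rank, so 9.0
    means the project out-scores ~90% of tracked projects."""
    rank = sum(s <= project_score for s in all_scores) / len(all_scores)
    return round(10 * rank, 1)
```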
tree-of-thought-llm
-
AI Chat Applications with the Metacognition Approach: Tree of Thoughts (ToT)
[2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arxiv.org)
-
Last night /u/alesneolith posted a very serious writeup claiming to have worked on one of the projects. The writeup is more elaborate than expected and got surprisingly little attention. His account has since been deleted.
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, “Tree of Thoughts” (ToT), which generalizes over the popular “Chain of Thought” approach to prompting language models, and enables exploration over coherent units of text (“thoughts”) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models’ problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.
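The repo linked above contains the official implementation; as a rough illustration of the search loop the abstract describes, here is a minimal breadth-first ToT sketch. The `propose` and `evaluate` callables stand in for LLM calls and are assumptions, not the paper's actual prompts.

```python
from typing import Callable, List

def tree_of_thoughts_bfs(
    problem: str,
    propose: Callable[[str, str], List[str]],  # LLM call: partial solution -> candidate next thoughts
    evaluate: Callable[[str, str], float],     # LLM call: partial solution -> promise score
    depth: int = 3,
    breadth: int = 5,
    keep: int = 2,
) -> str:
    """Breadth-first Tree of Thoughts: at each step, expand every kept
    partial solution into several candidate thoughts, score each with a
    self-evaluation call, and keep only the most promising states."""
    frontier = [""]  # partial solutions, as accumulated thought text
    for _ in range(depth):
        candidates = [
            state + thought
            for state in frontier
            for thought in propose(problem, state)[:breadth]
        ]
        # Self-evaluate each candidate and prune to the top `keep` states.
        candidates.sort(key=lambda s: evaluate(problem, s), reverse=True)
        frontier = candidates[:keep]
    return frontier[0]
```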
-
Ultra Fast BERT
GPU utilization should be lower when using this technique. I’m hoping this could allow for more efficient batch inference on GPUs. If you can predict 10 tokens for the price of 1, it should let you run tree of thought much more efficiently.
https://github.com/princeton-nlp/tree-of-thought-llm
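As a back-of-the-envelope illustration of why cheaper token generation compounds with tree search (the branching factor, depth, and unit costs below are arbitrary assumptions):

```python
# Exploring a thought tree with branching factor b and depth d costs
# roughly b**d generations; if each generation becomes ~10x cheaper,
# the whole search becomes ~10x cheaper too, or you can afford wider
# or deeper trees for the same budget.
b, d = 5, 3
generations = b ** d                # 125 candidate thoughts to generate
cost_per_generation = 1.0           # arbitrary unit cost today
speedup = 10                        # "10 tokens for the price of 1"
print(generations * cost_per_generation)            # 125.0
print(generations * cost_per_generation / speedup)  # 12.5
```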
-
Is it best to not pay attention to AI news and/or find ways to delude ourselves into believing better outcomes?
For those familiar with Daniel Kahneman's Thinking Fast and Slow, the current LLMs (such as GPT-4 via ChatGPT) seem to resemble System 1 thinking (near-instantaneous, automatic, intuitive processes like next-word prediction). However, they lack System 2 thinking (slow, effortful, logical planning and reasoning). What I learned today is that Google's Gemini (an LLM in training now) not only has more modalities (I think all YouTube video and audio?), more compute, and almost twice the training data, but they're also building in AlphaGo-style learning, which resembles tree of thoughts and looks a lot like the missing puzzle piece for System 2 thinking. Will it be AGI? Maybe, and it's coming this winter.
-
Langchain Is Pointless
Tree of thoughts: https://arxiv.org/abs/2305.10601
Good video on "Tree of thoughts" which also reviews / puts it in the context of other methods: https://www.youtube.com/watch?v=ut5kp56wW_4
The completion vs. conversational interface distinction is something you can read about in the OpenAI API documentation.
For the remaining things I don't have a single specific pointer at hand.
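For reference, a minimal sketch of the two interface styles using the openai Python client (the model choices are illustrative, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Completion interface: a single free-form prompt, continued as text.
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Tree of Thoughts generalizes chain-of-thought prompting by",
    max_tokens=50,
)
print(completion.choices[0].text)

# Conversational interface: a structured list of role-tagged messages.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize Tree of Thoughts in one sentence."},
    ],
)
print(chat.choices[0].message.content)
```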
-
To all skeptics with a background in AI/CS: what is your realistic timeline for AGI/ASI?
What do you think about the combination of: Tree of Thoughts: Deliberate Problem Solving with Large Language Models; LongNet: Scaling Transformers to 1,000,000,000 Tokens; Textbooks Are All You Need; Attention Is All You Need; and Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation?
-
Why do language models appear to work left-to-right?
You are right. Tree of Thoughts: Deliberate Problem Solving with Large Language Models proposes to solve this via tree-search-style generation (similar to the MCTS in AlphaGo, and to how many planning and control problems are solved).
-
Munk Debate on Artificial Intelligence
The transformer was developed in 2017 and it powers all modern LLMs. If you're familiar with Daniel Kahneman's work from Thinking Fast and Slow, you could summarize LLMs as excellent System 1 thinking: our fast, automatic, unconscious responses (e.g. autocomplete). I'd argue that we're one development (on the order of the transformer) away from creating System 2 thinking: deliberate and strategic thinking. In fact, with merely GPT-4 and some clever architectures, researchers have developed chain-of-thought prompting and, more recently, tree-of-thoughts reasoning. While these remain external to the LLM architecture, embedding them into an LLM could very likely produce System 2 thinking and the first real AGI. Adding more modalities (e.g. audio, images, video, topography, etc.) will simply add more nuance to the weights and biases of a complete system.
-
Question regarding model compatibility for Alpaca Turbo
There are a bunch of other methods to improve quality and performance, like tree-of-thought-llm, connecting an LLM to a database, or having it review its own output.
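As a rough sketch of the self-review idea (the `ask` helper and the prompts are hypothetical stand-ins for an LLM call):

```python
from typing import Callable

def answer_with_self_review(
    question: str,
    ask: Callable[[str], str],  # LLM call: prompt -> response text
    rounds: int = 2,
) -> str:
    """Draft an answer, then repeatedly ask the model to critique and
    revise its own output."""
    draft = ask(f"Answer the question:\n{question}")
    for _ in range(rounds):
        critique = ask(
            f"Question: {question}\nAnswer: {draft}\n"
            "List any factual or logical problems with this answer."
        )
        draft = ask(
            f"Question: {question}\nAnswer: {draft}\nProblems: {critique}\n"
            "Rewrite the answer, fixing the problems listed."
        )
    return draft
```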
-
Tree of thoughts built into an open-source model
transynthetical-engine
-
Native JSON Output from GPT-4
Here’s an approach to return just JavaScript:
https://github.com/williamcotton/transynthetical-engine
The key is the addition of few-shot exemplars.
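As an illustrative sketch of that technique, few-shot exemplars demonstrate the exact output contract (code only, no prose) so the model imitates the pattern for the real request. The exemplars below are invented for illustration and are not taken from the repo.

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Respond with JavaScript only. No explanations."},
    # Each user/assistant pair is a few-shot exemplar of the contract.
    {"role": "user", "content": "A function that doubles a number."},
    {"role": "assistant", "content": "const double = (n) => n * 2;"},
    {"role": "user", "content": "A function that reverses a string."},
    {"role": "assistant", "content": "const reverse = (s) => [...s].reverse().join('');"},
    # The real request comes last; the model continues the pattern.
    {"role": "user", "content": "A function that sums an array of numbers."},
]
response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```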
-
The Dual LLM pattern for building AI assistants that can resist prompt injection
I think the two-layer approach is worthwhile if only for limiting tokens!
Here’s an example of what I mean:
https://github.com/williamcotton/transynthetical-engine#brow...
By keeping the generated code out of the main discourse between the user and the LLM, and instead using that main “thread” only to orchestrate instructions to write code, it allows for more back-and-forth.
It’s a good technique in general!
I’m still too paranoid to execute instructions via email without a very limited set of abilities!
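A minimal sketch of that orchestration idea, assuming a hypothetical store of code handles so the conversation thread never carries the generated code itself:

```python
from typing import Callable, Dict

class CodeOrchestrator:
    """The main conversation thread only carries instructions and short
    handles like '$code1'; the generated code itself is stored out of
    band, keeping the dialogue small and the code out of the prompt."""

    def __init__(self, generate_code: Callable[[str], str]):
        self.generate_code = generate_code  # LLM call: instruction -> code
        self.store: Dict[str, str] = {}

    def request(self, instruction: str) -> str:
        code = self.generate_code(instruction)
        handle = f"$code{len(self.store) + 1}"
        self.store[handle] = code
        return handle  # only the handle enters the conversation

    def fetch(self, handle: str) -> str:
        return self.store[handle]
```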
-
Prompt Engineering vs. Blind Prompting
Here is an example of prompt engineering used to build augmentations for factual question-and-answer as well as for building web applications:
https://github.com/williamcotton/transynthetical-engine
-
Ask HN: People who were laid off or quit recently, how are you doing?
Hey Simon! I've been digging your writings on LLMs lately.
I've been having some decent luck with some of the approaches that I've discussed in the following articles and projects:
From Prompt Alchemy to Prompt Engineering: An Introduction to Analytic Augmentation: https://github.com/williamcotton/empirical-philosophy/blob/m...
https://www.williamcotton.com/articles/writing-web-applicati...
https://github.com/williamcotton/transynthetical-engine
I'd love to hear your thoughts on the matter!
-
We need to tell people ChatGPT will lie to them, not debate linguistics
Sure you can. The easiest way is to go to https://chat.openai.com/chat and paste in a Wikipedia article.
There are more involved manners like this: https://github.com/williamcotton/transynthetical-engine/blob...
-
ChatGPT-Linux-Assistant
Parsel : A (De-)compositional Framework for Algorithmic Reasoning with Language Models
https://arxiv.org/abs/2212.10561
Here's a notebook with an introduction:
https://github.com/ezelikman/parsel/blob/main/parsel.ipynb
And here's a GUI interface the author has been developing:
http://zelikman.me/parsel/interface.html
I've been working on an augmented large language model that, given these few-shot exemplars, can build the fully functional ToDo app below:
https://github.com/williamcotton/transynthetical-engine/tree...
https://www.williamcotton.com/articles/junie-browser-builder...
All of this is still very rough around the edges, prone to errors of various kinds, and generally not ready for prime time, but anyone is welcome to play around with what is there!
What are some alternatives?
Voyager - An Open-Ended Embodied Agent with Large Language Models
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
tree-of-thoughts - Plug in and Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70%
chrono - A natural language date parser in JavaScript
Neurite - Fractal Graph Desktop for AI Agents, Web-Browsing, Note-Taking, and Code.
chatgpt-linux-assistant - An ai assistant in your CLI. But it knows what's on your system and can help you get things done.
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows, that encode lineage and metadata. Runs and scales everywhere python does.
geppetto - Your personal assistant with ChatGPT and Linux superpowers, ready for any task!
Mr.-Ranedeer-AI-Tutor - A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.
openai-cookbook - Examples and guides for using the OpenAI API