tree-of-thought-prompting vs txtinstruct

| | tree-of-thought-prompting | txtinstruct |
|---|---|---|
| Mentions | 8 | 13 |
| Stars | 589 | 215 |
| Growth | - | 2.8% |
| Activity | 5.3 | 5.0 |
| Latest commit | 6 months ago | 8 months ago |
| Language | - | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tree-of-thought-prompting
- Ask HN: Any good collection of writing prompts for GPT 3.5/4?
- GitHub - Secrets of Tree of Thoughts for Programmers 🌳👨‍💻
Tree of Thoughts, whether used as a prompting technique or a full framework, is a way to get the model to diversify its output, explore several lines of reasoning, and self-evaluate its responses.
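For concreteness, here is a minimal sketch of the prompting variant, assuming the pre-1.0 openai Python package and an OPENAI_API_KEY in the environment; the prompt is paraphrased from the tree-of-thought-prompting README, and the sample question is in the style of that repo's examples.

```python
# Minimal sketch of Tree-of-Thought-style prompting, assuming the
# pre-1.0 openai package (ChatCompletion API) and OPENAI_API_KEY set
# in the environment. Prompt wording is paraphrased from the
# tree-of-thought-prompting README.
import openai

TOT_PROMPT = """Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking, then share it with the group.
Then all experts will go on to the next step, and so on.
If any expert realises they're wrong at any point, they leave.
The question is: {question}"""

def tree_of_thought(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": TOT_PROMPT.format(question=question)}],
    )
    return response["choices"][0]["message"]["content"]

print(tree_of_thought(
    "Bob is in the living room. He walks to the kitchen, carrying a cup. "
    "He puts a ball in the cup and carries the cup to the bedroom. He turns "
    "the cup upside down, then walks to the garden. Where is the ball?"
))
```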
- Questions about memory, tree-of-thought, planning
Probably too early in testing and development for there to be a 'standard'. A quick Google search will find you some things to read, like https://github.com/dave1010/tree-of-thought-prompting, but your best bet is to read through what other people are doing and try things for yourself. You might end up discovering something new that nobody has thought of yet. Kaio Ken literally just changed the game overnight and figured out how to expand context to 8k for LLaMA-based models with 2 lines of code. Things are evolving fast, and the community desperately needs people willing to spend time reading papers on arXiv, digging through GitHubs, and testing stuff out.
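The "2 lines of code" refers to interpolating rotary position embeddings (the SuperHOT trick): positions are scaled down by a constant factor so an 8k-token sequence maps into the 2k angle range the model was trained on. A rough, self-contained sketch of that idea follows; the dimensions and scale factor are illustrative, not the original patch.

```python
# Rough sketch of rotary-embedding position interpolation: scaling
# positions by 1/4 lets a model trained on 2048-token rotary angles
# address 8192 positions. Dimensions and scale are illustrative.
import numpy as np

def rotary_angles(positions, dim=128, base=10000.0, scale=1.0):
    # Standard RoPE frequency schedule; scale < 1 compresses positions
    # so longer sequences reuse the angle range seen in training.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions * scale, inv_freq)

trained = rotary_angles(np.arange(2048))               # range seen in training
extended = rotary_angles(np.arange(8192), scale=0.25)  # 8k positions, same range

print(trained.max(), extended.max())  # ~2047 vs ~2048: angles stay in-distribution
```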
- What size model is needed for Reasoning?
- Puzzle GPT: Highly Effective and Fun Puzzle-Solving Prompt for GPT-4 (Uses CoT & ToT)
Source: Conversation with Bing, 6/4/2023:
(1) Chain-of-Thought Prompting | Prompt Engineering Guide. https://www.promptingguide.ai/techniques/cot
(2) [2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models. https://arxiv.org/abs/2305.10601
(3) [2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. https://arxiv.org/abs/2201.11903
(4) Using Tree-of-Thought Prompting to boost ChatGPT's reasoning. https://github.com/dave1010/tree-of-thought-prompting
(5) Tree of Thoughts: Deliberate Problem Solving with Large Language Models. https://arxiv.org/pdf/2305.10601.pdf
- AI has already partly surpassed humans. As the pace of development accelerates, an important question is whose ethics AI will follow. The guest on Ykkösaamu is Professor Teemu Roos from the Finnish Center for Artificial Intelligence. Interviewed by Seija Vaaherkumpu.
- How close are we to an AutoGPT (or similar programme) that can improve its own code recursively?
That’s not exactly correct. Tree-of-thought prompting can boost reasoning; check out the GitHub repo: https://github.com/dave1010/tree-of-thought-prompting
- Using Tree of Thought Prompting to boost ChatGPT's reasoning
txtinstruct
- Questions about memory, tree-of-thought, planning
I tried Chroma (chromadb) but had terrible performance and could not pin down the cause (likely a problem on my end). Weaviate was easy to set up and had excellent performance, so that is probably what I will use in the future. Next on my list is txtinstruct; fine-tuning a model on data that does not change and using a vector DB for everything else seems promising.
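For reference, the vector-store side of that pattern takes only a few lines; a minimal sketch with Chroma's in-memory client (the collection name and documents are made up for illustration):

```python
# Minimal sketch of the vector-store pattern described above, using
# Chroma's in-memory client (pip install chromadb). Collection name
# and documents are made up for illustration.
import chromadb

client = chromadb.Client()
collection = client.create_collection("notes")

collection.add(
    ids=["1", "2"],
    documents=[
        "Tree-of-Thought prompting asks the model to explore and score branches.",
        "txtinstruct builds instruction-tuning datasets from your own documents.",
    ],
)

# Embed the query and return the nearest stored document.
results = collection.query(query_texts=["how do I instruction-tune a model?"], n_results=1)
print(results["documents"][0][0])
```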
- [R] Let Language Models be Language Models
The closest thing I've seen to this is txtinstruct
- Create a ChatGPT-like program using an open source model and custom data.
txtinstruct is a framework for training instruction-tuned models.
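txtinstruct's own API is not shown here; as a generic illustration of what such a framework automates, the sketch below fine-tunes a small seq2seq model on toy (instruction, response) pairs with Hugging Face transformers. The model choice, data, and hyperparameters are placeholders, not txtinstruct internals.

```python
# Generic illustration of instruction tuning with Hugging Face
# transformers: fine-tune a small seq2seq model on (instruction,
# response) pairs. Not txtinstruct's actual API; data and
# hyperparameters are toy placeholders.
from torch.utils.data import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

MODEL = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

# Toy (instruction, response) pairs; a real run would use a generated dataset.
PAIRS = [
    ("Answer the question: what license does txtinstruct use?",
     "txtinstruct is released under the Apache 2.0 license."),
    ("Summarize: Tree of Thoughts prompting explores several reasoning branches.",
     "It asks the model to branch out and self-evaluate its reasoning."),
]

class InstructionDataset(Dataset):
    def __len__(self):
        return len(PAIRS)

    def __getitem__(self, idx):
        prompt, target = PAIRS[idx]
        example = tokenizer(prompt, truncation=True, max_length=256)
        example["labels"] = tokenizer(target, truncation=True, max_length=256)["input_ids"]
        return example

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="instruct-demo",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=InstructionDataset(),
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),  # pads inputs and labels
)
trainer.train()
```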
- Stability AI Launches the First of Its StableLM Suite of Language Models
Great to see the continued release of open models. The only disappointing thing is that models keep building on CC-BY-NC licensed datasets, which severely limits their use.
Hopefully, people consider txtinstruct (https://github.com/neuml/txtinstruct) and other approaches to generate instruction-tuning datasets without the baggage.
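A minimal sketch of that approach: generate instruction data from your own passages with a permissively licensed model, so no NC-licensed dataset enters the pipeline. The prompt wording and models below are illustrative, not txtinstruct's internals.

```python
# Sketch of the "generate your own instruction data" idea: use a
# permissively licensed model to write a question for each of your
# own passages, yielding (instruction, context, answer) records with
# no NC-licensed source data. Prompt wording and model are illustrative.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

passages = [
    "Apache 2.0 is a permissive license that allows commercial use.",
    "Instruction-tuned models follow natural-language task descriptions.",
]

records = []
for passage in passages:
    question = generator(f"Write a question answered by this text: {passage}",
                         max_new_tokens=48)[0]["generated_text"]
    # Simplification: reuse the passage as the answer; a real pipeline
    # would generate or extract a grounded answer instead.
    records.append({"instruction": question, "context": passage, "answer": passage})

print(records[0])
```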
- Build open instruction-tuned datasets and models (r/MachineLearning)
- Build open instruction-tuned datasets and models
- [P] Build open instruction-tuned datasets and models
- Create open instruction-tuned datasets and LLM models
- Show HN: Build open instruction-tuned datasets and models
What are some alternatives?
tree-of-thought-llm - [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
StableLM - StableLM: Stability AI Language Models
gpt_jailbreak_status - This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.
safetensors - Simple, safe way to store and distribute tensors
llama-retrieval-plugin - LLaMa retrieval plugin script using OpenAI's retrieval plugin
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
geov - The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B-parameter model.
cataclysm - Cataclysm - Code generation library for the end game
instruct-eval - This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
lm-evaluation-harness - A framework for few-shot evaluation of autoregressive language models.