| | tree-of-thought-prompting | ChatRWKV |
|---|---|---|
| Mentions | 8 | 28 |
| Stars | 593 | 9,318 |
| Growth | - | - |
| Activity | 5.3 | 8.4 |
| Latest commit | 6 months ago | 27 days ago |
| Language | Python | - |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
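The tracker does not publish its formula, but "recent commits have higher weight" suggests an exponentially decayed commit count. A purely hypothetical sketch of such a recency-weighted score; the half-life and scaling are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30):
    """Recency-weighted commit count (hypothetical): a commit from today
    contributes ~1, one from `half_life_days` ago contributes 0.5, etc.
    `commit_dates` must be timezone-aware datetimes."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400
        score += 0.5 ** (age_days / half_life_days)
    return score

# Three recent commits score close to 3.0; three year-old commits near 0.
recent = [datetime.now(timezone.utc) - timedelta(days=i) for i in (1, 2, 3)]
print(activity_score(recent))
```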
tree-of-thought-prompting
- Ask HN: Any good collection of writing prompts for GPT 3.5/4?
- GitHub - Secrets of Tree of Thoughts for Programmers 🌳👨‍💻
The Tree of Thoughts prompting framework is a set of techniques for getting the model to diversify its output and self-evaluate its responses.
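In its simplest form, this boils down to a single prompt that simulates several experts reasoning step by step and dropping out when they notice a mistake. A minimal sketch: the prompt wording is paraphrased from dave1010/tree-of-thought-prompting, the sample question is invented, and the call assumes the current `openai` Python client:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TOT_PROMPT = """Imagine three different experts are answering this question.
All experts will write down one step of their thinking, then share it with the group.
Then all experts will go on to the next step, and so on.
If any expert realises they're wrong at any point, they leave.
The question is: {question}"""

question = ("Bob puts a ball in a cup, carries the cup to the bedroom, "
            "turns the cup upside down, then takes the cup to the garden. "
            "Where is the ball?")

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": TOT_PROMPT.format(question=question)}],
)
print(resp.choices[0].message.content)
```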
- Questions about memory, tree-of-thought, planning
Probably too early in testing and development for there to be a 'standard'. A quick Google search will find you some stuff to read like https://github.com/dave1010/tree-of-thought-prompting, but your best bet is to read through the stuff other people are doing and try things for yourself. You might end up discovering something new that nobody has thought of yet. Kaio Ken literally just changed the game overnight and figured out how to expand context to 8k for llama-based models with 2 lines of code. Things are evolving fast and the community desperately needs people willing to spend time reading papers on Arxiv, digging through GitHub repos, and testing stuff out.
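For context, the "2 lines of code" refers to kaiokendev's SuperHOT position-interpolation trick: scale down the rotary-embedding positions so a model trained on 2k positions can address 8k. A rough sketch of the idea, not the actual patch:

```python
import torch

def rotary_frequencies(seq_len, dim, base=10000.0, scale=4.0):
    """Standard RoPE frequency table, with the interpolation change marked."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    t = torch.arange(seq_len).float()
    t = t / scale  # the key change: squeeze 8k positions into the trained 0..2k range
    freqs = torch.outer(t, inv_freq)
    return torch.cos(freqs), torch.sin(freqs)
```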
- What size model is needed for Reasoning?
- Puzzle GPT: Highly Effective and Fun Puzzle-Solving Prompt for GPT-4 (Uses CoT & ToT)
Source: Conversation with Bing, 6/4/2023:
(1) Chain-of-Thought Prompting | Prompt Engineering Guide. https://www.promptingguide.ai/techniques/cot
(2) [2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large .... https://arxiv.org/abs/2305.10601
(3) [2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large .... https://arxiv.org/abs/2201.11903
(4) Using Tree-of-Thought Prompting to boost ChatGPT's reasoning. https://github.com/dave1010/tree-of-thought-prompting
(5) Tree of Thoughts: Deliberate Problem Solving with Large Language Models. https://arxiv.org/pdf/2305.10601.pdf
- Artificial intelligence has already partly surpassed humans. As the pace of development accelerates, an important question is whose ethics AI will follow. The guest on Ykkösaamu is Professor Teemu Roos from the Finnish Center for Artificial Intelligence. Interviewed by Seija Vaaherkumpu.
- How close are we to an AutoGPT (or similar programme) that can improve its own code recursively?
That’s not exactly correct. Tree of thought prompting can boost reasoning. Check out the GitHub. https://github.com/dave1010/tree-of-thought-prompting
- Using Tree of Thought Prompting to boost ChatGPT's reasoning
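Beyond the single-prompt variant sketched earlier, the same "diversify, then self-evaluate" idea can be run as an explicit search loop. A minimal sketch, assuming a generic `llm(prompt) -> str` completion helper (hypothetical, not part of the repo):

```python
def tree_of_thought(llm, question, branches=3, depth=2):
    """Breadth-first Tree-of-Thought sketch: propose several partial
    reasoning traces, have the model score each one, keep the best, repeat."""
    frontier = [""]  # partial reasoning traces
    for _ in range(depth):
        candidates = []
        for trace in frontier:
            for _ in range(branches):
                step = llm(f"Question: {question}\n"
                           f"Reasoning so far: {trace}\n"
                           f"Write the next reasoning step:")
                candidates.append(trace + "\n" + step)
        # Self-evaluation: ask the model to rate each candidate trace.
        scored = []
        for c in candidates:
            rating = llm(f"Rate this partial reasoning for correctness "
                         f"on a 1-10 scale. Reply with a number only:\n{c}")
            try:
                scored.append((float(rating.strip()), c))
            except ValueError:
                scored.append((0.0, c))  # unparseable rating: prune it
        scored.sort(key=lambda x: x[0], reverse=True)
        frontier = [c for _, c in scored[:branches]]  # keep the best branches
    return llm(f"Question: {question}\nReasoning: {frontier[0]}\nFinal answer:")
```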
ChatRWKV
- People who've used RWKV, what's your wishlist for it?
- How the RWKV language model works
- Questions about memory, tree-of-thought, planning
Most LLMs actually do a decent job out of the box if you ask them for step-by-step instructions. Tree of Thought is one way to improve the results; Reflexion is another that can be used separately or in addition. The downside is that most models quickly run into their token limit (around 2k for most). However, the new SuperHOT models can handle up to 8k, and then there are the RWKV-Raven models: they are RNNs rather than transformers like all the other LLMs, and can theoretically handle infinite context lengths (but they lose "focus" after a while).
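What makes "theoretically infinite context" possible is that an RNN like RWKV compresses everything it has seen into a fixed-size state, so text can be fed in chunks with the state carried forward instead of re-reading an ever-growing window. A minimal sketch using BlinkDL's `rwkv` pip package; the model path, strategy string, and token ids are placeholders:

```python
import os
os.environ["RWKV_JIT_ON"] = "1"  # per the ChatRWKV README

from rwkv.model import RWKV

# Placeholder path/strategy; see the ChatRWKV repo for real model files.
model = RWKV(model="/path/to/RWKV-4-Raven-7B", strategy="cuda fp16")

token_chunks = [[101, 102, 103], [104, 105, 106]]  # toy token ids for illustration
state = None
for chunk in token_chunks:
    logits, state = model.forward(chunk, state)  # state carries the entire history

# The state stays the same size no matter how much text has been consumed,
# which is why context length is not bounded by an attention window.
```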
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
RWKV model inference: https://github.com/BlinkDL/ChatRWKV (fast CUDA).
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I think I heard the RWKV models are very fast, don't need much RAM, and can have huge context windows, so maybe their 14B can work for me. I wasn't sure how ready for use they were, though, but looking more into it, projects like rwkv.cpp and ChatRWKV and a whole lot of other community projects are mentioned on their GitHub.
- I created a simple implementation of the RWKV language model (RWKV competes with the dominant Transformers-based approach which is the "T" in GPT)
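The heart of such a simple implementation is the WKV recurrence that replaces attention: an exponentially decaying weighted average of past values, plus a bonus weight `u` for the current token. A single-channel, numerically naive numpy sketch of the RWKV-4 time-mixing core (the real code subtracts a running maximum before exponentiating to avoid overflow):

```python
import numpy as np

def wkv(w, u, k, v):
    """RWKV-4 WKV recurrence, one channel.
    w: history decay rate (> 0), u: current-token bonus,
    k, v: per-timestep key and value arrays of equal length."""
    T = len(k)
    out = np.zeros(T)
    num = den = 0.0          # running decayed sums of e^k * v and e^k
    decay = np.exp(-w)       # per-step decay applied to the history
    for t in range(T):
        e = np.exp(u + k[t])                    # weight of the current token
        out[t] = (num + e * v[t]) / (den + e)   # decayed average incl. current token
        num = decay * num + np.exp(k[t]) * v[t]
        den = decay * den + np.exp(k[t])
    return out

print(wkv(w=1.0, u=0.5, k=np.zeros(4), v=np.arange(4.0)))
```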
- [P] Raven 7B & 14B 🐦 (RWKV finetuned on Alpaca+CodeAlpaca+Guanaco) and Gradio Demo for Raven 7B
You can use ChatRWKV v2 (https://github.com/BlinkDL/ChatRWKV) to run Raven🐦 (compatible with vanilla RWKV):
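If the full ChatRWKV v2 chat script is more than you need, the same models can also be driven from a few lines via the `rwkv` pip package's pipeline helper. A sketch: the model path is a placeholder, and since Raven is Alpaca-finetuned the prompt below uses an Alpaca-style template, but check the repo for the exact wording Raven expects:

```python
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(model="/path/to/RWKV-4-Raven-7B", strategy="cpu fp32")  # placeholder
pipeline = PIPELINE(model, "20B_tokenizer.json")  # tokenizer file from the repo

# Alpaca-style instruction prompt (template wording is an assumption).
prompt = ("Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n\n"
          "# Instruction:\nExplain RWKV in one sentence.\n\n# Response:\n")

args = PIPELINE_ARGS(temperature=1.0, top_p=0.7)
print(pipeline.generate(prompt, token_count=100, args=args))
```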
- What's the current state of actually free and open source LLMs?
I feel compelled to summon /u/bo_peng here and to mention his work on RWKV. (See https://github.com/BlinkDL/ChatRWKV and related repos.)
- Try Google's Bard
- [D] Totally Open Alternatives to ChatGPT
Please test https://github.com/BlinkDL/ChatRWKV, which is a good chatbot despite being trained only on the Pile :)
What are some alternatives?
tree-of-thought-llm - [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
gpt_jailbreak_status - This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama-retrieval-plugin - LLaMa retrieval plugin script using OpenAI's retrieval plugin
SillyTavern - LLM Frontend for Power Users.
gpt4all - gpt4all: run open-source LLMs anywhere
KoboldAI
alpaca-lora - Instruct-tune LLaMA on consumer hardware
SpikeGPT - Implementation of "SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks"
rwkv.cpp - INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model