gpt_jailbreak_status vs tree-of-thought-prompting

| | gpt_jailbreak_status | tree-of-thought-prompting |
|---|---|---|
| Mentions | 49 | 8 |
| Stars | 883 | 587 |
| Growth | - | - |
| Activity | 9.4 | 5.3 |
| Last commit | 3 months ago | 5 months ago |
| Language | HTML | - |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt_jailbreak_status
- Ask HN: Any good collection of writing prompts for GPT 3.5/4?
- Ask HN: What have you built with LLMs?
- What is prompt-engineering for artificial intelligence?
- Is DAN dead?
- GitHub - tg12/gpt_jailbreak_status: This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.
- If ChatGPT Can't Access The Internet Then How Is This Possible?
- Google AI in Search couldn't agree with itself
- GPT-4 Jailbreak Repo
tree-of-thought-prompting
- Ask HN: Any good collection of writing prompts for GPT 3.5/4?
- GitHub - Secrets of Tree of Thoughts for Programmers 🌳👨💻
  Tree-of-Thought prompting is a technique for getting the model to diversify its outputs and self-evaluate its responses.
- Questions about memory, tree-of-thought, planning
  2 - Probably too early in testing and development for there to be a 'standard'. A quick Google search will find you some stuff to read, like https://github.com/dave1010/tree-of-thought-prompting, but your best bet is to read through what other people are doing and try things for yourself. You might end up discovering something new that nobody has thought of yet. Kaio Ken literally just changed the game overnight and figured out how to expand context to 8k for llama-based models with 2 lines of code. Things are evolving fast, and the community desperately needs people willing to spend time reading papers on arXiv, digging through GitHub repos, and testing things out.
- What size model is needed for Reasoning?
- Puzzle GPT: Highly Effective and Fun Puzzle-Solving Prompt for GPT-4 (Uses CoT & ToT)
  Source: Conversation with Bing, 6/4/2023.
  (1) Chain-of-Thought Prompting | Prompt Engineering Guide. https://www.promptingguide.ai/techniques/cot
  (2) [2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models. https://arxiv.org/abs/2305.10601
  (3) [2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. https://arxiv.org/abs/2201.11903
  (4) Using Tree-of-Thought Prompting to boost ChatGPT's reasoning. https://github.com/dave1010/tree-of-thought-prompting
  (5) Tree of Thoughts: Deliberate Problem Solving with Large Language Models. https://arxiv.org/pdf/2305.10601.pdf
- AI has already partly surpassed humans. As the pace of development accelerates, an important question is whose ethics AI will follow. The guest on Ykkösaamu is Professor Teemu Roos of the Finnish Center for Artificial Intelligence. Seija Vaaherkumpu interviews.
- How close are we to an AutoGPT (or similar programme) that can improve its own code recursively?
  That's not exactly correct. Tree-of-thought prompting can boost reasoning. Check out the GitHub repo: https://github.com/dave1010/tree-of-thought-prompting
- Using Tree of Thought Prompting to boost ChatGPT's reasoning
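The mentions above centre on the single-prompt Tree-of-Thought technique from dave1010/tree-of-thought-prompting: instead of sending the model a bare question, you wrap it in a framing that asks the model to simulate several self-evaluating experts. A minimal sketch of that idea (the template wording below is paraphrased from memory, not quoted verbatim from the repository):

```python
# Illustrative Tree-of-Thought prompt wrapper. The template text is a
# paraphrase of the prompt popularized by dave1010/tree-of-thought-prompting,
# not the repository's exact wording.
TOT_TEMPLATE = (
    "Imagine three different experts are answering this question.\n"
    "All experts will write down 1 step of their thinking, "
    "then share it with the group.\n"
    "Then all experts will go on to the next step, etc.\n"
    "If any expert realises they're wrong at any point then they leave.\n"
    "The question is: {question}"
)

def build_tot_prompt(question: str) -> str:
    """Wrap a plain question in the Tree-of-Thought framing."""
    return TOT_TEMPLATE.format(question=question)

# The wrapped prompt is what gets sent to the model in place of the question.
print(build_tot_prompt("Where is the ball?"))
```

The point of the framing is that the model generates multiple parallel reasoning branches and prunes the ones it judges wrong, approximating the tree search of the ToT paper within a single completion.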
What are some alternatives?
ChatGPT_DAN - ChatGPT DAN, Jailbreaks prompt
tree-of-thought-llm - [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
ChatGPT-Jailbreaks - Official jailbreak for ChatGPT (GPT-3.5). Send a long message at the start of the conversation with ChatGPT to get offensive, unethical, aggressive, human-like answers in English and Italian.
llama-retrieval-plugin - LLaMa retrieval plugin script using OpenAI's retrieval plugin
pages-gem - A simple Ruby Gem to bootstrap dependencies for setting up and maintaining a local Jekyll environment in sync with GitHub Pages
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
CX_DB8 - a contextual, biasable, word-or-sentence-or-paragraph extractive summarizer powered by the latest in text embeddings (Bert, Universal Sentence Encoder, Flair)
datadm - DataDM is your private data assistant. Slide into your data's DMs
data-analytics - Welcome to the Data-Analytics repository
SoM - Set-of-Mark Prompting for LMMs
Language-games - Dead simple games made with word vectors.
Constrained-Text-Genera