| | trex | graph-of-thoughts |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 238 | 1,882 |
| Growth | 0.4% | 4.6% |
| Activity | 6.6 | 6.4 |
| Last commit | 8 months ago | about 1 month ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trex
- Show HN: Generate JSON mock data for testing/initial app development
A friend of mine built a tool called Trex that you might find helpful; check it out here: https://github.com/automorphic-ai/trex. It's very consistent at generating templated data.
- Intelligently transform unstructured data into structured output (JSON, Regex, CFG)
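trex's own client API isn't shown on this page, so the sketch below only illustrates the weaker extract-and-validate flavor of the same idea: pull a JSON object out of free-form model output and check it against a required-key "schema". The model call is stubbed, and every name here (`fake_llm`, `extract_json`) is an illustrative assumption, not trex's interface.

```python
import json
import re

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call -- returns unstructured prose
    with the JSON we want embedded in it."""
    return 'Sure! Here is the record: {"name": "Ada", "age": 36} Hope that helps.'

def extract_json(text: str, required_keys: set[str]) -> dict:
    """Pull the first JSON object out of free-form text and check it
    against a minimal 'schema' (a set of required keys)."""
    match = re.search(r"\{.*\}", text, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    obj = json.loads(match.group(0))
    missing = required_keys - obj.keys()
    if missing:
        raise ValueError(f"missing required keys: {missing}")
    return obj

record = extract_json(fake_llm("Describe a user as JSON."), {"name", "age"})
print(record)  # {'name': 'Ada', 'age': 36}
```

Constrained-generation tools such as trex instead restrict decoding so malformed output can't be produced in the first place; post-hoc extraction like the above is only a rough approximation of that guarantee.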
graph-of-thoughts
- Q* Could Be It - Forget AlphaGO - It's Diplomacy - Peg 1 May Have Fallen - Noam Brown May Have Achieved The Improbable - Is this Q* Leak 2.0?
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought (CoT) or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information (“LLM thoughts”) are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over the state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks. Website & code: https://github.com/spcl/graph-of-thoughts
- Graph of Thoughts: Solving Elaborate Problems with Large Language Models
Code for the paper, too: https://github.com/spcl/graph-of-thoughts
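The abstract above is concrete enough to sketch: thoughts are vertices, dependency edges record which thoughts a new thought was built from, and aggregation is the graph transformation that tree-shaped prompting (ToT) lacks. The sketch below uses the paper's own sorting task with deterministic stand-ins for LLM calls; all class and function names are illustrative, not the graph-of-thoughts package's API.

```python
from heapq import merge

# Each "thought" is a vertex; edges are the dependencies recorded in `parents`.
# Deterministic functions stand in for LLM calls, so the control flow of the
# graph (generate -> aggregate -> score) is what this sketch demonstrates.

class Thought:
    def __init__(self, state, parents=()):
        self.state = state        # the information held at this vertex
        self.parents = parents    # incoming edges in the thought graph

def generate(thought, k=4):
    """Branch one thought into k child thoughts (here: split-and-sort chunks)."""
    data, n = thought.state, len(thought.state)
    step = max(1, n // k)
    return [Thought(sorted(data[i:i + step]), parents=(thought,))
            for i in range(0, n, step)]

def aggregate(thoughts):
    """Merge several thoughts into one -- the transformation ToT lacks."""
    combined = list(merge(*(t.state for t in thoughts)))
    return Thought(combined, parents=tuple(thoughts))

def score(thought):
    """Fraction of adjacent pairs in order (1.0 = fully sorted)."""
    s = thought.state
    pairs = list(zip(s, s[1:]))
    return sum(a <= b for a, b in pairs) / max(1, len(pairs))

root = Thought([5, 3, 8, 1, 9, 2, 7, 4])
children = generate(root)          # fan out: one vertex per sub-problem
result = aggregate(children)       # fan in: combine thoughts along new edges
print(result.state, score(result)) # [1, 2, 3, 4, 5, 7, 8, 9] 1.0
```

In the real framework each of these steps would be an LLM prompt, and a score like this would drive pruning (e.g. keeping only the best N thoughts) between rounds of generation, aggregation, and feedback-loop refinement.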
What are some alternatives?
PentestGPT - A GPT-empowered penetration testing tool
prompttools - Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroma, Weaviate, LanceDB).
sycamore - 🍁 Sycamore is an LLM-powered search and analytics platform for unstructured data.
llm-guard - The Security Toolkit for LLM Interactions
autolabel - Label, clean and enrich text datasets with LLMs.
prompt-lib - A set of utilities for running few-shot prompting experiments on large-language models
ChatGLM2-6B - ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型
ToolEmu - A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use
JSON-Schema Faker - JSON-Schema + fake data generators
ChainFury - 🦋 Production grade chaining engine behind TuneChat. Self host today!
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
promptmap - automatically tests prompt injection attacks on ChatGPT instances