SuperAGI
tree-of-thoughts
| | SuperAGI | tree-of-thoughts |
|---|---|---|
| Mentions | 82 | 26 |
| Stars | 14,373 | 4,016 |
| Growth | - | - |
| Activity | 9.9 | 8.8 |
| Last commit | 11 days ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SuperAGI
- Introducing GPTs
-
🐍🐍 23 issues to grow yourself as an exceptional open-source Python expert 🧑💻 🥇
Repo: https://github.com/TransformerOptimus/SuperAGI
-
Introduction to Agent Summary – Improving Agent Output by Using LTM & STM
The recent introduction of the "Agent Summary" feature in SuperAGI version 0.0.10 has made a drastic difference in agent performance, improving the quality of agent output. Agent Summary helps AI agents maintain a larger context about their goals while executing complex tasks that require longer conversations (iterations).
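The idea behind an agent summary can be sketched as a rolling memory: keep the last few messages verbatim (short-term memory) and fold everything older into a compressed summary (long-term memory). This is an illustrative sketch, not SuperAGI's actual implementation; `summarize` stands in for what would be an LLM summarization call.

```python
def summarize(messages):
    # Placeholder for an LLM call that condenses messages into a few lines.
    return " | ".join(m[:40] for m in messages)

class AgentMemory:
    def __init__(self, stm_size=4):
        self.stm_size = stm_size   # how many recent messages to keep verbatim
        self.recent = []           # short-term memory (STM)
        self.summary = ""          # rolling summary of older context (LTM)

    def add(self, message):
        self.recent.append(message)
        if len(self.recent) > self.stm_size:
            # Fold the oldest messages into the summary.
            overflow = self.recent[: -self.stm_size]
            self.recent = self.recent[-self.stm_size:]
            self.summary = summarize(
                ([self.summary] if self.summary else []) + overflow
            )

    def context(self):
        # What the agent sees each iteration: summary + recent turns.
        parts = []
        if self.summary:
            parts.append("Summary so far: " + self.summary)
        parts.extend(self.recent)
        return "\n".join(parts)

mem = AgentMemory(stm_size=2)
for i in range(5):
    mem.add(f"step {i}: did something")
print(mem.context())
```

Because the summary is bounded, the prompt stays small even as the number of iterations grows, which is what lets the agent keep sight of its goals over long runs.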
-
🚀✨SuperAGI v0.0.10✨is now live on GitHub
Check out the full release here: https://github.com/TransformerOptimus/SuperAGI/releases/tag/v0.0.10
-
Top 20 Must Try AI Tools for Developers in 2023
10. SuperAGI
-
We're bringing Google's PaLM 2 🦬 Bison LLM API support into SuperAGI in our upcoming v0.0.8 release
Currently, PaLM2 Bison is live on the dev branch of SuperAGI GitHub for the community to try: https://github.com/TransformerOptimus/SuperAGI/tree/dev
-
Why use SuperAGI
SuperAGI is built with developers in mind, so it takes their requirements and preferences into account when building autonomous AI agents. It has a number of advantages, including:
- In five years, there will be no programmers left, believes Stability AI CEO
-
LLM Powered Autonomous Agents
I think that for agents to truly find adoption in the real world, agent trajectory fine-tuning is a critical component: how do you make an agent perform better at achieving a particular objective with every subsequent run? Basically, making agents learn similar to how we learn when we
Also, I think current LLMs might not fit well for agent use cases in the mid to long term, because the RL they go through is based on input/best-output methods, whereas the intelligence you need in agents is more about building an algorithm to achieve an objective on the fly. This perhaps requires a new type of large model (Large Agent Models?) trained using RLfD (Reinforcement Learning from Demonstration).
Also, I think one of the key missing pieces is a highly configurable software middleware between intelligence (LLMs), memory (vector DBs ~ LTMs, STMs), tools, and workflows across every iteration. The current agent core loop for finding the next best action is too simplistic. The core self-prompting loop, or iteration, of an agent should be configurable for the use case at hand. For example, in BabyAGI every iteration goes through a workflow of Plan, Prioritize, and Execute; in AutoGPT, the agent finds the next best action based on LTM/STM; in GPT-Engineer, it is write specs > write tests > write code. For a dev-infra monitoring agent the workflow might be totally different: consume logs from tools like Grafana, Splunk, and APMs > check whether there is an anomaly > if there is, take human input for feedback. Every real-world use case has its own workflow, yet current agent frameworks hard-code this in the base prompt. In SuperAGI (https://superagi.com) (disclaimer: I'm its creator), the core iteration workflow of an agent can be defined as part of agent provisioning.
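The configurable iteration workflow described above can be sketched as a loop that runs a user-supplied list of steps rather than a hard-coded "next best action" prompt. All names here are hypothetical and illustrative, not SuperAGI's actual API.

```python
class Agent:
    def __init__(self, name, workflow):
        self.name = name
        self.workflow = workflow  # ordered list of step functions

    def run_iteration(self, state):
        # One core-loop iteration = run the configured steps in order.
        for step in self.workflow:
            state = step(state)
        return state

# BabyAGI-style workflow: Plan -> Prioritize -> Execute.
def plan(state):
    state.setdefault("tasks", []).append(f"task-{len(state['tasks'])}")
    return state

def prioritize(state):
    state["tasks"].sort()
    return state

def execute(state):
    state.setdefault("done", []).append(state["tasks"].pop(0))
    return state

# A monitoring agent would plug in a different workflow, e.g.
# [consume_logs, detect_anomaly, ask_human] -- same loop, different steps.
agent = Agent("babyagi-like", [plan, prioritize, execute])
state = agent.run_iteration({})
print(state)  # {'tasks': [], 'done': ['task-0']}
```

The point of the design is that the loop itself stays generic; only the step list changes per use case, so nothing about the workflow has to live in the base prompt.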
Another missing piece is the notion of Knowledge. Agents currently depend entirely on the LLM's knowledge or on search results to execute tasks, but when a specialised knowledge set is plugged into an agent, it performs significantly better.
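Plugging a knowledge set into an agent usually means retrieving relevant entries before each action and prepending them to the prompt. The sketch below uses deliberately naive keyword overlap as the retrieval score (a real setup would use embeddings and a vector DB); all names are illustrative.

```python
def retrieve(knowledge, query, k=2):
    # Score each entry by how many query words it shares; keep the top k.
    q = set(query.lower().split())
    scored = sorted(knowledge, key=lambda doc: -len(q & set(doc.lower().split())))
    return scored[:k]

def build_prompt(task, knowledge):
    facts = retrieve(knowledge, task)
    return ("Relevant knowledge:\n"
            + "\n".join(f"- {f}" for f in facts)
            + f"\nTask: {task}")

# Toy domain knowledge for a dev-infra monitoring agent.
kb = [
    "Grafana dashboards expose alert rules via the HTTP API",
    "Splunk queries use SPL syntax",
    "APM traces link spans by trace id",
]
print(build_prompt("check grafana alert rules", kb))
```

Swapping `kb` for a different corpus specialises the same agent to a different domain without touching the loop or the base prompt.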
-
Created a simple chrome dino game using SuperAGI's SuperCoder 😵 The dino changes color on every run :P (without writing a single line of code myself)
Build your own game here: https://github.com/TransformerOptimus/SuperAGI
tree-of-thoughts
-
[D] Potential scammer on github stealing work of other ML researchers?
I checked the issues and found https://github.com/kyegomez/tree-of-thoughts/issues/78
-
(2/2) May 2023
Plug-and-Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70% (https://github.com/kyegomez/tree-of-thoughts)
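The search pattern the Tree of Thoughts paper describes — generate several candidate "thoughts" per state, score them, and expand only the best few at each depth — can be sketched minimally as follows. This is an illustrative toy, not either repo's implementation; in practice `generate` and `score` would be LLM calls.

```python
def tree_of_thoughts(root, generate, score, depth=3, breadth=2):
    frontier = [root]
    for _ in range(depth):
        # Expand every frontier state into candidate thoughts...
        candidates = [t for state in frontier for t in generate(state)]
        # ...then prune to the `breadth` highest-scoring partial solutions.
        frontier = sorted(candidates, key=score, reverse=True)[:breadth]
    return max(frontier, key=score)

# Toy problem: build a string of digits maximizing the digit sum.
generate = lambda s: [s + d for d in "123"]
score = lambda s: sum(int(c) for c in s)

best = tree_of_thoughts("", generate, score, depth=3, breadth=2)
print(best)  # "333"
```

The pruning at each level is what distinguishes this from plain chain-of-thought sampling: weak partial solutions are discarded early instead of being carried to completion.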
-
Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures
Same deal with amplification research like Tree of Thoughts, AdaPlanner, and Ghost in the Minecraft. Same deal with agentized LLMs like Auto-GPT emphasizing testing regimens. They want efficiency and explainability, not this "mine is bigger than yours" nonsense coming out of Microsoft, Google, or Meta (which isn't even the whole picture of the open-source ML research within those firms either). There's this idealized "neurosymbolic AI" where everyone just wants code to do a job, so there should only be so much probabilistic behavior to learn the jobs that aren't learned to begin with, but the fact remains that the actual researchers and engineers want something as deterministic as an imperative language can be. Perhaps we'll achieve functional depth, and instead of some outdated "paperclip maximizer", we summon Maxwell's demon via a "complete" Church–Turing thesis. In other words, while a "vastly superior being in intelligence" is a really bad time for anyone with an intellect-based superiority complex, the rest of us are humble enough to use this information science to further explore the unknown.
- Tree of Thought (ToT) and AutoGPT
-
Tree of Thoughts
This is Shunyu, author of Tree of Thoughts (arxiv.org/abs/2305.10601).
The official code to replicate paper results is https://github.com/ysymyth/tree-of-thought-llm
Not https://github.com/kyegomez/tree-of-thoughts, which, according to many who have told me, is not a correct/good implementation of ToT and damages the reputation of ToT.
I explained the situation here: https://twitter.com/ShunyuYao12/status/1663946702754021383
I'd appreciate your help by unstarring his and starring mine; currently, GitHub and Google searches go to his repo by default, which has been very misleading for many users.
-
Has anybody tried their models with "Tree of Thoughts"?
I hacked a dirty PR into this derivative repo, to run it with oobabooga API: https://github.com/kyegomez/tree-of-thoughts/pull/8
- Tree of Thoughts: Deliberate Problem Solving with LLMs
What are some alternatives?
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Awesome-Prompt-Engineering - This repository contains hand-curated resources for Prompt Engineering with a focus on Generative Pre-trained Transformer (GPT), ChatGPT, PaLM, etc.
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/AutoGPT]
tree-of-thought-llm - [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
GirlfriendGPT - Girlfriend GPT is a Python project to build your own AI girlfriend using ChatGPT4.0
prompt-engineering - Tips and tricks for working with Large Language Models like OpenAI's GPT-4.
AgentGPT - 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Neurite - Fractal Graph Desktop for AI Agents, Web Browsing, Note-Taking, and Code.
AutoLearn-GPT - ChatGPT learns automatically.
Mr.-Ranedeer-AI-Tutor - A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.