SuperAGI vs gorilla

| | SuperAGI | gorilla |
|---|---|---|
| Mentions | 82 | 51 |
| Stars | 14,491 | 10,118 |
| Growth | - | - |
| Activity | 9.8 | 8.9 |
| Latest commit | 6 days ago | 3 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SuperAGI
- Introducing GPTs
- 🐍🐍 23 issues to grow yourself as an exceptional open-source Python expert 🧑💻 🥇
  Repo: https://github.com/TransformerOptimus/SuperAGI
- Introduction to Agent Summary – Improving Agent Output by Using LTM & STM
  The recent introduction of the "Agent Summary" feature in SuperAGI v0.0.10 has markedly improved the quality of agent output. Agent Summary helps AI agents maintain a larger context about their goals while executing complex tasks that require longer conversations (iterations).
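The Agent Summary idea can be sketched as follows. This is a hypothetical illustration of the concept, not SuperAGI's actual implementation: keep a rolling summary of older iterations (long-term memory) plus the most recent steps verbatim (short-term memory), so the prompt stays bounded while long-horizon context survives. The `summarize` function is a stand-in for an LLM summarization call.

```python
# Conceptual sketch of an "agent summary" context builder.
# All names here are illustrative, not SuperAGI's real API.

def summarize(text: str, max_chars: int = 500) -> str:
    # Placeholder: a real implementation would call an LLM to compress `text`.
    return text[:max_chars]

def build_context(history: list[str], keep_recent: int = 3) -> str:
    """Combine a summary of older steps (LTM-like) with recent steps (STM-like)."""
    older, recent = history[:-keep_recent], history[-keep_recent:]
    parts = []
    if older:
        parts.append("Summary of earlier progress: " + summarize("\n".join(older)))
    parts.extend("Recent step: " + step for step in recent)
    return "\n".join(parts)

history = [f"step {i}: completed subtask {i}" for i in range(10)]
context = build_context(history)
# `context` contains one compressed summary of steps 0-6 plus steps 7-9 verbatim.
```

The key design point is that prompt size no longer grows linearly with the number of iterations, which is what lets an agent keep sight of its goals over long runs.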
- 🚀✨SuperAGI v0.0.10✨ is now live on GitHub
  Check out the full release here: https://github.com/TransformerOptimus/SuperAGI/releases/tag/v0.0.10
- Top 20 Must-Try AI Tools for Developers in 2023
  10. SuperAGI
- We're bringing Google's PaLM 2 🦬 Bison LLM API support into SuperAGI in our upcoming v0.0.8 release
  Currently, PaLM 2 Bison is live on the dev branch of SuperAGI on GitHub for the community to try: https://github.com/TransformerOptimus/SuperAGI/tree/dev
- Why use SuperAGI
  SuperAGI is built with developers in mind, taking their requirements and preferences into account when building autonomous AI agents. It offers a number of advantages.
- In five years, there will be no programmers left, believes Stability AI CEO
- LLM Powered Autonomous Agents
  I think for agents to truly find adoption in the real world, agent trajectory fine-tuning is a critical component: how do you make an agent perform better at achieving a particular objective with every subsequent run, essentially making agents learn the way we do.
  I also think current LLMs might not fit agent use cases well in the mid to long term, because the RL they go through is based on input/best-output pairs, whereas the intelligence you need in agents is more about building an algorithm to achieve an objective on the fly. This perhaps requires a new type of large model (Large Agent Models?) trained using RLfD (Reinforcement Learning from Demonstration).
  Another key missing piece is a highly configurable software middleware between intelligence (LLMs), memory (vector DBs ~ LTMs, STMs), tools, and workflows across every iteration. The current agent core loop for finding the next best action is too simplistic; the core self-prompting loop of an agent should be configurable for the use case at hand. In BabyAGI, every iteration goes through a workflow of Plan, Prioritize, and Execute; in AutoGPT, the agent finds the next best action based on LTM/STM; in GPT Engineer, it is write specs > write tests > write code. For a dev-infra monitoring agent the workflow might be totally different: consume logs from tools like Grafana, Splunk, and APMs > check for an anomaly > if there is one, take human input for feedback. Every real-world use case has its own workflow, yet current agent frameworks hard-code this into the base prompt. In SuperAGI (https://superagi.com) (disclaimer: I'm its creator), the core iteration workflow of an agent can be defined as part of agent provisioning.
  Another missing piece is the notion of Knowledge. Agents currently depend entirely on the knowledge of LLMs or search results to execute tasks, but when a specialised knowledge set is plugged into an agent, it performs significantly better.
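The configurable-workflow idea described above can be sketched in a few lines. This is an illustrative toy, not SuperAGI's actual provisioning API: the agent's core loop is just a list of step functions, so a BabyAGI-style workflow and a monitoring workflow can share the same loop.

```python
# Toy sketch of a configurable agent iteration loop.
# Step functions and state keys are illustrative, not any framework's real API.
from typing import Callable

Step = Callable[[dict], dict]

def run_iteration(state: dict, workflow: list[Step]) -> dict:
    """Run one agent iteration by threading state through the configured steps."""
    for step in workflow:
        state = step(state)
    return state

def plan(state: dict) -> dict:
    state["plan"] = ["task B", "task A"]
    return state

def prioritize(state: dict) -> dict:
    state["plan"].sort()  # stand-in for LLM-driven reprioritization
    return state

def execute(state: dict) -> dict:
    state["done"] = state["plan"].pop(0)
    return state

# BabyAGI-style workflow: Plan -> Prioritize -> Execute.
# A monitoring agent would pass a different step list to the same loop.
state = run_iteration({}, [plan, prioritize, execute])
```

The point is that the workflow is data handed to the loop, not logic hard-coded in a base prompt, so each use case can define its own iteration structure.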
- Created a simple Chrome dino game using SuperAGI's SuperCoder 😵 The dino changes color on every run :P (without writing a single line of code myself)
  Build your own game here: https://github.com/TransformerOptimus/SuperAGI
gorilla
- Launch HN: Nango (YC W23) – Open-Source Unified API
  Do you leverage https://gorilla.cs.berkeley.edu/ at all? If not, perhaps consider whether it would solve some pain for you.
- Autonomous LLM agents with human-out-of-loop
- Show HN: I made a script to scrape your Facebook group
- Pushing ChatGPT's Structured Data Support to Its Limits
  * Gorilla [https://github.com/ShishirPatil/gorilla]
  Could be interesting to try some of these exercises with these models.
- Guidance for selecting a function-calling library?
  gorilla
- Gorilla: An API Store for LLMs
- Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
  Nice, this made me go back and check up on the Gorilla LLM project [1] to see what they are doing with APIs and whether they have applied their fine-tuning to any of the newer foundation models. It looks like things have slowed down since they launched (?), or maybe development is happening elsewhere on some invisible Discord channel, but I hope the intersection of API calling and LLMs as a logic-processing function keeps getting focus; it's an important direction for interop across the web.
  [1] https://github.com/ShishirPatil/gorilla
- RestGPT
  "Gorilla: Large Language Model Connected with Massive APIs" (2023) https://gorilla.cs.berkeley.edu/ :
> Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla comes up with the semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to be trained on! Join us, as we try to expand the largest API store and teach LLMs how to write them!
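The core task Gorilla solves, mapping a natural-language query to the right API call, can be illustrated with a toy stand-in. Gorilla does this with a fine-tuned LLM over the APIBench corpus; the keyword-overlap retrieval and the catalog entries below are purely hypothetical.

```python
# Toy illustration of natural-language-to-API mapping, the task Gorilla is
# trained for. The catalog and the overlap scoring are stand-ins, not
# Gorilla's actual method or API surface.

api_catalog = {
    "translate text between languages": "huggingface.translation_pipeline(...)",
    "detect objects in an image": "torchvision.models.detection.fasterrcnn(...)",
    "transcribe speech to text": "openai.whisper.transcribe(...)",
}

def pick_api(query: str) -> str:
    """Return the catalog call whose description best overlaps the query."""
    query_words = set(query.lower().split())

    def overlap(description: str) -> int:
        return len(query_words & set(description.split()))

    return api_catalog[max(api_catalog, key=overlap)]

call = pick_api("please transcribe this speech recording to text")
```

A fine-tuned model replaces the scoring function with learned semantics, which is what lets Gorilla handle queries with no lexical overlap with the API description and still emit a syntactically correct call.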
- Calling APIs with Natural Language
- Shishir Patil: Teaching AI to Use APIs with Gorilla LLM – Humans of AI Podcast
  Humans of AI Podcast #7
  An amazing conversation with Shishir Patil, the creator of Gorilla LLM, a large language model specifically trained to use APIs!
  Shishir is currently a fifth-year PhD student at the University of California, Berkeley, whose work broadly covers ML systems, LLMs, edge ML, and Sky Computing.
  Definitely give the episode a listen to hear Shishir's story.
  To read more about #GorillaLLM, check out the project page!
  https://gorilla.cs.berkeley.edu
What are some alternatives?
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
DB-GPT - AI Native Data App Development framework with AWEL(Agentic Workflow Expression Language) and Agents
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/AutoGPT]
Voyager - An Open-Ended Embodied Agent with Large Language Models
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
gorilla-cli - LLMs for your CLI
Gin - Gin is an HTTP web framework written in Go (Golang). It features a Martini-like API with much better performance -- up to 40 times faster. If you need smashing performance, get yourself some Gin.
AgentGPT - 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
GirlfriendGPT - Girlfriend GPT is a Python project to build your own AI girlfriend using ChatGPT4.0
AutoLearn-GPT - ChatGPT learns automatically.
gpt4all - gpt4all: run open-source LLMs anywhere