SuperAGI VS guidance

Compare SuperAGI vs guidance and see what their differences are.

SuperAGI

<⚡️> SuperAGI - A dev-first open-source autonomous AI agent framework, enabling developers to build, manage & run useful autonomous agents quickly and reliably. (by TransformerOptimus)

guidance

A guidance language for controlling large language models. (by guidance-ai)
                 SuperAGI          guidance
Mentions         82                23
Stars            14,491            17,357
Growth           -                 2.7%
Activity         9.8               9.8
Latest commit    1 day ago         2 days ago
Language         Python            Jupyter Notebook
License          MIT License       MIT License
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

SuperAGI

Posts with mentions or reviews of SuperAGI. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-06.
  • Introducing GPTs
    3 projects | news.ycombinator.com | 6 Nov 2023
  • 🐍🐍 23 issues to grow yourself as an exceptional open-source Python expert 🧑‍💻 🥇
    10 projects | dev.to | 19 Oct 2023
    Repo : https://github.com/TransformerOptimus/SuperAGI
  • Introduction to Agent Summary – Improving Agent Output by Using LTS & STM
    1 project | dev.to | 8 Sep 2023
    The recent introduction of the “Agent Summary” feature in SuperAGI version 0.0.10 has markedly improved agent performance and the quality of agent output. Agent Summary helps AI agents maintain a larger context about their goals while executing complex tasks that require longer conversations (iterations).
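    As a hedged illustration of how such a summary might work (this is not SuperAGI's actual implementation; all names are hypothetical): older iterations are compressed into a rolling summary, while the most recent steps stay verbatim, so long-running tasks fit inside the context window.

      # Illustrative sketch only, not SuperAGI's real API: compress older
      # steps into a rolling summary (LTM) and keep recent steps verbatim (STM).
      def build_context(summarize, goal, steps, keep_recent=3):
          older, recent = steps[:-keep_recent], steps[-keep_recent:]
          if not older:
              return recent
          summary = summarize(
              "Summarize these agent steps, keeping facts needed for the goal "
              f"'{goal}':\n" + "\n".join(older)
          )
          return [f"Summary of earlier work: {summary}"] + recent

      # `summarize` would be an LLM call; a truncating stub works for a demo:
      context = build_context(lambda text: text[:120], "ship v0.0.10",
                              [f"step {i}: ..." for i in range(10)])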
  • 🚀✨SuperAGI v0.0.10✨is now live on GitHub
    1 project | /r/Super_AGI | 14 Aug 2023
    Check out the full release here: https://github.com/TransformerOptimus/SuperAGI/releases/tag/v0.0.10
  • Top 20 Must Try AI Tools for Developers in 2023
    2 projects | dev.to | 20 Jul 2023
    10. SuperAGI
  • We're bringing Google's PaLM2 🦬 Bison LLM API support into SuperAGI in our upcoming v0.0.8 release
    1 project | /r/Super_AGI | 11 Jul 2023
    Currently, PaLM2 Bison is live on the dev branch of SuperAGI GitHub for the community to try: https://github.com/TransformerOptimus/SuperAGI/tree/dev
  • Why use SuperAGI
    1 project | /r/SuperrAGI | 5 Jul 2023
    SuperAGI is made with developers in mind, so it takes their requirements and preferences into account when building autonomous AI agents. It has a number of advantages, including:
  • In five years, there will be no programmers left, believes Stability AI CEO
    4 projects | /r/singularity | 3 Jul 2023
  • LLM Powered Autonomous Agents
    3 projects | news.ycombinator.com | 27 Jun 2023
    I think that for agents to truly find adoption in the real world, agent trajectory fine-tuning is a critical component: how do you make an agent perform better at a particular objective with every subsequent run? Basically, making agents learn in a way similar to how we learn.

    Also, I think current LLMs might not fit agent use cases well in the mid to long term, because the RL they go through is based on input/best-output pairs, whereas the intelligence you need in agents is more about building an algorithm to achieve an objective on the fly. This perhaps requires a new type of large model (Large Agent Models?) trained using RLfD (Reinforcement Learning from Demonstration).

    Also, I think one of the key missing pieces is a highly configurable software middleware between Intelligence (LLMs), Memory (vector DBs ~ LTMs, STMs), Tools, and workflows across every iteration. The current agent core loop for finding the next best action is too simplistic; the core self-prompting loop or iteration of an agent should be configurable for the use case at hand. In BabyAGI, every iteration goes through a workflow of Plan, Prioritize, and Execute; in AutoGPT, the agent finds the next best action based on LTM/STM; in GPT-Engineer, it is write specs > write tests > write code. For a dev-infra monitoring agent, this workflow might be totally different: consume logs from tools like Grafana, Splunk, and APMs > check for anomalies > if there is an anomaly, take human input for feedback. Every real-world use case has its own workflow, yet current agent frameworks hard-code this in the base prompt. In SuperAGI (https://superagi.com) (disclaimer: I'm its creator), the core iteration workflow of an agent can be defined as part of agent provisioning.
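    A minimal sketch of that idea, assuming a deliberately simplified state dict and step functions (none of this is SuperAGI's actual API): the iteration workflow is plain data that can be swapped per use case instead of being hard-coded in the base prompt.

      # Hypothetical sketch: the agent's core loop is configuration, not code.
      from typing import Callable, Dict, List

      Step = Callable[[Dict], Dict]  # each step reads and updates agent state

      def run_agent(workflow: List[Step], state: Dict, max_iters: int = 10) -> Dict:
          for _ in range(max_iters):
              for step in workflow:
                  state = step(state)
              if state.get("done"):
                  break
          return state

      # BabyAGI-style workflow: Plan > Prioritize > Execute. A monitoring agent
      # would plug in different steps (consume_logs, detect_anomaly, ask_human).
      def plan(s):       s.setdefault("tasks", ["collect requirements"]); return s
      def prioritize(s): s["tasks"].sort(); return s
      def execute(s):    s["done"] = not s["tasks"] or not s["tasks"].pop(0); return s

      final_state = run_agent([plan, prioritize, execute], {"goal": "demo"})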

    Another missing piece is the notion of Knowledge. Agents currently depend entirely on the knowledge of LLMs or on search results to execute tasks, but if a specialised knowledge set is plugged into an agent, it performs significantly better.
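    For instance, a knowledge lookup could be just another pluggable step in the loop sketched above; here a naive keyword index stands in for a real vector store (all names are illustrative):

      # Hypothetical sketch: fetch domain facts before the agent acts.
      KNOWLEDGE = {
          "grafana": "Dashboards query data sources; alerts fire via contact points.",
          "splunk":  "Searches use SPL; retention is configured per index.",
      }

      def retrieve(query: str, store: dict, k: int = 2) -> list:
          hits = [text for key, text in store.items() if key in query.lower()]
          return hits[:k]

      # The returned snippets would be prepended to the agent's next prompt.
      facts = retrieve("Why did the Grafana alert not fire?", KNOWLEDGE)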

  • Created a simple chrome dino game using SuperAGI's SuperCoder 😵 The dino changes color on every run :P (without writing a single line of code myself)
    1 project | /r/indiegames | 23 Jun 2023
    Build your own game here: https://github.com/TransformerOptimus/SuperAGI

guidance

Posts with mentions or reviews of guidance. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
    5 projects | news.ycombinator.com | 8 Apr 2024
    [1]: https://github.com/guidance-ai/guidance/tree/main
  • Show HN: Prompts as (WASM) Programs
    9 projects | news.ycombinator.com | 11 Mar 2024
    > The most obvious usage of this is forcing a model to output valid JSON

    Isn't this something that Outlines [0], Guidance [1] and others [2] already solve much more elegantly?

    0. https://github.com/outlines-dev/outlines

    1. https://github.com/guidance-ai/guidance

    2. https://github.com/sgl-project/sglang
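    As a hedged sketch of what that looks like with guidance's post-rewrite Python API (the model path is a placeholder and keyword details may vary by version): literal text is emitted verbatim while constrained holes are filled token by token, so the JSON shape is guaranteed.

      from guidance import models, gen

      # Placeholder model path; any backend guidance supports would do.
      lm = models.LlamaCpp("path/to/model.gguf")

      # The literal braces and keys are forced; only the values are generated,
      # and the regex constraint keeps "doors" numeric.
      lm += 'Describe the car as JSON: { "model": "' + gen("name", stop='"')
      lm += '", "doors": ' + gen("doors", regex=r"[0-9]+") + " }"

      print(lm["doors"])  # captured value, guaranteed to match the regex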

  • Show HN: Fructose, LLM calls as strongly typed functions
    10 projects | news.ycombinator.com | 6 Mar 2024
  • LiteLlama-460M-1T has 460M parameters trained with 1T tokens
    1 project | news.ycombinator.com | 7 Jan 2024
    Or combine it with something like llama.cpp's grammar or Microsoft's guidance-ai[0] (which I prefer), which would allow adding some ReAct-style prompting and external tools. As others have mentioned, instruct tuning would help too.

    [0] https://github.com/guidance-ai/guidance
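    As a hedged sketch (model path and grammar are illustrative), llama.cpp's GBNF grammars can force a ReAct-style Thought/Action shape via llama-cpp-python:

      from llama_cpp import Llama, LlamaGrammar

      # GBNF grammar forcing a two-line "Thought: ... / Action: ..." answer.
      GBNF = (
          'root ::= "Thought: " line "Action: " line\n'
          'line ::= [^\\n]+ "\\n"\n'
      )

      llm = Llama(model_path="path/to/model.gguf")  # placeholder path
      grammar = LlamaGrammar.from_string(GBNF)
      out = llm("Decide the next step for checking disk usage.",
                grammar=grammar, max_tokens=64)
      print(out["choices"][0]["text"])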

  • Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
    2 projects | /r/LocalLLaMA | 10 Dec 2023
  • Prompting LLMs to constrain output
    2 projects | /r/LocalLLaMA | 8 Dec 2023
    I have been experimenting with guidance and LMQL. It's a bit too early to give any well-formed opinions, but I really do like the idea of constraining LLM output.
  • Guidance is back 🥳
    1 project | /r/LocalLLaMA | 16 Nov 2023
  • New: LangChain templates – fastest way to build a production-ready LLM app
    6 projects | news.ycombinator.com | 1 Nov 2023
  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    Thanks for your comment.

    I did not know about "Betteridge's law of headlines", quite interesting. Thanks for sharing :)

    You raise some interesting points.

    1) Safety: It is true that LVMs and LLMs have unknown biases and could potentially create unsafe content. However, this is not necessarily unique to them; for example, Google had the same problem with their supervised learning model https://www.theverge.com/2018/1/12/16882408/google-racist-go.... It all depends on the original data. I believe we need systems on top of our models to ensure safety. It is also possible to restrict the output domain of our models (https://github.com/guidance-ai/guidance). Instead of allowing our LVMs to output any words, we could restrict them to answering only "red, green, blue..." when giving the color of a car.
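    As a hedged sketch of that restriction using guidance (placeholder model path; exact API may vary by version):

      from guidance import models, select

      lm = models.LlamaCpp("path/to/model.gguf")  # placeholder path
      # select() constrains decoding to exactly one of the listed options.
      lm += "The color of the car is " + select(
          ["red", "green", "blue", "black", "white"], name="color")
      print(lm["color"])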

    2) Cost: You are right that right now LVMs are quite expensive to run. As you said, they are a great way to go to market faster, but they cannot run on low-cost hardware for the moment. However, they could help with training those smaller models. Indeed, we see in the NLP domain that a lot of smaller models are trained on data created with GPT models. You can still distill the knowledge of your LVMs into a custom smaller model that can run on embedded devices. The advantage is that you can use your LVMs to generate data when it is scarce and use them as a fallback when your smaller device is uncertain of the answer.

    3) Labeling data: I don't think labeling data is necessarily cheap. First, you have to collect the data, which, depending on the frequency of your events, could take months of monitoring if you want to build a large-scale dataset. Second, not all labeling is cheap: I worked at a semiconductor company where labeled data was scarce, as it required expert knowledge and could only be produced by experienced employees. Indeed, not all labeling can be done externally.

    However, both approaches are indeed complementary and I think systems that will work the best will rely on both.

    Thanks again for the thought-provoking discussion. I hope this answers some of the concerns you raised.

  • Show HN: Elelem – TypeScript LLMs with tracing, retries, and type safety
    2 projects | news.ycombinator.com | 12 Oct 2023
    I've had a bit of trouble getting function calling to work for cases that aren't just extracting some data from the input. The format is correct, but it was harder to get the correct data when the task wasn't a simple extraction.

    Hopefully OpenAI and others will offer something like https://github.com/guidance-ai/guidance at some point to guarantee overall output structure.

    Failed validations will retry, but from what I've seen, JSONSchema + generated JSON examples are decently reliable in practice for gpt-3.5-turbo and extremely reliable on gpt-4.
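    For illustration, a minimal sketch of that validate-and-retry loop, assuming the jsonschema package and a generic llm callable (both hypothetical stand-ins for Elelem's internals):

      import json
      import jsonschema  # pip install jsonschema

      SCHEMA = {"type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"]}

      def call_with_retries(llm, prompt, retries=3):
          for _ in range(retries):
              raw = llm(prompt)
              try:
                  data = json.loads(raw)
                  jsonschema.validate(data, SCHEMA)
                  return data
              except (json.JSONDecodeError, jsonschema.ValidationError) as err:
                  prompt += f"\nPrevious answer was invalid ({err}); return only valid JSON."
          raise RuntimeError("model never produced schema-valid JSON")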

What are some alternatives?

When comparing SuperAGI and guidance you can also consider the following projects:

AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.

lmql - A language for constraint-guided and efficient LLM programming.

Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/AutoGPT]

semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps

autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap

langchain - 🦜🔗 Build context-aware reasoning applications

NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

AgentGPT - 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

AutoLearn-GPT - ChatGPT learns automatically.

outlines - Structured Text Generation