| | automata | AutoLearn-GPT |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 550 | 24 |
| Growth | - | - |
| Activity | 9.5 | 6.5 |
| Latest commit | 8 months ago | 12 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
automata
-
Self-Coding is imminent with OpenAI's new function calling
I have been worried about how much fine-tuning for function calls has lobotomized the models. That said, I find it can still author good code, even with function calls. You can see a sample output here - https://github.com/emrgnt-cmplxty/Automata/issues/72
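For readers unfamiliar with the feature being discussed: with OpenAI's function calling, the model is given JSON Schema descriptions of callable functions and replies with a structured `function_call` instead of free text. Below is a minimal sketch of that message shape; the `write_file` function and its schema are illustrative, not from the Automata repo, and the response is simulated rather than fetched from the API.

```python
import json

# Illustrative tool schema in the OpenAI function-calling format:
# the model receives a JSON Schema description of each callable function.
write_file_schema = {
    "name": "write_file",  # hypothetical function name
    "description": "Write source code to a file in the workspace.",
    "parameters": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Relative file path"},
            "content": {"type": "string", "description": "File contents"},
        },
        "required": ["path", "content"],
    },
}

def parse_function_call(message):
    """Extract the function name and arguments from an assistant message
    carrying a `function_call` field (arguments arrive as a JSON string)."""
    call = message["function_call"]
    return call["name"], json.loads(call["arguments"])

# Simulated assistant message, shaped like the API's chat response format.
response_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "write_file",
        "arguments": '{"path": "hello.py", "content": "print(\'hi\')"}',
    },
}

name, args = parse_function_call(response_message)
```

The point of the structured format is that the agent can dispatch `name` and `args` to real code deterministically, which is what makes self-coding loops like Automata's more robust than parsing free-form text.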
-
Self-Coding is imminent with OpenAI's new function calling
I wanted to share a new demo, since the new function-calling model resulted in a big boost to the model's programming ability and robustness. Please enjoy the demo, and if you are interested in contributing to the project, please check out the repository here.
-
Is anyone getting good results? Am I using this wrong?
This is a hard problem that's going to take a lot of work to crack. I'm working on something similar here and I can confirm it will be a long while before we see useful fully autonomous work.
-
Automata - A Bottom-Up version of AutoGPT
When I wrote the action extractor, there wasn't a fine-tuned approach to function handling; I'm really glad there is now. Here is a GitHub issue if anyone wants to tackle this.
This morning I open-sourced a big project I've been working on - https://github.com/emrgnt-cmplxty/Automata
AutoLearn-GPT
-
Can GPT improve itself?
Has anyone explored whether GPT could improve itself using the data it gathers? I have made a project that might be a first step toward exploring this: it simply memorizes everything it does not know. Here is the link: Reason-Wang/AutoLearn-GPT: ChatGPT learns automatically. (github.com). Of course, this is not real learning for GPT, since it does not update its parameters. But it is difficult to generate high-quality data suitable for training. Does anyone have ideas about this?
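The memorize-what-you-don't-know idea above can be sketched as a small retrieval loop. This is a toy illustration, not AutoLearn-GPT's actual implementation; the store name, file format, and exact-match lookup are all assumptions for the sketch.

```python
import json
from pathlib import Path

class MemoryStore:
    """Toy key-value memory: when the model can't answer a question,
    the corrected answer is saved and retrieved on later encounters.
    No parameters are updated; this is retrieval, not training."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def recall(self, question):
        # Exact-match lookup; a real system would use embedding similarity.
        return self.facts.get(question.strip().lower())

    def memorize(self, question, answer):
        self.facts[question.strip().lower()] = answer
        self.path.write_text(json.dumps(self.facts, indent=2))

def answer(question, store, model_answer=None):
    """Prefer memorized knowledge; otherwise fall back to the model's
    (possibly wrong) answer, which a user can later correct via memorize()."""
    remembered = store.recall(question)
    return remembered if remembered is not None else model_answer
```

This captures the trade-off the post raises: the system gets better at repeated questions without generating any training data, but nothing generalizes, because the underlying model never changes.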
What are some alternatives?
RasaGPT - 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram
SuperAGI - <⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.
yaaamagi - Yet Another Attempt At Making Artificial General Intelligence (with GPT-4)
GPT-Codemaster - Automatic programming by creating Pull Requests from Issues using LLMs
agents - An Open-source Framework for Autonomous Language Agents
chatGPT-cheatsheet - An ever-evolving introduction to ChatGPT, AI, and machine learning (including prompt examples and Python-built chatbots)
LLMChat - A Discord chatbot that supports popular LLMs for text generation and ultra-realistic voices for voice chat.
searchGPT - Grounded search engine (i.e. with source reference) based on LLM / ChatGPT / OpenAI API. It supports web search, file content search etc.
ACE_Model_Implementation - A python implementation of Dave Shap's ACE Model
funcchain - ⛓️ build cognitive systems, pythonic
LMOps - General technology for enabling AI capabilities w/ LLMs and MLLMs
GPTDiscord - A robust, all-in-one GPT interface for Discord. ChatGPT-style conversations, image generation, AI-moderation, custom indexes/knowledgebase, youtube summarizer, and more!