AgentOoba
| | AgentOoba | gptq-cuda-api |
| --- | --- | --- |
| Mentions | 10 | 2 |
| Stars | 172 | 19 |
| Growth | - | - |
| Activity | 7.8 | 3.9 |
| Latest commit | 8 months ago | 11 months ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AgentOoba
- Could autogpt functionality be implemented?
  Something similar was implemented here: https://github.com/flurb18/AgentOoba
- Can someone explain why there isn't a good interface for the oobabooga api in langchain?
- AgentOoba v0.2 - Custom prompting
  Github
- Weekly Megathread - 14 May 2023
- What features would everyone like to see in oog?
- autogpt-like framework?
  Take a look at https://github.com/flurb18/AgentOoba. It's still missing some pieces, but it appears they're being worked on.
- An autonomous AI agent extension for Oobabooga's web ui
- AgentOoba v0.1 - better UI, better contextualization, the beginnings of langchain integration and tools
- Introducing AgentOoba, an extension for Oobabooga's web UI that (sort of) implements an autonomous agent! I was inspired to completely rewrite the fork that I posted yesterday.
  Right now the agent functions as little more than a planner / "task splitter". However, I have plans to implement a toolchain: a set of tools that the agent could use to complete tasks. I'm considering native langchain integration, but I still have to look into it. Here's a screenshot and here's a complete sample output. The GitHub link is https://github.com/flurb18/AgentOoba. Installation is very easy: just clone the repo inside the "extensions" folder of your main text-generation-webui folder and run the webui with --extensions AgentOoba. Then load a model and scroll down on the main page to see AgentOoba's input, output, and parameters. Enjoy!
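The planner / "task splitter" behavior described in the post can be pictured with a short sketch. This is not AgentOoba's actual code; `generate` is a hypothetical stand-in for whatever completion call the loaded model exposes:

```python
# Hypothetical sketch of a planner / "task splitter" loop, not AgentOoba's code.
# `generate` is a stand-in for the loaded model's completion call.

def generate(prompt: str) -> str:
    # Placeholder response; replace with a real call into the web UI's model.
    return "- research the objective\n- draft a plan\n- execute the plan"

def split_task(objective: str, depth: int = 0, max_depth: int = 2) -> None:
    """Recursively ask the model to break an objective into subtasks."""
    prompt = (
        f"Objective: {objective}\n"
        "List the subtasks needed to accomplish this objective, one per line:"
    )
    lines = generate(prompt).splitlines()
    subtasks = [line.lstrip("- ").strip() for line in lines if line.strip()]
    for task in subtasks:
        print("  " * depth + f"- {task}")
        if depth + 1 < max_depth:
            split_task(task, depth + 1, max_depth)

split_task("Write a blog post about local LLM agents")
```

A toolchain, as the post describes it, would slot in where each leaf subtask is printed: instead of only listing the task, the agent would pick a tool and execute it.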
gptq-cuda-api
- Example of how to run GPTQ models on multiple GPUs
  Here is the repository with minimal code required to run GPTQ on multiple GPUs: https://github.com/mzbac/gptq-cuda-api (a generic sketch of the technique follows after this list)
- Can someone explain why there isn't a good interface for the oobabooga api in langchain?
  oobabooga has to support way too many models, which makes the whole thing unnecessarily complicated. If you have some development experience, you could build your own API in a few lines of Python code (see the sketch after this list). It's not hard if you build from scratch and learn along the way. I have built some example repositories for hosting GPTQ-related models; have a look: https://github.com/mzbac/GPTQ-for-LLaMa-API and https://github.com/mzbac/gptq-cuda-api
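For the multi-GPU post above, here is a minimal sketch of the general technique. This is not the code from mzbac/gptq-cuda-api; it assumes the AutoGPTQ and accelerate packages, and the model path and per-GPU memory budgets are illustrative:

```python
# Sketch of sharding a GPTQ-quantized model across several GPUs with
# accelerate's device_map; not the actual code from mzbac/gptq-cuda-api.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_dir = "path/to/llama-7b-4bit"  # illustrative local checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device_map="auto",                    # spread layers across visible GPUs
    max_memory={0: "10GiB", 1: "10GiB"},  # optional per-GPU budget (illustrative)
)

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```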
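And for the "build your own API in a few lines of Python code" suggestion in the second post, a minimal sketch using Flask and transformers; the model path and endpoint shape are assumptions, not taken from the linked repositories:

```python
# Minimal "roll your own model API" sketch; the endpoint shape and model path
# are illustrative, not taken from mzbac's repositories.
from flask import Flask, jsonify, request
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "path/to/model"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto")

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    body = request.get_json()
    inputs = tokenizer(body["prompt"], return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs, max_new_tokens=int(body.get("max_new_tokens", 128))
    )
    return jsonify({"text": tokenizer.decode(output[0], skip_special_tokens=True)})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```

A thin LangChain wrapper could then POST to an endpoint like this from a custom LLM class, which is roughly the interface the question is asking about.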
What are some alternatives?
AGiXT - AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
AutoGPTQ - An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
EdgeGPT - Extension for Text Generation Webui based on EdgeGPT, a reverse engineered API of Microsoft's Bing Chat AI
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
GPTQ-for-LLaMa-API - Provide a way to use the GPTQ LLaMa model as an API
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
guidance - A guidance language for controlling large language models.
llama_generative_agent - A generative agent implementation for LLaMA based models, derived from langchain's implementation.
learn-langchain