loopgpt vs llama.cpp

| | loopgpt | llama.cpp |
|---|---|---|
| Mentions | 20 | 773 |
| Stars | 1,391 | 56,891 |
| Growth | - | - |
| Activity | 8.5 | 10.0 |
| Last commit | about 2 months ago | 6 days ago |
| Language | Python | C++ |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
loopgpt
- [P] LoopGPT Update - Finally something useful?
So we thought it would be a good idea to create a framework that makes use of the LoopGPT agent's memory and custom tooling capabilities. Let's jump right into the new features of this framework.
- LoopGPT Modular Auto-GPT Framework (https://github.com/farizrahman4u/loopgpt), April 2023
- Has anyone set up Auto-GPT with the Azure OpenAI service?
We just added Azure OpenAI support to our modular reimplementation of Auto-GPT: LoopGPT. I think it's much easier to set up through our Python API, because you can just copy-paste the snippet you get from "View code" in the Chat Playground on Azure. Here's the whole snippet that you need to use.
- How do I get AutoGPT to start exactly where it left off?
I suggest using LoopGPT, a GPT-3.5-friendly, modular reimplementation of Auto-GPT (this is self-promotion, I am a co-author FYI :). We have full state serialization, which means you can save your agent's state completely and start right from where you left off. To get started just do
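As a rough illustration of what full state serialization buys you, here is a dependency-free sketch. Note the caveats: LoopGPT serializes the agent object itself, its exact save/load method names are not shown in this thread, and the state fields and file name below are hypothetical stand-ins.

```python
import json
import os
import tempfile

# Hypothetical agent state: in LoopGPT the agent object itself is serialized,
# including things like its goals, memory, and conversation history.
state = {
    "name": "ResearchAgent",
    "goals": ["summarize three papers"],
    "history": ["searched arxiv", "downloaded paper 1"],
}

# Save the complete state to disk before stopping...
path = os.path.join(tempfile.gettempdir(), "agent_state.json")
with open(path, "w") as f:
    json.dump(state, f)

# ...and restore it later to resume exactly where the agent left off.
with open(path) as f:
    restored = json.load(f)

print(restored["history"][-1])  # the last completed step survives the restart
```

Because the whole state round-trips through a file, a crashed or interrupted run can pick up from its last completed step instead of starting over.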
- Machine Learning Engineer Answers Your Questions Episode 2
As another MLE working on an Auto-GPT-inspired project, I want to say that I totally understand this technology (at least with GPT-3.5) is not great. This is why we focused on building a better codebase: an extensible, modular, and overall more "Pythonic" reimplementation of Auto-GPT, rather than claiming to be perfect. That said, people have had better results with LoopGPT on both GPT-3.5 and GPT-4, and we don't even have GPT-4!
- Is the GPT-4 API necessary?
I suggest you try LoopGPT - it works better on GPT-3.5 according to many users. We have a nice little Discord too, where you can post any issues: https://discord.gg/rqs26cqx7v
- Cannot install Auto-GPT (as well as BabyAGI)...
Glad you got it figured out. Please also try out LoopGPT if you can - it works better with GPT-3.5.
- crap-gpt?
This, 100%. Also, AutoGPT is not the best option when you want to add more capabilities; LoopGPT is, because it has a framework for easily adding new capabilities (called tools).
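The "tools" idea can be sketched standalone: a tool is a named capability with a description the model sees and a method the agent executes. The base class, property names, and registration shown here only mimic that pattern and are not LoopGPT's actual API.

```python
# Illustrative sketch of an agent "tool"; LoopGPT's real base class,
# method names, and registration mechanism may differ.

class BaseTool:
    desc = ""  # description shown to the LLM so it knows when to use the tool

    @property
    def id(self) -> str:
        # A stable identifier the agent can refer to in its plans.
        return self.__class__.__name__.lower()

    def run(self, **kwargs):
        raise NotImplementedError


class WordCounter(BaseTool):
    desc = "Counts the words in a piece of text."

    def run(self, text: str) -> dict:
        # Tools usually return structured data the agent can reason over.
        return {"words": len(text.split())}


tool = WordCounter()
result = tool.run(text="autonomous agents need good tooling")
print(tool.id, result)  # wordcounter {'words': 5}
```

The point of the pattern is that adding a capability means writing one small class, not patching the agent loop itself.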
- Can't get AutoGPT to run :/
- LLaMA support on LoopGPT
We recently released LoopGPT, a GPT-3.5-friendly, "Pythonic" modular reimplementation of Auto-GPT that supports adding custom tools. First-time users tell us it produces better results than Auto-GPT on both GPT-3.5 and GPT-4.
llama.cpp
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial, we show you how to fine-tune a large language model using LoRA, with tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI-curious app developer
I've just done this recently for a local chat-with-PDF feature in https://recurse.chat (a macOS app with a built-in llama.cpp server and a local vector database).
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
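From there, the remaining steps look roughly like this. The binary name, model file, and request shape follow recent llama.cpp server builds and are assumptions; older releases ship the binary as `server`, and you still need to download a GGUF embedding model yourself.

```shell
# Start llama.cpp's built-in HTTP server in embedding mode on port 8080
# (model filename is a placeholder for whatever GGUF model you downloaded).
./llama-server -m nomic-embed-text-v1.5.Q4_K_M.gguf --embedding --port 8080 &

# Request an embedding vector for a piece of text.
curl http://localhost:8080/embedding \
  -H "Content-Type: application/json" \
  -d '{"content": "chat with your pdf"}'
```

The response contains the embedding as a JSON array of floats, which you can insert straight into a local vector database.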
- Mixtral 8x22B
- Llama.cpp: Improve CPU prompt eval speed
What are some alternatives?
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
babyagi
gpt4all - gpt4all: run open-source LLMs anywhere
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/AutoGPT]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
AgentGPT - 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
deepdoctection - A Repo For Document AI
ggml - Tensor library for machine learning
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM