Auto-GPT
llama.cpp
| | Auto-GPT | llama.cpp |
|---|---|---|
| Mentions | 104 | 772 |
| Stars | 72,359 | 56,891 |
| Growth | - | - |
| Activity | 9.8 | 10.0 |
| Latest commit | about 1 year ago | 3 days ago |
| Language | Python | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
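The exact weighting behind the activity number isn't published; one hypothetical way to realize "recent commits have higher weight than older ones" is an exponential decay over commit age. The function name and half-life below are illustrative assumptions, not the tracker's actual formula:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, now=None, half_life_days=30.0):
    """Hypothetical recency-weighted commit score: each commit contributes
    2 ** (-age_in_days / half_life_days), so a commit made today counts as
    1.0 and a commit a month old counts as 0.5."""
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 2.0 ** (-max(age_days, 0.0) / half_life_days)
    return score
```

Under a weighting like this, a project with five commits last week outranks one with five commits last year, which matches the relative ordering the activity number is meant to capture.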
Auto-GPT
- How I am catching up with AI
A few notable examples of these leaps in AI technology include GPT-3.5, GPT-4, ChatGPT, Midjourney, DALL-E 2, AutoGPT, and GitHub Next.
- How to install Auto-GPT on Mac
Why did you clone git clone https://github.com/Torantulino/Auto-GPT.git instead of https://github.com/Significant-Gravitas/Auto-GPT.git? Does it matter?
- [Termux] How do you run Auto-GPT on Android?
- Don’t Build Your House on Someone Else’s Land
You can also use the API key with tools such as TypingMind and Auto-GPT.
- [Chatgptpro] Auto-GPT (an open-source attempt to make GPT-4 fully autonomous)
- [Machine Learning] [D] What do you think of this issue on Auto-GPT?
https://github.com/torantulino/auto-gpt/issues/475
- [Chatgptpro] Auto-GPT (an open-source attempt to make GPT-4 fully autonomous)
- How do I update Auto-GPT?
git clone --branch stable https://github.com/Torantulino/Auto-GPT.git
git pull
pip3 install -r requirements.txt
- [Singularity] Chaos GPT: Using Auto-GPT to create a hostile AI agent set on the destruction of humanity
- FLiPN-FLaNK Stack Weekly for 17 April 2023
llama.cpp
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial, we show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer
I just did this recently for the local chat-with-PDF feature in https://recurse.chat. (It's a macOS app that has a built-in llama.cpp server and a local vector database.)
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
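Once the local server is returning embedding vectors, the "local vector database" part of the setup reduces to ranking stored vectors by cosine similarity against a query vector. A minimal sketch of that ranking step (plain Python, independent of any particular server or database API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, stored, k=3):
    """Rank stored (doc_id, vector) pairs by similarity to the query
    and return the ids of the k closest documents."""
    ranked = sorted(
        stored,
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in ranked[:k]]
```

For a chat-with-PDF feature, `stored` would hold one embedding per document chunk; the top-k chunk ids are then fed back to the model as context.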
- Mixtral 8x22B
- Llama.cpp: Improve CPU prompt eval speed
- Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split
Ah, thanks for this! I can't edit my parent comment that you replied to any longer unfortunately.
As I said, I only compared the contributors graphs [0] and checked for overlaps. But those apparently only go back about a year and list at most 100 contributors, ranked by number of commits.
[0]: https://github.com/ollama/ollama/graphs/contributors and https://github.com/ggerganov/llama.cpp/graphs/contributors
What are some alternatives?
gpt4all - gpt4all: run open-source LLMs anywhere
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
JARVIS - JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
babyagi
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
AgentGPT - 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
SuperAGI - <⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.
ggml - Tensor library for machine learning
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM