gpt-llama.cpp vs Delphic

| | gpt-llama.cpp | Delphic |
|---|---|---|
| Mentions | 12 | 4 |
| Stars | 587 | 245 |
| Growth | - | - |
| Activity | 8.2 | 10.0 |
| Latest commit | 11 months ago | 9 months ago |
| Language | JavaScript | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt-llama.cpp
-
Attempt to run Llama on a remote server with chatbot-ui
Hi! I really like the solution https://github.com/keldenl/gpt-llama.cpp, which helps deploy https://github.com/mckaywrigley/chatbot-ui against a local model. I'm running this together with Wizard 7B or 13B locally and it works fine, but when I tried to deploy it to a remote server I ran into an error.
-
Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
sounds like you're asking for exactly this? https://github.com/keldenl/gpt-llama.cpp
- LLaMA and AutoAPI?
-
New big update to GPTNicheFinder: better trends analysis and scoring system, cleaned up UI and verbose in the terminal for people who want to see what is going on and to verify the results
I salute you, good sir. This is an amazing idea. I don't have time, but it would be an interesting idea to use this wrapper https://github.com/keldenl/gpt-llama.cpp, which simulates the GPT endpoint for a local llama, so basically we can have an amazing tool that is completely free to use. If somebody tests it, please let me know beneath my comment!
-
I built an AI-powered writing tool, an AI co-author
I would gladly buy your product to run with a local model, like Vicuna ggml; also see https://github.com/keldenl/gpt-llama.cpp/
-
Serge... Just works
Possible through fastllama in Python, or gpt-llama.cpp, an API wrapper around llama.cpp.
-
Embeddings?
https://github.com/keldenl/gpt-llama.cpp supports embeddings, and it even takes in openai type requests and returns openai compatible responses!
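The point of an OpenAI-compatible server is that clients can keep using the OpenAI wire format unchanged. As a minimal sketch of that format (the model name and field values below are illustrative assumptions, not taken from gpt-llama.cpp's docs), here is how an embeddings request body is built and how the vector is pulled out of an OpenAI-shaped response:

```python
import json

def build_embeddings_request(text: str, model: str = "llama-7b") -> str:
    """Build a request body in the OpenAI embeddings wire format."""
    return json.dumps({"model": model, "input": text})

def parse_embeddings_response(body: str) -> list:
    """Extract the vector from an OpenAI-shaped embeddings response."""
    payload = json.loads(body)
    return payload["data"][0]["embedding"]

# A mock response in the shape an OpenAI-compatible server returns;
# the numbers are placeholders, not real embedding values.
fake_response = json.dumps({
    "object": "list",
    "data": [{"object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.3]}],
    "model": "llama-7b",
})

print(parse_embeddings_response(fake_response))
```

Because both sides of the exchange use this shape, an existing OpenAI embeddings client only needs its base URL pointed at the local server.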
-
I built a completely Local AutoGPT with the help of GPT-llama running Vicuna-13B
https://github.com/keldenl/gpt-llama.cpp
- I built a completely local and portable AutoGPT with the help of gpt-llama, running on Vicuna-13b
-
Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
There's a (kind of) working Auto-GPT solution that uses Vicuna https://github.com/keldenl/gpt-llama.cpp/blob/master/docs/Auto-GPT-setup-guide.md
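The common trick behind these Auto-GPT and chatbot-ui setups is redirecting an OpenAI client's base URL at the local server. A sketch of the idea, with hypothetical values (the actual port, path, and key handling are in the linked setup guide, not assumed here):

```shell
# Hypothetical values for illustration -- consult the linked setup guide
# for the address and key your gpt-llama.cpp instance actually expects.
export OPENAI_API_BASE="http://localhost:443/v1"  # point the client at the local server
export OPENAI_API_KEY="dummy"                     # placeholder; no real OpenAI key needed
```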
Delphic
- Research Study Companion App
-
Reality check on good embedding model (and this idea in general)
Hi - I'm working on getting up to speed to put together a practical implementation. As a proof of concept, I'm trying to build a locally hosted (no external API calls) document query app along the lines of Delphic (GitHub - JSv4/Delphic: Starter App to Build Your Own App to Query Doc Collections with Large Language Models (LLMs) using LlamaIndex, Langchain, OpenAI and more (MIT Licensed)). As I type this, I realize it would probably be enough to just demonstrate something working in a Jupyter notebook.
-
[P] Fullstack LlamaIndex App to Build and Query Document Collections with LLMs (MIT Licensed)
Wanted to share an MIT-licensed, open-source starter project called Delphic that I released to help people build apps with LlamaIndex to search through documents and use LLMs to interact with the text. Here's a super quick demo of uploading a Word doc and then asking some questions.
-
Built This GPT-Powered Document Search and Question Answering App with Django
Repo here and a detailed walkthrough here. Check out the short video below. Feedback and/or contributions are welcome!
What are some alternatives?
llama_index - LlamaIndex is a data framework for your LLM applications
Auto-LLM-Local - Created my own Python script, similar to AutoGPT, where you supply a local LLM model like alpaca13b (the main one I use), and the script can access the supplied tools to achieve your objective. The code fully works as far as I can tell. Takes me 5 minutes per chain on my slow laptop.
langstream - LangStream. Event-Driven Developer Platform for Building and Running LLM AI Apps. Powered by Kubernetes and Kafka.
long_term_memory - A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
delta-buddy - Introducing Delta-Buddy: your ultimate Delta Lake companion! Streamline your data journey with an AI-powered chatbot. Ask Delta-Buddy anything about your Delta Lake.
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
maccarone - AI-managed code blocks in Python
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
SecondBrain - SecondBrain is your second brain in the cloud, designed to easily store and retrieve unstructured information. It's like Obsidian but powered by generative AI.
langchain - 🦜🔗 Build context-aware reasoning applications
bert.cpp - ggml implementation of BERT