- gpt-llama.cpp: A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run on local llama.cpp models instead of OpenAI.
- langchain: Discontinued ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain] (by hwchase17)
- long_term_memory: A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
There's a (kind of) working Auto-GPT solution that uses Vicuna: https://github.com/keldenl/gpt-llama.cpp/blob/master/docs/Auto-GPT-setup-guide.md
I'm hoping many of you brilliant people will join me in our common quest to add long-term memory to our favorite camelid, Vicuna. The repository is called BrainChulo, and it's just waiting for your contributions.
“Semantic Memory: Embeddings can be used to create a semantic memory, by which a machine can learn to understand the meanings of words and sentences and can understand the relationships between them.”
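The quoted idea can be sketched concretely: store each memory as an embedding vector and recall by similarity to the current query. The snippet below is a minimal illustration, not BrainChulo's actual implementation; `embed()` here is a toy bag-of-words stand-in for a real embedding model (in practice you'd use something like sentence-transformers or a llama.cpp embedding endpoint).

```python
# Minimal sketch of an embedding-based "semantic memory".
# NOTE: embed() is a toy word-count embedding used only so this runs
# standalone; a real system would call an embedding model instead.
import math
from collections import Counter

def embed(text):
    # Toy embedding: lowercase word counts. Swap in a real model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticMemory:
    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def recall(self, query, k=1):
        # Return the k stored texts most similar to the query.
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = SemanticMemory()
mem.add("vicuna is a fine-tuned llama model")
mem.add("influxdb stores time series data")
print(mem.recall("llama model"))  # recalls the vicuna note
```

The recalled text would then be prepended to the LLM prompt, which is the basic mechanism behind the long-term-memory extensions discussed here.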
Is this going to function at all similarly to https://github.com/wawawario2/long_term_memory ?
Here's the link, https://github.com/blob42/Instrukt, where I'll share the code in the coming days.