- FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- auto-vicuna-butler: a hacked and Frankensteined version of BabyAGI (https://github.com/yoheinakajima/babyagi), modified to run entirely offline by using Vicuna.
To make number 1 work, I forked the FastChat code (https://github.com/lm-sys/FastChat) and created my own inference server, then wrapped an HTTP client inside a custom LangChain LLM model that can be used as part of a ReAct agent.
You can find my code here: https://github.com/paolorechia/learn-langchain
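The HTTP-client-inside-a-custom-LLM idea can be sketched roughly like this. This is not the actual learn-langchain code: the endpoint path, the JSON payload shape, and the `response` field are all assumptions about what a local Vicuna inference server might expose, and the transport is injectable so the client can be exercised without a running server.

```python
import json
import urllib.request


class VicunaHTTPClient:
    """Minimal HTTP client for a hypothetical local Vicuna inference server.

    The endpoint URL, request payload, and response schema are assumptions
    for illustration; a real server's API will likely differ.
    """

    def __init__(self, endpoint="http://localhost:8000/prompt", transport=None):
        self.endpoint = endpoint
        # transport(url, payload) -> dict; injectable so tests need no server
        self.transport = transport or self._http_post

    def _http_post(self, url, payload):
        # POST the payload as JSON and decode the JSON response
        data = json.dumps(payload).encode("utf-8")
        req = urllib.request.Request(
            url, data=data, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def complete(self, prompt, stop=None):
        # Send the prompt (and optional stop sequences) to the server
        result = self.transport(self.endpoint, {"prompt": prompt, "stop": stop or []})
        return result["response"]
```

To plug a client like this into LangChain, you wrap it in a subclass of LangChain's `LLM` base class, implementing `_call(prompt, stop)` to delegate to `complete()` and `_llm_type` to return a name; the resulting LLM can then be handed to a ReAct agent like any other model.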
Unfortunately that's all I could do during my lunch break. Once I'm back from work I'll definitely play around with it more. How did you find the Python REPL LangChain tool? I can't see it in the LangChain docs. Did you try adding long-term memory, like https://github.com/NiaSchim/auto-vicuna-butler ?