Thanks for making this:
"simpleaichat is a Python package for easily interfacing with chat apps like ChatGPT and GPT-4 with robust features and minimal code complexity. This tool has many features optimized for working with ChatGPT as fast and as cheap as possible, but still much more capable of modern AI tricks than most implementations"
https://github.com/minimaxir/simpleaichat
Separately, in the article, typo expect->except here:
"LangChain uses about the same amount of code as just using the official openai library, expect LangChain incorporates more object classes for not much obvious code benefit."
Yes! This is why I started working on AIPL. The scripts are much more like recipes (linear, contained in a single file, self-evident even to people who don't know the language). For instance, here's a multi-level summarizer of a webpage: https://github.com/saulpw/aipl/blob/develop/examples/summari...
The goal is to capture all the knowledge that LangChain has into consistent Legos that you can combine and parameterize with prompts, without all the complexity and boilerplate of LangChain, and without having to learn all the Python libraries and their APIs. Perfect for prototypes and experiments (like a notebook, as you suggest); then, if you find something that really works, you can hand off a single text file to an engineer and they can make it work in a production environment.
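As a sketch of that "linear recipe" idea (illustrative Python, not actual AIPL syntax; the summarize step is a stand-in for a real LLM call), a multi-level summarizer reduces to a few composable steps read top to bottom:

```python
# Hedged sketch of a multi-level summarizer as a linear recipe.
# summarize() is a placeholder for an LLM call.

def split_chunks(text, size=200):
    """Split the input into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(chunk):
    """Stand-in for an LLM call; a real recipe would prompt a model here."""
    return chunk[:50]

def summarize_document(text):
    """Summarize each chunk, then summarize the joined partial summaries."""
    partials = [summarize(c) for c in split_chunks(text)]
    return summarize(" ".join(partials))
```

The point is that each step is a plain function, so the whole pipeline stays readable as a single file.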
After running into these issues, a few others and I wrote a TypeScript agent framework that I think significantly improves on LangChain in many ways: https://github.com/sciencecorp/buildabot/
It’s still very early days for software composing AI models and we almost certainly don’t have all the right metaphors yet. And I think there is a lot to be said for strong typing and simple, robust code!
Good question! It's because there's an aspect of conversations that differs between different models: the way the previous messages are injected into the context of the prompt.
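For example (a hedged sketch, not the actual code in simonw/llm), OpenAI-style chat models take the history as structured role/content messages, while Llama-2-style models expect it flattened into a single tagged string:

```python
# Two different ways previous messages get injected into the prompt context.
# Both functions are illustrative, not real library code.

def build_chat_messages(history, user_input):
    """OpenAI-style: history is a list of {"role": ..., "content": ...} dicts."""
    return history + [{"role": "user", "content": user_input}]

def build_llama_prompt(history, user_input):
    """Llama-2-style: history is flattened into one [INST]-tagged string."""
    parts = []
    for user_msg, assistant_msg in history:
        parts.append(f"[INST] {user_msg} [/INST] {assistant_msg}")
    parts.append(f"[INST] {user_input} [/INST]")
    return " ".join(parts)
```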
I went back and forth on a bunch of different designs, but eventually decided to try to make it so that each plugin that implemented a new model would only have to subclass Model and add new methods.
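A minimal sketch of that plugin pattern (hypothetical names, not the actual llm plugin API): a plugin subclasses a base Model and overrides a single method.

```python
# Illustrative base class / plugin subclass pattern for model plugins.

class Model:
    model_id: str = ""

    def execute(self, prompt: str) -> str:
        raise NotImplementedError

class ReverseModel(Model):
    """A toy plugin model that just reverses the prompt."""
    model_id = "reverse"

    def execute(self, prompt: str) -> str:
        return prompt[::-1]
```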
You can see all of the design arguments I had with myself about this here, across the course of 129 pull request comments: https://github.com/simonw/llm/pull/65
Yeah I haven't figured out how to have it reuse the models from the desktop GPT4All installation yet, issue here: https://github.com/simonw/llm-gpt4all/issues/5
You don't need to look at the code: I looked at the release notes and the garbage fire of unrelated nonsense getting added and got to skip even installing it.
LangChain is almost required at this point to accept anything: they raised money, and now their growth metric is GitHub stars.
A simple wrapper around the APIs (I used llamaflow, which is now llm-api https://github.com/dzhng/llm-api) and a templating engine is most of what you need.
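A minimal sketch of the "wrapper plus templating engine" point, using only the standard library (the template text itself is illustrative):

```python
# A prompt template plus a thin render layer is often all a prototype needs.
from string import Template

PROMPT = Template("Summarize the following for a $audience:\n\n$text")

def render(audience, text):
    """Fill the prompt template; the result goes straight to your API wrapper."""
    return PROMPT.substitute(audience=audience, text=text)
```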
AI is not at a point where generalist prompts for agent/memory/search things are a good idea for a real product. You need to integrate procedural guidance unless you want your UX to be awful.
I used LangChain before for a job interview and was not confident about how it works under the hood, or how dangerous it would be if some injection were going on. So I used it as minimally as possible, and even then it took a lot of code. One of their examples is to call an API by letting the LLM parse documentation and call the API from its own understanding, which looks unreliable if the LLM goes off a bit. I found it hard to give total control to LangChain.
I tried experimenting with building a library that makes it easy and transparent to use LLMs https://github.com/adityapurwa/jehuty and tried the middleware approach that might be more familiar in general. It's an experiment, so the API might change a lot until we find a sweet spot. Any advice or suggestions would be helpful and appreciated.
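The middleware idea can be sketched like this (illustrative Python, not the actual jehuty API): each middleware wraps the next handler, as in web frameworks, so request and response transforms compose.

```python
# Hedged sketch of a middleware-style LLM wrapper.
# base_model is a stand-in for a real LLM call.

def base_model(prompt):
    return f"response to: {prompt}"

def uppercase_mw(next_handler):
    """Response middleware: transforms the model output."""
    def handler(prompt):
        return next_handler(prompt).upper()
    return handler

def prefix_mw(next_handler):
    """Request middleware: rewrites the prompt before the model sees it."""
    def handler(prompt):
        return next_handler("Be concise. " + prompt)
    return handler

def compose(model, *middlewares):
    """Wrap the model so the first middleware listed runs outermost."""
    handler = model
    for mw in reversed(middlewares):
        handler = mw(handler)
    return handler
```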
Sorry, I didn't share it because I don't think it's that good yet. But here you go :)
https://github.com/wejick/gchain
I had a go at one of those a few months ago: https://datasette.io/plugins/datasette-faiss
Alex Garcia built a better one here as a SQLite Rust extension: https://github.com/asg017/sqlite-vss
LLM calls are just function calls, so most functional composition is already afforded by any general-purpose language out there. If you need fancy stuff, use something like Python's functools.
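A quick sketch of that point: with stub functions standing in for real LLM calls, standard-library tools like functools.reduce and functools.partial already give you pipeline composition.

```python
# Composing "LLM calls" as plain functions with the standard library.
from functools import partial, reduce

def translate(text, lang="fr"):
    return f"[{lang}] {text}"          # stand-in for an LLM call

def summarize(text):
    return text.split(".")[0]          # stand-in for an LLM call

# A pipeline is just an ordered list of callables.
pipeline = [summarize, partial(translate, lang="de")]

def run(text, steps=pipeline):
    """Thread the text through each step in order."""
    return reduce(lambda acc, step: step(acc), steps, text)
```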
Working on https://github.com/eth-sri/lmql (shameless plug, sorry), we have always found that compositional abstractions on top of LMQL are mostly already there, once you internalize that prompts are functions.