| | llm-api | lmql |
|---|---|---|
| Mentions | 1 | 30 |
| Stars | 128 | 3,375 |
| Growth | - | 4.4% |
| Activity | 8.3 | 9.5 |
| Latest commit | 27 days ago | 14 days ago |
| Language | TypeScript | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-api
- The Problem with LangChain
You don't need to look at the code: I looked at the release notes and the garbage fire of unrelated nonsense getting added and got to skip even installing it.
LangChain is almost required at this point to accept anything. They raised money, and now their growth metric is GitHub stars.
A simple wrapper around the APIs (I used llamaflow, which is now llm-api https://github.com/dzhng/llm-api) and a templating engine is most of what you need.
AI is not at a point where generalist prompts to do agent/memory/search things are a good idea for a real product. You need to integrate procedural guidance unless you want your UX to be awful.
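The "simple wrapper plus a templating engine" approach from the comment above can be sketched in a few lines. This is a minimal illustration, not llm-api's actual interface: `complete()` is a stub standing in for any real chat-completion client, and the prompt template is invented for the example.

```python
from string import Template

# Hypothetical prompt template for a summarization task. The stdlib
# string.Template is enough; heavier engines (Jinja2, etc.) work the same way.
PROMPT = Template(
    "You are a precise assistant.\n"
    "Summarize the following text in $n_sentences sentences:\n\n$text"
)

def complete(prompt: str) -> str:
    """Placeholder for a real LLM API call (llm-api, openai, ...)."""
    return f"<completion for {len(prompt)} prompt chars>"

def summarize(text: str, n_sentences: int = 2) -> str:
    # Fill the template, then hand the finished prompt to the API wrapper.
    prompt = PROMPT.substitute(text=text, n_sentences=n_sentences)
    return complete(prompt)

print(summarize("LangChain wraps many providers behind one interface."))
```

Swapping the stub for a real client is the only integration point; everything else is plain Python.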
lmql
- Show HN: Fructose, LLM calls as strongly typed functions
- Prompting LLMs to constrain output
I have been experimenting with Guidance and LMQL. It's a bit too early to give any well-formed opinions, but I really do like the idea of constraining LLM output.
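At its crudest, constraining LLM output can be approximated by a validate-and-retry loop over whole completions. This is only a sketch of the concept, not how Guidance or LMQL work internally: those tools mask invalid tokens during decoding instead of retrying, and the `generate` callable here is a stand-in for a real model.

```python
import re

def constrained_generate(generate, pattern: str, max_retries: int = 3) -> str:
    """Call `generate()` until its output fully matches `pattern` (a regex)."""
    for _ in range(max_retries):
        out = generate()
        if re.fullmatch(pattern, out):
            return out
    raise ValueError("no valid output within retry budget")

# Stub model that always returns the same string; a real model would be
# sampled here. The constraint demands a purely numeric answer.
answer = constrained_generate(lambda: "42", r"\d+")
print(answer)
```

Rejection sampling like this wastes tokens on every failed attempt, which is exactly why constrained-decoding libraries enforce the grammar at the token level instead.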
- [D] Prompt Engineering Seems Like Guesswork - How To Evaluate LLM Application Properly?
The only time I've ever felt like it was anything other than guesswork was using LMQL. Not coincidentally, LMQL works with LLMs as autocomplete engines rather than Q&A ones.
- Guidance for selecting a function-calling library?
LMQL
- Show HN: Magentic – Use LLMs as simple Python functions
This is also similar in spirit to LMQL
https://github.com/eth-sri/lmql
- Show HN: LLMs can generate valid JSON 100% of the time
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
- The Problem with LangChain
LLM calls are just function calls, so most functional composition is already afforded by any general-purpose language out there. If you need fancy stuff, use something like Python's functools.
Working on https://github.com/eth-sri/lmql (shameless plug, sorry), we have always found that compositional abstractions on top of LMQL are mostly there already, once you internalize that prompts are functions.
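The "prompts are functions" view is easy to demonstrate with nothing but the standard library. A hypothetical sketch, with `llm_call` standing in for a real model call and the prompt templates invented for the example:

```python
from functools import partial

def llm_call(prompt: str) -> str:
    """Stand-in for a real model call; here it just uppercases the prompt."""
    return prompt.upper()

def ask(template: str, text: str) -> str:
    # A prompt is a function of its inputs: fill the template, call the model.
    return llm_call(template.format(text=text))

# functools.partial specializes the generic prompt into named, reusable functions.
summarize = partial(ask, "Summarize: {text}")
translate = partial(ask, "Translate to French: {text}")

# Composition is ordinary function nesting -- no framework required.
result = translate(summarize("a long document"))
print(result)
```

Once model calls are plain functions, everything a chaining framework offers (pipelines, retries, caching via `functools.lru_cache`) falls out of the language itself.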
- Is there a UI that can limit LLM tokens to a preset list?
- Local LLMs: After Novelty Wanes
LMQL is another.
What are some alternatives?
jehuty - Fluent API to interact with chat based GPT model
gchain - Composable LLM Application framework inspired by langchain
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
aipl - Array-Inspired Pipeline Language
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
llm-gpt4all - Plugin for LLM adding support for the GPT4All collection of models
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
multi-gpt - A Clojure interface into the GPT API with advanced tools like conversational memory, task management, and more
guardrails - Adding guardrails to large language models.
buildabot - A production-grade framework for building AI agents.
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.