motorhead
lambdaprompt
|  | motorhead | lambdaprompt |
| --- | --- | --- |
| Mentions | 10 | 8 |
| Stars | 822 | 368 |
| Growth | 2.6% | 0.8% |
| Activity | 8.0 | 5.6 |
| Last commit | 9 days ago | 3 months ago |
| Language | Rust | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
motorhead
- Motorhead is a memory and information retrieval server for LLMs
- Comparison of Vector Databases
Metal [1] is another one on my radar. Their API looks super simple.
Disclosures: None
[1] https://getmetal.io
- Any Alternatives to Langchain?
Any alternatives? I found this Rust-based project that might be interesting: https://github.com/getmetal/motorhead
- RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
- Langchain question and answer without openai
You could run motorhead in Docker: https://github.com/getmetal/motorhead
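As a rough sketch of what that looks like in practice, here is a hedged example of talking to a locally running motorhead container from Python. It assumes the default port 8080 and the session-memory endpoints described in the project's README; the session id and messages are made up for illustration, so verify the details against the version you deploy.

```python
import requests

# Assumes a motorhead container is already running (started via Docker and
# pointed at a Redis instance) and listening on localhost:8080. The endpoint
# paths and payload shape follow the project README; treat them as
# assumptions and check them against your deployed version.
BASE_URL = "http://localhost:8080"
SESSION_ID = "example-session"  # made-up session id for illustration

# Append messages to the session's memory.
requests.post(
    f"{BASE_URL}/sessions/{SESSION_ID}/memory",
    json={
        "messages": [
            {"role": "Human", "content": "What is motorhead?"},
            {"role": "AI", "content": "A memory server for LLM apps."},
        ]
    },
    timeout=10,
)

# Read back the stored context for the session.
memory = requests.get(f"{BASE_URL}/sessions/{SESSION_ID}/memory", timeout=10).json()
print(memory)
```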
- How to use Enum with Vec to parse the mixed data vector from RedisSearch
The code was found by searching GitHub for FT.SEARCH inside https://github.com/getmetal/motorhead/blob/main/src/models.rs and adapting it.
- Memory in production
All the examples that Langchain gives are for persisting memory locally, which won't work in a serverless (stateless) environment, and the one solution documented for stateless applications, getmetal/motorhead, is a containerized, Rust-based service we would have to run ourselves.
- Show HN: Motörhead, LLM Memory Server Built in Rust
- OpenAI Embeddings API alternative?
I've only just signed up and haven't had a chance to build anything with it yet, but this might be something to consider https://getmetal.io/
- Motörhead – memory and information retrieval server for LLMs
lambdaprompt
- Ask HN: What have you built with LLMs?
We're using all sorts of different stacks and tooling. We made our own tooling at one point (https://github.com/approximatelabs/lambdaprompt/), but we've more recently switched to just making the raw requests and writing out the logic ourselves in the product. For our main product, the code just lives in our Next.js app and deploys on Vercel.
- RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
https://github.com/approximatelabs/lambdaprompt It has served all of my personal use cases since I made it, including powering `sketch` (a copilot for pandas): https://github.com/approximatelabs/sketch
Core things it does: uses Jinja templates, supports sync and async, and, most importantly, treats LLM completion endpoints as "function calls" that you can compose and build structures around with simple Python. I also combined it with FastAPI so you can serve up any templates you want directly as REST endpoints. It also offers callback hooks so you can log and trace execution graphs.
Altogether, it's only ~600 lines of Python.
I haven't had a chance to really push out all the different examples of "complex behaviors", so there aren't many patterns to copy. But if you're comfortable in Python, then I think it offers a pretty good interface.
I hope to get back to it sometime in the next week to introduce local mode (e.g., all the smaller open-source models that are now available; I want to make those first-class).
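To make the "prompts as function calls" idea above concrete, here is a minimal sketch of the pattern in plain Python. This is not lambdaprompt's actual API; `complete` is a hypothetical stand-in for whatever completion endpoint you use, and the prompt texts are made up.

```python
from jinja2 import Template


def complete(text: str) -> str:
    """Hypothetical stand-in for an LLM completion endpoint."""
    raise NotImplementedError("wire up your model provider here")


def prompt(template: str):
    """Turn a Jinja template into an ordinary Python function of its variables."""
    compiled = Template(template)

    def call(**variables) -> str:
        return complete(compiled.render(**variables))

    return call


# Prompts become composable functions...
summarize = prompt("Summarize in one sentence: {{ text }}")
translate = prompt("Translate to French: {{ text }}")

# ...and compose like any other functions:
# translate(text=summarize(text=long_document))
```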
- Replacing a SQL analyst with 26 recursive GPT prompts
This is great~ There's been some really rapid progress on Text2SQL in the last 6 months, and I really think this will have a real impact on the modern data stack ecosystem!
I had similar success with lambdaprompt for solving Text2SQL (https://github.com/approximatelabs/lambdaprompt/)
- λprompt – Composing AI prompts with Python in a functional style
- LangChain: Build AI apps with LLMs through composability
This is great! I love seeing how rapidly these ideas have evolved in the past 6 months. I've been internally calling these systems "prompt machines". I'm a strong believer that chaining together language model prompts is core to extracting real and reproducible value from language models. I sometimes even wonder if systems like this are the path to AGI as well, and spent a full month 'stuck' on that hypothesis in October.
Specific to prompt-chaining: I've spent a lot of time ideating about where "prompts live" (are they best as API endpoints, as CLI programs, as machines with internal state, or treated as a single 'assembly instruction' -- where do "prompts" live naturally?) and eventually decided they are most synonymous with functions (and API endpoints, via the RPC concept).
The mental model I've developed (sharing in case it resonates with anyone else):
a "chain" is `a = 'text'; b = p1(a); c = p2(b)` where p1 and p2 are LLM prompts.
What comes next (in my opinion) is other programming constructs: loops, conditionals, variables (memory), etc. (I think LangChain represents some of these concepts as their "areas" -> chain (function chaining), agents (loops), memory (variables))
To offer this code-style interface on top of LLMs, I made something similar to LangChain, but scoped it to focus only on the bare functional interface and the concept of a "prompt function", leaving the power of "execution flow" up to the language interpreter itself (in this case Python) so the user can build anything with it.
https://github.com/approximatelabs/lambdaprompt
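A minimal sketch of that mental model, with hypothetical stub prompt functions standing in for real LLM calls; the point is that chains, agent-style loops, and memory are just ordinary Python control flow:

```python
# Hypothetical prompt functions in the comment's p1/p2 notation. Real
# versions would call an LLM; these stubs just make the flow runnable.
def p1(text: str) -> str:
    return f"summary({text})"


def p2(text: str) -> str:
    return f"critique({text})"


# A "chain" is plain function composition:
a = "text"
b = p1(a)
c = p2(b)

# Agent-like behavior is a loop with a conditional over model output:
result = p1(a)
for _ in range(3):              # bounded retries
    if "critique" in result:    # accept once the output satisfies a check
        break
    result = p2(result)

# "Memory" is just a variable that outlives individual calls:
history = [a, b, c, result]
```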
I've had so much fun recently just playing with prompt chaining in general; it feels like the "new toy" in the AI space (orders of magnitude more fun than DALL-E or ChatGPT for me). (I built sketch, posted the other day on HN, on top of lambdaprompt.)
My favorites have been things that test the inherent behaviors of language models using iterated prompts. I spent some time looking for "fractal"-like behavior inside the functions, hoping that with the right starting point an iterated function would avoid fixed points. This has eluded me so far, so if anyone finds non-fixed points in LLMs, please let me know!
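That fixed-point hunt can be sketched as a short loop; `p` here is any hypothetical string-to-string prompt function, and a repeated output signals that iteration has collapsed into a fixed point or a cycle:

```python
from typing import Callable, Optional


def find_repeat(p: Callable[[str], str], seed: str, max_iters: int = 50) -> Optional[str]:
    """Iterate a prompt function on its own output and return the first
    repeated output (a fixed point, or the entry into a cycle), or None
    if no repeat shows up within max_iters."""
    seen = {seed}
    current = seed
    for _ in range(max_iters):
        current = p(current)
        if current in seen:
            return current  # iteration has stopped producing new text
        seen.add(current)
    return None
```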
I'm a believer that the "next revolution" in machine-written code and behavior from LLMs will come when someone can tame LLM prompting so that models write prompt chains themselves (whether on lambdaprompt, LangChain, or something else!)
All in all, I'm super hyped about LangChain. I love the space they are in and the rapid attention they are getting~
- Show HN: Sketch – AI code-writing assistant that understands data content
From https://github.com/approximatelabs/sketch/blob/main/sketch/p... it appears that this library calls a remote API, which undermines the utility of the demonstrated use case.
Upon closer inspection, it looks like https://github.com/approximatelabs/sketch interfaces with the model via https://github.com/approximatelabs/lambdaprompt, which is made by the same organization. This suggests to me that the former may be a toy demonstration of the latter.
- Show HN: Prompt – Build, compose and call templated LLM prompts
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
datasloth - Natural language Pandas queries and data generation powered by GPT-3
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
RasaGPT - 💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram
LiteratureReviewBot - Experiment to use GPT-3 to help write grant proposals.
kor - LLM(😽)
Abstract Feature Branch - abstract_feature_branch is a Ruby gem that provides a variation on the Branch by Abstraction Pattern by Paul Hammant and the Feature Toggles Pattern by Martin Fowler (aka Feature Flags) to enable Continuous Integration and Trunk-Based Development.
olympe - Query your database in plain English
rasa-haystack
com2fun - Transform document into function.