lambdaprompt vs Constrained-Text-Generation-Studio

Compare lambdaprompt with Constrained-Text-Generation-Studio and see how the two projects differ.

lambdaprompt

λprompt - A functional programming interface for building AI systems (by approximatelabs)
                 lambdaprompt     Constrained-Text-Generation-Studio
Mentions         8                11
Stars            368              -
Stars growth     0.8%             -
Activity         5.6              -
Latest commit    4 months ago     -
Language         Python           -
License          MIT License      -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

lambdaprompt

Posts with mentions or reviews of lambdaprompt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-05.
  • Ask HN: What have you built with LLMs?
    43 projects | news.ycombinator.com | 5 Feb 2024
    We're using all sorts of different stacks and tooling. We made our own tooling at one point (https://github.com/approximatelabs/lambdaprompt/), but have more recently switched to just using the raw requests ourselves and writing out the logic ourselves in the product. For our main product, the code just lives in our Next.js app and deploys on Vercel.
  • RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
    13 projects | news.ycombinator.com | 8 May 2023
    https://github.com/approximatelabs/lambdaprompt has served all of my personal use cases since I made it, including powering `sketch` (a copilot for pandas): https://github.com/approximatelabs/sketch

    Core things it does: uses Jinja templates, does sync and async, and most importantly treats LLM completion endpoints as "function calls", which you can compose and build structures around with simple Python. I also combined it with FastAPI so you can serve up any templates you want directly as REST endpoints. It also offers callback hooks so you can log and trace execution graphs. [a minimal sketch of this prompts-as-functions idea appears after this list of posts]

    Altogether it's only ~600 lines of Python.

    I haven't had a chance to really push out all the different examples of the more "complex behaviors", so there aren't many patterns to copy. But if you're comfortable in Python, I think it offers a pretty good interface.

    I hope to get back to it sometime in the next week to introduce a local mode (e.g. all the open-source smaller models that are now available; I want to make those first-class).

  • Replacing a SQL analyst with 26 recursive GPT prompts
    5 projects | news.ycombinator.com | 25 Jan 2023
    This is great~ There's been some really rapid progress on Text2SQL in the last 6 months, and I really think this will have a real impact on the modern data stack ecosystem!

    I had similar success with lambdaprompt for solving Text2SQL (https://github.com/approximatelabs/lambdaprompt/)

  • λprompt - Composing Ai prompts with python in a functional style
    1 project | /r/AiAppDev | 21 Jan 2023
  • LangChain: Build AI apps with LLMs through composability
    8 projects | news.ycombinator.com | 17 Jan 2023
    This is great! I love seeing how rapidly these ideas have evolved in the past 6 months. I've been internally calling these systems "prompt machines". I'm a strong believer that chaining together language model prompts is core to extracting real and reproducible value from language models. I sometimes even wonder if systems like this are the path to AGI as well, and I spent a full month 'stuck' on that hypothesis in October.

    Specific to prompt-chaining: I've spent a lot of time ideating about where "prompts live" (are they best as API endpoints, as CLI programs, as machines with internal state, or treated as a single 'assembly instruction' -- where do "prompts" live naturally?) and eventually decided they are most synonymous with functions (and API endpoints, via the RPC concept).

    The mental model I've developed (sharing in case it resonates with anyone else):

    a "chain" is `a = 'text'; b = p1(a); c = p2(b)` where p1 and p2 are LLM prompts.

    What comes next (in my opinion) is other programming constructs: loops, conditionals, variables (memory), etc. (I think LangChain represents some of these concepts as their "areas" -> chain (function chaining), agents (loops), memory (variables))

    To offer this code-style interface on top of LLMs, I made something similar to LangChain, but scoped what I made to focus only on the bare functional interface and the concept of a "prompt function", leaving the power of the "execution flow" to the language interpreter itself (in this case, Python) so the user can build anything with it.

    https://github.com/approximatelabs/lambdaprompt

    I've had so much fun recently just playing with prompt chaining in general; it feels like the "new toy" in the AI space (orders of magnitude more fun than DALL-E or ChatGPT for me). (I built sketch, posted the other day on HN, based on lambdaprompt.)

    My favorites have been things to test the inherent behaviors of language models using iterated prompts. I spent some time looking for "fractal" like behavior inside the functions, hoping that if I got the right starting point, an iterated function would avoid fixed points --> this has eluded me so far, so if anyone finds non-fixed points in LLMs, please let me know!

    I'm a believer that the "next revolution" in machine-written code and behavior from LLMs will come when someone can tame LLM prompting to self-write prompt chains themselves (whether that is on lambdaprompt, langchain, or something else!)

    All in all, I'm super hyped about LangChain, love the space they are in and the rapid attention they are getting~

  • Show HN: Sketch – AI code-writing assistant that understands data content
    9 projects | news.ycombinator.com | 16 Jan 2023
    From https://github.com/approximatelabs/sketch/blob/main/sketch/p... it appears that this library is calling a remote API, which obviates the utility of the demonstrated use case.

    Upon closer inspection, it looks like https://github.com/approximatelabs/sketch interfaces with the model via https://github.com/approximatelabs/lambdaprompt, which is made by the same organization. This suggests to me that the former may be a toy demonstration of the latter.

  • Show HN: Prompt – Build, compose and call templated LLM prompts
    2 projects | news.ycombinator.com | 31 Dec 2022
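The "prompts as function calls" idea described in the RasaGPT thread above can be sketched in a few lines. This is a minimal illustration only, assuming a Jinja template plus a stubbed completion call; the `Prompt` and `llm_complete` names here are hypothetical stand-ins, not the actual lambdaprompt API.

```python
# A minimal sketch of the "LLM completions as function calls" idea described
# in the post above. NOT the actual lambdaprompt API: `Prompt` and
# `llm_complete` are hypothetical stand-ins used only for illustration.
from jinja2 import Template


def llm_complete(text: str) -> str:
    """Placeholder for a real completion endpoint (OpenAI, a local model, ...)."""
    return f"<completion for: {text!r}>"


class Prompt:
    """Wrap a Jinja template so a prompt can be called like a plain function."""

    def __init__(self, template: str):
        self.template = Template(template)

    def __call__(self, **kwargs) -> str:
        rendered = self.template.render(**kwargs)
        return llm_complete(rendered)


summarize = Prompt("Summarize the following text in one sentence:\n{{ text }}")
print(summarize(text="lambdaprompt treats prompts as composable functions."))
```

Because the prompt is just a callable, composing prompts reduces to ordinary function composition.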
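The "chain" mental model from the LangChain thread above (`a = 'text'; b = p1(a); c = p2(b)`) is likewise plain Python once prompts are callables. The sketch below uses dummy prompt functions; the point is the control flow staying in the interpreter, not any particular API.

```python
# A sketch of the "chain" mental model above: a = 'text'; b = p1(a); c = p2(b),
# with loops and conditionals supplied by ordinary Python rather than a
# framework. p1 and p2 are dummy prompt functions; swap in real LLM calls.
def p1(text: str) -> str:
    return f"[keywords extracted from {text!r}]"   # e.g. an "extract keywords" prompt


def p2(text: str) -> str:
    return f"[question written about {text!r}]"    # e.g. a "write a question" prompt


a = "Language models can be composed like functions."
b = p1(a)
c = p2(b)
print(c)

# "Execution flow" stays in the interpreter: a loop is just a loop,
# a conditional is just an if, memory is just a variable.
drafts = [p2(p1(doc)) for doc in ["first document", "second document"]]
best = max(drafts, key=len)                        # any plain-Python selection rule
print(best)
```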

Constrained-Text-Generation-Studio

Posts with mentions or reviews of Constrained-Text-Generation-Studio. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-06.
  • Photoshop for Text (2022)
    2 projects | news.ycombinator.com | 6 Apr 2024
    Oh my god. I wrote a whole library called "Constrained Text Generation Studio" where I mused that I wanted a "Photoshop for Text". I'm not even sure which work predates the other: https://github.com/Hellisotherpeople/Constrained-Text-Genera...

    The core idea of a "Photoshop for text", specifically a word processor made for prosumers that supports GenAI as a first-class feature (i.e. oobabooga but actually good), is worth so much. If you're a VC reading this, chances are I want to talk to you about actually executing on the idea from the OP.

  • Ask HN: What have you built with LLMs?
    43 projects | news.ycombinator.com | 5 Feb 2024
    I was working on this stuff before it was cool, so in the sense of the precursor to LLMs (and sometimes supporting LLMs still) I've built many things:

    1. Games you can play with word2vec or related models (could be drop in replaced with sentence transformer). It's crazy that this is 5 years old now: https://github.com/Hellisotherpeople/Language-games

    2. "Constrained Text Generation Studio" - A research project I wrote when I was trying to solve LLM's inability to follow syntactic, phonetic, or semantic constraints: https://github.com/Hellisotherpeople/Constrained-Text-Genera...

    3. DebateKG - a bunch of "semantic knowledge graphs" built on my pet debate-evidence dataset (LLM-backed embedding indexes synchronized with a graph DB and a SQL DB via txtai). It can create compelling policy debate cases: https://github.com/Hellisotherpeople/DebateKG

    4. My failed attempt at a good extractive summarizer. My life work is dedicated to one day solving the problems I tried to fix with this project: https://github.com/Hellisotherpeople/CX_DB8

  • You need a mental model of LLMs to build or use a LLM-based product
    2 projects | news.ycombinator.com | 13 Nov 2023
    My mental model for LLMs was built by carefully studying the distribution of the output vocabulary at every time step.

    There are tools that allow you to right-click and see all possible continuations for an LLM, like you would in a code IDE[1]. Seeing what this vocabulary is[2] and how trivial modifications to the prompt can impact the probabilities will do a lot to improve your mental model of how LLMs operate. [a sketch of inspecting the next-token distribution appears after this list of posts]

    Shameless self-plug, but software that can do what I am describing is here, and it's worth noting that it ended up as peer-reviewed research.

    [1] https://github.com/Hellisotherpeople/Constrained-Text-Genera...

  • Ask HN: How training of LLM dedicated to code is different from LLM of “text”
    3 projects | news.ycombinator.com | 2 Oct 2023
    Yeah, the LLM outputs a distribution of likely next tokens. It is up to the decoder to select one, and the decoder can use a grammar to enforce certain rules on the output. See https://github.com/Hellisotherpeople/Constrained-Text-Genera... or https://github.com/ggerganov/llama.cpp/blob/master/grammars/... for examples. [a filter-assisted decoding sketch appears after this list of posts]
  • Show HN: LLMs can generate valid JSON 100% of the time
    25 projects | news.ycombinator.com | 14 Aug 2023
  • Llama: Add Grammar-Based Sampling
    7 projects | news.ycombinator.com | 21 Jul 2023
    I am in love with this. I tried my hand at building a Constrained Text Generation Studio (https://github.com/Hellisotherpeople/Constrained-Text-Genera...), and got published at COLING 2022 for my paper on it (https://paperswithcode.com/paper/most-language-models-can-be...), but I always knew that something like this, or the related idea enumerated in this paper (https://arxiv.org/abs/2306.03081), was the way to go.
  • Understanding GPT Tokenizers
    10 projects | news.ycombinator.com | 8 Jun 2023
    I agree with you, and I'm SHOCKED at how little work there actually is on phonetics within the NLP community. Consider that most of the phonetic tools I am using to enforce rhyming or similar syntactic constraints in Constrained Text Generation Studio (https://github.com/Hellisotherpeople/Constrained-Text-Genera...) were built circa 2014, such as the CMU rhyming dictionary. In most cases, I could not find better modern implementations of these tools.

    I did learn an awful lot about phonetic representations and matching algorithms. Things like "Soundex" and "Double Metaphone" now make sense to me and are fascinating to read about. [a minimal Soundex sketch appears after this list of posts]

  • Don Knuth Plays with ChatGPT
    6 projects | news.ycombinator.com | 20 May 2023
    https://github.com/hellisotherpeople/constrained-text-genera...

    Just ban the damn tokens and try again. I wish that folks had more intuition around tokenization, and why LLMs struggle to follow syntactic, lexical, or phonetic constraints.

  • GPT-3 Creative Fiction
    2 projects | news.ycombinator.com | 19 Apr 2023
    My work on constrained text generation / filter-assisted decoding for LLMs is cited in this article! One of my proudest moments was being noticed by my senpai Gwern!

    https://paperswithcode.com/paper/most-language-models-can-be...

    I want to add that just because GPT-4 appears to be far better at following constraints doesn't mean it's anywhere near perfect at following them. It's better now at my easy example of "ban the letter e", but if you ask for several constraints, or mix lexical and phonetic constraints, it gets pretty awful pretty quickly. Filter-assisted decoding can make any LLM (no matter how awful it is) follow constraints perfectly.

    I can't wait to get someone who's better at coding than me to implement these techniques in the major LLM frontends (oobabooga, llama.cpp, etc.), since my attempt at it was quite poopy research code: https://github.com/hellisotherpeople/constrained-text-genera...

  • Photoshop for Text
    2 projects | news.ycombinator.com | 18 Oct 2022
    The paper I wrote at COLING 2022, titled "Most Language Models can be Poets too", included a GUI constrained text generation studio that I market as being "like Photoshop but for text".

    https://github.com/Hellisotherpeople/Constrained-Text-Genera...
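The "study the output vocabulary at every time step" habit described in the mental-model post above can be reproduced with a few lines of off-the-shelf tooling. The sketch below assumes GPT-2 via Hugging Face transformers as a stand-in model; it is not Constrained Text Generation Studio itself, which wraps this kind of inspection in a GUI.

```python
# A sketch of "study the output vocabulary at every time step". GPT-2 via
# Hugging Face transformers is assumed as a stand-in model; CTGS wraps this
# kind of inspection in a GUI rather than a script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]        # scores over the whole vocabulary
probs = torch.softmax(logits, dim=-1)

# The ten most likely continuations, IDE-autocomplete style.
top = torch.topk(probs, k=10)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r:>12}  {p.item():.3f}")
```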
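Filter-assisted decoding, the "just ban the damn tokens" idea that recurs in the posts above, amounts to masking disallowed vocabulary entries before the decoder picks the next token. The sketch below bans every token containing the letter "e" (the easy example mentioned above), again assuming GPT-2 via transformers rather than CTGS's own code.

```python
# A sketch of filter-assisted decoding: mask out every vocabulary entry that
# violates a constraint before the decoder picks the next token. The constraint
# here is the "ban the letter e" example from the posts above; GPT-2 via
# transformers is assumed as a stand-in for any causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Precompute which token ids are banned under the constraint.
banned = torch.tensor(
    ["e" in tokenizer.decode([i]).lower() for i in range(len(tokenizer))]
)

input_ids = tokenizer("My cat is", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    logits[banned] = float("-inf")                 # filter, then decode greedily
    next_id = torch.argmax(logits).view(1, 1)
    input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Grammar-based sampling, as in llama.cpp's grammars, works the same way, except the set of allowed tokens is recomputed at each step from a grammar state rather than a fixed filter.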
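For the phonetic side mentioned in the tokenizer thread above, here is a textbook Soundex sketch: words that sound alike map to the same four-character code, which is the kind of representation rhyme and phonetic-constraint tooling builds on. This is the classic algorithm, not the CMU tooling the comment refers to.

```python
# A textbook Soundex sketch, illustrating the kind of phonetic code the post
# above refers to. This is the classic algorithm, not the CMU tooling that
# Constrained Text Generation Studio actually relies on.
def soundex(word: str) -> str:
    mapping = {"b": "1", "f": "1", "p": "1", "v": "1",
               "c": "2", "g": "2", "j": "2", "k": "2",
               "q": "2", "s": "2", "x": "2", "z": "2",
               "d": "3", "t": "3", "l": "4",
               "m": "5", "n": "5", "r": "6"}
    word = word.lower()
    code = word[0].upper()
    prev = mapping.get(word[0], "")
    for ch in word[1:]:
        if ch in "hw":                  # h and w do not reset the previous digit
            continue
        digit = mapping.get(ch, "")
        if digit and digit != prev:
            code += digit
        prev = digit                    # vowels reset prev, so repeats re-code
    return (code + "000")[:4]


# Words that sound alike share a code, which is what phonetic filters exploit.
print(soundex("Robert"), soundex("Rupert"))     # R163 R163
print(soundex("Ashcraft"), soundex("Tymczak"))  # A261 T522
```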

What are some alternatives?

When comparing lambdaprompt and Constrained-Text-Generation-Studio you can also consider the following projects:

datasloth - Natural language Pandas queries and data generation powered by GPT-3

outlines - Structured Text Generation

lmql - A language for constraint-guided and efficient LLM programming.

Constrained-Text-Generation-Studio - Code repo for "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio", presented at the CAI2 workshop, jointly held at COLING 2022

LiteratureReviewBot - Experiment to use GPT-3 to help write grant proposals.

agency - Agency: Robust LLM Agent Management with Go

kor - LLM(😽)

tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer

olympe - Query your database in plain english

torch-grammar

com2fun - Transform document into function.

relm - ReLM is a Regular Expression engine for Language Models