SeekWhence
A simple programming language built around mathematical sequences as a primitive. Originally created in 28 hours for LangJam 2. (by kgscialdone)
lmql
A language for constraint-guided and efficient LLM programming. (by eth-sri)
| | SeekWhence | lmql |
|---|---|---|
| Mentions | 1 | 30 |
| Stars | 3 | 3,375 |
| Growth | - | 4.4% |
| Activity | 0.0 | 9.5 |
| Last commit | over 2 years ago | 13 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SeekWhence
Posts with mentions or reviews of SeekWhence. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-03-22.
- Favorite Feature in YOUR programming language?
While SeekWhence is admittedly somewhat of a jokey language (I created it for a programming language jam in under 48 hours of work over about a week), it does have a feature I would love to see in other languages - mathematical sequences as a primitive.
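SeekWhence defines its own syntax for this; as a rough Python analogue of the idea (the `sequence` helper and its names are illustrative, not part of SeekWhence), a sequence can be treated as a first-class value built from seed terms and a recurrence rule:

```python
from itertools import islice

def sequence(step, *seed):
    """A lazy, infinite mathematical sequence as a first-class value:
    yields the seed terms, then applies the recurrence rule forever."""
    def gen():
        window = list(seed)
        yield from window
        while True:
            nxt = step(*window)
            yield nxt
            window = window[1:] + [nxt]
    return gen

# The Fibonacci sequence as a value you can pass around and slice.
fib = sequence(lambda a, b: a + b, 0, 1)
print(list(islice(fib(), 10)))  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

A language with sequences as a primitive would give this kind of object dedicated syntax and indexing, rather than routing it through generators.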
lmql
Posts with mentions or reviews of lmql. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-06.
- Show HN: Fructose, LLM calls as strongly typed functions
- Prompting LLMs to constrain output
I have been experimenting with Guidance and LMQL. It's a bit too early to give any well-formed opinions, but I really do like the idea of constraining LLM output.
- [D] Prompt Engineering Seems Like Guesswork - How To Evaluate LLM Application Properly?
The only time I've ever felt like it was anything other than guesswork was using LMQL. Not coincidentally, LMQL works with LLMs as autocomplete engines rather than Q&A ones.
- Guidance for selecting a function-calling library?

lmql
- Show HN: Magentic – Use LLMs as simple Python functions
This is also similar in spirit to LMQL: https://github.com/eth-sri/lmql
- Show HN: LLMs can generate valid JSON 100% of the time
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
- The Problem with LangChain
LLM calls are just function calls, so most functional composition is already afforded by any general-purpose language out there. If you need fancy stuff, use something like Python's functools.
Working on https://github.com/eth-sri/lmql (shameless plug, sorry), we have always found that compositional abstractions on top of LMQL are mostly there already, once you internalize prompts being functions.
- Is there a UI that can limit LLM tokens to a preset list?
- Local LLMs: After Novelty Wanes
LMQL is another.