Show HN: Magentic – Use LLMs as simple Python functions

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • magentic

    Seamlessly integrate LLMs as Python functions

    1) The OpenAI API is queried each time a "prompt-function" is called in Python code. If you provide the `functions` argument to use function calling, magentic will not execute the function the LLM has chosen; instead it returns a `FunctionCall` instance that you can validate before calling.
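    For illustration only, here is a minimal sketch of that validate-before-calling pattern. This `FunctionCall` is a hypothetical stand-in for magentic's class of the same name (which would be populated from the LLM's function-calling response), and `delete_rows` is an invented example function:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class FunctionCall:
    """Stand-in for magentic's FunctionCall: a deferred, inspectable call."""
    function: Callable[..., Any]
    arguments: dict[str, Any] = field(default_factory=dict)

    def __call__(self) -> Any:
        return self.function(**self.arguments)

def delete_rows(table: str, count: int) -> str:
    return f"Deleted {count} rows from {table}"

# Pretend the LLM chose delete_rows with these arguments.
call = FunctionCall(delete_rows, {"table": "users", "count": 3})

# Validate before executing, e.g. cap destructive operations.
if call.arguments.get("count", 0) <= 10:
    result = call()
```

    The point is that the library hands back a value describing the call, so guardrails live in ordinary Python rather than in the prompt.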

    2) I haven't measured the additional latency, but it should be negligible compared to the LLM's generation speed. And since magentic makes it easy to use streaming and async functions, you might achieve much faster overall generation speeds; see the Async section in the README. Token usage should also change negligibly compared with calling the OpenAI API directly: the only "prompting" magentic currently does is in naming the functions sent to OpenAI; all other input tokens are written by the user. A user switching from explicitly defining the output schema in the prompt to using function calling via magentic might actually save a few tokens.
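    As a rough sketch of why async helps, the stub below uses `asyncio.sleep` as a stand-in for LLM generation time; with magentic the awaited calls would be real API requests, so independent prompt-function calls can overlap their latency instead of paying it serially:

```python
import asyncio

# Stub async "prompt-function"; with magentic each call would await
# an OpenAI completion, so the waits below represent network/generation time.
async def describe(topic: str) -> str:
    await asyncio.sleep(0.1)  # stands in for LLM generation latency
    return f"About {topic}..."

async def main() -> list[str]:
    # Three calls complete in roughly 0.1s total rather than 0.3s.
    return await asyncio.gather(*(describe(t) for t in ("a", "b", "c")))

results = asyncio.run(main())
```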

    3) Functionality is not deterministic, even with `temperature=0`, but since we're working with Python functions, one option is simply to add the `@cache` decorator. This saves you tokens and time when calling the same prompt-function with the same inputs.
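    A minimal sketch of that idea, using a counting stub in place of a real prompt-function (note that `functools.cache` requires hashable arguments, which prompt-function inputs like strings typically are):

```python
from functools import cache

calls = 0

@cache
def summarize(text: str) -> str:
    """Stand-in for a prompt-function; each uncached call of the real
    thing would hit the OpenAI API and spend tokens."""
    global calls
    calls += 1
    return f"summary of {text!r}"

summarize("hello")
summarize("hello")  # served from the cache; no second API call
```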

    ---

    1) https://github.com/jackmpcollins/magentic#usage

  • marvin

    ✨ Build AI interfaces that spark joy

    Yes, similar ideas. Marvin [asks the LLM to mimic the Python function](https://github.com/PrefectHQ/marvin/blob/f37ad5b15e2e77dd998...), whereas in magentic the function signature just represents the inputs/outputs of the prompt-template/LLM, so the LLM "doesn't know" that it is pretending to be a Python function; you specify all the prompts.

  • lmql

    A language for constraint-guided and efficient LLM programming.

    This is also similar in spirit to LMQL

    https://github.com/eth-sri/lmql

  • outlines

    Structured Text Generation

    Right now it just works with OpenAI chat models (gpt-3.5-turbo, gpt-4), but if there's interest I plan to extend it to support several backends. These would probably each wrap an existing library that implements structured output generation, like https://github.com/outlines-dev/outlines or https://github.com/guidance-ai/guidance. If you have ideas on how this should be done, let me know; a GitHub issue would be great to make it visible to others.

  • guidance

    A guidance language for controlling large language models.


  • cria

    OpenAI compatible API for serving LLAMA-2 model

  • openai-functools

    openai-functools: Simplified Generation of OpenAI Functions JSON Metadata for OpenAI Function Calling

    Very cool! At first the title reminded me of a project my colleague and I are working on called openai-functools [1], but your concept is quite the opposite: seamlessly embedding LLMs in your code rather than the other way around. Quite cool, and interesting examples :)

    I'll definitely try to apply it in one of my pet projects. Good stuff.

    [1] https://github.com/Jakob-98/openai-functools

  • vanna

    🤖 Chat with your SQL database 📊. Accurate Text-to-SQL Generation via LLMs using RAG 🔄.

    Nice! I’m going to try it out and possibly integrate it into my Python package: https://vanna.ai

  • LocalAI

    🤖 The free, open-source alternative to OpenAI, Claude, and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Features: generate text, audio, video, and images, voice cloning, distributed P2P inference

    There's also LocalAI[0] which allows the use of local LLMs with an OpenAI compatible API.

    [0] https://github.com/go-skynet/LocalAI

  • antiscope

    experimental model-based language structures

    See also: `antiscope`, an experiment in subjunctive programming

    https://github.com/MillionConcepts/antiscope

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • I'm puzzled how anyone trusts ChatGPT for code

    4 projects | news.ycombinator.com | 8 May 2024
  • I just had the displeasure of implementing Langchain in our org.

    3 projects | /r/LangChain | 10 Dec 2023
  • How is Langchain's dev experience? Any alternatives?

    2 projects | /r/LLMDevs | 6 Jul 2023
  • T2x – a CLI tool for AI-first text operations

    6 projects | news.ycombinator.com | 30 Dec 2024
  • is-even-ai: Check if a number is even using the power of AI

    1 project | news.ycombinator.com | 20 Dec 2024