langroid

Harness LLMs with Multi-Agent Programming (by langroid)

Langroid Alternatives

Similar projects and alternatives to langroid

  1. text-generation-webui

    A Gradio web UI for Large Language Models with support for multiple inference backends.

  2. ollama

    Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language models.

  3. txtai

    385 langroid VS txtai

    💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows

  4. gpt4all

    GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

  5. Typesense

    Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences

  6. CodeTriage

    Discover the best way to get started contributing to Open Source projects

  7. semantic-kernel

    Integrate cutting-edge LLM technology quickly and easily into your apps

  8. vis

    A vi-like editor based on Plan 9's structural regular expressions (by martanne)

  9. TypeScript-Website

    The Website and web infrastructure for learning TypeScript

  10. outlines

    43 langroid VS outlines

    Structured Text Generation

  11. Back In Time

    A comfortable and well-configurable graphical Frontend for incremental backups, with a command-line version also available. Modified files are transferred, while unchanged files are linked to the new folder using rsync's hard link feature, saving storage space. Restoring is straightforward via file manager, command line or Back In Time itself.

  12. dspy

    40 langroid VS dspy

    DSPy: The framework for programming—not prompting—language models

  13. CorsixTH

    Open source clone of Theme Hospital

  14. instructor

    26 langroid VS instructor

    structured outputs for llms

  15. langchaingo

    LangChain for Go, the easiest way to write LLM-based programs in Go

  16. swarm

    15 langroid VS swarm

    Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by OpenAI Solution team.

  17. agency

    🕵️‍♂️ Library designed for developers eager to explore the potential of Large Language Models (LLMs) and other generative AI through a clean, effective, and Go-idiomatic approach. (by neurocult)

  18. mesop

    12 langroid VS mesop

    Rapidly build AI apps in Python

  19. llm

    59 langroid VS llm

    Access large language models from the command-line

NOTE: The mention count for each project reflects mentions in shared posts plus user-suggested alternatives, so a higher count generally indicates a more popular or more similar langroid alternative.

langroid reviews and mentions

Posts that mention or review langroid. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2024-11-19.
  • Understanding the BM25 full text search algorithm
    5 projects | news.ycombinator.com | 19 Nov 2024
    In the Langroid[1] LLM library we have a clean, extensible RAG implementation in the DocChatAgent[2] -- it combines several retrieval techniques, including lexical (BM25, fuzzy search) and semantic (embedding-based) retrieval, re-ranking (cross-encoder, reciprocal-rank fusion), and further re-ranking for diversity and lost-in-the-middle mitigation:

    [1] Langroid - a multi-agent LLM framework from CMU/UW-Madison researchers https://github.com/langroid/langroid

    [2] DocChatAgent Implementation -
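
    As a loose illustration of the reciprocal-rank-fusion step mentioned above, here is a generic sketch of the standard RRF formula applied to two ranked lists; it is not Langroid's actual code, and the document ids are made up:

    ```python
    from collections import defaultdict

    def reciprocal_rank_fusion(rankings, k=60):
        """Merge ranked lists of doc ids using the standard RRF score:
        score(d) = sum over rankers of 1 / (k + rank_of_d_in_that_ranker)."""
        scores = defaultdict(float)
        for ranked_ids in rankings:
            for rank, doc_id in enumerate(ranked_ids, start=1):
                scores[doc_id] += 1.0 / (k + rank)
        # Highest fused score first
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical ranked results from a lexical (BM25) and a semantic (embedding) retriever
    bm25_hits = ["doc3", "doc1", "doc7"]
    embedding_hits = ["doc1", "doc9", "doc3"]
    print(reciprocal_rank_fusion([bm25_hits, embedding_hits]))
    # -> ['doc1', 'doc3', 'doc9', 'doc7']
    ```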

  • Ask HN: What Open Source Projects Need Help?
    46 projects | news.ycombinator.com | 16 Nov 2024
    Langroid: https://github.com/langroid/langroid

    Langroid (2.7k stars, 20k downloads/mo) is an intuitive, lightweight, extensible and principled Python framework to easily build agent-oriented LLM-powered applications, from CMU and UW-Madison researchers. You set up Agents, equip them with optional components (LLM, vector-store, tools/functions), assign them tasks, and have them collaboratively solve problems by exchanging messages.
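
    As a rough sketch of that workflow, adapted from Langroid's quick-start documentation (exact config fields and the model name may differ in the current release):

    ```python
    import langroid as lr
    import langroid.language_models as lm

    # Configure an LLM (model name is illustrative) and attach it to a ChatAgent.
    llm_config = lm.OpenAIGPTConfig(chat_model="gpt-4o")
    agent = lr.ChatAgent(lr.ChatAgentConfig(llm=llm_config))

    # A Task wraps the agent and manages the conversation/orchestration loop.
    task = lr.Task(agent, name="Assistant")
    task.run("What is the capital of France?")
    ```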

  • Swarm, a new agent framework by OpenAI
    13 projects | news.ycombinator.com | 11 Oct 2024
    You can have a look at Langroid, an agent-oriented LLM framework from CMU/UW-Madison researchers (I am the lead dev). We are seeing companies using it in production in preference to other libs mentioned here.

    https://github.com/langroid/langroid

  • Every Way to Get Structured Output from LLMs
    8 projects | news.ycombinator.com | 18 Jun 2024
    https://github.com/langroid/langroid/tree/main/examples/basi...

    [1] Langroid: https://github.com/langroid/langroid

  • Show HN: Mesop, open-source Python UI framework used at Google
    11 projects | news.ycombinator.com | 3 Jun 2024
    This is very interesting. To build LLM chat-oriented WebApps in python, these days I use Chainlit[1], which I find is much better than Streamlit for this. I've integrated Chainlit into the Langroid[2] Multi-Agent LLM framework via a callback injection class[3] (i.e. hooks to display responses by various entities).

    One of the key requirements in a multi-agent chat app is to be able to display steps of sub-tasks nested under parent tasks (to any level of nesting), with the option to fold/collapse sub-steps to only view the parent steps. I was able to get this to work with chainlit, though it was not easy, since their sub-step rendering mental model seemed more aligned to a certain other LLM framework with a partial name overlap with theirs.

    That said, I am very curious if Mesop could be a viable alternative, for this type of nested chat implementation, especially if the overall layout can be much more flexible (which it seems like), and more production-ready.

    [1] Chainlit https://github.com/Chainlit/chainlit

    [2] Langroid: https://github.com/langroid/langroid

    [3] Langroid ChainlitAgentCallback class: https://github.com/langroid/langroid/blob/main/langroid/agen...
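
    For illustration, a minimal sketch of nested sub-steps using Chainlit's Step context manager; this is generic Chainlit usage with made-up step names, not the actual Langroid callback code linked in [3]:

    ```python
    import chainlit as cl

    @cl.on_message
    async def main(message: cl.Message):
        # A parent step with a nested child step; Chainlit renders nested steps
        # as collapsible sub-steps under their parent in the chat UI.
        async with cl.Step(name="main-task") as parent:
            async with cl.Step(name="sub-task") as child:
                child.output = "result of the delegated sub-task"
            parent.output = "summary assembled from sub-task results"
        await cl.Message(content=parent.output).send()
    ```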

  • OpenAI: Streaming is now available in the Assistants API
    2 projects | news.ycombinator.com | 14 Mar 2024
    This was indeed true in the beginning, and I don’t know if this has changed. Inserting messages with the assistant role is crucial for many reasons, such as when you want to implement caching, or edit/compress a previous assistant response for cost or other reasons.

    At the time I implemented a work-around in Langroid[1]: since you can only insert a “user” role message, prepend the content with ASSISTANT: whenever you want it to be treated as an assistant role. This actually works as expected and I was able to do caching. I explained it in this forum:

    https://community.openai.com/t/add-custom-roles-to-messages-...

    [1] the Langroid code that adds a message with a given role, using this above “assistant spoofing trick”:

    https://github.com/langroid/langroid/blob/main/langroid/agen...
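
    A minimal sketch of that "assistant spoofing" trick against the OpenAI Assistants API; the helper function and message content here are hypothetical, not the actual Langroid code at the truncated link above:

    ```python
    from openai import OpenAI

    client = OpenAI()
    thread = client.beta.threads.create()

    def add_message(thread_id: str, role: str, content: str) -> None:
        # The Assistants API (at the time) only let you insert "user" messages,
        # so assistant-role content is spoofed via an "ASSISTANT:" prefix.
        if role == "assistant":
            content = f"ASSISTANT: {content}"
        client.beta.threads.messages.create(
            thread_id=thread_id, role="user", content=content
        )

    # e.g. re-insert a cached assistant response into a new thread
    add_message(thread.id, "assistant", "Here is the cached answer from a previous run.")
    ```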

  • FLaNK Stack 29 Jan 2024
    46 projects | dev.to | 29 Jan 2024
  • Ollama Python and JavaScript Libraries
    17 projects | news.ycombinator.com | 24 Jan 2024
    Same question here. Ollama is fantastic as it makes it very easy to run models locally, but if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching etc), it would be nice to be able to simply switch the API client to Ollama, without having a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between, but it’s not ideal (and I also recently found issues with their chat formatting for Mistral models).

    For an OpenAI compatible API my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint. Regarding chat formatting, even ooba’s Mistral formatting has issues[1] so I am doing my own in Langroid using HuggingFace tokenizer.apply_chat_template [2]

    [1] https://github.com/oobabooga/text-generation-webui/issues/53...

    [2] https://github.com/langroid/langroid/blob/main/langroid/lang...

    Related question - I assume ollama auto detects and applies the right chat formatting template for a model?
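
    For reference, the HuggingFace chat-templating approach mentioned in [2] looks roughly like this (the model name is just an example):

    ```python
    from transformers import AutoTokenizer

    # The tokenizer carries the model-specific chat template
    # (e.g. the [INST] ... [/INST] format for Mistral instruct models).
    tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

    messages = [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
        {"role": "user", "content": "And of Italy?"},
    ]

    # Render a single prompt string formatted the way the model expects.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(prompt)
    ```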

  • Pushing ChatGPT's Structured Data Support to Its Limits
    8 projects | news.ycombinator.com | 27 Dec 2023
    we (like simpleaichat from OP) leverage Pydantic to specify the desired structured output; under the hood Langroid translates it into either the OpenAI function-calling params or, for LLMs that don’t natively support function-calling, auto-inserts appropriate instructions into the system prompt. We call this mechanism a ToolMessage:

    https://github.com/langroid/langroid/blob/main/langroid/agen...

    We take this idea much further — you can define a method in a ChatAgent to “handle” the tool and attach the tool to the agent. For stateless tools you can define a “handle” method in the tool itself and it gets patched into the ChatAgent as the handler for the tool.
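
    A rough sketch of such a Pydantic-based tool in Langroid; the field and method names follow the documented ToolMessage pattern, but treat the details as approximate, and SquareTool itself is a made-up example:

    ```python
    import langroid as lr
    from langroid.agent.tool_message import ToolMessage

    class SquareTool(ToolMessage):
        # "request" names the tool; "purpose" is used when generating the
        # function-calling spec or the system-prompt instructions.
        request: str = "square"
        purpose: str = "To compute the square of a given <number>."
        number: int

        def handle(self) -> str:
            # Stateless tool: the handler lives on the tool itself and gets
            # patched into the ChatAgent as the handler for this tool.
            return str(self.number * self.number)

    agent = lr.ChatAgent(lr.ChatAgentConfig())
    agent.enable_message(SquareTool)  # the LLM can now generate, and the agent handle, this tool
    ```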

  • Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
    12 projects | news.ycombinator.com | 24 Dec 2023
    Many services/platforms are careless/disingenuous when they claim they “train” on your documents, when what they actually do is RAG.

    An under-appreciated benefit of RAG is the ability to have the LLM cite sources for its answers (which are in principle automatically/manually verifiable). You lose this citation ability when you finetune on your documents.

    In Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers) https://github.com/langroid/langroid

Stats

Basic langroid repo stats
Mentions: 20
Stars: 3,234
Activity: 9.9
Last commit: about 12 hours ago

langroid/langroid is an open source project licensed under the MIT License, which is an OSI-approved license.

The primary programming language of langroid is Python.

