deepeval VS LLMStack

Compare deepeval vs LLMStack and see what their differences are.

LLMStack

No-code platform to build LLM Agents, workflows and applications with your data (by trypromptly)
                 deepeval             LLMStack
Mentions         22                   20
Stars            1,923                1,159
Stars growth     20.2%                11.7%
Activity         9.9                  9.9
Latest commit    2 days ago           1 day ago
Language         Python               Python
License          Apache License 2.0   GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

deepeval

Posts with mentions or reviews of deepeval. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-21.
  • Unit Testing LLMs with DeepEval
    1 project | dev.to | 11 Apr 2024
    For the last year I have been working with different LLMs (OpenAI, Claude, PaLM, Gemini, etc.) and I have been impressed with their performance. With the rapid advancements in AI and the increasing complexity of LLMs, it has become crucial to have a reliable testing framework that can help us maintain the quality of our prompts and ensure the best possible outcomes for our users. Recently, I discovered DeepEval (https://github.com/confident-ai/deepeval), an LLM testing framework that has revolutionized the way we approach prompt quality assurance. (A minimal sketch of such a test appears after this list.)
  • Show HN: Ragas – the de facto open-source standard for evaluating RAG pipelines
    4 projects | news.ycombinator.com | 21 Mar 2024
    Checkout this instead: https://github.com/confident-ai/deepeval

    Also has native ragas implementation but supports all models.

  • Show HN: Times faster LLM evaluation with Bayesian optimization
    6 projects | news.ycombinator.com | 13 Feb 2024
    Fair question.

    Evaluation refers to the phase after training that checks whether the training was good.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small, domain-specific subset)!

    So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation; however, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple computers, but none of them takes advantage of the fact that many evaluation queries might be similar; all of them try to evaluate every given query. And that's where this project might come in handy.

  • Implemented 12+ LLM evaluation metrics so you don't have to
    1 project | news.ycombinator.com | 13 Dec 2023
    A link to a reddit post (with no discussion) which links to this repo

    https://github.com/confident-ai/deepeval

  • Show HN: I implemented a range of evaluation metrics for LLMs that runs locally
    1 project | news.ycombinator.com | 11 Dec 2023
  • These 5 Open Source AI Startups are changing the AI Landscape
    7 projects | dev.to | 16 Nov 2023
    Star DeepEval on GitHub and contribute to the advancement of LLM evaluation frameworks! 🌟
  • FLaNK Stack Weekly 06 Nov 2023
    21 projects | dev.to | 6 Nov 2023
  • Why we replaced Pinecone with PGVector 😇
    1 project | dev.to | 2 Nov 2023
    Pinecone, the leading closed-source vector database provider, is known for being fast, scalable, and easy to use. Its ability to let users perform blazing-fast vector search makes it a popular choice for large-scale RAG applications. Our initial infrastructure for Confident AI, the world's first open-source evaluation infrastructure for LLMs, used Pinecone to cluster LLM observability log data in production. However, after weeks of experimentation, we decided to replace it entirely with pgvector. Pinecone's simplistic design is deceptive due to several hidden complexities, particularly when integrating with existing data storage solutions. For example, it forces a complicated architecture, and its restrictive metadata storage capacity makes it troublesome for data-intensive workloads.
  • Show HN: Unit Testing for LLMs
    1 project | news.ycombinator.com | 26 Oct 2023
  • Show HN: DeepEval – Unit Testing for LLMs (Open Science)
    1 project | news.ycombinator.com | 5 Oct 2023
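
The "Unit Testing LLMs with DeepEval" post above treats prompt quality checks like ordinary unit tests. As a rough illustration of that idea, here is a minimal pytest-style sketch: the metric and parameter names follow deepeval's documented quickstart but may differ between versions, and the inputs are invented for illustration.

```python
# test_llm.py -- a hedged sketch of a deepeval unit test, not an official example.
# Assumes `pip install deepeval` and an LLM judge configured (e.g. OPENAI_API_KEY set).
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_answer_relevancy():
    # In a real test, actual_output would come from your own LLM application.
    test_case = LLMTestCase(
        input="What is your return policy?",
        actual_output="You can return any item within 30 days for a full refund.",
    )
    # Fails the test if the judged relevancy score falls below the threshold.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```

A file like this can then be run with pytest or with deepeval's own runner, e.g. `deepeval test run test_llm.py`.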

LLMStack

Posts with mentions or reviews of LLMStack. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-14.
  • Vanna.ai: Chat with your SQL database
    13 projects | news.ycombinator.com | 14 Jan 2024
    We have recently added support to query data from SingleStore to our agent framework, LLMStack (https://github.com/trypromptly/LLMStack). Out-of-the-box performance when prompting with just the table schemas is pretty good with GPT-4.

    The more domain-specific knowledge the queries need, the harder it gets in general. We've had good success `teaching` the model different concepts in relation to the dataset; giving it example questions and queries greatly improved performance. (A sketch of this few-shot prompting pattern appears after this list.)

  • FFmpeg Lands CLI Multi-Threading as Its "Most Complex Refactoring" in Decades
    2 projects | news.ycombinator.com | 12 Dec 2023
    This will hopefully improve the startup times for FFmpeg when streaming from virtual display buffers. We use FFmpeg in LLMStack (a low-code framework to build and run LLM agents) to stream browser video. We use Playwright to automate browser interactions and provide that as a tool to the LLM. When this tool is invoked, we stream video of these browser interactions with FFmpeg by capturing the virtual display buffer the browser is using.

    There is a noticeable delay booting up this pipeline for each tool invocation right now. We are working on some optimizations, but improvements in FFmpeg will definitely help. https://github.com/trypromptly/LLMStack is the project repo for the curious. (A sketch of this capture pipeline appears after this list.)

  • Show HN: IncarnaMind-Chat with your multiple docs using LLMs
    4 projects | news.ycombinator.com | 15 Sep 2023
    We built https://github.com/trypromptly/LLMStack to serve exactly this persona. A low-code platform to quickly build RAG pipelines and other LLM applications.
  • A Comprehensive Guide for Building Rag-Based LLM Applications
    6 projects | news.ycombinator.com | 13 Sep 2023
    Kudos to the team for a very detailed notebook going into things like pipeline evaluation wrt performance and costs etc. Even if we ignore the framework specific bits, it is a great guide to follow when building RAG systems in production.

    We have been building RAG systems in production for a few months and have been tinkering with different strategies to get the most performance out of these pipelines. As others have pointed out, a vector database may not be the right strategy for every problem. Similarly, there are issues like the lost-in-the-middle problem (https://arxiv.org/abs/2307.03172) that one may have to deal with. We put together our learnings from building and optimizing these pipelines in a post at https://llmstack.ai/blog/retrieval-augmented-generation.

    https://github.com/trypromptly/LLMStack is a low-code platform we open-sourced recently that ships these RAG pipelines out of the box with some app templates if anyone wants to try them out.

  • Building a Blog in Django
    12 projects | news.ycombinator.com | 12 Sep 2023
    Django has been my go-to framework for any new web project I start for more than a decade. Its batteries-included approach meant that one could go pretty far with just Django alone. The included admin interface and the views/templating setup were what first drew me to the project.

    The Django project itself has kept pace with recent developments in web development. I still remember migrations being an external project, getting merged in, and the transition that followed. The ecosystem is pretty powerful too, with projects like drf, channels, social-auth, etc. covering most things we need to run in production.

    https://github.com/trypromptly/LLMStack is a recent project I built entirely with Django. It uses django channels for websockets, drf for API and reactjs for the frontend.

  • Show HN: Rivet – open-source AI Agent dev env with real-world applications
    5 projects | news.ycombinator.com | 8 Sep 2023
    We recently open-sourced a similar platform for building workflows by chaining LLMs visually, along with LocalAI support.

    Check it out at https://github.com/trypromptly/LLMStack. Like you said, it was fairly easy to integrate LocalAI and is a great project.

  • Show HN: Retool AI
    5 projects | news.ycombinator.com | 7 Sep 2023
    Would you mind expanding on why it was tough to get started with Retool?

    We are building https://github.com/trypromptly/LLMStack, a low-code platform to build LLM apps with a goal of making it easy for non-tech people to leverage LLMs in their workflows. Would love to learn about your experience with retool and incorporate some of that feedback into LLMStack.

  • We built a self-hosted low-code platform to build LLM apps locally and open-sourced it
    1 project | /r/OpenAI | 3 Sep 2023
    We built LLMStack for our internal purposes, then pulled it out into its own repo and open-sourced it at https://github.com/trypromptly/LLMStack.
  • LLMStack: self-hosted low-code platform to build LLM apps locally with LocalAI support
    1 project | /r/selfhosted | 3 Sep 2023
    LLMStack (https://github.com/trypromptly/LLMStack) is a no-code platform to build LLM apps that we have been working on for a few months and open-sourced recently. It comes with everything out of the box that one needs to build LLM apps locally or in an enterprise setting.
  • LLMStack: a self-hosted low-code platform to build LLM apps locally
    1 project | /r/programming | 1 Sep 2023
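
The Vanna.ai thread above describes prompting GPT-4 with just the table schemas and then `teaching` the model dataset-specific concepts through example question/query pairs. The sketch below shows that general few-shot pattern using the plain OpenAI client; it is not LLMStack's actual agent code, and the schema, example pairs, and model name are assumptions made for illustration.

```python
# A hedged sketch of schema-grounded, few-shot text-to-SQL prompting.
# Assumes `pip install openai` (v1 client) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical schema; in practice this would be introspected from the database.
SCHEMA = """CREATE TABLE customers (id INT, name TEXT, region TEXT);
CREATE TABLE orders (id INT, customer_id INT, total DECIMAL, created_at TIMESTAMP);"""

# Example question/query pairs that "teach" dataset-specific concepts.
EXAMPLES = [
    ("How much revenue did we make last month?",
     "SELECT SUM(total) FROM orders "
     "WHERE created_at >= date_trunc('month', now()) - interval '1 month' "
     "AND created_at < date_trunc('month', now());"),
]

def text_to_sql(question: str) -> str:
    messages = [{"role": "system",
                 "content": "Translate the user's question into SQL for this schema:\n" + SCHEMA}]
    for q, sql in EXAMPLES:  # few-shot examples precede the real question
        messages += [{"role": "user", "content": q},
                     {"role": "assistant", "content": sql}]
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4", messages=messages, temperature=0)
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(text_to_sql("How many customers are in the EMEA region?"))
```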
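
The FFmpeg post above mentions streaming the browser's virtual display buffer while a Playwright tool call runs. Below is a hedged sketch of what such a capture pipeline can look like, launching FFmpeg's x11grab input from Python; the display number, resolution, and codec flags are illustrative assumptions, not LLMStack's actual configuration.

```python
# A hedged sketch: capture a virtual X display (e.g. one backing a headless
# Playwright browser) and encode it as a low-latency MPEG-TS stream on stdout.
import subprocess

cmd = [
    "ffmpeg",
    "-f", "x11grab",            # grab frames from an X11 display
    "-video_size", "1280x720",  # assumed resolution of the virtual display
    "-framerate", "25",
    "-i", ":99",                # assumed display number the browser renders to
    "-c:v", "libx264",
    "-preset", "ultrafast",     # favor startup/encode speed over compression
    "-tune", "zerolatency",
    "-f", "mpegts",
    "pipe:1",                   # write the transport stream to stdout
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
# proc.stdout can then be read in chunks and forwarded to clients, e.g. over a websocket.
```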

What are some alternatives?

When comparing deepeval and LLMStack you can also consider the following projects:

ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines

anything-llm - The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.

litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)

langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.

blog-examples

azurechatgpt - 🤖 Azure ChatGPT: Private & secure ChatGPT for internal enterprise use 💼

openvino_notebooks - 📚 Jupyter notebook tutorials for OpenVINO™

spider - scripts and baselines for Spider: Yale complex and cross-domain semantic parsing and text-to-SQL challenge

pezzo - 🕹️ Open-source, developer-first LLMOps platform designed to streamline prompt design, version management, instant delivery, collaboration, troubleshooting, observability and more.

audapolis - an editor for spoken-word audio with automatic transcription

tailspin - 🌀 A log file highlighter

SpeechRecognition - Speech recognition module for Python, supporting several engines and APIs, online and offline.