cube.js VS Druid

Compare cube.js vs Druid and see what their differences are.

Druid

Apache Druid: a high-performance real-time analytics database. (by apache)
                cube.js                                   Druid
Mentions        86                                        24
Stars           17,135                                    13,188
Growth          1.2%                                      0.6%
Activity        9.9                                       9.9
Latest commit   2 days ago                                6 days ago
Language        Rust                                      Java
License         GNU General Public License v3.0 or later  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cube.js

Posts with mentions or reviews of cube.js. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-07.
  • MQL – Client and Server to query your DB in natural language
    2 projects | news.ycombinator.com | 7 Apr 2024
    I should have clarified. There's a large number of apps that are:

    1. taking info strictly from SQL (e.g. information_schema, query history)

    2. taking a user input / question

    3. writing SQL to answer that question

    An app like this is what I call "text-to-sql". Totally agree a better system would pull in additional documentation (which is what we're doing), but I'd no longer consider it "text-to-sql". In our case, we're not even directly writing SQL, but rather generating semantic layer queries (i.e. https://cube.dev/).

  • Show HN: Spice.ai – materialize, accelerate, and query SQL data from any source
    5 projects | news.ycombinator.com | 28 Mar 2024
    I'm not too familiar with https://cube.dev/ - but my initial impression is they are focused more on providing APIs backed by SQL. They have a SQL API that emulates the PostgreSQL wire protocol, whereas Spice implements Arrow and Flight SQL natively. Their pre-aggregations are a similar concept to Spice's data accelerators. It also looks like they have their own query language, whereas Spice is native SQL as well.
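
To make the SQL API point above concrete, here is a minimal sketch of querying Cube over its PostgreSQL-compatible wire protocol from an ordinary Postgres client. The host, port, credentials, and the "orders" cube are placeholders (15432 is Cube's documented default SQL API port); treat this as a sketch to adapt to your own deployment rather than a drop-in snippet.

```python
# Sketch: querying Cube's SQL API with a standard PostgreSQL client (psycopg2).
# Assumes a Cube deployment with the SQL API enabled on its default port and
# placeholder credentials; "orders" and "status" are hypothetical members of
# your data model.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=15432,          # Cube's default SQL API port (CUBEJS_PG_SQL_PORT)
    user="cube",         # placeholder user
    password="secret",   # placeholder password
    dbname="cube",       # placeholder database name
)

with conn.cursor() as cur:
    # Cubes are exposed as tables and dimensions as columns, so an ordinary
    # SQL SELECT travels over the PostgreSQL wire protocol unchanged.
    cur.execute("SELECT status, COUNT(*) FROM orders GROUP BY status;")
    for row in cur.fetchall():
        print(row)

conn.close()
```
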
  • Show HN: Delphi – Build customer-facing AI data apps (that work)
    1 project | news.ycombinator.com | 22 Mar 2024
    Hey HN!

    Over the past year, my co-founder David and I have been building Delphi to let developers create amazing customer-facing AI experiences on top of their data. We're excited to share it with you.

    David and I have spent our careers leading data and engineering teams. After ChatGPT got popular, we saw a rush of "chat with your data" startups launch. Most of these are "text-to-SQL" and use an LLM like GPT-4 to generate SQL queries that run directly against a data warehouse or database.

    However, the general perception now is that most of them make for nice demos but are hard to make work in the real world. The reason is data complexity. Even smart LLMs find it difficult to reason about messy databases with hundreds of tables, thousands of columns, and complex schemas that have been built up piecemeal for years. Text-to-SQL can be a fine dev tool for data scientists and analysts, but we've seen many organizations hesitate to deploy it to end users, who never know if the answer they get one day will be the same the next.

    David and I found a better way. From our time in the data engineering world, we were familiar with a type of tool called "semantic layers." Think of them like an ORM for analytics. Basically, they sit between databases (or data warehouses) and data consumers (data viz tools like Tableau or APIs) and map real-world concepts (entities like "customers" and metrics like "sales") to database tables and calculations.

    Semantic layers are often used for "embedded analytics" (e.g. when you're building customer-facing dashboards into your application) but are increasingly also used for traditional business intelligence. Cube (https://cube.dev) is a prominent example, and dbt has also recently released one. They're useful because with a semantic layer, the consumer doesn't have to think about questions like "how do we define revenue?" when running a query. They just get consistent, governed data definitions across their business.

    We realized that semantic layers could be just as useful for LLMs as for humans. After all, LLMs are built on natural language, so a system that deterministically translates natural language concepts into code has obvious power when you're working with LLMs. With a semantic layer, we've found that companies can get AI to answer much more complex questions than without it.

    For a year now, we've been building Delphi to do just that. We've gone through a few iterations/pivots (initially we were focused on building a Slack bot for internal analytics) and are now seeing our developer-first approach resonate. We're being used to power customer-facing fintech applications, recruiting software, and more.

    How do you use Delphi? The first step is connecting your database; then, we build your semantic layer on top of it. Right now we do this manually, but we're moving more and more of it over to AI. Once that's done, we have 3 main ways of using Delphi: 1) white-labeling our AI analytics platform and providing it to your customers; 2) a streaming REST API and SDKs; and 3) React components to easily drop a "chat with your data" experience into your app.

    If this is interesting to you, drop us a line at [email protected] or sign up at our website (https://delphihq.com) to get in touch. Thanks for reading! Would love to hear any thoughts and feedback.
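
To make the semantic-layer idea above a bit more concrete, here is a rough sketch of querying Cube's REST API by business-level measures and dimensions instead of hand-writing SQL. The cube and member names ("orders.count", "orders.status", "orders.created_at"), the URL, and the token are hypothetical placeholders; the /cubejs-api/v1/load endpoint and the JSON query shape follow Cube's documented REST API, but verify both against your own deployment.

```python
# Sketch: a semantic-layer query against Cube's REST API using only the
# Python standard library. Member names and the auth token are placeholders.
import json
import urllib.parse
import urllib.request

CUBE_URL = "http://localhost:4000/cubejs-api/v1/load"  # default Cube API port in dev mode
API_TOKEN = "<your-cube-api-token>"                    # placeholder token

query = {
    "measures": ["orders.count"],        # business metric, not raw SQL
    "dimensions": ["orders.status"],     # business dimension
    "timeDimensions": [{
        "dimension": "orders.created_at",
        "granularity": "month",
    }],
}

url = CUBE_URL + "?query=" + urllib.parse.quote(json.dumps(query))
req = urllib.request.Request(url, headers={"Authorization": API_TOKEN})
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Each row maps member names to values; the SQL behind it was generated by
# the semantic layer, so "orders.count" means the same thing everywhere.
for row in result.get("data", []):
    print(row)
```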

  • Apache Superset
    14 projects | news.ycombinator.com | 26 Feb 2024
    We use https://cube.dev/ as an intermediate layer between the data warehouse database and Superset (and other "terminal" BI apps like report generators). You define your schema (metrics, dimensions, joins, calculated metrics, etc.) in Cube and then access it from any tool that can connect to a SQL database.
  • Need to reduce costs - which service to use?
    1 project | /r/dataengineering | 5 Dec 2023
    Also check out cube.dev. They can do the semantic layer and cache it so you are not hitting Snowflake all the time.
  • Anyone with experience moving to Cube.dev + Metabase/Superset from Looker ?
    1 project | /r/BusinessIntelligence | 3 Dec 2023
    We need metrics to live in source control with reviews. Metabase doesn't have a git integration for metrics, which is why we are convinced to use cube.dev as a semantic layer.
  • GigaOm Sonar Report Reviews Semantic Layer and Metric Store Vendors
    1 project | news.ycombinator.com | 8 Sep 2023
    https://github.com/cube-js/cube comes out very well at the end as a promising open source system, getting rather close to the bullseye. Would love to know more & hear people's experience with it.
  • Show HN: VulcanSQL – Serve high-concurrency, low-latency API from OLAP
    4 projects | news.ycombinator.com | 5 Jul 2023
    How is this different from something like https://cube.dev/?
  • Best Headless Chart Library?
    2 projects | /r/reactjs | 29 May 2023
    Have a look at cube.js.
  • Advice / Questions on Modern Data Stack
    1 project | /r/dataengineering | 20 May 2023
    For now, I've been thinking of using self-hosted Rudderstack both for ingestion and reverse ETL, cube.dev as the abstraction layer for building webapps and providing caching for the BI layer, and dbt for transformations. But I have doubts about the following elements:

Druid

Posts with mentions or reviews of Druid. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-28.
  • How to choose the right type of database
    15 projects | dev.to | 28 Feb 2024
    Apache Druid: Focused on real-time analytics and interactive queries on large datasets. Druid is well-suited for high-performance applications in user-facing analytics, network monitoring, and business intelligence.
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    10 projects | dev.to | 10 Feb 2024
    Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine at answering user-initiated analytical queries. Using an OLAP database, you might efficiently query historical data to find the most-clicked products over the past month. In contrast with streaming databases, however, they may not be optimized for incremental computation, which makes it harder to keep results fresh. A query in a streaming database focuses on recent data, making it suitable for continuous monitoring: you can run queries like finding the top 10 best-selling products, where the "top 10" list might change in real time.
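
To picture the user-initiated OLAP query described above, here is a hedged sketch of the "most-clicked products over the past month" query run against Druid's SQL-over-HTTP endpoint. The "clicks" datasource, its columns, and the router address are assumptions for illustration; /druid/v2/sql on the router (default port 8888) is Druid's standard SQL API.

```python
# Sketch: an ad-hoc analytical query against Apache Druid's SQL HTTP API.
# Datasource and column names are placeholders for illustration only.
import json
import urllib.request

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"  # Druid router's default port

sql = """
SELECT product_id, COUNT(*) AS clicks
FROM clicks
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' MONTH
GROUP BY product_id
ORDER BY clicks DESC
LIMIT 10
"""

req = urllib.request.Request(
    DRUID_SQL_URL,
    data=json.dumps({"query": sql}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # Druid returns one JSON object per result row by default.
    for row in json.load(resp):
        print(row)
```
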
  • Show HN: The simplest tiny analytics tool – storywise
    3 projects | news.ycombinator.com | 18 Jul 2023
    https://github.com/apache/druid

    It's always a question of tradeoffs.

    The awesome-selfhosted project has a nice list of open-source analytics projects. It's really good inspiration to dig into these projects and find out about the technology choices that other open-source tools in the space have made.

  • Analysing Github Stars - Extracting and analyzing data from Github using Apache NiFi®, Apache Kafka® and Apache Druid®
    8 projects | dev.to | 11 Jan 2023
    Spencer Kimball (now CEO at CockroachDB) wrote an interesting article on this topic in 2021 where they created spencerkimball/stargazers based on a Python script. So I started thinking: could I create a data pipeline using NiFi and Kafka (two OSS tools often used with Druid) to get the API data into Druid - and then use SQL to do the analytics? The answer was yes! And I have documented the outcome below. Here's my analytical pipeline for GitHub stars data using NiFi, Kafka and Druid.
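
A pipeline like the one above ultimately hinges on Druid consuming the Kafka topic. The sketch below submits a minimal Kafka ingestion supervisor spec to Druid over HTTP; the topic, datasource, columns, and broker address are illustrative placeholders, and a real spec for GitHub stars data would need the full schema plus tuning appropriate to the cluster.

```python
# Sketch: registering a Kafka ingestion supervisor with Druid so it starts
# consuming a topic. All names and addresses below are placeholders.
import json
import urllib.request

SUPERVISOR_URL = "http://localhost:8888/druid/indexer/v1/supervisor"  # via the router

spec = {
    "type": "kafka",
    "spec": {
        "dataSchema": {
            "dataSource": "github_stars",  # placeholder datasource name
            "timestampSpec": {"column": "starred_at", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["repo", "user"]},
            "granularitySpec": {"segmentGranularity": "day", "queryGranularity": "none"},
        },
        "ioConfig": {
            "topic": "github-stars",       # placeholder Kafka topic
            "inputFormat": {"type": "json"},
            "consumerProperties": {"bootstrap.servers": "localhost:9092"},
        },
        "tuningConfig": {"type": "kafka"},
    },
}

req = urllib.request.Request(
    SUPERVISOR_URL,
    data=json.dumps(spec).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # Druid replies with the supervisor id on success
```
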
  • Apache Druid® - an enterprise architect's overview
    1 project | dev.to | 15 Dec 2022
    Apache Druid is part of the modern data architecture. It uses a special data format designed for analytical workloads, using extreme parallelisation to get data in and get data out. A shared-nothing microservices architecture helps you build highly available, extreme-scale analytics features into your applications.
  • Real Time Data Infra Stack
    15 projects | dev.to | 4 Dec 2022
    Apache Druid
  • When you should use columnar databases and not Postgres, MySQL, or MongoDB
    5 projects | dev.to | 25 Oct 2022
    But then you realize there are other databases out there focused specifically on analytical use cases with lots of data and complex queries. Newcomers like ClickHouse, Pinot, and Druid (all open source) respond to a new class of problem: The need to develop applications using endpoints published on analytical queries that were previously confined only to the data warehouse and BI tools.
  • Druids by Datadog
    6 projects | news.ycombinator.com | 20 Sep 2022
    Datadog's product is a bit too close to Apache Druid to have named their design system so similarly.

    From https://druid.apache.org/ :

    > Druid unlocks new types of queries and workflows for clickstream, APM, supply chain, network telemetry, digital marketing, risk/fraud, and many other types of data. Druid is purpose built for rapid, ad-hoc queries on both real-time and historical data.

  • Mom at 54 is thinking about coding and a complete career shift. Thoughts?
    2 projects | /r/cscareerquestions | 18 Sep 2022
    Maybe rare for someone to be seeking their first coding job at that age. But plenty of us are in our 50s or older and still coding up a storm. And not necessarily ancient tech or anything. My current project exposes analytics data from Apache Druid and Cassandra via Go microservices hosted in K8s.
  • Building an arm64 container for Apache Druid for your Apple Silicon
    4 projects | dev.to | 8 Sep 2022
    Fortunately, it is super easy to build your own by leveraging the binary distribution and the existing docker.sh.

What are some alternatives?

When comparing cube.js and Druid you can also consider the following projects:

Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]

iced - A cross-platform GUI library for Rust, inspired by Elm

Elasticsearch - Free and Open, Distributed, RESTful Search Engine

Apache Cassandra - Mirror of Apache Cassandra

Redash - Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data.

Apache HBase - Apache HBase

Metabase - The simplest, fastest way to get business intelligence and analytics to everyone in your company :yum:

egui - egui: an easy-to-use immediate mode GUI in Rust that runs on both web and native

metriql - The metrics layer for your data. Join us at https://metriql.com/slack

Scylla - NoSQL data store using the seastar framework, compatible with Apache Cassandra

GoAccess - GoAccess is a real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser.