Apache Flink vs cube.js

Compare Apache Flink and cube.js and see how they differ.

                Apache Flink         cube.js
Mentions        9                    86
Stars           23,128               17,120
Stars growth    1.0%                 1.1%
Activity        9.9                  9.9
Latest commit   7 days ago           7 days ago
Language        Java                 Rust
License         Apache License 2.0   GNU General Public License v3.0 or later
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

Apache Flink

Posts with mentions or reviews of Apache Flink. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-15.
  • First 15 Open Source Advent projects
    16 projects | dev.to | 15 Dec 2023
    7. Apache Flink | GitHub | tutorial
  • Pyflink : Flink DataStream (KafkaSource) API to consume from Kafka
    1 project | /r/dataengineering | 13 May 2023
    Does anyone have a fully running PyFlink code snippet that reads from Kafka using the new Flink DataStream (KafkaSource) API and just prints the output to the console or writes it out to a file? Most of the examples, and the official Flink GitHub, are using the old API (FlinkKafkaConsumer).
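    A minimal sketch of what such a snippet can look like, assuming apache-flink >= 1.16 is installed with the Kafka connector JAR on the classpath and a broker reachable at localhost:9092; the topic and group id are placeholders:

```python
# Minimal PyFlink DataStream job using the new KafkaSource API
# (the replacement for the deprecated FlinkKafkaConsumer).
# Assumes: apache-flink >= 1.16, the flink-sql-connector-kafka JAR on
# the classpath, and a Kafka broker at localhost:9092. The topic name
# "input-topic" and group id "pyflink-demo" are placeholders.
from pyflink.common import WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.kafka import KafkaSource, KafkaOffsetsInitializer

env = StreamExecutionEnvironment.get_execution_environment()

source = (
    KafkaSource.builder()
    .set_bootstrap_servers("localhost:9092")
    .set_topics("input-topic")
    .set_group_id("pyflink-demo")
    .set_starting_offsets(KafkaOffsetsInitializer.earliest())
    .set_value_only_deserializer(SimpleStringSchema())
    .build()
)

stream = env.from_source(
    source, WatermarkStrategy.no_watermarks(), "kafka-source"
)
stream.print()  # write each consumed record to stdout
env.execute("kafka-to-console")
```

    To write to a file instead of printing, replace stream.print() with stream.sink_to(...) using a FileSink. Running this requires a live broker, so it is a sketch rather than a verified job.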
  • I keep getting build failure when I try to run mvn clean compile package
    2 projects | /r/AskProgramming | 8 Apr 2023
    I'm trying to use https://github.com/mauricioaniche/ck to analyze the CK metrics of https://github.com/apache/flink. I have the latest version of Java installed, and I have the latest version of Apache Maven installed too. My environment variables are set correctly, and I'm in the correct directory as well. However, when I run mvn clean compile package in PowerShell it always reports a build error. I've tried looking up the errors, but there are so many. https://imgur.com/a/Zk8Snsa I'm very new to programming in general, so any suggestions would be appreciated.
  • How do I determine what the dependencies are when I make pom.xml file?
    1 project | /r/AskProgramming | 7 Apr 2023
    Looking at the project on GitHub, it seems like they should have a pom in the root dir: https://github.com/apache/flink/blob/master/pom.xml
  • Akka is moving away from Open Source
    1 project | /r/scala | 7 Sep 2022
    Akka is used only as a possible RPC implementation, isn't it?
  • We Are Changing the License for Akka
    6 projects | news.ycombinator.com | 7 Sep 2022
  • DeWitt Clause, or Can You Benchmark %DATABASE% and Get Away With It
    21 projects | dev.to | 2 Jun 2022
    Apache Drill, Druid, Flink, Hive, Kafka, Spark
  • Computation reuse via fusion in Amazon Athena
    2 projects | news.ycombinator.com | 20 May 2022
    It took me some time to get a good grasp of the power of SQL; and it really kicked in when I learned about optimization rules. It's a program that you rewrite, just like an optimizing compiler would.

    You state what you want; you have different ways to fetch, match, and massage data; and you can search through this space to produce a physical plan. Ideally you use knowledge, such as table statistics, to weight the parts worth optimizing (much as Java's JIT detects hot spots).

    I find it fascinating to peer through database code to see what is going on. Lately, there have been new advances toward streaming databases, which bring a whole new design space. For example, now you optimize for the latency of individual new rows, as opposed to processing a dataset as one batch and optimizing its overall latency. Batch scanning benefits from better use of your CPU caches.

    And maybe you could have a hybrid system which reads history from a log and aggregates in a batched manner, and then switches to another execution plan when it reaches the end of the log.

    If you want a peek at that, here is Flink's set of rules [1], both generic and stream-specific. The names can be cryptic, but they usually give a good sense of what is going on. For example, PushFilterIntoTableSourceScanRule makes the WHERE clause apply as early as possible, to save some CPU/network bandwidth further down. PushPartitionIntoTableSourceScanRule tries to make a fan-out/shuffle happen as early as possible, so that parallelism can be exploited.

    [1] https://github.com/apache/flink/blob/5f8fb304fb5d68cdb0b3e3c...
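    The pushdown idea those rules implement can be illustrated outside any engine: applying a predicate inside the scan means every downstream operator sees fewer rows. A toy Python sketch (the counters stand in for CPU/network cost; this is not Flink code):

```python
# Toy illustration of filter pushdown. Both "plans" compute the same
# result; the pushed-down plan ships fewer rows past the scan stage,
# which is the saving rules like PushFilterIntoTableSourceScanRule buy.

rows = [{"id": i, "region": "EU" if i % 2 else "US"} for i in range(1000)]

def plan_filter_late(data):
    """Scan everything, ship every row downstream, filter there."""
    shipped, out = 0, []
    for r in data:
        shipped += 1          # every row crosses the operator boundary
        if r["region"] == "EU":
            out.append(r)
    return out, shipped

def plan_filter_pushed(data):
    """Apply the WHERE predicate inside the scan itself."""
    shipped, out = 0, []
    for r in data:
        if r["region"] == "EU":
            shipped += 1      # only matching rows cross the boundary
            out.append(r)
    return out, shipped

late, cost_late = plan_filter_late(rows)
pushed, cost_pushed = plan_filter_pushed(rows)
assert late == pushed         # identical result set
print(cost_late, cost_pushed) # 1000 vs 500 rows shipped
```

    The optimizer's job is to prove such rewrites preserve the result while searching for the cheapest physical plan.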

  • Avro SpecificRecord File Sink using apache flink is not compiling due to error incompatible types: FileSink<?> cannot be converted to SinkFunction<?>
    3 projects | /r/apacheflink | 14 Sep 2021
    [1]: https://mvnrepository.com/artifact/org.apache.avro/avro-maven-plugin/1.8.2 [2]: https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/sink/FileSink.java [3]: https://ci.apache.org/projects/flink/flink-docs-master/docs/connectors/datastream/file_sink/ [4]: https://github.com/apache/flink/blob/c81b831d5fe08d328251d91f4f255b1508a9feb4/flink-end-to-end-tests/flink-file-sink-test/src/main/java/FileSinkProgram.java [5]: https://github.com/rajcspsg/streaming-file-sink-demo

cube.js

Posts with mentions or reviews of cube.js. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-07.
  • MQL – Client and Server to query your DB in natural language
    2 projects | news.ycombinator.com | 7 Apr 2024
    I should have clarified. There's a large number of apps that are:

    1. taking info strictly from SQL (e.g. information_schema, query history)

    2. taking a user input / question

    3. writing SQL to answer that question

    An app like this is what I call "text-to-sql". Totally agree a better system would pull in additional documentation (which is what we're doing), but I'd no longer consider it "text-to-sql". In our case, we're not even directly writing SQL, but rather generating semantic layer queries (i.e. https://cube.dev/).
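    For context, a semantic layer query in Cube's JSON query format looks roughly like this (a sketch; the cube and member names are hypothetical, not taken from the post):

```json
{
  "measures": ["orders.count", "orders.total_amount"],
  "dimensions": ["orders.status"],
  "timeDimensions": [
    {
      "dimension": "orders.created_at",
      "granularity": "month",
      "dateRange": "last 6 months"
    }
  ],
  "limit": 100
}
```

    Cube compiles such a query into SQL against the underlying warehouse, so the layer, not the LLM, owns the definition of a measure like total_amount.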

  • Show HN: Spice.ai – materialize, accelerate, and query SQL data from any source
    5 projects | news.ycombinator.com | 28 Mar 2024
    I'm not too familiar with https://cube.dev/ - but my initial impression is they are focused more on providing APIs backed by SQL. They have a SQL API that emulates the PostgreSQL wire protocol, whereas Spice implements Arrow and Flight SQL natively. Their pre-aggregations are a similar concept to Spice's data accelerators. It also looks like they have their own query language, whereas Spice is native SQL as well.
  • Show HN: Delphi – Build customer-facing AI data apps (that work)
    1 project | news.ycombinator.com | 22 Mar 2024
    Hey HN!

    Over the past year, my co-founder David and I have been building Delphi to let developers create amazing customer-facing AI experiences on top of their data. We're excited to share it with you.

    David and I have spent our careers leading data and engineering teams. After ChatGPT got popular, we saw a rush of "chat with your data" startups launch. Most of these are "text-to-SQL" and use an LLM like GPT-4 to generate SQL queries that run directly against a data warehouse or database.

    However, the general perception now is that most of them make for nice demos but are hard to make work in the real world. The reason is data complexity. Even smart LLMs find it difficult to reason about messy databases with hundreds of tables, thousands of columns, and complex schemas that have been built up piecemeal for years. Text-to-SQL can be a fine dev tool for data scientists and analysts, but we've seen many organizations hesitate to deploy it to end users, who never know if the answer they get one day will be the same the next.

    David and I found a better way. From our time in the data engineering world, we were familiar with a type of tool called "semantic layers." Think of them like an ORM for analytics. Basically, they sit between databases (or data warehouses) and data consumers (data viz tools like Tableau or APIs) and map real-world concepts (entities like "customers" and metrics like "sales") to database tables and calculations.

    Semantic layers are often used for "embedded analytics" (e.g. when you're building customer-facing dashboards into your application) but are increasingly also used for traditional business intelligence. Cube (https://cube.dev) is a prominent example, and dbt has also recently released one. They're useful because with a semantic layer, the consumer doesn't have to think about questions like "how do we define revenue?" when running a query. They just get consistent, governed data definitions across their business.

    We realized that semantic layers could be just as useful for LLMs as for humans. After all, LLMs are built on natural language, so a system that deterministically translates natural language concepts into code has obvious power when you're working with LLMs. With a semantic layer, we've found that companies can get AI to answer much more complex questions than without it.

    For a year now, we've been building Delphi to do just that. We've gone through a few iterations/pivots (initially we were focused on building a Slack bot for internal analytics) and are now seeing our developer-first approach resonate. We're being used to power customer-facing fintech applications, recruiting software, and more.

    How do you use Delphi? The first step is connecting your database; then, we build your semantic layer on top of it. Right now we do this manually, but we're moving more and more of it over to AI. Once that's done, we have 3 main ways of using Delphi: 1) white-labeling our AI analytics platform and providing it to your customers; 2) a streaming REST API and SDKs; and 3) React components to easily drop a "chat with your data" experience into your app.

    If this is interesting to you, drop us a line at [email protected] or sign up at our website (https://delphihq.com) to get in touch. Thanks for reading! Would love to hear any thoughts and feedback.

  • Apache Superset
    14 projects | news.ycombinator.com | 26 Feb 2024
    We use https://cube.dev/ as an intermediate layer between the data warehouse and Superset (and other "terminal" BI apps like report generators). You define your schema (metrics, dimensions, joins, calculated metrics, etc.) in Cube and then access it from any tool that can connect to a SQL database.
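    The schema the comment describes lives in Cube's data model files; a hedged YAML sketch (the table and member names are hypothetical):

```yaml
# Hypothetical Cube data model: maps a warehouse table to named
# measures and dimensions that downstream tools query by name.
cubes:
  - name: orders
    sql_table: public.orders
    measures:
      - name: count
        type: count
      - name: total_amount
        type: sum
        sql: amount
    dimensions:
      - name: status
        type: string
        sql: status
      - name: created_at
        type: time
        sql: created_at
```

    Any SQL-speaking tool can then query these members through Cube's SQL API instead of re-deriving the logic per tool.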
  • Need to reduce costs - which service to use?
    1 project | /r/dataengineering | 5 Dec 2023
    Also check out cube.dev. They can provide the semantic layer and cache it, so you are not hitting Snowflake all the time.
  • Anyone with experience moving to Cube.dev + Metabase/Superset from Looker ?
    1 project | /r/BusinessIntelligence | 3 Dec 2023
    We need metrics to live in source control with reviews. Metabase doesn't have a git integration for metrics, which is why we are convinced to use cube.dev as a semantic layer.
  • GigaOm Sonar Report Reviews Semantic Layer and Metric Store Vendors
    1 project | news.ycombinator.com | 8 Sep 2023
    https://github.com/cube-js/cube comes out very well at the end as a promising open source system, getting rather close to the bullseye. Would love to know more & hear people's experience with it.
  • Show HN: VulcanSQL – Serve high-concurrency, low-latency API from OLAP
    4 projects | news.ycombinator.com | 5 Jul 2023
    How is this different from something like https://cube.dev/
  • Best Headless Chart Library?
    2 projects | /r/reactjs | 29 May 2023
    Have a look at cube.js.
  • Advice / Questions on Modern Data Stack
    1 project | /r/dataengineering | 20 May 2023
    For now, I've been thinking of using self-hosted Rudderstack both for ingestion and reverse ETL, cube.dev as the abstraction layer for building webapps and providing caching for the BI layer, and dbt for transformations. But I have doubts about the following elements:

What are some alternatives?

When comparing Apache Flink and cube.js you can also consider the following projects:

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]

Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch; a modular and tiny C++ library for running math code; and a Java-based math library on top of the core C++ library. Also includes SameDiff, a PyTorch/TensorFlow-like library for running deep learning using automatic differentiation.

Elasticsearch - Free and Open, Distributed, RESTful Search Engine

Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing

Druid - Apache Druid: a high performance real-time analytics database.

H2O - Sparkling Water provides H2O functionality inside a Spark cluster

Redash - Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data.

Scio - A Scala API for Apache Beam and Google Cloud Dataflow.

Metabase - The simplest, fastest way to get business intelligence and analytics to everyone in your company :yum:

Apache Kafka - Mirror of Apache Kafka

metriql - The metrics layer for your data. Join us at https://metriql.com/slack