Apache Flink VS ClickHouse

Compare Apache Flink vs ClickHouse and see what their differences are.

                   Apache Flink         ClickHouse
Mentions           9                    208
Stars              23,158               34,153
Stars growth       1.2%                 2.6%
Activity           9.9                  10.0
Latest commit      5 days ago           5 days ago
Language           Java                 C++
License            Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Apache Flink

Posts with mentions or reviews of Apache Flink. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-15.
  • First 15 Open Source Advent projects
    16 projects | dev.to | 15 Dec 2023
    7. Apache Flink | Github | tutorial
  • Pyflink : Flink DataStream (KafkaSource) API to consume from Kafka
    1 project | /r/dataengineering | 13 May 2023
    Does anyone have a fully running PyFlink code snippet to read from Kafka using the new Flink DataStream (KafkaSource) API and just print the output to the console or write it out to a file? Most of the examples and the official Flink GitHub are using the old API (FlinkKafkaConsumer).
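
    For reference, a minimal PyFlink sketch of the newer KafkaSource API; the broker address, topic, group id, and connector jar path are placeholders, and the Kafka connector jar has to be reachable by the job:

        from pyflink.common.serialization import SimpleStringSchema
        from pyflink.common.watermark_strategy import WatermarkStrategy
        from pyflink.datastream import StreamExecutionEnvironment
        from pyflink.datastream.connectors.kafka import KafkaSource, KafkaOffsetsInitializer

        env = StreamExecutionEnvironment.get_execution_environment()
        # The Kafka connector jar must be on the classpath, e.g.:
        # env.add_jars("file:///path/to/flink-sql-connector-kafka-<version>.jar")

        source = (KafkaSource.builder()
                  .set_bootstrap_servers("localhost:9092")   # placeholder broker
                  .set_topics("input-topic")                 # placeholder topic
                  .set_group_id("pyflink-demo")
                  .set_starting_offsets(KafkaOffsetsInitializer.earliest())
                  .set_value_only_deserializer(SimpleStringSchema())
                  .build())

        # Print each record to the console; swap in a file sink to write to a file instead.
        env.from_source(source, WatermarkStrategy.no_watermarks(), "kafka-source").print()
        env.execute("pyflink-kafka-source-demo")
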
  • I keep getting build failure when I try to run mvn clean compile package
    2 projects | /r/AskProgramming | 8 Apr 2023
    I'm trying to use https://github.com/mauricioaniche/ck to analyze the CK metrics of https://github.com/apache/flink. I have the latest version of Java downloaded, and I have the latest version of Apache Maven downloaded too. My environment variables are set correctly, and I'm in the correct directory as well. However, when I run mvn clean compile package in PowerShell, it always fails with a build error. I've tried looking up the errors, but there are so many. https://imgur.com/a/Zk8Snsa I'm very new to programming in general, so any suggestions would be appreciated.
  • How do I determine what the dependencies are when I make pom.xml file?
    1 project | /r/AskProgramming | 7 Apr 2023
    Looking at the project on GitHub, it seems like they should have a pom in the root dir: https://github.com/apache/flink/blob/master/pom.xml
  • Akka is moving away from Open Source
    1 project | /r/scala | 7 Sep 2022
    Akka is used only as a possible RPC implementation, isn't it?
  • We Are Changing the License for Akka
    6 projects | news.ycombinator.com | 7 Sep 2022
  • DeWitt Clause, or Can You Benchmark %DATABASE% and Get Away With It
    21 projects | dev.to | 2 Jun 2022
    Apache Drill, Druid, Flink, Hive, Kafka, Spark
  • Computation reuse via fusion in Amazon Athena
    2 projects | news.ycombinator.com | 20 May 2022
    It took me some time to get a good grasp of the power of SQL, and it really kicked in when I learned about optimization rules. A query is a program that you rewrite, just like an optimizing compiler would.

    You state what you want; you have different ways to fetch, match, and massage data; and you can search through this space to produce a physical plan. Hopefully you can use knowledge such as table statistics to weight the parts to be optimized, much as Java's JIT detects hot spots.

    I find it fascinating to peer through database code to see what is going on. Lately, there have been new advances towards streaming databases, which bring a whole new design space. For example, now you have the latency of individual new rows to optimize for, as opposed to batching the whole dataset and optimizing its overall latency. Batch scanning benefits from better use of your CPU caches.

    And maybe you could have a hybrid system which reads history from a log and aggregates in a batched manner, and then switches to another execution plan when it reaches the end of the log.

    If you want to have a peek at that, here is Flink's set of rules [1], both generic and stream-specific ones. The names can be cryptic, but they usually give a good sense of what is going on. For example, PushFilterIntoTableSourceScanRule makes the WHERE clause apply as early as possible, to save some CPU/network bandwidth further down. PushPartitionIntoTableSourceScanRule tries to make a fan-out/shuffle happen as early as possible, so that parallelism can be exploited.

    [1] https://github.com/apache/flink/blob/5f8fb304fb5d68cdb0b3e3c...
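
    To see which of those rules actually fired on a given query, PyFlink's explain() prints the syntax tree, the optimized physical plan, and the execution plan. A minimal sketch; the datagen table below is a stand-in, and whether the filter really lands in the scan depends on the connector implementing filter pushdown:

        from pyflink.table import EnvironmentSettings, TableEnvironment

        t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

        # Stand-in source table; PushFilterIntoTableSourceScanRule acts on sources
        # that implement SupportsFilterPushDown.
        t_env.execute_sql("""
            CREATE TABLE orders (
                id BIGINT,
                amount DOUBLE,
                region STRING
            ) WITH ('connector' = 'datagen')
        """)

        # explain() shows where the WHERE clause ended up in the optimized plan.
        print(t_env.sql_query(
            "SELECT id, amount FROM orders WHERE region = 'EU'"
        ).explain())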

  • Avro SpecificRecord File Sink using apache flink is not compiling due to error incompatible types: FileSink<?> cannot be converted to SinkFunction<?>
    3 projects | /r/apacheflink | 14 Sep 2021
    [1]: https://mvnrepository.com/artifact/org.apache.avro/avro-maven-plugin/1.8.2
    [2]: https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/sink/FileSink.java
    [3]: https://ci.apache.org/projects/flink/flink-docs-master/docs/connectors/datastream/file_sink/
    [4]: https://github.com/apache/flink/blob/c81b831d5fe08d328251d91f4f255b1508a9feb4/flink-end-to-end-tests/flink-file-sink-test/src/main/java/FileSinkProgram.java
    [5]: https://github.com/rajcspsg/streaming-file-sink-demo

ClickHouse

Posts with mentions or reviews of ClickHouse. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-24.
  • We Built a 19 PiB Logging Platform with ClickHouse and Saved Millions
    1 project | news.ycombinator.com | 2 Apr 2024
    Yes, we are working on it! :) Taking some of the learnings from the current experimental JSON Object datatype, we are now working on what will become the production-ready implementation. Details here: https://github.com/ClickHouse/ClickHouse/issues/54864

    Variant datatype is already available as experimental in 24.1, Dynamic datatype is WIP (PR almost ready), and JSON datatype is next up. Check out the latest comment on that issue with how the Dynamic datatype will work: https://github.com/ClickHouse/ClickHouse/issues/54864#issuec...

  • Build time is a collective responsibility
    2 projects | news.ycombinator.com | 24 Mar 2024
    In our repository, I've set up a few hard limits: each translation unit cannot spend more than a certain amount of memory and a certain amount of CPU time on compilation, and the compiled binary must be no larger than a certain size.

    When these limits are reached, the CI stops working, and we have to remove the bloat: https://github.com/ClickHouse/ClickHouse/issues/61121

    Although these limits are too generous as of today: for example, the maximum CPU time to compile a translation unit is set to 1000 seconds, and the memory limit is 5 GB, which is ridiculously high.

  • Fair Benchmarking Considered Difficult (2018) [pdf]
    2 projects | news.ycombinator.com | 10 Mar 2024
    I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench

    It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough, otherwise it will not be realistic to include a large number of participants.

    I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398

  • How to choose the right type of database
    15 projects | dev.to | 28 Feb 2024
    ClickHouse: A fast open-source column-oriented database management system. ClickHouse is designed for real-time analytics on large datasets and excels in high-speed data insertion and querying, making it ideal for real-time monitoring and reporting.
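
    As a rough illustration of that insert-then-aggregate pattern, here is a sketch using the clickhouse-connect Python client against a local server; the table name, schema, and row counts are invented for the example:

        from datetime import datetime, timedelta
        import clickhouse_connect  # pip install clickhouse-connect

        client = clickhouse_connect.get_client(host="localhost")  # assumes a local server

        client.command("""
            CREATE TABLE IF NOT EXISTS page_views (
                ts DateTime,
                url String,
                user_id UInt64
            ) ENGINE = MergeTree ORDER BY (url, ts)
        """)

        # ClickHouse favors large batched inserts over many small ones.
        now = datetime.utcnow()
        rows = [(now - timedelta(seconds=i), f"/page/{i % 10}", i) for i in range(100_000)]
        client.insert("page_views", rows, column_names=["ts", "url", "user_id"])

        # Column-oriented storage makes aggregations like this fast.
        top = client.query(
            "SELECT url, count() AS views FROM page_views GROUP BY url ORDER BY views DESC LIMIT 10"
        )
        print(top.result_rows)
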
  • Writing UDF for Clickhouse using Golang
    2 projects | dev.to | 27 Feb 2024
    Today we're going to create a UDF (user-defined function) in Golang that can be run inside a ClickHouse query. This function will parse a UUID v1 and return its timestamp, since ClickHouse doesn't have this function for now. It is inspired by the Python version with the TabSeparated delimiter (since it's the easiest to parse): a UDF in ClickHouse reads input line by line (each row is a line, and each tab-separated text is a column/cell value):
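
    For comparison, a sketch of the Python flavor the post mentions: an executable UDF that reads rows from stdin and prints the UUID v1 timestamp for each one. The registration of the script in ClickHouse's *_function.xml config is omitted here, and the constant is the offset between the UUID epoch (1582-10-15) and the Unix epoch:

        #!/usr/bin/env python3
        import sys
        import uuid

        # 100-nanosecond intervals between 1582-10-15 (UUID v1 epoch) and 1970-01-01 (Unix epoch).
        UUID_EPOCH_OFFSET = 0x01B21DD213814000

        for line in sys.stdin:
            value = line.strip()
            # uuid.UUID(...).time is the 60-bit timestamp in 100-ns steps since the UUID epoch.
            unix_seconds = (uuid.UUID(value).time - UUID_EPOCH_OFFSET) / 10_000_000
            print(unix_seconds)
            sys.stdout.flush()  # emit each result row as soon as it is ready
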
  • The 2024 Web Hosting Report
    37 projects | dev.to | 20 Feb 2024
    For the third, examples here might be analytics plugins in specialized databases like Clickhouse, data-transformations in places like your ETL pipeline using Airflow or Fivetran, or special integrations in your authentication workflow with Auth0 hooks and rules.
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    10 projects | dev.to | 10 Feb 2024
    Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine in addressing user-initiated analytical queries. You might write a query to analyze historical data to find the most-clicked products over the past month efficiently using OLAP databases. When contrasting with streaming databases, they may not be optimized for incremental computation, leading to challenges in maintaining the freshness of results. The query in the streaming database focuses on recent data, making it suitable for continuous monitoring. Using streaming databases, you can run queries like finding the top 10 sold products where the “top 10 product list” might change in real-time.
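
    To make the incremental side of that contrast concrete, here is a small PyFlink sketch (the datagen source and column names are invented for the example): the GROUP BY result is maintained continuously as rows arrive instead of being recomputed over history on each request.

        from pyflink.table import EnvironmentSettings, TableEnvironment

        t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

        # Synthetic, never-ending source standing in for a stream of sales events.
        t_env.execute_sql("""
            CREATE TABLE sales (
                product STRING,
                quantity INT
            ) WITH ('connector' = 'datagen', 'rows-per-second' = '10')
        """)

        # In streaming mode this aggregate is updated incrementally: each new row
        # emits an update for the affected product instead of triggering a rescan.
        # Runs (and keeps printing updates) until cancelled.
        t_env.execute_sql(
            "SELECT product, SUM(quantity) AS sold FROM sales GROUP BY product"
        ).print()
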
  • Proton, a fast and lightweight alternative to Apache Flink
    7 projects | news.ycombinator.com | 30 Jan 2024
    Proton is a lightweight stream processing "add-on" for ClickHouse, and we are making these delta parts as standalone as possible. Meanwhile, contributing back to the ClickHouse community can also help a lot.

    Please check this PR from the proton team: https://github.com/ClickHouse/ClickHouse/pull/54870

  • 1 billion rows challenge in PostgreSQL and ClickHouse
    1 project | dev.to | 18 Jan 2024
    curl https://clickhouse.com/ | sh
  • We Executed a Critical Supply Chain Attack on PyTorch
    6 projects | news.ycombinator.com | 14 Jan 2024
    But I continue to find garbage in some of our CI scripts.

    Here is an example: https://github.com/ClickHouse/ClickHouse/pull/58794/files

    The right way is to:

    - always pin versions of all packages;

What are some alternatives?

When comparing Apache Flink and ClickHouse you can also consider the following projects:

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

loki - Like Prometheus, but for logs.

Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch, a modular and tiny C++ library for running math code, and a Java-based math library on top of the core C++ library. Also includes SameDiff: a PyTorch/TensorFlow-like library for running deep learning using automatic differentiation.

duckdb - DuckDB is an in-process SQL OLAP Database Management System

Apache Spark - A unified analytics engine for large-scale data processing

H2O - Sparkling Water provides H2O functionality inside a Spark cluster

VictoriaMetrics - VictoriaMetrics: fast, cost-effective monitoring solution and time series database

Scio - A Scala API for Apache Beam and Google Cloud Dataflow.

TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.

Apache Kafka - Mirror of Apache Kafka

datafusion - Apache DataFusion SQL Query Engine