ClickHouse VS TimescaleDB

Compare ClickHouse and TimescaleDB and see how they differ.

TimescaleDB

An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension. (by timescale)
             ClickHouse           TimescaleDB
Mentions     208                  82
Stars        34,054               16,445
Growth       2.3%                 1.6%
Activity     10.0                 9.8
Last commit  5 days ago           4 days ago
Language     C++                  C
License      Apache License 2.0   GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ClickHouse

Posts with mentions or reviews of ClickHouse. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-24.
  • We Built a 19 PiB Logging Platform with ClickHouse and Saved Millions
    1 project | news.ycombinator.com | 2 Apr 2024
    Yes, we are working on it! :) Taking some of the learnings from current experimental JSON Object datatype, we are now working on what will become the production-ready implementation. Details here: https://github.com/ClickHouse/ClickHouse/issues/54864

    Variant datatype is already available as experimental in 24.1, Dynamic datatype is WIP (PR almost ready), and JSON datatype is next up. Check out the latest comment on that issue with how the Dynamic datatype will work: https://github.com/ClickHouse/ClickHouse/issues/54864#issuec...

  • Build time is a collective responsibility
    2 projects | news.ycombinator.com | 24 Mar 2024
    In our repository, I've set up a few hard limits: each translation unit cannot spend more than a certain amount of memory for compilation and a certain amount of CPU time, and the compiled binary has to be not larger than a certain size.

    When these limits are reached, the CI stops working, and we have to remove the bloat: https://github.com/ClickHouse/ClickHouse/issues/61121

    Although these limits are too generous as of today: for example, the maximum CPU time to compile a translation unit is set to 1000 seconds, and the memory limit is 5 GB, which is ridiculously high.
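
    A minimal sketch of how such a per-translation-unit guard could be enforced. This is an illustrative wrapper, not ClickHouse's actual CI script: the limits are the ones quoted above, and the compile command is whatever the build system would pass in.

      // guard.go: hypothetical per-translation-unit resource guard (Linux).
      // Usage: go run guard.go clang++ -c Foo.cpp -o Foo.o
      package main

      import (
          "fmt"
          "os"
          "os/exec"
          "syscall"
          "time"
      )

      const (
          maxCPU = 1000 * time.Second // CPU budget per translation unit (from the post)
          maxRSS = 5 << 30            // 5 GB peak memory budget (from the post)
      )

      func main() {
          if len(os.Args) < 2 {
              fmt.Fprintln(os.Stderr, "usage: guard <compile command...>")
              os.Exit(2)
          }
          cmd := exec.Command(os.Args[1], os.Args[2:]...)
          cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
          if err := cmd.Run(); err != nil {
              fmt.Fprintln(os.Stderr, "compile failed:", err)
              os.Exit(1)
          }
          st := cmd.ProcessState
          cpu := st.UserTime() + st.SystemTime()
          // ru_maxrss is reported in KiB on Linux.
          rss := st.SysUsage().(*syscall.Rusage).Maxrss * 1024
          if cpu > maxCPU || rss > maxRSS {
              fmt.Fprintf(os.Stderr, "over budget: cpu=%v peakRSS=%d bytes\n", cpu, rss)
              os.Exit(1)
          }
      }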

  • Fair Benchmarking Considered Difficult (2018) [pdf]
    2 projects | news.ycombinator.com | 10 Mar 2024
    I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench

    It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough, otherwise it will not be realistic to include a large number of participants.

    I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398
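
    As a toy illustration of the reproducibility point (this is not ClickBench itself; the query, host, and port are assumptions), the sketch below times one query over several runs against ClickHouse's HTTP interface and reports every run rather than a single cherry-picked number.

      // bench.go: minimal repeated-run query timer against ClickHouse's
      // HTTP interface on port 8123. Host, port, and query are assumed.
      package main

      import (
          "fmt"
          "io"
          "net/http"
          "strings"
          "time"
      )

      func main() {
          query := "SELECT count() FROM numbers(100000000) WHERE number % 7 = 0"
          const runs = 5
          for i := 1; i <= runs; i++ {
              start := time.Now()
              resp, err := http.Post("http://localhost:8123/", "text/plain", strings.NewReader(query))
              if err != nil {
                  fmt.Println("request failed:", err)
                  return
              }
              io.Copy(io.Discard, resp.Body) // drain so the timing covers the full result
              resp.Body.Close()
              // Report every run (cold and warm), not just the fastest.
              fmt.Printf("run %d: %v\n", i, time.Since(start))
          }
      }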

  • How to choose the right type of database
    15 projects | dev.to | 28 Feb 2024
    ClickHouse: A fast open-source column-oriented database management system. ClickHouse is designed for real-time analytics on large datasets and excels in high-speed data insertion and querying, making it ideal for real-time monitoring and reporting.
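    As a hedged illustration of the high-speed insertion side, the sketch below batches rows into a single INSERT over ClickHouse's HTTP interface; the events table, its schema, and the localhost endpoint are assumptions, not something from the post above.

      // insert.go: batch-insert rows into ClickHouse over HTTP in TSV format.
      // Assumes a table events(ts DateTime, name String, value UInt32) and a
      // local server; both are illustrative.
      package main

      import (
          "fmt"
          "net/http"
          "strings"
          "time"
      )

      func main() {
          var b strings.Builder
          now := time.Now().UTC().Format("2006-01-02 15:04:05")
          // Build one large batch instead of many single-row inserts;
          // ClickHouse performs far better with big batches.
          for i := 0; i < 10000; i++ {
              fmt.Fprintf(&b, "%s\tclick\t%d\n", now, i)
          }
          url := "http://localhost:8123/?query=INSERT%20INTO%20events%20FORMAT%20TabSeparated"
          resp, err := http.Post(url, "text/plain", strings.NewReader(b.String()))
          if err != nil {
              fmt.Println("insert failed:", err)
              return
          }
          defer resp.Body.Close()
          fmt.Println("status:", resp.Status)
      }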
  • Writing UDF for Clickhouse using Golang
    2 projects | dev.to | 27 Feb 2024
    Today we're going to create a UDF (user-defined function) in Golang that can be called from a ClickHouse query. The function will parse a UUID v1 and return its embedded timestamp, since ClickHouse doesn't have this function for now. Inspired by the Python version that uses the TabSeparated delimiter (since it's the easiest to parse): a UDF in ClickHouse reads its input line by line (each line is one row, and each tab-separated field is one column/cell value).
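    A minimal sketch of the Go side of such an executable UDF. The UUID v1 bit layout and epoch offset are standard; the single-column input and the per-row flush are assumptions about how the function is wired up in the ClickHouse UDF configuration.

      // udf_uuid_v1_ts.go: sketch of an executable UDF using the TabSeparated
      // protocol. Each stdin line is one row holding a UUID v1 string; each
      // stdout line is the extracted timestamp for that row.
      package main

      import (
          "bufio"
          "fmt"
          "os"
          "strconv"
          "strings"
          "time"
      )

      // Seconds between the UUID v1 epoch (1582-10-15) and the Unix epoch.
      const gregorianToUnix = 12219292800

      func uuidV1Time(s string) (time.Time, error) {
          parts := strings.Split(s, "-")
          if len(parts) != 5 {
              return time.Time{}, fmt.Errorf("not a UUID: %q", s)
          }
          low, err1 := strconv.ParseUint(parts[0], 16, 64)
          mid, err2 := strconv.ParseUint(parts[1], 16, 64)
          hi, err3 := strconv.ParseUint(parts[2], 16, 64)
          if err1 != nil || err2 != nil || err3 != nil {
              return time.Time{}, fmt.Errorf("bad UUID fields: %q", s)
          }
          // 60-bit timestamp in 100 ns ticks since 1582-10-15 (version bits masked off).
          ticks := (hi&0x0fff)<<48 | mid<<32 | low
          secs := int64(ticks/10000000) - gregorianToUnix
          nanos := int64(ticks%10000000) * 100
          return time.Unix(secs, nanos).UTC(), nil
      }

      func main() {
          in := bufio.NewScanner(os.Stdin)
          out := bufio.NewWriter(os.Stdout)
          defer out.Flush()
          for in.Scan() {
              col := strings.TrimSpace(in.Text()) // one column per row in this sketch
              t, err := uuidV1Time(col)
              if err != nil {
                  fmt.Fprintln(out, "") // keep one output row per input row
              } else {
                  fmt.Fprintln(out, t.Format("2006-01-02 15:04:05"))
              }
              out.Flush() // flush per row so ClickHouse sees results promptly
          }
      }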
  • The 2024 Web Hosting Report
    37 projects | dev.to | 20 Feb 2024
    For the third, examples here might be analytics plugins in specialized databases like Clickhouse, data-transformations in places like your ETL pipeline using Airflow or Fivetran, or special integrations in your authentication workflow with Auth0 hooks and rules.
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    10 projects | dev.to | 10 Feb 2024
    Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine at user-initiated analytical queries: you might write a query over historical data to find the most-clicked products over the past month, and an OLAP database will answer it efficiently. Compared with streaming databases, though, they may not be optimized for incremental computation, which makes it harder to keep results fresh. A query in a streaming database focuses on recent data, making it suitable for continuous monitoring: you can run queries such as finding the top 10 best-selling products, where the "top 10" list may change in real time.
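    For instance, the "most-clicked products over the past month" query mentioned above boils down to a plain batch aggregate. Here is a sketch that sends it over ClickHouse's HTTP interface; the clicks table, its columns, and the localhost endpoint are assumptions.

      // topn.go: one-shot OLAP-style aggregate for the top 10 most-clicked
      // products over the past month. Assumes a table clicks(ts DateTime,
      // product_id String) on a local server; both are illustrative.
      package main

      import (
          "fmt"
          "io"
          "net/http"
          "os"
          "strings"
      )

      func main() {
          query := `
              SELECT product_id, count() AS clicks
              FROM clicks
              WHERE ts >= now() - INTERVAL 1 MONTH
              GROUP BY product_id
              ORDER BY clicks DESC
              LIMIT 10`
          resp, err := http.Post("http://localhost:8123/", "text/plain", strings.NewReader(query))
          if err != nil {
              fmt.Println("query failed:", err)
              return
          }
          defer resp.Body.Close()
          io.Copy(os.Stdout, resp.Body) // results arrive in TabSeparated format by default
      }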
  • Proton, a fast and lightweight alternative to Apache Flink
    7 projects | news.ycombinator.com | 30 Jan 2024
    Proton is a lightweight stream-processing "add-on" for ClickHouse, and we are making these delta parts as standalone as possible. Meanwhile, contributing back to the ClickHouse community can also help a lot.

    Please check this PR from the proton team: https://github.com/ClickHouse/ClickHouse/pull/54870

  • 1 billion rows challenge in PostgreSQL and ClickHouse
    1 project | dev.to | 18 Jan 2024
    curl https://clickhouse.com/ | sh
  • We Executed a Critical Supply Chain Attack on PyTorch
    6 projects | news.ycombinator.com | 14 Jan 2024
    But I continue to find garbage in some of our CI scripts.

    Here is an example: https://github.com/ClickHouse/ClickHouse/pull/58794/files

    The right way is to:

    - always pin versions of all packages;

TimescaleDB

Posts with mentions or reviews of TimescaleDB. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-11.
  • TimescaleDB: An open-source time-series SQL database
    1 project | news.ycombinator.com | 6 Feb 2024
  • Google Cloud Spanner is now half the cost of Amazon DynamoDB
    2 projects | news.ycombinator.com | 11 Oct 2023
    Don't forget PostgreSQL extensions. For something like a chat log, TimescaleDB (https://www.timescale.com/) can be surprisingly efficient. It will handle partitioning for you, with additional features like data reordering, compression, and retention policies.
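    A sketch of what those features look like in practice, driving TimescaleDB's documented SQL functions from Go via database/sql; the chat_log schema, the lib/pq driver choice, and the connection string are assumptions.

      // timescale_setup.go: turn a plain table into a hypertable and attach
      // compression and retention policies. Schema and DSN are illustrative.
      package main

      import (
          "database/sql"
          "log"

          _ "github.com/lib/pq" // PostgreSQL driver (one possible choice)
      )

      func main() {
          db, err := sql.Open("postgres", "postgres://user:pass@localhost/chat?sslmode=disable")
          if err != nil {
              log.Fatal(err)
          }
          defer db.Close()

          stmts := []string{
              `CREATE TABLE IF NOT EXISTS chat_log (
                  ts      TIMESTAMPTZ NOT NULL,
                  room    TEXT,
                  message TEXT
              )`,
              // Partition by time; TimescaleDB manages the chunks.
              `SELECT create_hypertable('chat_log', 'ts', if_not_exists => TRUE)`,
              // Compress chunks older than 7 days, segmented by room.
              `ALTER TABLE chat_log SET (timescaledb.compress, timescaledb.compress_segmentby = 'room')`,
              `SELECT add_compression_policy('chat_log', INTERVAL '7 days')`,
              // Drop data older than 90 days automatically.
              `SELECT add_retention_policy('chat_log', INTERVAL '90 days')`,
          }
          for _, s := range stmts {
              if _, err := db.Exec(s); err != nil {
                  log.Fatalf("%s: %v", s, err)
              }
          }
      }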
  • How to setup Postgres master-master cluster.
    1 project | /r/sysadmin | 5 Sep 2023
    Offboard it to Postgres specialists like https://www.timescale.com/
  • How to Choose the Right MQTT Data Storage for Your Next Project
    8 projects | dev.to | 23 Jul 2023
    TimescaleDB: an extension of PostgreSQL that adds time-series capabilities to the relational database model. It provides scalability and performance optimizations for handling large volumes of time-stamped data while maintaining the flexibility of a relational database.
  • Why does the presence of a large write-only table in a PostgreSQL database cause severe performance degradation?
    1 project | /r/PostgreSQL | 2 Jul 2023
    Have some experience with https://www.timescale.com in this context
  • Opinions and Suggestions for PostgreSQL Extension under Development
    3 projects | /r/PostgreSQL | 29 May 2023
    What about getting in touch with commercial organisations that have products/services based on PostgreSQL? For example Timescale, EDB, and Citus Data, or really any hosting provider that offers a managed PostgreSQL service.
  • I have to do about a million inserts on a table every day that is also under very frequent reads. How should I do that?
    1 project | /r/PostgreSQL | 20 May 2023
    There is Timescale.
  • Ask HN: It's 2023, how do you choose between MySQL and Postgres?
    7 projects | news.ycombinator.com | 11 May 2023
    Friends don't let their friends choose Mysql :)

    A super long time ago (decades), when I was using Oracle regularly, I had to decide which way to go. Although MySQL had the mindshare then, I thought Postgres was more similar to Oracle, more standards-compliant, and more of a real enterprise-grade DB. The rumor was also that Postgres was heavier than MySQL. Too many horror stories of lost data (MyISAM), broken transactions (MyISAM lacks transactional integrity), and a really long list of MySQL gotchas influenced me.

    In time I found out that I had underestimated one of the most important attributes of Postgres, and a huge strength over MySQL: the power of community. Postgres has a really superb community, on Libera Chat and elsewhere, that is very willing to help out, which I think gives Postgres a huge advantage over MySQL. RhodiumToad [Andrew Gierth] https://github.com/RhodiumToad & davidfetter [David Fetter] https://www.linkedin.com/in/davidfetter are incredibly helpful folks.

    I don't know whether Postgres' licensing made a huge difference, but my perception is that there are a ton of third-party products based on Postgres, customized to specific DB needs, because of the more liberal PG license, which is MIT/BSD-derived: https://www.postgresql.org/about/licence/

    Some of the PG based 3rd party DBs:

    Enterprise DB https://www.enterprisedb.com/ - general purpose PG with some variants

    Greenplum https://greenplum.org/ - Data warehousing

    Crunchydata https://www.crunchydata.com/products/hardened-postgres - high security Postgres for regulated environments

    Citus https://www.citusdata.com - Distributed DB & Columnar

    Timescale https://www.timescale.com/

    Why Choose PG today?

    If you want better ACID: Postgres

    If you want more compliant SQL: Postgres

    If you want more customizability to a variety of use-cases: Postgres using a variant

    If you want the flexibility of using NOSQL at times: Postgres

    If you want more product knowledge reusability for other backend products: Postgres

  • Help with timeseries data
    2 projects | /r/Database | 10 May 2023
    TimescaleDB is Postgres with extensions to automatically partition tables for fast processing of time series data.
  • Postgres for time-series data
    1 project | news.ycombinator.com | 2 May 2023

What are some alternatives?

When comparing ClickHouse and TimescaleDB you can also consider the following projects:

loki - Like Prometheus, but for logs.

promscale - [DEPRECATED] Promscale is a unified metric and trace observability backend for Prometheus, Jaeger and OpenTelemetry built on PostgreSQL and TimescaleDB.

duckdb - DuckDB is an in-process SQL OLAP Database Management System

TDengine - TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, Industrial IoT and DevOps.

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

GORM - The fantastic ORM library for Golang, aims to be developer friendly

VictoriaMetrics - VictoriaMetrics: fast, cost-effective monitoring solution and time series database

temporal_tables - Temporal Tables PostgreSQL Extension

arrow-datafusion - Apache DataFusion SQL Query Engine

pgbouncer - lightweight connection pooler for PostgreSQL

RocksDB - A library that provides an embeddable, persistent key-value store for fast storage.

Telegraf - The plugin-driven server agent for collecting & reporting metrics.