| | ctlstore | ClickHouse |
|---|---|---|
| Mentions | 3 | 208 |
| Stars | 259 | 34,359 |
| Growth | 0.8% | 1.9% |
| Activity | 5.9 | 10.0 |
| Last Commit | 21 days ago | about 21 hours ago |
| Language | Go | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ctlstore
-
SQLedge: Replicate Postgres to SQLite on the Edge
At Segment we replicated our MySQL database out to SQLite at the edge with ctlstore: https://github.com/segmentio/ctlstore
We considered tailing binlogs directly, but there's so much cruft and complexity involved in trying to translate between types and such at that end, once you even get past properly parsing the binlogs and maintaining the replication connection. Then you have to deal with schema management across both systems too. You hit a similar set of problems using PostgreSQL as the source of truth.
In the end we decided just to wrap the whole thing up and abstract away the schema with a common set of types and a limited set of read APIs. Biggest missing piece I regret not getting in was support for secondary indexes.
-
Sharing an SQLite database across containers is surprisingly brilliant
> it is only practical for situations where the write rate (<100/s total) and data volumes (<10GB total) are low.
This comment from the GitHub project page is pretty important. Configuration data usually changes slowly and isn't huge, so a custom approach seems viable. I wonder how close they are to that 100/s ceiling.
There's also an unmentioned transition to eventual consistency happening here:
> The implications of this decoupling is that the data at each instance is usually slightly out-of-date (by 1-2 seconds).
> The reader API provides a way to fetch an approximate staleness measurement that is accurate to within ~5 seconds.
That could lead to more complex application logic, or risk confusing users with stale behavior. No free lunch here.
[1] https://segment.com/blog/separating-our-data-and-control-pla...
[2] https://github.com/segmentio/ctlstore
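To make the "more complex application logic" point concrete, here is a small Go sketch of gating reads on that staleness signal. The `localReader` interface and its method names are illustrative stand-ins, not ctlstore's actual reader API:

```go
package edgeread

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// localReader abstracts the edge-side reader: a point lookup plus the
// approximate staleness measurement mentioned above (~5 s accuracy).
// These method names are illustrative, not ctlstore's real API.
type localReader interface {
	Staleness(ctx context.Context) (time.Duration, error)
	GetRowByKey(ctx context.Context, out interface{}, table, key string) error
}

// errTooStale signals that the caller should fall back to the source of truth.
var errTooStale = errors.New("local data too stale")

// lookup serves the read from the local replica only when it is fresh enough.
func lookup(ctx context.Context, r localReader, table, key string, out interface{}, maxLag time.Duration) error {
	lag, err := r.Staleness(ctx)
	if err != nil {
		return err
	}
	if lag > maxLag {
		return fmt.Errorf("%w (lag=%s)", errTooStale, lag)
	}
	return r.GetRowByKey(ctx, out, table, key)
}
```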
-
Go port of SQLite without CGo
At Segment we benchmarked https://github.com/segmentio/ctlstore against this driver. We saw about a 50% hit to read performance, so we didn't move forward with it, but the improvement in service build times was really appealing.
ClickHouse
-
We Built a 19 PiB Logging Platform with ClickHouse and Saved Millions
Yes, we are working on it! :) Taking some of the learnings from the current experimental JSON Object datatype, we are now working on what will become the production-ready implementation. Details here: https://github.com/ClickHouse/ClickHouse/issues/54864
Variant datatype is already available as experimental in 24.1, Dynamic datatype is WIP (PR almost ready), and JSON datatype is next up. Check out the latest comment on that issue with how the Dynamic datatype will work: https://github.com/ClickHouse/ClickHouse/issues/54864#issuec...
-
Build time is a collective responsibility
In our repository, I've set up a few hard limits: each translation unit cannot use more than a certain amount of memory or CPU time during compilation, and the compiled binary cannot be larger than a certain size.
When these limits are reached, the CI stops working, and we have to remove the bloat: https://github.com/ClickHouse/ClickHouse/issues/61121
Although these limits are too generous as of today: for example, the maximum CPU time to compile a translation unit is set to 1000 seconds, and the memory limit is 5 GB, which is ridiculously high.
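The comment doesn't show the enforcement mechanism. As a rough illustration only (not ClickHouse's actual CI code), a hypothetical Go compiler wrapper could fail a translation unit that goes over budget, reusing the limits quoted above:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

const (
	maxCPUSeconds = 1000    // per-TU CPU budget quoted in the comment above
	maxRSSBytes   = 5 << 30 // 5 GB memory budget
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: compile-wrapper <compiler> [args...]")
		os.Exit(2)
	}
	// Invoke the real compiler, e.g. compile-wrapper clang++ -c foo.cpp ...
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
	ru, ok := cmd.ProcessState.SysUsage().(*syscall.Rusage)
	if !ok {
		return
	}
	cpuSeconds := ru.Utime.Sec + ru.Stime.Sec
	rssBytes := ru.Maxrss * 1024 // Linux reports max RSS in kilobytes
	if cpuSeconds > maxCPUSeconds || rssBytes > maxRSSBytes {
		fmt.Fprintf(os.Stderr, "translation unit over budget: cpu=%ds rss=%dB\n", cpuSeconds, rssBytes)
		os.Exit(1)
	}
}
```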
-
Fair Benchmarking Considered Difficult (2018) [pdf]
I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench
It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough, otherwise it will not be realistic to include a large number of participants.
I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398
-
How to choose the right type of database
ClickHouse: A fast open-source column-oriented database management system. ClickHouse is designed for real-time analytics on large datasets and excels in high-speed data insertion and querying, making it ideal for real-time monitoring and reporting.
-
Writing UDF for Clickhouse using Golang
Today we're going to create a UDF (user-defined function) in Golang that can be run inside a ClickHouse query. This function will parse a UUID v1 and return its timestamp, since ClickHouse doesn't have a built-in function for that yet. It's inspired by the Python version and uses the TabSeparated delimiter (since it's the easiest to parse): a ClickHouse UDF reads its input line by line (each line is a row, and each tab-separated value within a line is a column/cell value):
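The excerpt stops before the code, so here is a minimal, self-contained sketch of what such a UDF could look like, assuming a single tab-separated input column holding the UUID string and using only the Go standard library; the post's actual implementation may differ:

```go
package main

import (
	"bufio"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
	"time"
)

// uuidV1Time extracts the timestamp embedded in a version-1 UUID.
// The 60-bit timestamp counts 100 ns intervals since 1582-10-15.
func uuidV1Time(s string) (time.Time, error) {
	raw, err := hex.DecodeString(strings.ReplaceAll(s, "-", ""))
	if err != nil || len(raw) != 16 {
		return time.Time{}, fmt.Errorf("invalid uuid: %q", s)
	}
	timeLow := uint64(raw[0])<<24 | uint64(raw[1])<<16 | uint64(raw[2])<<8 | uint64(raw[3])
	timeMid := uint64(raw[4])<<8 | uint64(raw[5])
	timeHi := uint64(raw[6]&0x0f)<<8 | uint64(raw[7]) // strip the version nibble
	ticks := timeHi<<48 | timeMid<<32 | timeLow       // 100 ns ticks since the Gregorian epoch
	const gregorianToUnix = 122192928000000000        // ticks between 1582-10-15 and 1970-01-01
	return time.Unix(0, int64(ticks-gregorianToUnix)*100).UTC(), nil
}

func main() {
	// ClickHouse streams one row per line; columns within a line are tab-separated.
	in := bufio.NewScanner(os.Stdin)
	out := bufio.NewWriter(os.Stdout)
	for in.Scan() {
		cols := strings.Split(in.Text(), "\t")
		t, err := uuidV1Time(cols[0])
		if err != nil {
			fmt.Fprintln(out, "") // emit an empty value for unparsable input
		} else {
			fmt.Fprintln(out, t.Format("2006-01-02 15:04:05"))
		}
		out.Flush() // flush per row so ClickHouse sees the result without buffering delays
	}
}
```

On the ClickHouse side, the compiled binary would then be registered as an executable user-defined function with TabSeparated as its input/output format, roughly as the post goes on to describe.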
-
The 2024 Web Hosting Report
For the third, examples here might be analytics plugins in specialized databases like ClickHouse, data transformations in places like your ETL pipeline using Airflow or Fivetran, or special integrations in your authentication workflow with Auth0 hooks and rules.
-
Choosing Between a Streaming Database and a Stream Processing Framework in Python
Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine at user-initiated analytical queries: you might efficiently query historical data to find the most-clicked products over the past month. In contrast with streaming databases, though, they may not be optimized for incremental computation, which makes it harder to keep results fresh. A query in a streaming database focuses on recent data, making it suitable for continuous monitoring; you can run queries like finding the top 10 sold products, where the “top 10” list might change in real time.
-
Proton, a fast and lightweight alternative to Apache Flink
Proton is a lightweight stream-processing "add-on" for ClickHouse, and we are making these delta parts as standalone as possible. Meanwhile, contributing back to the ClickHouse community can also help a lot.
Please check this PR from the proton team: https://github.com/ClickHouse/ClickHouse/pull/54870
-
1 billion rows challenge in PostgreSQL and ClickHouse
curl https://clickhouse.com/ | sh
-
We Executed a Critical Supply Chain Attack on PyTorch
But I continue to find garbage in some of our CI scripts.
Here is an example: https://github.com/ClickHouse/ClickHouse/pull/58794/files
The right way is to:
- always pin versions of all packages;
What are some alternatives?
go-sqlite - pure-Go SQLite driver for Go (SQLite embedded)
loki - Like Prometheus, but for logs.
sqlite
duckdb - DuckDB is an in-process SQL OLAP Database Management System
sysroot - Files for cross-compilation
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
libc
VictoriaMetrics - VictoriaMetrics: fast, cost-effective monitoring solution and time series database
sqlite - The pure-Go SQLite driver for GORM
TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
sqledge - Replicate postgres to SQLite on the edge
datafusion - Apache DataFusion SQL Query Engine