chproxy vs clickhouse-backup

| | chproxy | clickhouse-backup |
|---|---|---|
| Mentions | - | 5 |
| Stars | 1,210 | 1,149 |
| Growth | 1.1% | 1.8% |
| Activity | 8.1 | 9.7 |
| Latest commit | 5 days ago | 3 days ago |
| Language | Go | Go |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chproxy
We haven't tracked posts mentioning chproxy yet.
Tracking mentions began in Dec 2020.
clickhouse-backup
- Backing up Plausible Analytics database
I set up Plausible Analytics in my Kubernetes cluster and am trying to figure out how to properly back up and restore the ClickHouse database. I am trying to use https://github.com/AlexAkulov/clickhouse-backup, but it only supports tables of the MergeTree family. Plausible uses a table named schema_migrations which is of type TinyLog, so it is skipped during backups, making restores useless: the table comes back empty, so when Plausible starts it tries to run all the migrations again, and they fail because the other tables already exist.
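One workaround (a sketch, not a feature of clickhouse-backup itself) is to export the TinyLog table separately with clickhouse-client alongside each backup, and re-import it on restore before starting Plausible. The database name `plausible_events_db` is Plausible's default and may differ in your deployment:

```shell
#!/bin/sh
# Sketch: back up the TinyLog schema_migrations table separately, since
# clickhouse-backup only covers MergeTree-family tables.
# ASSUMPTION: the database is named plausible_events_db (Plausible's default).
set -eu
DB=plausible_events_db

# No-op where clickhouse-client is not installed.
if command -v clickhouse-client >/dev/null 2>&1; then
    # Export alongside the regular backup...
    clickhouse-client --query "SELECT * FROM $DB.schema_migrations FORMAT TSV" \
        > schema_migrations.tsv
    # ...and on restore, refill the table *before* starting Plausible, so the
    # app does not try to re-run migrations against already-existing tables.
    clickhouse-client --query "INSERT INTO $DB.schema_migrations FORMAT TSV" \
        < schema_migrations.tsv
fi
echo "$DB"
```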
- Difference in data size for the Clickhouse backups
already fixed on https://github.com/AlexAkulov/clickhouse-backup/issues/224
- The ClickHouse Community
Similar experience to vulkoingim: steep learning curve but quite stable once deployed properly.
Schema management in zookeeper has been the biggest pain point for us. Occasionally individual clickhouse shards will get out of sync during a schema update, which can be hard to diagnose.
We use a heavily modified version of clickhouse-backup[1], which works well for us.
As for hands-off replica reboot: you must have an automated process to reapply the same schema which exists in zookeeper, otherwise it won't resync. If the local schema gets out of sync with that in zookeeper, then you'll have issues again.
I expect a lot of these ergonomics issues will be fixed over time. It's already much easier to use than it was 3 years ago, and even if progress on usability and reducing the learning curve is slow the database performance makes it worth it.
[1] https://github.com/AlexAkulov/clickhouse-backup
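When a replica does come back out of sync, the `system.replicas` system table is a quick way to spot it: a replica whose local schema or metadata has drifted from ZooKeeper typically shows up as read-only or with a growing replication queue. A minimal diagnostic sketch (the queue-size threshold of 100 is an arbitrary example):

```shell
#!/bin/sh
# Diagnostic sketch: list replicated tables that look out of sync after a
# reboot. The threshold below is illustrative; tune it for your workload.
QUERY="
SELECT database, table, is_readonly, absolute_delay, queue_size
FROM system.replicas
WHERE is_readonly OR queue_size > 100"

# No-op where clickhouse-client is not installed or no server is reachable.
if command -v clickhouse-client >/dev/null 2>&1; then
    clickhouse-client --query "$QUERY" || echo "server not reachable"
fi
```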
- ClickHouse incremental backups
clickhouse-backup allows us to perform local backups, which are always full backups, and full or incremental uploads to remote storage. In my previous post I covered full backups and uploads; now we will review the steps required for incremental uploads. This way we can upload a weekly full backup to remote storage and perform daily incremental uploads.
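The weekly-full / daily-incremental rotation described above can be sketched as a small shell script. The backup names, the choice of Monday for the full backup, and the `--diff-from` upload flag are assumptions to verify against your installed clickhouse-backup version:

```shell
#!/bin/sh
# Sketch of a weekly-full / daily-incremental rotation with clickhouse-backup.
# ASSUMPTIONS: Monday runs the weekly full; names like weekly-2021-24 and
# daily-2021-06-15 are illustrative.
set -eu

TODAY=$(date +%F)       # e.g. 2021-06-15
WEEKLY=$(date +%Y-%V)   # ISO year-week, names this week's full upload
DOW=$(date +%u)         # 1 = Monday ... 7 = Sunday

# Local backups are always full; "incremental" only applies to the upload.
NAME="daily-$TODAY"
if [ "$DOW" = "1" ]; then
    NAME="weekly-$WEEKLY"
fi

# Guard so the sketch is a no-op where clickhouse-backup is not installed.
if command -v clickhouse-backup >/dev/null 2>&1; then
    clickhouse-backup create "$NAME"
    if [ "$DOW" = "1" ]; then
        # Full upload at the start of the week.
        clickhouse-backup upload "$NAME"
    else
        # Incremental upload relative to this week's full backup
        # (assumes Monday's full upload already succeeded).
        clickhouse-backup upload "$NAME" --diff-from "weekly-$WEEKLY"
    fi
fi
echo "$NAME"
```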
- Backup and restore with clickhouse-backup
We can automate this process thanks to clickhouse-backup.
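As a sketch of that automation, a nightly job run from cron could look like the following; flag names follow the clickhouse-backup CLI, but verify them against your installed version:

```shell
#!/bin/sh
# Sketch of a nightly backup job, intended to be run from cron, e.g.:
#   0 3 * * * /usr/local/bin/nightly-clickhouse-backup.sh
# The nightly-YYYY-MM-DD naming is illustrative.
set -eu
NAME="nightly-$(date +%F)"

# Guard so the sketch is a no-op where clickhouse-backup is not installed.
if command -v clickhouse-backup >/dev/null 2>&1; then
    clickhouse-backup create "$NAME"
    clickhouse-backup upload "$NAME"
fi
echo "$NAME"
```

Rather than scripting deletion of old backups by hand, clickhouse-backup's configuration also exposes retention settings (keys like `backups_to_keep_local` and `backups_to_keep_remote`) that prune old copies automatically.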
What are some alternatives?
goqu - SQL builder and query library for golang
jaeger-clickhouse - Jaeger ClickHouse storage plugin implementation
gokv - Simple key-value store abstraction and implementations for Go (Redis, Consul, etcd, bbolt, BadgerDB, LevelDB, Memcached, DynamoDB, S3, PostgreSQL, MongoDB, CockroachDB and many more)
trusearch - Perform advanced search on unofficial rutracker.org (ex torrents.ru) XML database
sqrl - Fluent SQL generation for golang
Trickster - Open Source HTTP Reverse Proxy Cache and Time Series Dashboard Accelerator
clickhouse-bulk - Collects many small inserts to ClickHouse and sends them as big inserts
s3backup - A super simple solution for backup
jaeger - CNCF Jaeger, a Distributed Tracing Platform
flow-pipeline - A set of tools and examples to run a flow-pipeline (sFlow, NetFlow)
orchestrator - MySQL replication topology manager/visualizer
wal-g - Archival and Restoration for databases in the Cloud