Packer VS ClickHouse

Compare Packer vs ClickHouse and see what their differences are.

Packer

Packer is a tool for creating identical machine images for multiple platforms from a single source configuration. (by hashicorp)
             Packer                 ClickHouse
Mentions     66                     208
Stars        14,915                 34,269
Growth       0.4%                   1.6%
Activity     9.4                    10.0
Last commit  3 days ago             3 days ago
Language     Go                     C++
License      GNU GPL v3.0 or later  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Packer

Posts with mentions or reviews of Packer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-29.
  • AWS Cloud Platform for highly loaded WordPress website
    3 projects | dev.to | 29 Apr 2024
    The missing piece of the puzzle is the AMI "golden image" that will be used to start the instances in the autoscaling group. The AMI has to have NGINX and PHP installed with the required modules enabled. A great tool to brew one is HashiCorp Packer.
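
    As a rough sketch (not taken from the post itself), a Packer HCL2 template along these lines could bake such a golden image; the region, base-image filter, and package names below are assumptions:

      packer {
        required_plugins {
          amazon = {
            source  = "github.com/hashicorp/amazon"
            version = ">= 1.0.0"
          }
        }
      }

      locals {
        timestamp = regex_replace(timestamp(), "[- TZ:]", "")
      }

      source "amazon-ebs" "wordpress_golden" {
        region        = "us-east-1"        # assumed region
        instance_type = "t3.small"
        ssh_username  = "ubuntu"
        ami_name      = "wordpress-golden-${local.timestamp}"

        # Use the newest Ubuntu 22.04 image as the base instead of hardcoding an AMI ID.
        source_ami_filter {
          filters = {
            name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
            virtualization-type = "hvm"
            root-device-type    = "ebs"
          }
          owners      = ["099720109477"] # Canonical
          most_recent = true
        }
      }

      build {
        sources = ["source.amazon-ebs.wordpress_golden"]

        # Install NGINX and PHP-FPM with the modules the site needs.
        provisioner "shell" {
          inline = [
            "sudo apt-get update",
            "sudo apt-get install -y nginx php8.1-fpm php8.1-mysql",
          ]
        }
      }

    The resulting AMI ID can then be referenced by the launch template used by the autoscaling group.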
  • The 2024 Web Hosting Report
    37 projects | dev.to | 20 Feb 2024
    To manage a VM, you can use something as simple as manual actions over SSH, or tools like Ansible, HashiCorp's Packer and Terraform, or other automations. For an app with minimal load and few security/reliability concerns, VMs are still a great option that provides a lot of value for the buck.
  • Avoiding DevOps tool hell
    9 projects | dev.to | 24 Jul 2023
    Server templating: with Packer, it has never been easier to create reusable server configurations in a platform-independent and documented manner.
  • How to create an iso image of a finished system
    1 project | /r/linux4noobs | 19 Jun 2023
    I'll give you a hard but rewarding way that is easy to modify (once you know what you're doing). Packer may be the thing you're looking for.
  • 13.2 ZFS root AMIs in AWS
    1 project | /r/freebsd | 17 May 2023
    It is straightforward to build them with packer (I have built AMIs for 13.0 and 13.1, but 13.2 should be exactly the same). I've been meaning to write a blog post about it for a while, but have not gotten to it yet... In any case, what I am doing is using the EBS Surrogate Builder to start an instance running the official FreeBSD 13.2 image with an extra volume attached and run a script to create a zpool on the extra volume and bootstrap and configure FreeBSD 13.2-RELEASE on it. After that packer takes care of creating an AMI out of that extra volume, so you can use it... If you have any issues, let me know, and maybe I will finally get to writing that blog post...
  • DevOps Tooling Landscape
    12 projects | dev.to | 4 Apr 2023
    HashiCorp Packer is a tool for creating machine images for a variety of platforms, including AWS, Azure, and VMware. It allows you to define machine images as code and supports a wide range of configuration options.
  • auto-provisioning multiple raspberry pi's
    2 projects | /r/selfhosted | 19 Mar 2023
    Packer is a tool that can be used to build machine images. Basically, it takes a base image, runs a series of steps to provision that image, and then burns a new image. In my workplace we use it heavily to build AWS AMIs. But it has an ARM plugin that looks to be very very suitable for building customised Raspberry Pi images (my quick read of the doco there says it can go ahead and write the final image to an SD card for you too).
  • How do hosting companies immediately create vm right after purchasing one?
    2 projects | /r/linux | 5 Mar 2023
  • Packer preseed file seems to not be read
    1 project | /r/hashicorp | 18 Feb 2023
    Seems related to https://github.com/hashicorp/packer/issues/12118, but the workaround described in the comments doesn't seem to work anymore.
  • How to create AMI which also copies the user data?
    1 project | /r/aws | 5 Jan 2023
    I'd suggest using a tool like Packer to build a gold image based on your base AMI and all your changes. Then you'll have your own AMI you can launch new instances with.

ClickHouse

Posts with mentions or reviews of ClickHouse. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-24.
  • We Built a 19 PiB Logging Platform with ClickHouse and Saved Millions
    1 project | news.ycombinator.com | 2 Apr 2024
    Yes, we are working on it! :) Taking some of the learnings from the current experimental JSON Object datatype, we are now working on what will become the production-ready implementation. Details here: https://github.com/ClickHouse/ClickHouse/issues/54864

    Variant datatype is already available as experimental in 24.1, Dynamic datatype is WIP (PR almost ready), and JSON datatype is next up. Check out the latest comment on that issue with how the Dynamic datatype will work: https://github.com/ClickHouse/ClickHouse/issues/54864#issuec...
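
    As a rough illustration of the experimental Variant datatype in 24.1 (the setting and function names below follow the 24.1 documentation and may change as the feature matures):

      SET allow_experimental_variant_type = 1;

      CREATE TABLE variant_demo (v Variant(UInt64, String, Array(UInt64))) ENGINE = Memory;

      INSERT INTO variant_demo VALUES (42), ('hello'), ([1, 2, 3]);

      -- variantType() reports which of the declared types each row actually holds.
      SELECT v, variantType(v) FROM variant_demo;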

  • Build time is a collective responsibility
    2 projects | news.ycombinator.com | 24 Mar 2024
    In our repository, I've set up a few hard limits: each translation unit cannot spend more than a certain amount of memory for compilation and a certain amount of CPU time, and the compiled binary has to be not larger than a certain size.

    When these limits are reached, the CI stops working, and we have to remove the bloat: https://github.com/ClickHouse/ClickHouse/issues/61121

    Although these limits are too generous as of today: for example, the maximum CPU time to compile a translation unit is set to 1000 seconds, and the memory limit is 5 GB, which is ridiculously high.

  • Fair Benchmarking Considered Difficult (2018) [pdf]
    2 projects | news.ycombinator.com | 10 Mar 2024
    I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench

    It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough, otherwise it will not be realistic to include a large number of participants.

    I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398

  • How to choose the right type of database
    15 projects | dev.to | 28 Feb 2024
    ClickHouse: A fast open-source column-oriented database management system. ClickHouse is designed for real-time analytics on large datasets and excels in high-speed data insertion and querying, making it ideal for real-time monitoring and reporting.
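
    As a minimal illustration (table and query invented for this comparison, not taken from the post), a MergeTree table plus a typical real-time reporting query could look like:

      CREATE TABLE page_views
      (
          ts      DateTime,
          url     String,
          user_id UInt64
      )
      ENGINE = MergeTree
      ORDER BY (url, ts);

      -- Hits per URL over the last hour: the kind of query ClickHouse is built to answer fast.
      SELECT url, count() AS hits
      FROM page_views
      WHERE ts > now() - INTERVAL 1 HOUR
      GROUP BY url
      ORDER BY hits DESC
      LIMIT 10;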
  • Writing UDF for Clickhouse using Golang
    2 projects | dev.to | 27 Feb 2024
    Today we're going to create a UDF (user-defined function) in Go that can be run inside a ClickHouse query; this function will parse a UUID v1 and return its timestamp, since ClickHouse doesn't have that function for now. Inspired by the Python version, it uses the TabSeparated delimiter (since it's the easiest to parse): a UDF in ClickHouse reads input line by line (each row is a line, and each tab-separated value within a line is a column/cell value):
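
    A minimal sketch of such an executable UDF in Go (the timestamp math and the fallback value for unparseable input are assumptions; the post's actual implementation may differ):

      package main

      import (
          "bufio"
          "fmt"
          "os"
          "strconv"
          "strings"
          "time"
      )

      // Number of 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
      const gregorianToUnix = 122192928000000000

      func uuidV1Timestamp(s string) (time.Time, error) {
          parts := strings.Split(s, "-")
          if len(parts) != 5 {
              return time.Time{}, fmt.Errorf("not a UUID: %q", s)
          }
          timeLow, err := strconv.ParseUint(parts[0], 16, 32)
          if err != nil {
              return time.Time{}, err
          }
          timeMid, err := strconv.ParseUint(parts[1], 16, 16)
          if err != nil {
              return time.Time{}, err
          }
          timeHi, err := strconv.ParseUint(parts[2], 16, 16)
          if err != nil {
              return time.Time{}, err
          }
          // Reassemble the 60-bit timestamp, dropping the 4 version bits.
          ticks := int64((timeHi&0x0fff)<<48 | timeMid<<32 | timeLow)
          hns := ticks - gregorianToUnix
          return time.Unix(hns/10_000_000, (hns%10_000_000)*100).UTC(), nil
      }

      func main() {
          // ClickHouse streams one row per line on stdin and expects one output line per row.
          in := bufio.NewScanner(os.Stdin)
          out := bufio.NewWriter(os.Stdout)
          for in.Scan() {
              t, err := uuidV1Timestamp(strings.TrimSpace(in.Text()))
              if err != nil {
                  fmt.Fprintln(out, "1970-01-01 00:00:00") // assumed fallback for bad input
              } else {
                  fmt.Fprintln(out, t.Format("2006-01-02 15:04:05"))
              }
              out.Flush() // flush per row so ClickHouse gets each answer promptly
          }
      }

    On the ClickHouse side, the compiled binary would be registered as an executable user-defined function (an XML entry under the server's user_defined directory pointing at the command, with TabSeparated as the format), as described in the ClickHouse documentation.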
  • The 2024 Web Hosting Report
    37 projects | dev.to | 20 Feb 2024
    For the third, examples here might be analytics plugins in specialized databases like Clickhouse, data-transformations in places like your ETL pipeline using Airflow or Fivetran, or special integrations in your authentication workflow with Auth0 hooks and rules.
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    10 projects | dev.to | 10 Feb 2024
    Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine in addressing user-initiated analytical queries. You might write a query to analyze historical data to find the most-clicked products over the past month efficiently using OLAP databases. When contrasting with streaming databases, they may not be optimized for incremental computation, leading to challenges in maintaining the freshness of results. The query in the streaming database focuses on recent data, making it suitable for continuous monitoring. Using streaming databases, you can run queries like finding the top 10 sold products where the “top 10 product list” might change in real-time.
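
    To make the contrast concrete, the "most-clicked products over the past month" case reads naturally as a one-shot OLAP query (table and column names invented for illustration):

      SELECT product_id, count() AS clicks
      FROM click_events
      WHERE event_time >= now() - INTERVAL 30 DAY
      GROUP BY product_id
      ORDER BY clicks DESC
      LIMIT 10;

    A streaming database would instead keep a result like this incrementally up to date as new events arrive, rather than recomputing it from scratch on every request.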
  • Proton, a fast and lightweight alternative to Apache Flink
    7 projects | news.ycombinator.com | 30 Jan 2024
    Proton is a lightweight streaming processing "add-on" for ClickHouse, and we are making these delta parts as standalone as possible. Meanwhile contributing back to the ClickHouse community can also help a lot.

    Please check this PR from the proton team: https://github.com/ClickHouse/ClickHouse/pull/54870

  • 1 billion rows challenge in PostgreSQL and ClickHouse
    1 project | dev.to | 18 Jan 2024
    curl https://clickhouse.com/ | sh
  • We Executed a Critical Supply Chain Attack on PyTorch
    6 projects | news.ycombinator.com | 14 Jan 2024
    But I continue to find garbage in some of our CI scripts.

    Here is an example: https://github.com/ClickHouse/ClickHouse/pull/58794/files

    The right way is to:

    - always pin versions of all packages;

What are some alternatives?

When comparing Packer and ClickHouse you can also consider the following projects:

Vagrant - Vagrant is a tool for building and distributing development environments.

loki - Like Prometheus, but for logs.

helm - The Kubernetes Package Manager

duckdb - DuckDB is an in-process SQL OLAP Database Management System

oVirt - oVirt website

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

cloud-init-vmware-guestinfo - A cloud-init datasource for VMware vSphere's GuestInfo interface

VictoriaMetrics - VictoriaMetrics: fast, cost-effective monitoring solution and time series database

kubernetes - Production-Grade Container Scheduling and Management

TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.

QEMU - Official QEMU mirror. Please see https://www.qemu.org/contribute/ for how to submit changes to QEMU. Pull Requests are ignored. Please only use release tarballs from the QEMU website.

datafusion - Apache DataFusion SQL Query Engine