hydra VS ClickBench

Compare hydra vs ClickBench and see what their differences are.

hydra

Hydra is a framework for elegantly configuring complex applications (by facebookresearch)
                 hydra            ClickBench
Mentions         14               71
Stars            8,229            571
Growth           1.6%             3.2%
Activity         6.3              9.0
Latest commit    21 days ago      3 days ago
Language         Python           HTML
License          MIT License      GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

hydra

Posts with mentions or reviews of hydra. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-19.
  • Hydra – a Framework for configuring complex applications
    1 project | news.ycombinator.com | 24 Sep 2023
  • Show HN: Hydra - Open-Source Columnar Postgres
    6 projects | news.ycombinator.com | 19 Sep 2023
    Nice tool, only an unfortunate name; consider changing it. There is already a very well-known security tool named hydra (https://github.com/vanhauser-thc/thc-hydra) that has been around since 2001. Then Facebook went ahead and named their config tool hydra (https://github.com/facebookresearch/hydra) on top of it. We get it, Hydra is popular mythology, but we could use more original naming for tools.
  • Show HN: Hydra 1.0 – open-source column-oriented Postgres
    12 projects | news.ycombinator.com | 3 Aug 2023
    This looks really impressive, and I'm excited to see how it performs on our data!

    P.S., I think the name conflicts with Hydra, the configuration management library: https://hydra.cc/

  • Best practice for saving logits/activation values of model in PyTorch Lightning
    3 projects | /r/deeplearning | 19 Jul 2023
    I've been trying to learn PyTorch Lightning and Hydra in order to use/create my own custom deep learning template (e.g. like this) as it would greatly help with my research workflow. A lot of the work I do requires me to analyse metrics based on the logits/activations of the model.
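
A minimal sketch of one common way to do what the post asks about (the module, layer sizes, and file name below are made up for illustration, not taken from the post): collect logits in validation_step and persist them at the end of the validation epoch.

```python
# Hypothetical PyTorch Lightning module that keeps validation logits around
# for later analysis. Names and shapes are illustrative only.
import torch
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(10, 3)
        self.val_logits = []

    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self.net(x)
        self.val_logits.append(logits.detach().cpu())  # keep for analysis
        return torch.nn.functional.cross_entropy(logits, y)

    def on_validation_epoch_end(self):
        if self.val_logits:
            torch.save(torch.cat(self.val_logits), "val_logits.pt")
        self.val_logits.clear()

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())
```
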
  • [D] Alternatives to fb Hydra?
    5 projects | /r/MachineLearning | 29 Mar 2023
    However, hydra seems to have several limitations that are really annoying and are making me reconsider my choice. Most problematic is the inability to group parameters together in a multirun. Hydra only supports trying all combinations of parameters, as described in https://github.com/facebookresearch/hydra/issues/1258, which does not seem to be a priority for hydra. Furthermore, hydra's Optuna optimizer implementation does not allow for early pruning of bad runs, which, while not a deal breaker, is definitely a nice-to-have feature.
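
For context, the combinatorial behaviour being described works roughly like this (the script, config, and option names below are hypothetical): comma-separated overrides passed to Hydra's multirun mode are expanded as a Cartesian product.

```python
# Hypothetical Hydra app; assumes a conf/config.yaml that defines
# "optimizer" and "lr". Launched with multirun, e.g.:
#
#   python train.py -m optimizer=adam,sgd lr=0.01,0.001
#
# Hydra expands the comma-separated overrides into all 4 combinations
# (adam/0.01, adam/0.001, sgd/0.01, sgd/0.001); there is no built-in way
# to "zip" the two lists into just two paired runs, which is the limitation
# discussed in the issue linked above.
import hydra
from omegaconf import DictConfig


@hydra.main(config_path="conf", config_name="config", version_base=None)
def train(cfg: DictConfig) -> None:
    print(f"optimizer={cfg.optimizer} lr={cfg.lr}")


if __name__ == "__main__":
    train()
```
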
  • Show HN: Lightweight YAML Config CLI for Deep Learning Projects
    2 projects | news.ycombinator.com | 10 Mar 2023
    Do you hate the fact that they don't let you return the config file? https://github.com/facebookresearch/hydra/issues/407
  • Config management for deep learning
    3 projects | /r/Python | 10 Mar 2023
    I kind of built this due to frustrations with Hydra. Hydra is an end-to-end framework: it locks you into a certain DL project format and it decides logging, model saving, and a whole host of other things. For example, Hydra can do the same config file overwriting that I allow, but you have to store the config file with the name config.yaml inside a specific folder. On top of that, Hydra doesn't let you return the config file from the main function, so you have to put all the major logic in the main function itself (link); the authors claim this is by design. I can see Hydra being useful for a mature, less experimental project. But in my robotics and ML research, I like being able to write code where I want and integrate it how I want, especially when debugging, which is where I think this package is useful. TL;DR: if you just want the config file functionality, use my package; if you want a complete DL project manager, use Hydra. While Hydra implements this config file functionality, it also adds a lot of restrictions on project structure that you might not like.
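
A minimal sketch of the restriction mentioned in the last two quotes (file layout and function names are hypothetical): the config only exists inside the function decorated with hydra.main, so the major logic has to be driven from there.

```python
# Hypothetical Hydra entry point; assumes a YAML config at conf/config.yaml.
import hydra
from omegaconf import DictConfig, OmegaConf


def run_experiment(cfg: DictConfig) -> None:
    # ... training / evaluation driven by cfg ...
    print(OmegaConf.to_yaml(cfg))


@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> DictConfig:
    run_experiment(cfg)  # all the major logic is called from here
    return cfg           # per the issue linked above, this is not handed back


if __name__ == "__main__":
    main()  # the config object is not available out here
```
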
  • The YAML Document from Hell
    19 projects | news.ycombinator.com | 12 Jan 2023
    For managing configs of ML experiments (where each experiment can override a base config, and "variant" configs can further override the experiment config, etc.), Hydra + YAML + OmegaConf is really nice.

    https://hydra.cc/

    I admit I don't fully understand all the advanced options in Hydra, but the basic usage is already very useful. A nice guide is here:

    https://florianwilhelm.info/2022/01/configuration_via_yaml_a...
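
The layering described above can be sketched with OmegaConf alone (the keys below are made up); Hydra builds config groups and command-line overrides on top of the same merge semantics.

```python
# Illustrative only: a base config, an experiment override, and a further
# "variant" override merged in order, with later values winning.
from omegaconf import OmegaConf

base = OmegaConf.create({"model": {"lr": 1e-3, "layers": 4}, "data": {"batch_size": 32}})
experiment = OmegaConf.create({"model": {"lr": 3e-4}})      # overrides the base
variant = OmegaConf.from_dotlist(["data.batch_size=64"])    # e.g. a CLI-style override

cfg = OmegaConf.merge(base, experiment, variant)
print(OmegaConf.to_yaml(cfg))
# model.lr == 0.0003, model.layers == 4, data.batch_size == 64
```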

  • Hydra - installation and basic use
    1 project | /r/HackProtectSlo | 8 Dec 2022
  • Hydra - installation and basic use
    1 project | /r/HackProtectSlo | 8 Dec 2022

ClickBench

Posts with mentions or reviews of ClickBench. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-02.
  • Umbra: A Disk-Based System with In-Memory Performance [pdf]
    3 projects | news.ycombinator.com | 2 May 2024
    Benchmarks: https://benchmark.clickhouse.com

    So compared against PostgreSQL and MariaDB, it is definitely significantly faster.

    On par with lower-end Snowflake.

  • Loading a trillion rows of weather data into TimescaleDB
    8 projects | news.ycombinator.com | 16 Apr 2024
    TimescaleDB primarily serves operational use cases: Developers building products on top of live data, where you are regularly streaming in fresh data, and you often know what many queries look like a priori, because those are powering your live APIs, dashboards, and product experience.

    That's different from a data warehouse or many traditional "OLAP" use cases, where you might dump a big dataset statically, and then people will occasionally do ad-hoc queries against it. This is the big weather dataset file sitting on your desktop that you occasionally query while on holidays.

    So it's less about "can you store weather data?" and more about what that use case looks like. How are the queries shaped? Are you saving a single dataset for ad-hoc queries across the entire dataset, or continuously streaming in new data, and aging out or de-prioritizing old data?

    In most of the products we serve, customers are often interested in recent data in a very granular format ("shallow and wide"), or longer historical queries along a well defined axis ("deep and narrow").

    For example, this is where the benefits of TimescaleDB's segmented columnar compression emerge. It optimizes for those queries which are very common in your application, e.g., an IoT application that groups by or selects by deviceID, crypto/fintech analysis based on the ticker symbol, product analytics based on tenantID, etc.

    If you look at ClickBench, what most of the queries say is: scan ALL the data in your database, and GROUP BY one of the 100 columns in the web analytics logs.

    - https://github.com/ClickHouse/ClickBench/blob/main/clickhous...

    There are almost no time-predicates in the benchmark that Clickhouse created, but perhaps that is not surprising given it was designed for ad-hoc weblog analytics at Yandex.

    So yes, Timescale serves many products today that use weather data, but has made different choices than Clickhouse (or things like DuckDB, pg_analytics, etc) to serve those more operational use cases.
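
To make the contrast in query shapes concrete, here is an illustrative sketch (the schema is a stand-in for a web-analytics hits table, not the actual ClickBench dataset; DuckDB is used only because it is handy to run in-process):

```python
# Two query shapes: an ad-hoc "scan everything, GROUP BY one column" query
# versus a time-bounded query over recent data. Schema and data are made up.
import duckdb

con = duckdb.connect()  # in-memory database
con.execute("""
    CREATE TABLE hits (
        EventTime TIMESTAMP,
        UserID    BIGINT,
        RegionID  INTEGER,
        URL       VARCHAR
    )
""")

# ClickBench-style ad-hoc analytics: no time predicate, full scan.
adhoc = con.execute("""
    SELECT RegionID, count(DISTINCT UserID) AS users
    FROM hits
    GROUP BY RegionID
    ORDER BY users DESC
    LIMIT 10
""").fetchall()

# Operational/time-series shape: a narrow window over recent data, the kind of
# query time-partitioned, segmented-columnar systems are tuned for.
recent = con.execute("""
    SELECT URL, count(*) AS views
    FROM hits
    WHERE EventTime >= now()::TIMESTAMP - INTERVAL 1 HOUR
    GROUP BY URL
    ORDER BY views DESC
    LIMIT 10
""").fetchall()

print(adhoc, recent)
```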

  • Variant in Apache Doris 2.1.0: a new data type 8 times faster than JSON for semi-structured data analysis
    2 projects | dev.to | 27 Mar 2024
    We tested with 43 ClickBench SQL queries. Queries on the Variant columns are about 10% slower than those on pre-defined static columns, and 8 times faster than those on JSON columns. (For I/O reasons, most cold runs on JSONB data failed with OOM.)
  • Fair Benchmarking Considered Difficult (2018) [pdf]
    2 projects | news.ycombinator.com | 10 Mar 2024
    I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench

    It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough; otherwise, it will not be realistic to include a large number of participants.

    I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398

  • ClickBench – A Benchmark for Analytical DBMS
    1 project | news.ycombinator.com | 8 Feb 2024
  • FLaNK Stack 05 Feb 2024
    49 projects | dev.to | 5 Feb 2024
  • Why Postgres RDS didn't work for us
    4 projects | news.ycombinator.com | 3 Feb 2024
    Indeed, ClickHouse results were run on an older instance type of the same family and size (c5.4xlarge for ClickHouse and c6a.4xlarge for Timescale), so if anything ClickHouse results are at a slight disadvantage.

    This is an open source benchmark - we'd love contributions from Timescale enthusiasts if we missed something: https://github.com/ClickHouse/ClickBench/

  • Show HN: Stanchion – Column-oriented tables in SQLite
    3 projects | news.ycombinator.com | 31 Jan 2024
    Interesting project! Thank you for open sourcing and sharing. Agree that local and embedded analytics are an increasing trend, I see it too.

    A couple of questions:

    * I’m curious what the difficulties were in the implementation. I suspect it is quite a challenge to implement this support in the current SQLite architecture, and I would be curious to know which parts were tricky and which design trade-offs you were faced with.

    * Aside from ease-of-use (install extension, no need for a separate analytical database system), I wonder if there are additional benefits users can anticipate resulting from a single system architecture vs running an embedded OLAP store like DuckDB or clickhouse-local / chdb side-by-side with SQLite? Do you anticipate performance or resource efficiency gains, for instance?

    * I am also curious what the main difficulty with bringing in a separate analytical database is, assuming it natively integrates with SQLite. I may be biased, but I doubt anything can approach the performance of native column-oriented systems, so I'm curious what the tipping point might be for using this extension vs using an embedded OLAP store in practice.

    Btw, would love for you or someone in the community to benchmark Stanchion in ClickBench and submit results! (https://github.com/ClickHouse/ClickBench/)

    Disclaimer: I work on ClickHouse.

  • ClickBench: A Benchmark for Analytical Databases
    1 project | news.ycombinator.com | 22 Jan 2024
  • DuckDB performance improvements with the latest release
    8 projects | news.ycombinator.com | 6 Nov 2023

What are some alternatives?

When comparing hydra and ClickBench you can also consider the following projects:

dynaconf - Configuration Management for Python ⚙

starrocks - StarRocks, a Linux Foundation project, is a next-generation sub-second MPP OLAP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics, and ad-hoc queries. InfoWorld’s 2023 BOSSIE Award for best open source software.

ConfigParser

duckdb - DuckDB is an in-process SQL OLAP Database Management System

python-dotenv - Reads key-value pairs from a .env file and can set them as environment variables. It helps in developing applications following the 12-factor principles.

ClickHouse - ClickHouse® is a free analytics DBMS for big data

python-decouple - Strict separation of config from code.

hosts - 🔒 Consolidating and extending hosts files from several well-curated sources. Optionally pick extensions for porn, social media, and other categories.

django-environ - Django-environ allows you to utilize 12factor inspired environment variables to configure your Django application.

TablePlus - TablePlus macOS issue tracker

classyconf - Declarative and extensible library for configuration & code separation

clickhouse-bulk - Collects many small inserts to ClickHouse and send in big inserts