sqlite-parquet-vtable VS ClickHouse

Compare sqlite-parquet-vtable vs ClickHouse and see how they differ.

              sqlite-parquet-vtable   ClickHouse
Mentions      4                       208
Stars         261                     34,269
Growth        -                       1.6%
Activity      10.0                    10.0
Last commit   almost 3 years ago      1 day ago
Language      C++                     C++
License       Apache License 2.0      Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

sqlite-parquet-vtable

Posts with mentions or reviews of sqlite-parquet-vtable. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-24.
  • Universal Database
    1 project | /r/learnpython | 10 Jul 2022
    SQLite has a Parquet extension (https://github.com/cldellow/sqlite-parquet-vtable) that exposes Parquet files as virtual tables. I use sqlite3 a lot, for work and personally. It's really good, but I do have issues with large datasets, mainly due to VACUUM operations. The insertion rate drops significantly once a single table hits around 20M rows. Indexing is important for your query speed, but it will impact your write speed.
  • Show HN: Easily Convert WARC (Web Archive) into Parquet, Then Query with DuckDB
    3 projects | news.ycombinator.com | 24 Jun 2022
    Well there's a virtual table extension to read parquet files in SQLite. I've not tried it myself. https://github.com/cldellow/sqlite-parquet-vtable
  • One-liner for running queries against CSV files with SQLite
    20 projects | news.ycombinator.com | 21 Jun 2022
    /? sqlite arrow

    - "Comparing SQLite, DuckDB and Arrow with UN trade data" https://news.ycombinator.com/item?id=29010103 ; partial benchmarks of query time and RAM requirements [relative to data size] would be

    - https://arrow.apache.org/blog/2022/02/16/introducing-arrow-f... :

    > Motivation: While standards like JDBC and ODBC have served users well for decades, they fall short for databases and clients which wish to use Apache Arrow or columnar data in general. Row-based APIs like JDBC or PEP 249 require transposing data in this case, and for a database which is itself columnar, this means that data has to be transposed twice—once to present it in rows for the API, and once to get it back into columns for the consumer. Meanwhile, while APIs like ODBC do provide bulk access to result buffers, this data must still be copied into Arrow arrays for use with the broader Arrow ecosystem, as implemented by projects like Turbodbc. Flight SQL aims to get rid of these intermediate steps.

    > - One cannot create a trigger on a virtual table.

    Just posted about eBPF a few days ago; opcodes have costs that may or may not be accounted for: https://news.ycombinator.com/item?id=31688180

    > - One cannot create additional indices on a virtual table. (Virtual tables can have indices but that must be built into the virtual table implementation. Indices cannot be added separately using CREATE INDEX statements.)

    It looks like e.g. sqlite-parquet-vtable implements shadow tables to memoize row group filters. How does JOIN performance vary amongst sqlite virtual table implementations?

    > - One cannot run ALTER TABLE ... ADD COLUMN commands against a virtual table.

    Are there URIs in the schema? Mustn't there thus be a meta-schema that does e.g. nested structs with portable types [with URIs], (and jsonschema, [and W3C SHACL])?

    /? sqlite arrow virtual table

    - sqlite-parquet-vtable reads Parquet files via Arrow and exposes them as SQLite virtual tables (a fuller usage sketch follows this list) https://github.com/cldellow/sqlite-parquet-vtable :

      $ sqlite/sqlite3
  • Show HN: WarcDB: Web crawl data as SQLite databases
    3 projects | news.ycombinator.com | 19 Jun 2022
    https://github.com/cldellow/sqlite-parquet-vtable

    But for my use case a virtual table would be too complicated.
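
A note on the truncated "$ sqlite/sqlite3" snippet quoted above: the following is a minimal sketch of how a loadable extension like sqlite-parquet-vtable is typically used, assuming the extension has been built as ./libparquet.so, that the Parquet file path is a placeholder, and that your SQLite/Python build allows loading extensions.

    import sqlite3

    # Open a database and temporarily allow loading native extensions.
    conn = sqlite3.connect(":memory:")
    conn.enable_load_extension(True)
    conn.load_extension("./libparquet.so")  # path to the built extension (placeholder)
    conn.enable_load_extension(False)

    # Expose a Parquet file as a virtual table and query it with ordinary SQL.
    conn.execute("CREATE VIRTUAL TABLE demo USING parquet('./data/example.parquet')")
    for row in conn.execute("SELECT COUNT(*) FROM demo"):
        print(row)

The CREATE VIRTUAL TABLE ... USING parquet('...') form follows the project's README; the same statements work from the sqlite3 shell shown above after a .load of the extension.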

ClickHouse

Posts with mentions or reviews of ClickHouse. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-24.
  • We Built a 19 PiB Logging Platform with ClickHouse and Saved Millions
    1 project | news.ycombinator.com | 2 Apr 2024
    Yes, we are working on it! :) Taking some of the learnings from the current experimental JSON Object datatype, we are now working on what will become the production-ready implementation. Details here: https://github.com/ClickHouse/ClickHouse/issues/54864

    The Variant datatype is already available as experimental in 24.1, the Dynamic datatype is WIP (PR almost ready), and the JSON datatype is next up. Check out the latest comment on that issue for how the Dynamic datatype will work: https://github.com/ClickHouse/ClickHouse/issues/54864#issuec...

  • Build time is a collective responsibility
    2 projects | news.ycombinator.com | 24 Mar 2024
    In our repository, I've set up a few hard limits: each translation unit cannot use more than a certain amount of memory or CPU time during compilation, and the compiled binary has to be no larger than a certain size.

    When these limits are reached, the CI stops working, and we have to remove the bloat: https://github.com/ClickHouse/ClickHouse/issues/61121

    Although these limits are too generous as of today: for example, the maximum CPU time to compile a translation unit is set to 1000 seconds, and the memory limit is 5 GB, which is ridiculously high.

  • Fair Benchmarking Considered Difficult (2018) [pdf]
    2 projects | news.ycombinator.com | 10 Mar 2024
    I have a project dedicated to this topic: https://github.com/ClickHouse/ClickBench

    It is important to explain the limitations of a benchmark, provide a methodology, and make it reproducible. It also has to be simple enough, otherwise it will not be realistic to include a large number of participants.

    I'm also collecting all database benchmarks I could find: https://github.com/ClickHouse/ClickHouse/issues/22398

  • How to choose the right type of database
    15 projects | dev.to | 28 Feb 2024
    ClickHouse: A fast open-source column-oriented database management system. ClickHouse is designed for real-time analytics on large datasets and excels in high-speed data insertion and querying, making it ideal for real-time monitoring and reporting.
  • Writing UDF for Clickhouse using Golang
    2 projects | dev.to | 27 Feb 2024
    Today we're going to create a UDF (user-defined function) in Golang that can be run inside a ClickHouse query. This function will parse a UUID v1 and return its timestamp, since ClickHouse doesn't have such a function for now. It is inspired by the Python version and uses the TabSeparated delimiter (since it's the easiest to parse): a UDF in ClickHouse reads its input line by line, where each row is one line and each tab-separated field is one column/cell value. A sketch of this line-oriented protocol appears after this list.
  • The 2024 Web Hosting Report
    37 projects | dev.to | 20 Feb 2024
    For the third, examples here might be analytics plugins in specialized databases like Clickhouse, data-transformations in places like your ETL pipeline using Airflow or Fivetran, or special integrations in your authentication workflow with Auth0 hooks and rules.
  • Choosing Between a Streaming Database and a Stream Processing Framework in Python
    10 projects | dev.to | 10 Feb 2024
    Online analytical processing (OLAP) databases like Apache Druid, Apache Pinot, and ClickHouse shine at user-initiated analytical queries. For example, you might query historical data to find the most-clicked products over the past month (see the query sketch after this list). In contrast with streaming databases, they may not be optimized for incremental computation, which makes it harder to keep results fresh. A query in a streaming database focuses on recent data, making it suitable for continuous monitoring; using streaming databases, you can run queries like finding the top 10 sold products, where the "top 10 product list" might change in real time.
  • Proton, a fast and lightweight alternative to Apache Flink
    7 projects | news.ycombinator.com | 30 Jan 2024
    Proton is a lightweight streaming processing "add-on" for ClickHouse, and we are making these delta parts as standalone as possible. Meanwhile contributing back to the ClickHouse community can also help a lot.

    Please check this PR from the proton team: https://github.com/ClickHouse/ClickHouse/pull/54870

  • 1 billion rows challenge in PostgreSQL and ClickHouse
    1 project | dev.to | 18 Jan 2024
    curl https://clickhouse.com/ | sh
  • We Executed a Critical Supply Chain Attack on PyTorch
    6 projects | news.ycombinator.com | 14 Jan 2024
    But I continue to find garbage in some of our CI scripts.

    Here is an example: https://github.com/ClickHouse/ClickHouse/pull/58794/files

    The right way is to:

    - always pin versions of all packages;
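
On the "Writing UDF for Clickhouse using Golang" post above: ClickHouse executable UDFs communicate with the server over stdin/stdout and are registered in the server's user-defined-function configuration; with the TabSeparated format, each row arrives as one line with tab-separated columns. Below is a minimal sketch of that protocol in Python rather than Go, assuming a single UUID-string argument per row (the output format is also an assumption for illustration).

    #!/usr/bin/env python3
    # Sketch of an executable UDF: read one UUID v1 string per line (TabSeparated)
    # and write back the timestamp embedded in it, one result per input line.
    import sys
    import uuid
    from datetime import datetime, timezone

    # UUID v1 timestamps count 100-ns intervals since 1582-10-15;
    # this constant shifts them to the Unix epoch (1970-01-01).
    GREGORIAN_TO_UNIX_100NS = 0x01B21DD213814000

    def uuid1_timestamp(value: str) -> datetime:
        unix_100ns = uuid.UUID(value).time - GREGORIAN_TO_UNIX_100NS
        return datetime.fromtimestamp(unix_100ns / 10_000_000, tz=timezone.utc)

    for line in sys.stdin:
        print(uuid1_timestamp(line.strip()).isoformat())
        sys.stdout.flush()  # emit each result as soon as its row is processed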
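
And on the OLAP point in "Choosing Between a Streaming Database and a Stream Processing Framework in Python": a user-initiated analytical query of the kind described there might look like the sketch below, using the clickhouse-connect client; the host, table, and column names are assumptions for illustration.

    import clickhouse_connect  # pip install clickhouse-connect

    client = clickhouse_connect.get_client(host="localhost", port=8123)

    # Most-clicked products over the past month, computed on demand.
    result = client.query(
        """
        SELECT product_id, count() AS clicks
        FROM events
        WHERE event_time >= now() - INTERVAL 1 MONTH
        GROUP BY product_id
        ORDER BY clicks DESC
        LIMIT 10
        """
    )
    for product_id, clicks in result.result_rows:
        print(product_id, clicks)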

What are some alternatives?

When comparing sqlite-parquet-vtable and ClickHouse you can also consider the following projects:

duckdb - DuckDB is an in-process SQL OLAP Database Management System

loki - Like Prometheus, but for logs.

WarcDB - WarcDB: Web crawl data as SQLite databases.

zsv - zsv+lib: tabular data swiss-army knife CLI + world's fastest (simd) CSV parser

Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)

sqlite_protobuf - A SQLite extension for extracting values from serialized Protobuf messages

VictoriaMetrics - VictoriaMetrics: fast, cost-effective monitoring solution and time series database

visidata - A terminal spreadsheet multitool for discovering and arranging data

TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.

datasette - An open source multi-tool for exploring and publishing data

datafusion - Apache DataFusion SQL Query Engine