materialize
octosql
| | materialize | octosql |
|---|---|---|
| Mentions | 116 | 34 |
| Stars | 5,543 | 4,683 |
| Growth | 1.1% | - |
| Activity | 10.0 | 4.3 |
| Last commit | 2 days ago | 6 months ago |
| Language | Rust | Go |
| License | GNU General Public License v3.0 or later | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
materialize
-
Choosing Between a Streaming Database and a Stream Processing Framework in Python
To fully leverage the "data is the new oil" concept, companies need databases designed to manage vast amounts of data instantly. This need has produced a variety of database types, including NoSQL databases, vector databases, time-series databases, graph databases, in-memory databases, and in-memory data grids. Recent years have also seen the rise of cloud-based streaming databases such as RisingWave, Materialize, DeltaStream, and TimePlus. While each takes a distinct commercial and technical approach, their overarching goal is the same: to offer users cloud-based streaming database services.
-
Proton, a fast and lightweight alternative to Apache Flink
> Materialize no longer provides its latest code as open-source software that you can download and try. It moved from a single-binary design to a cloud-only microservice architecture
Materialize CTO here. Just wanted to clarify that Materialize has always been source available, not OSS. Since our initial release in 2020, we've been licensed under the Business Source License (BSL), like MariaDB and CockroachDB. Under the BSL, each release does eventually transition to Apache 2.0, four years after its initial release.
Our core codebase is absolutely still publicly available on GitHub [0], and our developer guide for building and running Materialize on your own machine is still public [1].
It is true that we substantially rearchitected Materialize in 2022 to be more "cloud-native". Our new cloud offering provides horizontal scalability and fault tolerance—our two most requested features in the single-binary days. I wouldn't call the new architecture a microservices design though! There are only 2-3 services, each quite substantial, in the new architecture (loosely: a compute service, an orchestration service, and, soon, a load balancing service).
We do push folks to sign up for a free trial of our hosted cloud offering [2] these days, rather than starting off by running things locally, as we generally want folks' first impression of Materialize to be of the version that we support for production use cases. An all-in-one single-machine Docker image does still exist, if you know where to look, but it's very much use-at-your-own-risk and we don't recommend it for anything serious; it's there to support e.g. academic work that wants to evaluate Materialize's capabilities to incrementally maintain recursive SQL queries.
If folks have questions about Materialize, we've got a lively community Slack [3] where you can connect directly with our product and engineering teams.
[0]: https://github.com/MaterializeInc/materialize/tree/main
-
What I Talk About When I Talk About Query Optimizer (Part 1): IR Design
> the Query Graph Model (QGM) representation is quite abstract and hardcodes many properties, making it exceptionally difficult to understand. Its claimed extensibility is also questionable.
I don't know much about the context, but it was interesting to note that Materialize scrapped their QGM code last year: https://github.com/MaterializeInc/materialize/pull/17139
Also, a couple of interesting projects in the IR space:
- https://substrait.io/ is a cross-language serialization for Relational Algebra
- https://www.lingo-db.com/ is an MLIR-based query engine described extensively in this paper https://db.in.tum.de/~jungmair/papers/p2485-jungmair.pdf?lan...
-
We Built a Streaming SQL Engine
Some recent solutions to this problem include Differential Dataflow and Materialize. It would be neat if postgres adopted something similar for live-updating materialized views.
-
Ask HN: Who is hiring? (October 2023)
Materialize | Full-Time | NYC Office or Remote | https://materialize.com
Materialize is an Operational Data Warehouse: a cloud data warehouse with streaming internals, built for work that needs to act on what's happening right now. Keep the familiar SQL, keep the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date.
Materialize is the operational data warehouse built from the ground up to meet the needs of modern data products: Fresh, Correct, Scalable — all in a familiar SQL UI.
Senior/Staff Product Manager - https://grnh.se/69754ebf4us
Senior Frontend Engineer - https://grnh.se/7010bdb64us
===
Investors include Redpoint, Lightspeed and Kleiner Perkins.
-
Ask HN: Who is hiring? (June 2023)
Materialize | EM (Compute), Senior PM | New York, New York | https://materialize.com/
You shouldn't have to throw away the database to build with fast-changing data. Keep the familiar SQL, keep the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date.
That is Materialize, the only true SQL streaming database built from the ground up to meet the needs of modern data products: Fresh, Correct, Scalable — all in a familiar SQL UI.
Engineering Manager, Compute - https://grnh.se/4e14099f4us
Senior Product Manager - https://grnh.se/587c36804us
VP of Marketing - https://grnh.se/9caac4b04us
- What are your favorite tools or components in the Kafka ecosystem?
- Ask HN: Who is hiring? (May 2023)
octosql
-
Wazero: Zero dependency WebAssembly runtime written in Go
Never got it to anything close to a finished state, instead moving on to doing the same prototype in llvm and then cranelift.
That said, here's some of the wazero-based code on a branch - https://github.com/cube2222/octosql/tree/wasm-experiment/was...
It really is just a very very basic prototype.
- Analyzing multi-gigabyte JSON files locally
-
DuckDB: Querying JSON files as if they were tables
This is really cool!
With their Postgres scanner[0] you can now easily query multiple datasources using SQL and join between them (i.e. Postgres table with JSON file). Something I strived to build with OctoSQL[1] before.
It's amazing to see how quickly DuckDB is adding new features.
Not a huge fan of C++, which is currently used for authoring extensions; it'd be really cool if somebody implemented a Rust extension SDK, or even something like what Steampipe[2] does for Postgres FDWs, which would provide a shim for quickly implementing non-performance-sensitive extensions for various things.
Godspeed!
[0]: https://duckdb.org/2022/09/30/postgres-scanner.html
[1]: https://github.com/cube2222/octosql
[2]: https://steampipe.io
-
Show HN: ClickHouse-local – a small tool for serverless data analytics
Congrats on the Show HN!
It's great to see more tools in this area (querying data from various sources in-place) and the Lambda use case is a really cool idea!
I've recently done a bunch of benchmarking, including ClickHouse Local and the usage was straightforward, with everything working as it's supposed to.
Just to comment on performance though: one area where ClickHouse could still improve - vs OctoSQL[0] at least - is the JSON datasource, which seems slower, especially when only a small part of each JSON object is used. If only a single field of many is needed, OctoSQL lazily parses just that field and skips the others, which yields non-trivial performance gains on big JSON files with small queries.
Basically, for a query like `SELECT COUNT(*), AVG(overall) FROM books.json` with the Amazon Review Dataset, OctoSQL is twice as fast (3s vs 6s). That's a minor thing though (OctoSQL will slow down for more complicated queries, while for ClickHouse decoding the input remains the bottleneck).
-
Steampipe – Select * from Cloud;
To add somewhat of a counterpoint to the other response, I've tried the Steampipe CSV plugin and got 50x slower performance vs OctoSQL[0], which is itself 5x slower than something like DataFusion[1]. The CSV plugin doesn't contact any external APIs, so it should be a good benchmark of the plugin architecture, though it might just not be optimized yet.
That said, I don't imagine this ever being a bottleneck for the main use case of Steampipe - in that case I think the APIs themselves will always be the limiting part. But it does - potentially - speak to what you can expect if you'd like to extend your usage of Steampipe to more than just DevOps data.
[0]: https://github.com/cube2222/octosql
[1]: https://github.com/apache/arrow-datafusion
Disclaimer: author of OctoSQL
-
Go runtime: 4 years later
Actually, folks just use gRPC or Yaegi in Go.
See Terraform[0], Traefik[1], or OctoSQL[2].
I agree proper plugins would be welcome, though, especially for performance reasons, but also to be able to compile and load Go code into a running Go process (JIT-ish).
[0]: https://github.com/hashicorp/terraform
[1]: https://github.com/traefik/traefik
[2]: https://github.com/cube2222/octosql
Disclaimer: author of OctoSQL
- Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
-
Beginner interested in learning SQL. Have a few question that I wasn’t able to find on google.
Through more magic, you COULD of course use stuff like Spark, or, more easily, programs like TextQL, sq, or OctoSQL.
-
How I Used DALL·E 2 to Generate the Logo for OctoSQL
Hey, author here, happy to answer any questions!
The logo was created for OctoSQL[0] and in the article you can find a lot of sample phrase-image combinations, as it describes the whole path (generation, variation, editing) I went down. Let me know what you think!
- A SQLite extension for reading large files line-by-line
What are some alternatives?
ClickHouse - ClickHouse® is a free analytics DBMS for big data
risingwave - Scalable Postgres for stream processing, analytics, and management. KsqlDB and Apache Flink alternative. 🚀 10x more productive. 🚀 10x more cost-efficient.
duckdb - DuckDB is an in-process SQL OLAP Database Management System
openpilot - openpilot is an open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for 250+ supported car makes and models.
rust-kafka-101 - Getting started with Rust and Kafka
dbt-expectations - Port(ish) of Great Expectations to dbt test macros
q - Run SQL directly on delimited files and multi-file sqlite databases
scryer-prolog - A modern Prolog implementation written mostly in Rust.
roapi - Create full-fledged APIs for slowly moving datasets without writing a single line of code.
readyset - Readyset is a MySQL and Postgres wire-compatible caching layer that sits in front of existing databases to speed up queries and horizontally scale read throughput. Under the hood, ReadySet caches the results of select statements and incrementally updates these results over time as the underlying data changes.
trdsql - CLI tool that can execute SQL queries on CSV, LTSV, JSON, YAML and TBLN. Can output to various formats.
delta-rs - A native Rust library for Delta Lake, with bindings into Python