| | delta | Redash |
|---|---|---|
| Mentions | 69 | 38 |
| Stars | 6,897 | 24,948 |
| Growth (stars, month over month) | 1.3% | 0.6% |
| Activity | 9.8 | 9.5 |
| Latest commit | 5 days ago | 6 days ago |
| Language | Scala | Python |
| License | Apache License 2.0 | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
delta
-
Delta Lake vs. Parquet: A Comparison
Delta is pretty great; it lets you do upserts into tables in Databricks much more easily than you could without it.
I think the website is here: https://delta.io
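For anyone curious what that looks like in practice, here is a minimal PySpark sketch of a Delta upsert via the MERGE API; the table path, schema, and session configs are assumptions (on Databricks the session is already configured for you), and it needs the delta-spark package installed.

```python
# Minimal sketch of a Delta Lake upsert (MERGE) from PySpark.
# Table path and schema are hypothetical; configs are only needed outside Databricks.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# New rows to merge into the existing table (hypothetical schema: id, value).
updates = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

target = DeltaTable.forPath(spark, "/mnt/tables/events")  # hypothetical path

(target.alias("t")
 .merge(updates.alias("u"), "t.id = u.id")
 .whenMatchedUpdateAll()      # update rows that already exist
 .whenNotMatchedInsertAll()   # insert rows that don't
 .execute())
```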
-
Understanding Parquet, Iceberg and Data Lakehouses
I often hear references to Apache Iceberg and Delta Lake as if they’re two peas in the Open Table Formats pod. Yet…
Here’s the Apache Iceberg table format specification:
https://iceberg.apache.org/spec/
As they like to say in patent law, anyone “skilled in the art” of database systems could use this to build and query Iceberg tables without too much difficulty.
This is nominally the Delta Lake equivalent:
https://github.com/delta-io/delta/blob/master/PROTOCOL.md
I defy anyone to even scope out what level of effort would be required to fully implement the current spec, let alone what would be involved in keeping up to date as this beast evolves.
Frankly, the Delta Lake spec reads like a reverse engineering of whatever implementation tradeoffs Databricks is making as they race to build out a lakehouse for every Fortune 1000 company burned by Hadoop (which is to say, most of them).
My point is that I’ve yet to be convinced that buying into Delta Lake is actually buying into an open ecosystem. Would appreciate any reassurance on this front!
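To make the comparison a little more concrete, here is a minimal sketch (plain Python, no Spark) that walks a Delta table's `_delta_log` and prints the actions each commit records, which is the part of the PROTOCOL.md linked above that is easiest to grasp. The table path is a placeholder, and a real reader would also have to handle Parquet checkpoints and the many optional features the spec defines.

```python
# Sketch: inspect the newline-delimited JSON commit files in a Delta table's _delta_log.
# The path is hypothetical; a full reader must also handle checkpoints and optional features.
import json
from pathlib import Path

log_dir = Path("/data/events/_delta_log")  # hypothetical table location

for commit in sorted(log_dir.glob("*.json")):
    print(f"--- {commit.name} ---")
    for line in commit.read_text().splitlines():
        action = json.loads(line)
        # Each line is one action: commitInfo, protocol, metaData, add, remove, ...
        kind = next(iter(action))
        if kind == "add":
            print("  add   ", action["add"]["path"])
        elif kind == "remove":
            print("  remove", action["remove"]["path"])
        else:
            print("  ", kind)
```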
-
Getting Started with Flink SQL, Apache Iceberg and DynamoDB Catalog
Apache Iceberg is one of the three main lakehouse table formats; the other two are Apache Hudi and Delta Lake.
-
[D] Is there other better data format for LLM to generate structured data?
The Apache Spark / Databricks community prefers Apache Parquet or the Linux Foundation's delta.io over JSON.
-
Delta vs Iceberg: make love not war
Delta 3.0 extends an olive branch. https://github.com/delta-io/delta/releases/tag/v3.0.0rc1
-
Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
Databricks provides JupyterLab-like notebooks for analysis and ETL pipelines on Spark through PySpark, Spark SQL or Scala. I think R is supported as well, but it doesn't interoperate with their newer features as well as Python and SQL do. It interfaces with cloud storage backends like S3 and, through https://delta.io , adds improvements on top of the Parquet format that allow for updating, ordering and merging data. They integrate pretty seamlessly with other data visualisation tooling if you want to use it for that, but the built-in graphs are fine for most cases. They also have an ML-on-rails type of experience through menus and managed models, if I recall, but I typically don't use it for that. I've typically used it for ETL or ELT type workflows on data that's too big for, or isn't stored in, a database.
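A rough sketch of the "updating and ordering" part, as it might look in a Databricks notebook where `spark` already exists; the paths, table, and column names are placeholders, and `OPTIMIZE ... ZORDER BY` assumes a reasonably recent Delta version.

```python
# Sketch: turn a plain Parquet dataset into a Delta table, then update and Z-order it.
# Paths and column names are hypothetical; run in a notebook where `spark` exists.
df = spark.read.parquet("/mnt/raw/orders")                     # plain Parquet input
df.write.format("delta").mode("overwrite").save("/mnt/curated/orders")

# In-place row updates, which plain Parquet files don't support.
spark.sql("UPDATE delta.`/mnt/curated/orders` "
          "SET status = 'shipped' WHERE status = 'pending'")

# Physically cluster the files by a frequently filtered column.
spark.sql("OPTIMIZE delta.`/mnt/curated/orders` ZORDER BY (customer_id)")
```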
-
The "Big Three's" Data Storage Offerings
Structured, Semi-structured and Unstructured can be stored in one single format, a lakehouse storage format like Delta, Iceberg or Hudi (assuming those don't require low-latency SLAs like subsecond).
-
Ideas/Suggestions around setting up a data pipeline from scratch
As the data source, what I have is a gRPC stream; I get data from it in protobuf-encoded form. This is a fixed part of the overall system, and there is no other way to extract the data. We plan to ingest this data into Delta Lake, but before we do that there are a few problems.
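One hedged way to sketch that ingestion: decode the stream client-side and append micro-batches to a Delta table. The gRPC stub, message fields, table path, and batch size below are all hypothetical placeholders; the real schema, checkpointing, and error handling would depend on the proto definitions.

```python
# Hedged sketch: land a protobuf gRPC stream in a Delta table by buffering decoded
# messages and appending them in micro-batches. Stub, fields, and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes delta-spark is configured

TABLE_PATH = "/lake/bronze/events"  # hypothetical Delta table location
BATCH_SIZE = 10_000

def flush(rows):
    if rows:
        df = spark.createDataFrame(rows, ["device_id", "ts", "payload"])
        df.write.format("delta").mode("append").save(TABLE_PATH)

def ingest(stub, request):
    """Consume a hypothetical server-streaming RPC and append micro-batches to Delta."""
    buffer = []
    for msg in stub.StreamEvents(request):  # placeholder for the real generated stub call
        buffer.append((msg.device_id, msg.ts, msg.payload))
        if len(buffer) >= BATCH_SIZE:
            flush(buffer)
            buffer = []
    flush(buffer)  # write whatever is left when the stream ends
```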
-
Medallion/lakehouse architecture data modelling
Take a look at Delta Lake https://delta.io, it enables a lot of database-like actions on files
-
CSV or Parquet File Format
I prefer Parquet (or Delta for larger datasets), and CSV for very small datasets or ones that will later be used/edited in Excel or Google Sheets.
Redash
- Redash: Connect to data source, easily visualize, dashboard and share your data
- FLaNK Stack 26 February 2024
- Contributing to Open Source projects
-
Auto reloading Odoo with Docker
It seems like there may be an issue with Watchdog on Apple Silicon.
-
Tool or service for querying and exposing database through API
I am looking for a service or tool similar to Metabase or Redash that allows me to add a data source - for example a Postgres connection - and create raw SQL queries that can be shared or exposed through an API. So instead of keeping raw SQL code somewhere, my other service would call this tool, e.g. http://microservice/query=1?param1=xx&page=2, and get the results from the DB. These calls are internal only and part of ETL processes, but of course authentication would be required.
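For what it's worth, Redash itself exposes saved query results over HTTP with API keys, which is close to what's described. Below is a hedged sketch of calling that from another service; the host, query id, and key are placeholders, and parameterized or forced refreshes go through a POST that may return a job to poll, so check Redash's Query Results API docs for the exact contract.

```python
# Hedged sketch: fetch the cached results of a saved Redash query from another service.
# Host, query id, and API key are placeholders; see Redash's Query Results API docs.
import requests

REDASH = "http://redash.internal"   # placeholder host
QUERY_ID = 1                        # placeholder saved-query id
API_KEY = "query-or-user-api-key"   # placeholder key

resp = requests.get(
    f"{REDASH}/api/queries/{QUERY_ID}/results.json",
    params={"api_key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
rows = resp.json()["query_result"]["data"]["rows"]
print(rows[:5])
```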
-
A PostgreSQL Docker container that automatically upgrades PostgreSQL
Yeah, a lot of the time I'd agree with you.
This container came about for the Redash project (https://github.com/getredash/redash), which has been stuck on PostgreSQL 9.5 (!) for years.
Moving to a new PostgreSQL container version is easy enough for new installations, but rolling that kind of change out to an existing userbase isn't so pretty.
For people familiar with the command line, PostgreSQL, and Docker, it's no problem.
But a large number of Redash deployments seem to have been done by people not skilled in those things. "We deployed it from the Digital Ocean droplet / AWS image / etc!"
For those situations, something that takes care of the database upgrade process automatically is the better approach. :)
-
Did anyone try Openblocks for multi-tenant client reporting?
I have tried Metabase and Redash before (both self-hosted open-source versions); from my experience I find Metabase a bit easier to work with.
-
Best apps for transitioning from Spreadsheets to SQLite?
Regarding visualization tools, sqliteviz has proven to be the best I've found so far. Their web app runs locally but has some trackers, so I run it locally via a simple, static HTTP server. Falcon and Redash seem like overkill for my needs.
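If it helps anyone doing the same, here is a minimal sketch of that "simple, static HTTP server" using only the Python standard library; the directory name is a placeholder for wherever the sqliteviz build lives.

```python
# Serve a local static build (e.g. sqliteviz) on localhost only, with no external requests.
# The directory path is a placeholder.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="./sqliteviz-dist")
HTTPServer(("127.0.0.1", 8000), handler).serve_forever()
# Equivalent one-liner: python -m http.server 8000 --directory ./sqliteviz-dist
```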
-
Chartbrew – create live reporting dashboards from APIs, MongoDB, Firestore, etc.
Redash seems to be dead or at least in hibernation. There hasn't been a release in over a year.
https://github.com/getredash/redash/issues/5891
-
Real Time Data Infra Stack
redash
What are some alternatives?
dvc - 🦉 ML Experiments and Data Management with Git
Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]
Apache Cassandra - Mirror of Apache Cassandra
Metabase - The simplest, fastest way to get business intelligence and analytics to everyone in your company :yum:
lakeFS - Data version control for your data lake | Git for data
plotly - The interactive graphing library for Python :sparkles: This project now includes Plotly Express!
hudi - Upserts, Deletes And Incremental Processing on Big Data.
cube.js - 📊 Cube — The Semantic Layer for Building Data Applications
delta-rs - A native Rust library for Delta Lake, with bindings into Python
bokeh - Interactive Data Visualization in the browser, from Python
iceberg - Apache Iceberg
Druid - Apache Druid: a high performance real-time analytics database.