mara-pipelines vs dremio-oss
| | mara-pipelines | dremio-oss |
|---|---|---|
| Mentions | 3 | 8 |
| Stars | 2,054 | 1,301 |
| Growth | 0.4% | 1.4% |
| Activity | 6.0 | 4.0 |
| Latest commit | 5 months ago | 9 days ago |
| Language | Python | Java |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mara-pipelines
-
How to keep track of the different transformations done in an ETL pipeline?
The closest I've found is Mara, but it's not quite what I'm after.
-
Using PostgreSQL as a Data Warehouse
The tooling behind the approach has been built as a set of Python packages named Mara. It is available on GitHub:
https://github.com/mara/mara-pipelines
And additional packages can be found at the Mara org:
https://github.com/mara
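To give a flavor of the package: a Mara pipeline is declared as plain Python objects. A minimal sketch in the spirit of the mara-pipelines README (the IDs and the shell command are illustrative):

```python
from mara_pipelines.commands.bash import RunBash
from mara_pipelines.pipelines import Pipeline, Task

# a pipeline is a plain Python object composed of tasks,
# each of which runs one or more commands
pipeline = Pipeline(
    id='demo',
    description='A minimal demo pipeline')

pipeline.add(Task(
    id='ping_localhost',
    description='Pings localhost',
    commands=[RunBash('ping -c 3 localhost')]))

# pipelines are then executed and monitored through Mara's web UI or CLI
```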
-
Build your own “data lake” for reporting purposes
Minio and NiFi require machines of their own. You're better off with pure Python, and if one wants something lightweight and visually pleasing, Mara [0] or Dagster with Dagit [1] will do the job (see the sketch after the links).
[0] https://github.com/mara/mara-pipelines
[1] https://docs.dagster.io/tutorial/execute
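For comparison, a lightweight Dagster job of the kind the Dagit tutorial above walks through can be sketched like this (the op and job names are made up; this is a sketch, not the tutorial's exact code):

```python
from dagster import job, op

@op
def extract():
    # stand-in for reading from a source system
    return [1, 2, 3]

@op
def load(numbers):
    # stand-in for writing to a reporting store
    print(sum(numbers))

@job
def reporting_etl():
    load(extract())

if __name__ == "__main__":
    # runs in-process; Dagit adds the visual UI on top of the same job
    reporting_etl.execute_in_process()
```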
dremio-oss
-
What is the separation of storage and compute in data platforms and why does it matter?
Dremio - Dremio is a data lakehouse based on the open-source Apache Iceberg table format. It offers different compute instances to process data that lives in your S3 bucket. You pay for S3 storage independently.
-
What is the Dremio query engine?
Dremio core is actually fully open source: https://github.com/dremio/dremio-oss
-
Q – Run SQL Directly on CSV or TSV Files
I have been using Dremio to query large volumes of CSV files: https://docs.dremio.com/software/data-sources/files-and-dire...
Although having them in a columnar format is much better for fast responses.
GitHub: https://github.com/dremio/dremio-oss
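Dremio also exposes an Arrow Flight endpoint, so queries like the CSV case above can be issued straight from Python. A rough sketch, assuming a local Dremio on its default Flight port 32010 and a hypothetical source path:

```python
import pyarrow.flight as flight

# assumes a local Dremio; 32010 is its default Arrow Flight port
client = flight.FlightClient("grpc+tcp://localhost:32010")
token = client.authenticate_basic_token("user", "password")
options = flight.FlightCallOptions(headers=[token])

# hypothetical source/path; Dremio can query raw CSV files as tables
query = 'SELECT * FROM "lake"."raw"."events.csv" LIMIT 10'
info = client.get_flight_info(flight.FlightDescriptor.for_command(query), options)
reader = client.do_get(info.endpoints[0].ticket, options)
print(reader.read_all().to_pandas())
```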
-
Hands-On Introduction to Apache Iceberg - Data Lakehouse Engineering
As a Developer Advocate for Dremio, I spend a lot of time researching technology and best practices around engineering Data Lakehouses and sharing what I learn through content for Subsurface - The Data Lakehouse Community. One of the major topics I've been diving deep into is Data Lakehouse Table Formats, which allow you to take the files on your data lake and group them into tables that data processing engines like Dremio can operate on.
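As a concrete illustration of what a table format buys you, a library like pyiceberg can resolve such a table through a catalog and read exactly the files that belong to it. A minimal sketch, assuming a catalog named "default" is configured and a hypothetical analytics.events table exists:

```python
from pyiceberg.catalog import load_catalog

# assumes a catalog named "default" in ~/.pyiceberg.yaml
catalog = load_catalog("default")

# the table format tracks which data files on the lake belong to the table,
# so engines like Dremio (or this client) all see one consistent table
table = catalog.load_table("analytics.events")  # hypothetical namespace.table
print(table.scan().to_arrow())
```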
-
Introduction to The World of Data - (OLTP, OLAP, Data Warehouses, Data Lakes and more)
Hearing about all these components sounds great, but what everyone wants isn't to set up and configure all of them themselves; they want a platform that brings this all together in an easy-to-use package, and that platform is Dremio. With Dremio you can work with the data directly from your data lake. No copies, easy access, high performance.
-
Data Lakehouse and Delta Lake
And as u/pych_phd said, it's not just Databricks, Snowflake, and Azure making these claims; even AWS, GCP, Dremio, and I'm sure many others are too.
-
Data Science Competition
Dremio
-
Build your own “data lake” for reporting purposes
For my home projects I generate parquet files (columnar and very well suited for DW-like queries) with pyarrow and use https://github.com/dremio/dremio-oss (https://www.dremio.com/on-prem/) to query them on the lake (minio, local disk, or s3), with Apache Superset for quick charts or dashboards.
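The parquet-generation half of that setup is only a few lines of pyarrow. A minimal sketch with made-up columns:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# columnar parquet suits DW-style scans much better than row-oriented CSV
table = pa.table({
    "event_id": [1, 2, 3],
    "amount": [9.99, 14.50, 3.25],
})
pq.write_table(table, "events.parquet")  # drop on local disk, minio, or s3
```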
What are some alternatives?
abcd-hcp-pipeline - BIDS application for processing functional MRI data, robust to scanner, acquisition and age variability.
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
kuwala - Kuwala is the no-code data platform for BI analysts and engineers, enabling you to build powerful analytics workflows. We set out to bring the state-of-the-art data engineering tools you love, such as Airbyte, dbt, or Great Expectations, together in one intuitive interface built with React Flow. In addition, we provide third-party data for data science models and products, with a focus on geospatial data. Currently, the following data connectors are available worldwide: a) high-resolution demographics data b) points of interest from OpenStreetMap c) Google Popular Times
presto - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io) [Moved to: https://github.com/trinodb/trino]
pybaseball - Pull current and historical baseball statistics using Python (Statcast, Baseball Reference, FanGraphs)
ClickHouse - ClickHouse® is a free analytics DBMS for big data
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
Greenplum - Greenplum Database - Massively Parallel PostgreSQL for Analytics. An open-source massively parallel data platform for analytics, machine learning and AI.
etl-markup-toolkit - ETL Markup Toolkit is a spark-native tool for expressing ETL transformations as configuration
Grafana - The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
Rakam - 📈 Collect customer event data from your apps. (Note that this project only includes the API collector, not the visualization platform)