dremio-oss vs Airflow

| | dremio-oss | Airflow |
|---|---|---|
| Mentions | 8 | 169 |
| Stars | 1,301 | 34,570 |
| Growth | 0.8% | 1.4% |
| Activity | 4.0 | 10.0 |
| Latest commit | 14 days ago | 3 days ago |
| Language | Java | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
dremio-oss
-
What is the separation of storage and compute in data platforms and why does it matter?
Dremio - Dremio is a data lakehouse based on the open-source Apache Iceberg table format. It offers different compute instances to process data that lives in your S3 bucket. You pay for S3 storage independently.
-
What is the Dremio query engine?
Dremio core is actually fully open source: https://github.com/dremio/dremio-oss
-
Q – Run SQL Directly on CSV or TSV Files
I have been using Dremio to query large volumes of CSV files: https://docs.dremio.com/software/data-sources/files-and-dire...
That said, having them in a columnar format is much better for fast responses.
GitHub: https://github.com/dremio/dremio-oss
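As a hedged illustration of that conversion, here is a minimal pyarrow sketch that reads a CSV and writes Parquet; the file names are placeholders, and any engine (Dremio included) can then scan the columnar output far more efficiently than the raw CSV:

```python
# Minimal sketch: convert a CSV file to Parquet with pyarrow.
# "data.csv" and "data.parquet" are hypothetical file names.
import pyarrow.csv as pv
import pyarrow.parquet as pq

table = pv.read_csv("data.csv")        # infers the schema from the file
pq.write_table(table, "data.parquet")  # columnar, compressed output
print(table.schema)                    # inspect the inferred column types
```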
-
Hands-On Introduction to Apache Iceberg - Data Lakehouse Engineering
As a Developer Advocate for Dremio, I spend a lot of time researching technology and best practices for engineering Data Lakehouses and sharing what I learn through content for Subsurface - The Data Lakehouse Community. One of the major topics I've been diving deep into is Data Lakehouse Table Formats, which allow you to take the files on your data lake and group them into tables that data processing engines like Dremio can operate on.
-
Introduction to The World of Data - (OLTP, OLAP, Data Warehouses, Data Lakes and more)
Hearing about all these components sounds great, but nobody wants to set up and configure each of them individually; what everyone wants is a platform and tool that brings it all together in an easy-to-use package, and that platform is Dremio. With Dremio you can work with the data directly from your data lake. No copies, easy access, high performance.
-
Data Lakehouse and Delta Lake
And as u/pych_phd said, it's not just Databricks, Snowflake, and Azure who make these claims; AWS, GCP, Dremio, and I'm sure many others do too.
-
Data Science Competition
Dremio
-
Build your own “data lake” for reporting purposes
For my home projects I generate Parquet files (columnar and very well suited to DW-like queries) with pyarrow, use https://github.com/dremio/dremio-oss (https://www.dremio.com/on-prem/) to query them on the lake (MinIO, local disk, or S3), and use Apache Superset for quick charts and dashboards.
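A minimal sketch of that write path, using made-up event data; `lake/events` stands in for whatever local, MinIO, or S3 path the query engine is pointed at:

```python
# Sketch: write a partitioned Parquet dataset to a local "lake" directory.
# The event data and paths are invented for illustration.
import pyarrow as pa
import pyarrow.parquet as pq

events = pa.table({
    "user_id": [1, 2, 1, 3],
    "event":   ["login", "login", "purchase", "login"],
    "amount":  [0.0, 0.0, 19.99, 0.0],
})

# Partitioning by a column keeps DW-style queries fast: engines can
# prune whole directories instead of scanning every file.
pq.write_to_dataset(events, root_path="lake/events", partition_cols=["event"])
```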
Airflow
-
Building in Public: Leveraging Tublian's AI Copilot for My Open Source Contributions
Contributing to Apache Airflow's open-source project immersed me in collaborative coding. Experienced maintainers rigorously reviewed my contributions, providing constructive feedback. This ongoing dialogue refined the codebase and honed my understanding of best practices.
-
Navigating Week Two: Insights and Experiences from My Tublian Internship Journey
In week two, I contributed to the Apache Airflow repository.
-
Airflow vs quix-streams - a user-suggested alternative
2 projects | 7 Dec 2023
-
Best ETL Tools And Why To Choose
Apache Airflow is an open-source platform to programmatically author, schedule, and monitor workflows. The platform features a web-based user interface and a command-line interface for managing and triggering workflows.
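To make "programmatically author" concrete, here is a minimal sketch of a two-task DAG, assuming a recent Airflow 2.x; the dag_id and task callables are invented for illustration:

```python
# Minimal sketch of an Airflow DAG: two Python tasks with a dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")

def load():
    print("loading into the warehouse")

with DAG(
    dag_id="etl_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # cron expressions are also accepted
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # declare the dependency; the web UI renders this graph
```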
-
Simplifying Data Transformation in Redshift: An Approach with DBT and Airflow
Airflow is the most widely used and well-known tool for orchestrating data workflows. It allows for efficient pipeline construction, scheduling, and monitoring.
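One common way to wire the two together is to shell out to the dbt CLI from a BashOperator; this is a hedged sketch assuming Airflow 2.x with dbt installed, and the project path and dag_id are placeholders:

```python
# Sketch: run dbt transformations as monitored, retryable Airflow tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="redshift_dbt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # /opt/dbt_project is an assumed location for the dbt project.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run --profiles-dir .",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test --profiles-dir .",
    )
    dbt_run >> dbt_test
```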
-
Share Your favorite python related software!
AIRFLOW This is more of a library in my opinion, but Airflow has become an essential tool for scheduling in my work. All our ML training pipelines are ordered and scheduled with Airflow and it works seamlessly. The dashboard provided is also fantastic!
-
Ask HN: What is the correct way to deal with pipelines?
I agree there are many options in this space. Two others to consider:
- https://airflow.apache.org/
- https://github.com/spotify/luigi
There are also many Kubernetes based options out there. For the specific use case you specified, you might even consider a plain old Makefile and incrond if you expect these all to run on a single host and be triggered by a new file showing up in a directory…
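For the single-host, file-triggered case, a minimal luigi sketch (with invented paths and a hypothetical ConvertFile task) shows the Make-like "skip if the output already exists" behavior these tools provide:

```python
# Sketch: a luigi task that processes a file that landed in incoming/.
# luigi re-runs a task only if its output() target is missing.
import luigi

class ConvertFile(luigi.Task):
    name = luigi.Parameter()  # e.g. the filename that just showed up

    def output(self):
        # The task is considered complete once this target exists.
        return luigi.LocalTarget(f"processed/{self.name}.done")

    def run(self):
        # Assumes incoming/<name> exists; writes a marker with a summary.
        with open(f"incoming/{self.name}") as src, self.output().open("w") as out:
            out.write(f"processed {len(src.read())} bytes\n")

if __name__ == "__main__":
    luigi.build([ConvertFile(name="orders.csv")], local_scheduler=True)
```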
- "Você veio protestar para ter acesso ao código fonte da urnas. O que é o código fonte?" "Não sei" 🤡
- Cómo construir tu propia data platform. From zero to hero.
-
Is it impossible to contribute to open source as a data engineer?
You could try contributing new connectors/operators for workflow managers like Airflow or Airbyte.
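To give a sense of what such a contribution looks like: an operator is usually just a BaseOperator subclass with an execute() method. This sketch is a toy, not an actual Airflow provider; the class name and its behavior are invented:

```python
# Sketch of a custom Airflow operator; real contributions also include
# tests and documentation alongside the operator class.
from airflow.models.baseoperator import BaseOperator

class HelloServiceOperator(BaseOperator):
    """Toy operator that would call some external service."""

    def __init__(self, endpoint: str, **kwargs):
        super().__init__(**kwargs)
        self.endpoint = endpoint

    def execute(self, context):
        # Real operators put their API call here; the return value is
        # pushed to XCom so downstream tasks can consume it.
        self.log.info("calling %s", self.endpoint)
        return {"endpoint": self.endpoint, "status": "ok"}
```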
What are some alternatives?
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
presto - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io) [Moved to: https://github.com/trinodb/trino]
dagster - An orchestration platform for the development, production, and observation of data assets.
ClickHouse - ClickHouse® is a free analytics DBMS for big data
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
Greenplum - Greenplum Database - Massively Parallel PostgreSQL for Analytics. An open-source massively parallel data platform for analytics, machine learning and AI.
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Grafana - The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
Rakam - 📈 Collect customer event data from your apps. (Note that this project only includes the API collector, not the visualization platform)
Dask - Parallel computing with task scheduling