| | f1-data-pipeline | astro |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 23 | 183 |
| Growth | - | - |
| Activity | 6.8 | 10.0 |
| Latest commit | 10 months ago | over 1 year ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
After Airflow. Where next for DE?
What I would suggest, if you want an "Airflow 3.0" feel, is that you check out the Astro SDK. My team and I spent a year and a half rewriting the Airflow DAG-writing experience from the ground up. It has a completely different feel: highly scalable SQL/Python/Spark (soon) workflows that basically feel like native Python, and they are much easier to test as well. You can pass dataframes into SQL queries, load data from any supported source into any supported warehouse, and things like lineage are natively supported :)
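The workflow style described above can be sketched in plain Python. This is *not* the real Astro SDK API — it is a toy, standard-library-only illustration of the pattern the comment describes: decorated Python functions that return SQL over upstream tables, so steps compose like native Python calls. The `transform` decorator, the `laps` table, and the `fastest_laps` step are all made up for illustration.

```python
# Toy sketch (NOT the Astro SDK) of decorator-driven SQL steps that
# "feel like native Python", using only the standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE laps (driver TEXT, lap_ms INTEGER);
    INSERT INTO laps VALUES ('VER', 92000), ('VER', 91500), ('HAM', 92500);
""")

def transform(func):
    """Run the SQL a step returns, materialize it as a table named
    after the function, and return that table name for downstream steps."""
    def wrapper(*upstream_tables):
        out_name = func.__name__
        sql = func(*upstream_tables)
        conn.execute(f"CREATE TABLE {out_name} AS {sql}")
        return out_name
    return wrapper

@transform
def fastest_laps(source_table):
    # Each step is ordinary Python that returns SQL over upstream tables.
    return f"SELECT driver, MIN(lap_ms) AS best_ms FROM {source_table} GROUP BY driver"

result = fastest_laps("laps")          # composes like a normal function call
rows = conn.execute(f"SELECT * FROM {result} ORDER BY driver").fetchall()
print(rows)  # [('HAM', 92500), ('VER', 91500)]
```

The real SDK goes much further (cross-database tables, `load_file`, dataframe interop, lineage), but the composition model — Python functions chained by passing table handles — is the part this sketch shows.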
What are some alternatives?
dbt2looker - Generate lookml for views from dbt models
astro-sdk - Astro SDK allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow.
steam-data-engineering - A data engineering project with Airflow, dbt, Terraform, GCP and much more!
airflow-maintenance-dags - A series of DAGs/Workflows to help maintain the operation of Airflow
magic-the-gathering - A complete pipeline to pull data from Scryfall's "Magic: The Gathering"-API, via Prefect orchestration and dbt transformation.
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
weather_data_pipeline - A PySpark-based data pipeline that fetches weather data for a few cities, performs basic processing and transformation, and writes the processed data to a Google Cloud Storage bucket and a BigQuery table. The data is then viewed in a Looker dashboard.
getting-started - This repository is a getting started guide to Singer.
prefect-deployment-patterns - Code examples showing flow deployment to various types of infrastructure
sqlelf - Explore ELF objects through the power of SQL
dbt-coves - CLI tool for dbt users to simplify creation of staging models (yml and sql) files
typhoon-orchestrator - Create elegant data pipelines and deploy to AWS Lambda or Airflow