f1-data-pipeline vs astro-sdk

| | f1-data-pipeline | astro-sdk |
|---|---|---|
| Mentions | 1 | 7 |
| Stars | 23 | 319 |
| Growth (stars, month over month) | - | 1.6% |
| Activity | 6.8 | 8.5 |
| Last commit | 11 months ago | 8 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts with mentions or reviews of astro-sdk:
Orchestration: Thoughts on Dagster, Airflow and Prefect?
Have you tried the Astro SDK? https://github.com/astronomer/astro-sdk
Airflow as near real time scheduler
One interesting point about putting the data into S3 is that, if the data is in an S3 file, OP can use the Astro SDK to pretty easily load that data into a table or a dataframe (there's even an S3 dynamic task function in the SDK that might fit the use case well here).
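For context, a minimal sketch of what that S3-to-table load might look like with the SDK's `load_file` operator. The bucket path, connection IDs, and table name are placeholders, and import paths can differ slightly between SDK versions:

```python
from pendulum import datetime

from airflow.decorators import dag
from astro import sql as aql
from astro.files import File
from astro.table import Table  # older SDK releases expose this under astro.sql.table


# "schedule" is the Airflow 2.4+ argument name; earlier versions use schedule_interval.
@dag(start_date=datetime(2023, 1, 1), schedule=None, catchup=False)
def s3_to_warehouse():
    # Load a CSV sitting in S3 straight into a warehouse table; the SDK infers
    # the file type and creates the table if it does not already exist.
    aql.load_file(
        input_file=File(path="s3://my-bucket/raw/events.csv", conn_id="aws_default"),
        output_table=Table(name="events_raw", conn_id="postgres_default"),
    )


s3_to_warehouse()
```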
Most ideal Airflow task structure?
I think you should take a look at the Astro SDK. It's an open-source Python package that removes the complexity of writing DAGs, particularly in the context of Extract, Load, Transform (ELT) use cases. Look at the docs, especially aql.transform, aql.run_raw_sql, etc. That will definitely help you.
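As a rough illustration of those two decorators (the table and column names below are invented, and the SQL only exists to show the `{{ ... }}` templating the SDK uses):

```python
from astro import sql as aql
from astro.table import Table


@aql.transform
def recent_orders(orders: Table):
    # The returned SQL is templated with the upstream table and materialized
    # as a new table that downstream tasks can consume.
    return "SELECT * FROM {{ orders }} WHERE order_date >= DATE '2024-01-01'"


@aql.run_raw_sql
def add_index(orders: Table):
    # run_raw_sql executes the statement for its side effects; nothing is
    # returned to downstream tasks.
    return "CREATE INDEX IF NOT EXISTS idx_orders_date ON {{ orders }} (order_date)"
```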
ELT pipeline using airflow
- Astro SDK: made for folks who are doing their ETL in Airflow and want to simplify movement between DBs and Pandas
After Airflow. Where next for DE?
More of a general principle, but when you don't have design patterns, you get varying levels of results, right? I think what Astro is doing to introduce "strong defaults" through projects like the astro-sdk or the Cloud IDE is an interesting experiment: removing some of the busywork of common DAGs (load from S3, do something, push to a database) will help reduce the cognitive load of really common, simple actions and give them a better single pattern to optimize on. I don't think those efforts reduce the optionality of true power users at all, who want to custom-code their S3 log sink with some unique implementation, while at the same time they may solve some of the fragmentation around very frequently performed operations. 🤞
Airflow - Passing large data volumes between tasks
Have you looked into the Astro Python SDK? My team and I built this out over the last year to do exactly this :). You can use the `@dataframe` decorator to pull the API data into a dataframe, store it in GCS, and then access it in future steps. Lemme know if you have any questions!
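Roughly, that pattern looks like the sketch below. The API endpoint and column names are made up, and staging intermediate results in cloud storage such as GCS is handled by the SDK's storage configuration rather than in the task code itself:

```python
import pandas as pd
import requests

from astro import sql as aql


@aql.dataframe
def fetch_prices() -> pd.DataFrame:
    # Placeholder endpoint; any function returning a DataFrame works here.
    response = requests.get("https://example.com/api/prices", timeout=30)
    return pd.DataFrame(response.json())


@aql.dataframe
def daily_average(prices: pd.DataFrame) -> pd.DataFrame:
    # The upstream DataFrame arrives as a normal argument in the next task.
    return prices.groupby("day", as_index=False)["price"].mean()


# Inside a DAG definition: daily_average(prices=fetch_prices())
```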
What's the best tool to build pipelines from REST APIs?
I have an example here using COVID data. Basically, you just write a Python function that reads the API and returns a dataframe (or any number of dataframes), and downstream tasks can then read the output as either a dataframe or a SQL table.
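A sketch of that flow with hypothetical names, where the DataFrame output is written to a table so a downstream SQL task can query it:

```python
import pandas as pd

from astro import sql as aql
from astro.table import Table


@aql.dataframe
def fetch_covid_cases() -> pd.DataFrame:
    # A real task would call the REST API; a literal frame keeps the sketch short.
    return pd.DataFrame({"country": ["NO", "SE", "DK"], "cases": [10, 12, 9]})


@aql.transform
def top_countries(cases: Table):
    return "SELECT country, cases FROM {{ cases }} ORDER BY cases DESC LIMIT 5"


# Inside a DAG definition:
# raw = fetch_covid_cases(output_table=Table(name="covid_raw", conn_id="postgres_default"))
# top_countries(cases=raw)
```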
What are some alternatives?
dbt2looker - Generate lookml for views from dbt models
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
astro - Astro SDK allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow. [Moved to: https://github.com/astronomer/astro-sdk]
quadratic - Quadratic | Data Science Spreadsheet with Python & SQL
steam-data-engineering - A data engineering project with Airflow, dbt, Terraform, GCP and much more!
magic-the-gathering - A complete pipeline to pull data from Scryfall's "Magic: The Gathering"-API, via Prefect orchestration and dbt transformation.
starthinker - Reference framework for building data workflows provided by Google. Accelerates authentication, logging, scheduling, and deployment of solutions using GCP. To borrow a tagline.. "The framework for professionals with deadlines."
weather_data_pipeline - This is a PySpark-based data pipeline that fetches weather data for a few cities, performs some basic processing and transformation on the data, and then writes the processed data to a Google Cloud Storage bucket and a BigQuery table. The data is then viewed in a Looker dashboard.
astronomer-cosmos - Run your dbt Core projects as Apache Airflow DAGs and Task Groups with a few lines of code
prefect-deployment-patterns - Code examples showing flow deployment to various types of infrastructure
awesome-pipeline - A curated list of awesome pipeline toolkits inspired by Awesome Sysadmin