prefect-deployment-patterns
| | prefect-deployment-patterns | DE-ZOOMCAMP-PROJECT |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 93 | 4 |
| Growth | - | - |
| Activity | 0.0 | 7.5 |
| Latest Commit | over 1 year ago | about 1 year ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
prefect-deployment-patterns
[D] Should I go with Prefect, Argo or Flyte for Model Training and ML workflow orchestration?
Have you used infrastructure blocks in Prefect? You could easily build a block for SageMaker infrastructure to run a flow on GPUs, then run another flow in a local process, yet another as a Kubernetes job, Docker container, ECS task, AWS Batch job, etc. They are super easy to set up, even from the UI or from CI/CD. There are a bunch of templates and examples here: https://github.com/anna-geller/prefect-deployment-patterns
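For reference, a minimal sketch of that pattern using the Prefect 2.x Python API (the flow, image name, and work queue names below are placeholders for illustration, not taken from the linked repo):

```python
from prefect import flow
from prefect.deployments import Deployment
from prefect.infrastructure import KubernetesJob, Process  # Prefect 2.x infrastructure blocks


@flow(log_prints=True)
def train_model():
    print("training...")


if __name__ == "__main__":
    # Same flow, two deployments on different infrastructure blocks:
    # one runs in a local process, the other as a Kubernetes job.
    # (An ECS task or a custom SageMaker block could be swapped in the same way.)
    Deployment.build_from_flow(
        flow=train_model,
        name="local-dev",
        infrastructure=Process(),  # runs in a local process
        work_queue_name="default",
    ).apply()

    Deployment.build_from_flow(
        flow=train_model,
        name="k8s-gpu",
        infrastructure=KubernetesJob(image="my-registry/train:latest"),  # hypothetical image
        work_queue_name="kubernetes",
    ).apply()
```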
DE-ZOOMCAMP-PROJECT
Analyzing MotoGP Racing Data Using a Data Pipeline
In this project, we will scrape data from the MotoGP website and perform data engineering tasks to clean and prepare the data for analysis. The data covers race results over the years. Once the data is cleaned and prepared, we will perform exploratory data analysis and visualization to gain insights into it. (GitHub repo)
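A rough sketch of the scrape-and-clean step described above, assuming the race results are exposed as an HTML table; the URL and column names are placeholders, not the project's actual endpoints:

```python
import io

import pandas as pd
import requests

# Placeholder URL; the project scrapes results pages from the MotoGP website.
RESULTS_URL = "https://example.com/motogp/results/2023"

# Fetch the page and let pandas parse any HTML tables it contains.
html = requests.get(RESULTS_URL, timeout=30).text
results = pd.read_html(io.StringIO(html))[0]

# Basic cleaning: normalise column names, coerce types, drop non-finishers.
results.columns = [c.strip().lower().replace(" ", "_") for c in results.columns]
results["position"] = pd.to_numeric(results["position"], errors="coerce")
results = results.dropna(subset=["position"])

# Persist the prepared data for the analysis/visualisation stage.
results.to_parquet("motogp_results_2023.parquet", index=False)
```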
What are some alternatives?
Taipy - Turns Data and AI algorithms into production-ready web applications in no time.
data-engineering-zoomcamp - Free Data Engineering course!
Udacity-Data-Engineering-Projects - A few projects related to Data Engineering, including Data Modeling, Infrastructure setup on the cloud, Data Warehousing, and Data Lake development.
blog-tpi-jupyter - Terraform Provider Iterative + Jupyter + TensorBoard + AWS/Azure/GCP/K8s
buildflow - BuildFlow is an open source framework for building large-scale systems using Python. All you need to do is describe where your input comes from and where your output should be written, and BuildFlow handles the rest. No configuration outside of the code is required.
ngods-stocks - New Generation Opensource Data Stack Demo
weather_data_pipeline - This is a PySpark-based data pipeline that fetches weather data for a few cities, performs some basic processing and transformation on the data, and then writes the processed data to a Google Cloud Storage bucket and a BigQuery table. The data is then viewed in a Looker dashboard (a condensed PySpark sketch follows this list).
fastkafka - FastKafka is a powerful and easy-to-use Python library for building asynchronous web services that interact with Kafka topics. Built on top of Pydantic, AIOKafka and AsyncAPI, FastKafka simplifies the process of writing producers and consumers for Kafka topics.
canarypy - CanaryPy - A light and powerful canary release for Data Pipelines
f1-data-pipeline - F1 Data Pipeline
dataall - A modern data marketplace that makes collaboration among diverse users (like business users, analysts, and engineers) easier, increasing efficiency and agility in data projects on AWS.
blueprint-examples - This is where you can find officially supported Cloudify blueprints that work with the latest versions of Cloudify. Please make sure to use the blueprints from this repo when you are evaluating Cloudify.
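For the weather_data_pipeline entry above, here is a condensed sketch of the fetch, transform, and dual write it describes. The bucket, dataset, and table names are placeholders, the sample rows stand in for the API fetch, and both writes assume the GCS and spark-bigquery connectors are on the Spark classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("weather-pipeline").getOrCreate()

# Hypothetical raw records standing in for the weather API responses
# (the real pipeline pulls live data for a few cities).
raw = spark.createDataFrame(
    [("Berlin", "2023-05-01T12:00:00", 17.4),
     ("Madrid", "2023-05-01T12:00:00", 24.1)],
    ["city", "observed_at", "temp_c"],
)

# Basic processing/transformation: parse timestamps and derive a date column.
clean = (
    raw.withColumn("observed_at", F.to_timestamp("observed_at"))
       .withColumn("observed_date", F.to_date("observed_at"))
)

# Write the processed data to a GCS bucket as Parquet...
clean.write.mode("append").parquet("gs://example-bucket/weather/")

# ...and to a BigQuery table via the spark-bigquery connector.
(clean.write.format("bigquery")
      .option("table", "example_project.weather.observations")
      .option("temporaryGcsBucket", "example-bucket")
      .mode("append")
      .save())

spark.stop()
```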