prefect-deployment-patterns vs Udacity-Data-Engineering-Projects

| | prefect-deployment-patterns | Udacity-Data-Engineering-Projects |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 107 | 1,715 |
| Growth | 1.9% | 5.7% |
| Activity | 0.0 | 0.0 |
| Last commit | over 2 years ago | about 3 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
prefect-deployment-patterns
- [D] Should I go with Prefect, Argo or Flyte for Model Training and ML workflow orchestration?
Have you used infrastructure blocks in Prefect? You could easily build a SageMaker infrastructure block that deploys a flow to run with GPUs, then run another flow in a local process, yet another as a Kubernetes job, a Docker container, an ECS task, an AWS Batch job, etc. Super easy to set up, even from the UI or from CI/CD. There are a bunch of templates and examples here: https://github.com/anna-geller/prefect-deployment-patterns
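A minimal sketch of that idea against the Prefect 2.x infrastructure-block API (the era of the linked repo): one flow registered as several deployments, each pointing at a different infrastructure block. The `train_model` flow and the `prod-k8s` block name are hypothetical placeholders, not part of the repo.

```python
from prefect import flow
from prefect.deployments import Deployment
from prefect.infrastructure import Process, DockerContainer, KubernetesJob


@flow
def train_model():
    # Placeholder training step; it runs wherever the deployment's
    # infrastructure block says it should.
    print("training...")


if __name__ == "__main__":
    # Same flow, three deployments, three infrastructure blocks.
    Deployment.build_from_flow(
        flow=train_model,
        name="local-process",
        infrastructure=Process(),  # run in a local subprocess
    ).apply()

    Deployment.build_from_flow(
        flow=train_model,
        name="docker",
        infrastructure=DockerContainer(image="prefecthq/prefect:2-python3.10"),
    ).apply()

    Deployment.build_from_flow(
        flow=train_model,
        name="k8s-gpu",
        # "prod-k8s" is a hypothetical, pre-registered KubernetesJob block
        # whose job template could request GPU resources.
        infrastructure=KubernetesJob.load("prod-k8s"),
    ).apply()
```

The same separation applies to other backends (ECS task, AWS Batch, a SageMaker block, etc.): the flow code stays unchanged and only the infrastructure block swapped into the deployment differs.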
Udacity-Data-Engineering-Projects
- A question about data engineering?
- ✨ 5 Free Resources to Learn Data Engineering 🚀
🔗 https://github.com/san089/Udacity-Data-Engineering-Projects
- How can I become a big data engineer?
You can start by googling "data engineering learning path" to get a sense of what you need to know. If you are looking for simple projects to start with, then you can look at this as well (https://github.com/san089/Udacity-Data-Engineering-Projects).
- Beginner DE projects.
For practice, see Data Modeling with Postgres and Udacity Data Engineering Projects as examples, and Data Engineering Project for Beginners - Batch edition for a guided tutorial.
- Data Pipeline Examples in Action
What are some alternatives?
Taipy - Turns Data and AI algorithms into production-ready web applications in no time.
ask-astro - An end-to-end LLM reference implementation providing a Q&A interface for Airflow and Astronomer
weather_data_pipeline - A PySpark-based data pipeline that fetches weather data for a few cities, performs basic processing and transformation on the data, and writes the processed data to a Google Cloud Storage bucket and a BigQuery table. The data is then viewed in a Looker dashboard.
data-engineering-book - Accumulated knowledge and experience in the field of Data Engineering
dataall - A modern data marketplace that makes collaboration among diverse users (like business users, analysts, and engineers) easier, increasing efficiency and agility in data projects on AWS.
Data-Engineering-Projects - Personal Data Engineering Projects