| | covid-19-data-engineering-pipeline | lambda2docker |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 22 | 5 |
| Growth | - | - |
| Activity | 5.3 | 3.5 |
| Latest commit | 5 months ago | 12 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity score of 9.0 places a project among the top 10% of the most actively developed projects we track.
covid-19-data-engineering-pipeline
COVID-19 data pipeline on AWS feat. Glue/PySpark, Docker, Great Expectations, Airflow, and Redshift, templated in CF/CDK, deployable via GitHub Actions
I've seen amazing projects here already, which honestly were a great inspiration, and today I would like to show you mine. Some time ago, I had the idea to apply every tool I wanted to learn or try out to the same topic, and since then this idea has grown into an entire pipeline: https://github.com/moritzkoerber/covid-19-data-engineering-pipeline
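The pipeline validates data with Great Expectations before loading it into Redshift. As a rough sketch of that kind of quality gate in plain Python (not Great Expectations' actual API; the field names and helper functions here are invented for illustration):

```python
# Hypothetical sketch of a data-quality gate a pipeline like this might run
# before loading records into a warehouse. Field names ("date", "new_cases")
# are invented examples, not the project's actual schema.

def validate_record(record: dict) -> list[str]:
    """Return a list of validation failures for one case record."""
    failures = []
    if not record.get("date"):
        failures.append("date is missing")
    cases = record.get("new_cases")
    if not isinstance(cases, int) or cases < 0:
        failures.append("new_cases must be a non-negative integer")
    return failures

def partition_batch(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a batch into loadable rows and rejects, as a quality gate would."""
    good, bad = [], []
    for r in records:
        (bad if validate_record(r) else good).append(r)
    return good, bad

batch = [
    {"date": "2021-03-01", "new_cases": 120},
    {"date": "", "new_cases": -5},  # fails both checks
]
good, bad = partition_batch(batch)
print(f"loadable: {len(good)}, rejected: {len(bad)}")
```

Only the clean rows would proceed to the load step; rejects could be routed to a quarantine location for inspection.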
lambda2docker
lambda2docker - Generate a Dockerfile from an AWS Lambda function
You can find it here: https://github.com/paololazzari/lambda2docker
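The core idea — mapping a Lambda function's runtime and handler onto one of AWS's public Lambda base images — can be sketched roughly like this. This is a hypothetical illustration, not lambda2docker's actual code; `make_dockerfile` is an invented name, though the `public.ecr.aws/lambda/*` base images and `LAMBDA_TASK_ROOT` variable are real AWS conventions:

```python
# Hypothetical sketch of rendering a Dockerfile from Lambda function settings.
# AWS publishes Lambda base images at public.ecr.aws/lambda/<runtime>; the
# function/handler names below are invented examples.

def make_dockerfile(runtime: str, handler: str) -> str:
    """Render a Dockerfile for a Python Lambda runtime like 'python3.9'."""
    if not runtime.startswith("python"):
        raise ValueError(f"unsupported runtime: {runtime}")
    tag = runtime.removeprefix("python")  # 'python3.9' -> '3.9'
    return "\n".join([
        f"FROM public.ecr.aws/lambda/python:{tag}",
        "COPY app.py ${LAMBDA_TASK_ROOT}",
        f'CMD ["{handler}"]',
        "",
    ])

print(make_dockerfile("python3.9", "app.handler"))
```

A real tool would additionally fetch the function's configuration and code (e.g. via the AWS API) and copy in its dependencies; this sketch only shows the final templating step.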
What are some alternatives?
awesome-chalice - Discover the power of AWS Chalice, the ultimate framework for crafting seamless Python serverless applications. With Chalice, you can effortlessly build and manage HTTPS APIs, create web apps using popular front-end toolkits, and serve as the backend for cross-platform desktop and mobile apps developed with Qt for Python.
Chalice-PynamoDB-Docker-Starter-Kit - A starter kit with some boilerplate code for getting started making low-cost serverless applications in Python on AWS with a great local development setup via Docker Compose
F2-Data-Pipeline - Pipeline for Automated Updates of Kaggle's "Formula 2 Dataset"
spark-on-aws-lambda - Spark runtime on AWS Lambda
dataall - A modern data marketplace that makes collaboration among diverse users (like business, analysts and engineers) easier, increasing efficiency and agility in data projects on AWS.
torchlambda - Lightweight tool to deploy PyTorch models to AWS Lambda
Traffic-Data-Analysis-with-Apache-Spark-Based-on-Mobile-Robot-Data - Mobile robot data were analyzed with Apache Spark to produce five statistical results: travel time, waiting time, average speed, occupancy, and density.
mkdocs-material-boilerplate - MkDocs Material Boilerplate (Starter Kit) - Deploy documentation to hosting platforms (Netlify, GitHub Pages, GitLab Pages, and AWS Amplify Console) with Docker, pipenv, and GitHub Actions.
Patek - A collection of reusable pyspark utility functions that help make development easier!
uvicorn-gunicorn-docker - Docker image with Uvicorn managed by Gunicorn for high-performance web applications in Python with performance auto-tuning.
livyc - Apache Spark as a Service with Apache Livy Client
RAUDI - A repo to automatically generate and keep updated a series of Docker images through GitHub Actions.