awesome-chalice vs AWS Data Wrangler

| | awesome-chalice | AWS Data Wrangler |
| --- | --- | --- |
| Mentions | 30 | 9 |
| Stars | 210 | 3,804 |
| Growth | - | 0.7% |
| Activity | 5.1 | 9.4 |
| Last Commit | about 1 year ago | 2 days ago |
| Language | HTML | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-chalice
- Awesome-Chalice: Porridge for your LLVM'd λLLMs
- The unreasonable effectiveness of unreasonable effectiveness posts
- Awesome Chalice: Python Serverless Microframework for AWS Lambda
- Automatically Manage Your AWS Account via AWS Lambda via AWS Chalice via AWS CDK
- Manage Your AWS Accounts from AWS Lambda with AWS Chalice via AWS CDK
AWS Data Wrangler
- Read files from s3 using Pandas/s3fs or AWS Data Wrangler?
  > I had no problem with awswrangler (https://github.com/aws/aws-sdk-pandas); it supports reading and writing partitions, which was really helpful, along with a few other optimizations that made it a great tool.
- I agree that Arrow Tables are great, but we decided to keep the library focused on the Pandas interface. [won't implement]
- Automate some wrangling and data visualization in Python
- Redshift API vs. other ways to connect?
  > awslabs has developed their own package for this, and given it's for their own product, they seem likely to maintain it. https://github.com/awslabs/aws-data-wrangler
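The awslabs package mentioned there exposes a Redshift client. A hedged sketch: the Glue connection name `"my-redshift"` is a placeholder, and the call needs AWS credentials at runtime, so the query is wrapped in a function rather than executed here:

```python
def fetch_table(sql: str):
    """Run a query against Redshift via awswrangler (AWS SDK for pandas).

    Assumes a Glue connection named "my-redshift" already exists in your
    AWS account -- a hypothetical name used only for this sketch.
    """
    import awswrangler as wr  # lazy import: requires AWS credentials at call time

    con = wr.redshift.connect(connection="my-redshift")
    try:
        # Returns the result set as a pandas DataFrame.
        return wr.redshift.read_sql_query(sql, con=con)
    finally:
        con.close()
```

For bulk loads, awswrangler also provides `wr.redshift.copy` and `wr.redshift.unload`, which stage data through S3 instead of streaming rows over the connection.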
- Parquet files
  > AWS Data Wrangler works well; it's a wrapper on pandas: https://github.com/awslabs/aws-data-wrangler
- Reading s3 file data with Python lambda function
  > You'll find pre-made zips here: https://github.com/awslabs/aws-data-wrangler/releases
- A guide to load (almost) anything into a DataFrame
  > Don't forget about https://aws-data-wrangler.readthedocs.io/
- Go+: Go designed for data science
  > Yep, agreed. Go is a great language for AWS Lambda-type workflows. Python isn't as great (Python Lambda layers built on Macs don't always work). AWS Data Wrangler (https://github.com/awslabs/aws-data-wrangler) provides pre-built layers, which is a workaround, but something as portable as Go would be the best solution.
- Best way to install pandas and numpy to AWS Lambda
What are some alternatives?
sqs-with-lambda-using-aws-amplify - Tutorial: integrate a custom SQS resource into an Amplify project via CloudFormation, so that sending a message to the queue invokes a Lambda function with that message in the event body.
PyAthena - PyAthena is a Python DB API 2.0 (PEP 249) client for Amazon Athena.
covid-19-data-engineering-pipeline - A Covid-19 data pipeline on AWS featuring PySpark/Glue, Docker, Great Expectations, Airflow, and Redshift, templated in CloudFormation and CDK, deployable via Github Actions.
Optimus - :truck: Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
print-rider-py - Sharing API exchanges using HttpRider
ga-extractor - Tool for extracting Google Analytics data suitable for migrating to other platforms/databases
aws-elb-autoscaling - Auto Scaling VM-Series firewalls in AWS
python-mysql-replication - Pure Python implementation of the MySQL replication protocol, built on top of PyMySQL
up - Deploy infinitely scalable serverless apps, apis, and sites in seconds to AWS.
gonum - Gonum is a set of numeric libraries for the Go programming language. It contains libraries for matrices, statistics, optimization, and more
cloud-is-free - Learn how to setup Cloud projects... for free!
zef - Toolkit for graph-relational data across space and time