patterns-devkit vs AWS Data Wrangler
| | patterns-devkit | AWS Data Wrangler |
|---|---|---|
| Mentions | 5 | 9 |
| Stars | 106 | 3,804 |
| Growth | 0.0% | 0.7% |
| Activity | 2.9 | 9.4 |
| Latest commit | about 1 year ago | 1 day ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
-
Read files from s3 using Pandas/s3fs or AWS Data Wrangler?
I had no problem with awswrangler (https://github.com/aws/aws-sdk-pandas); it supports reading and writing partitions, which was really helpful, plus a few other optimizations that made it a great tool.
- I agree that Arrow Tables are great, but we decided to keep the library focused on the Pandas interface. [won't implement]
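The partitioned reads and writes praised above can be sketched with awswrangler's S3 module. This is a minimal sketch, not the commenter's actual code: it assumes awswrangler is installed and AWS credentials are configured, and the bucket path and `region` column are placeholders, so the AWS-touching function is defined but not called here.

```python
def make_partition_filter(region):
    # awswrangler passes each partition's column values to this callable as a
    # dict of strings and keeps only partitions for which it returns True.
    return lambda partition: partition["region"] == region

def roundtrip_sales(df, path="s3://example-bucket/sales/"):
    # Requires awswrangler and AWS credentials; the path is a placeholder.
    import awswrangler as wr

    # Write one Parquet folder per distinct value of the "region" column.
    wr.s3.to_parquet(df, path=path, dataset=True, partition_cols=["region"])

    # Read back only the "us" partitions instead of scanning the whole dataset.
    return wr.s3.read_parquet(
        path=path, dataset=True, partition_filter=make_partition_filter("us")
    )
```

The `dataset=True` flag is what switches awswrangler from single-file mode to Hive-style partitioned layouts.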
- Automate some wrangling and data visualization in Python
-
Redshift API vs. other ways to connect?
awslabs has developed their own package for this, and given it's for their product, they seem likely to maintain it. https://github.com/awslabs/aws-data-wrangler
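For the Redshift route specifically, that package exposes a small connection API. A hedged sketch, assuming a Glue catalog connection named `my-redshift-connection` exists and that the `users` table and its columns are hypothetical; the function is defined but not executed here.

```python
def fetch_users(glue_connection="my-redshift-connection"):
    # Requires awswrangler, AWS credentials, and a Glue connection with this
    # name (an assumption); returns a pandas DataFrame.
    import awswrangler as wr

    con = wr.redshift.connect(connection=glue_connection)
    try:
        # Table and column names are hypothetical placeholders.
        return wr.redshift.read_sql_query(
            "SELECT user_id, email FROM users", con=con
        )
    finally:
        con.close()
```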
-
Parquet files
AWS Data Wrangler works well; it's a wrapper on pandas: https://github.com/awslabs/aws-data-wrangler
-
Reading s3 file data with Python lambda function
you'll find pre-made zips here: https://github.com/awslabs/aws-data-wrangler/releases
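With one of those pre-made layer zips attached to a function, the handler itself stays short. A minimal sketch, assuming the awswrangler Lambda layer is attached and that the event carries `bucket` and `key` fields (both assumptions, not a fixed event shape):

```python
def s3_uri(bucket, key):
    # Build the s3:// URI that awswrangler's readers expect.
    return f"s3://{bucket}/{key}"

def handler(event, context):
    # Requires the awswrangler Lambda layer; imported lazily so the module
    # also loads in environments without it.
    import awswrangler as wr

    df = wr.s3.read_csv(s3_uri(event["bucket"], event["key"]))
    return {"rows": len(df)}
```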
-
A guide to load (almost) anything into a DataFrame
Don't forget about https://aws-data-wrangler.readthedocs.io/
-
Go+: Go designed for data science
Yep, agreed. Go is a great language for AWS Lambda type workflows.
Python isn't as great (Python Lambda Layers built on Macs don't always work). AWS Data Wrangler (https://github.com/awslabs/aws-data-wrangler) provides pre-built layers, which is a workaround, but something as portable as Go would be the best solution.
- Best way to install pandas and NumPy to AWS Lambda
What are some alternatives?
pyspark-example-project - Implementing best practices for PySpark ETL jobs and applications.
PyAthena - PyAthena is a Python DB API 2.0 (PEP 249) client for Amazon Athena.
Dataplane - Dataplane is a data platform that makes it easy to construct a data mesh with automated data pipelines and workflows.
Optimus - :truck: Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
pipebird - Pipebird is open source infrastructure for securely sharing data with customers.
ga-extractor - Tool for extracting Google Analytics data suitable for migrating to other platforms/databases
SmartPipeline - A framework for rapid development of robust data pipelines following a simple design pattern
python-mysql-replication - Pure Python implementation of the MySQL replication protocol, built on top of PyMySQL
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows that encode lineage and metadata. Runs and scales everywhere Python does.
gonum - Gonum is a set of numeric libraries for the Go programming language. It contains libraries for matrices, statistics, optimization, and more
flowrunner - Flowrunner is a lightweight package to organize and represent Data Engineering/Science workflows
zef - Toolkit for graph-relational data across space and time