uber-expenses-tracking vs AWS Data Wrangler

| | uber-expenses-tracking | AWS Data Wrangler |
|---|---|---|
| Mentions | 2 | 9 |
| Stars | 94 | 3,811 |
| Growth | - | 0.8% |
| Activity | 2.6 | 9.4 |
| Latest commit | almost 2 years ago | 8 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
uber-expenses-tracking
Data Engineering Projects for Beginners
Tracking your Uber Rides and Uber Eats expenses through a data engineering process
AWS Data Wrangler
Read files from s3 using Pandas/s3fs or AWS Data Wrangler?
I had no problem with awswrangler (https://github.com/aws/aws-sdk-pandas); it supports reading and writing partitions, which was really helpful, along with a few other optimizations that make it a great tool.
- I agree that Arrow Tables are great, but we decided to keep the library focused on the Pandas interface. [wont implement]
- Automate some wrangling and data visualization in Python
Redshift API vs. other ways to connect?
AWS Labs has developed its own package for this, and since it's for their own product, they seem likely to maintain it: https://github.com/awslabs/aws-data-wrangler
Parquet files
AWS Data Wrangler works well; it's a wrapper around pandas: https://github.com/awslabs/aws-data-wrangler
Reading s3 file data with Python lambda function
You'll find pre-made zips here: https://github.com/awslabs/aws-data-wrangler/releases
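A sketch of what such a Lambda might do once it has the object's bytes. The helper name and CSV columns are made up; the boto3 call in the docstring is the standard way to fetch an S3 object body:

```python
import csv
import io

def total_amount(body: bytes) -> float:
    """Sum the 'amount' column of a CSV payload.

    In a real Lambda the bytes would come from boto3, e.g.:
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    """
    rows = csv.DictReader(io.StringIO(body.decode("utf-8")))
    return sum(float(row["amount"]) for row in rows)

result = total_amount(b"ride_id,amount\n1,12.50\n2,8.75\n")
```

Keeping the parsing in a plain function like this makes it testable locally without any AWS credentials.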
A guide to load (almost) anything into a DataFrame
Don't forget about https://aws-data-wrangler.readthedocs.io/
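As a generic illustration of the idea (not taken from the linked guide), pandas builds a DataFrame from in-memory CSV or JSON just as readily as from files:

```python
import io

import pandas as pd

# CSV text -> DataFrame
csv_df = pd.read_csv(io.StringIO("city,trips\nNYC,3\nSF,2\n"))

# JSON records -> DataFrame
json_df = pd.read_json(io.StringIO('[{"city": "NYC", "trips": 3}]'))
```

The same `read_*` functions also accept paths and URLs (including `s3://` when s3fs or awswrangler is installed).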
Go+: Go designed for data science
Yep, agreed. Go is a great language for AWS Lambda-type workflows.
Python isn't as great (Python Lambda Layers built on Macs don't always work). AWS Data Wrangler (https://github.com/awslabs/aws-data-wrangler) provides pre-built layers, which is a workaround, but something as portable as Go would be the best solution.
- Best way to install pandas and numpy on AWS Lambda
What are some alternatives?
docker-livy - Dockerizing and Consuming an Apache Livy environment
PyAthena - PyAthena is a Python DB API 2.0 (PEP 249) client for Amazon Athena.
airflow-docker - This is my Apache Airflow Local development setup on Windows 10 WSL2/Mac using docker-compose. It will also include some sample DAGs and workflows.
Optimus - :truck: Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
text-analysis-speeches-amlo - Text analysis of the speeches, conferences and interviews of the current president of Mexico
ga-extractor - Tool for extracting Google Analytics data suitable for migrating to other platforms/databases
Dropout-Students-Prediction - The goal of this project is to identify students at risk of dropping out of school
python-mysql-replication - Pure Python implementation of the MySQL replication protocol, built on top of PyMySQL
dados-censup - Automation of the ingestion of data released by INEP for the Brazilian higher-education census.
gonum - Gonum is a set of numeric libraries for the Go programming language. It contains libraries for matrices, statistics, optimization, and more
pyspark-on-aws-emr - The goal of this project is to offer an AWS EMR template using Spot Fleet and On-Demand Instances that you can use quickly. Just focus on writing pyspark code.
zef - Toolkit for graph-relational data across space and time