AWS Data Wrangler vs python-mysql-replication
| | AWS Data Wrangler | python-mysql-replication |
|---|---|---|
| Mentions | 9 | 5 |
| Stars | 3,797 | 2,253 |
| Growth | 1.2% | - |
| Activity | 9.4 | 9.2 |
| Last commit | about 20 hours ago | 19 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AWS Data Wrangler
- Read files from s3 using Pandas/s3fs or AWS Data Wrangler?
  I had no problem with awswrangler (https://github.com/aws/aws-sdk-pandas). It supports reading and writing partitions, which was really helpful, along with a few other optimizations that made it a great tool.
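A minimal sketch of the partitioned read/write pattern the comment above refers to; the bucket path and column names are hypothetical:

```python
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"event_date": ["2023-01-01", "2023-01-02"], "value": [1, 2]})

# Write a partitioned Parquet dataset to S3 (path and columns are made up).
wr.s3.to_parquet(
    df=df,
    path="s3://my-bucket/events/",
    dataset=True,
    partition_cols=["event_date"],
)

# Read back only the partitions you need via a pushdown filter.
df = wr.s3.read_parquet(
    path="s3://my-bucket/events/",
    dataset=True,
    partition_filter=lambda p: p["event_date"] >= "2023-01-02",
)
```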
- I agree that Arrow Tables are great, but we decided to keep the library focused on the Pandas interface. [won't implement]
- Automate some wrangling and data visualization in Python
- Redshift API vs. other ways to connect?
  awslabs has developed their own package for this, and given it's for their product, they seem likely to maintain it: https://github.com/awslabs/aws-data-wrangler
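For context, a minimal sketch of connecting to Redshift through awswrangler; the Glue connection name and the query are hypothetical:

```python
import awswrangler as wr

# Open a connection using credentials stored as a Glue Catalog connection
# (the connection name here is made up).
con = wr.redshift.connect(connection="my-redshift-connection")

# Run a query straight into a pandas DataFrame.
df = wr.redshift.read_sql_query("SELECT * FROM public.sales LIMIT 10", con=con)
con.close()
```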
- Parquet files
  AWS Data Wrangler works well. It's a wrapper around pandas: https://github.com/awslabs/aws-data-wrangler
- Reading s3 file data with Python Lambda function
  You'll find pre-made zips here: https://github.com/awslabs/aws-data-wrangler/releases
- A guide to load (almost) anything into a DataFrame
  Don't forget about https://aws-data-wrangler.readthedocs.io/
- Go+: Go designed for data science
  Yep, agreed. Go is a great language for AWS Lambda type workflows. Python isn't as great (Python Lambda Layers built on Macs don't always work). AWS Data Wrangler (https://github.com/awslabs/aws-data-wrangler) provides pre-built layers, which is a workaround, but something as portable as Go would be the best solution.
- Best way to install pandas and numpy on AWS Lambda
python-mysql-replication
- Is anyone using PyPy for real work?
  I'm maintaining an internal change-data-capture application that uses a Python library to decode the MySQL binlog and store the change records as JSON in the data lake (like Debezium). For our busiest databases, a single CPython process couldn't keep up with the volume of incoming changes in real time (thousands of events per second). It's not something that can be easily parallelized, as the bulk of the work happens in the binlog decoding library (https://github.com/julien-duponchelle/python-mysql-replication).
  So we made it configurable to run some instances with PyPy, which was able to work through the data in real time, i.e. without generating a lag in the data stream. The downside of using PyPy was increased memory usage (4-8x), which isn't really a problem. An actual problem that I never tracked down was that the test suite (running pytest) took 2-3 times longer with PyPy than with CPython.
  A few months ago I upgraded the system to CPython 3.11, and the 10-20% performance improvements that come with that version allowed us to drop PyPy and run CPython only, which is more convenient and makes the deployment and configuration less complex.
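A minimal sketch of what such a binlog-decoding loop looks like with this library; the connection details and the JSON sink are assumptions, not the commenter's actual code:

```python
import json

from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent,
    UpdateRowsEvent,
    WriteRowsEvent,
)

# Connection settings are placeholders.
stream = BinLogStreamReader(
    connection_settings={"host": "127.0.0.1", "port": 3306, "user": "repl", "passwd": "secret"},
    server_id=100,      # must be unique among replicas of this primary
    blocking=True,      # wait for new events instead of stopping at end of log
    resume_stream=True,
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
)

for event in stream:
    for row in event.rows:
        record = {
            "schema": event.schema,
            "table": event.table,
            "type": type(event).__name__,
            "row": row,
        }
        print(json.dumps(record, default=str))  # stand-in for the data-lake sink

stream.close()
```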
- Why does the binlog grow drastically when the isolation level is set to "Repeatable Read", while it shrinks under "Read Committed"?
  If you're doing this with Python, https://github.com/julien-duponchelle/python-mysql-replication is the recommended way.
- How to Use BinLogs to Make an Aurora MySQL Event Stream
  The BinLogStreamReader has several inputs that we need to retrieve. First we'll retrieve the cluster's secret with the database host/username/password, and then we'll fetch the serverId we stored in S3.
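A sketch of that setup with boto3, assuming the secret is a JSON blob and the serverId is stored as a small S3 object; the secret name, bucket, and key are hypothetical:

```python
import json

import boto3
from pymysqlreplication import BinLogStreamReader

# Fetch the cluster credentials from Secrets Manager (secret name is made up).
secrets = boto3.client("secretsmanager")
secret = json.loads(
    secrets.get_secret_value(SecretId="aurora/cluster-secret")["SecretString"]
)

# Fetch the server id previously stored in S3 (bucket and key are made up).
s3 = boto3.client("s3")
server_id = int(
    s3.get_object(Bucket="my-config-bucket", Key="binlog/server-id")["Body"].read()
)

stream = BinLogStreamReader(
    connection_settings={
        "host": secret["host"],
        "port": int(secret.get("port", 3306)),
        "user": secret["username"],
        "passwd": secret["password"],
    },
    server_id=server_id,
)
```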
- How is everyone ingesting backend relational data?
  From backend relational tables to data warehouses, my team has mostly relied on change-data-capture replication. We use MySQL upstream, and historically used AWS DMS or Attunity Replicate to replicate directly to SQL Server. Recently we switched to Snowflake and mostly use AWS DMS to replicate CDC data to S3 (as individual inserts, updates, and deletes), then Snowpipe to copy it into Snowflake, and then a job to merge that data into the target table to get the latest state. In addition, we've used this library in production, https://github.com/noplay/python-mysql-replication, and still use it today for one high-volume, critical data source. Generally we see data go end to end in a matter of minutes, but occasionally there are spikes in latency.
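A rough sketch of what such a merge job can look like with the Snowflake Python connector; the table names, columns, and single-key dedup logic are assumptions, not the commenter's pipeline:

```python
import snowflake.connector

# Deduplicate staged CDC rows to the latest change per key, then merge them
# into the target table. All identifiers below are hypothetical.
MERGE_SQL = """
MERGE INTO analytics.orders AS t
USING (
    SELECT *
    FROM staging.orders_cdc
    QUALIFY ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY change_ts DESC) = 1
) AS s
ON t.order_id = s.order_id
WHEN MATCHED AND s.op = 'D' THEN DELETE
WHEN MATCHED THEN UPDATE SET t.status = s.status, t.updated_at = s.change_ts
WHEN NOT MATCHED AND s.op <> 'D' THEN
    INSERT (order_id, status, updated_at) VALUES (s.order_id, s.status, s.change_ts)
"""

conn = snowflake.connector.connect(
    account="my-account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS",
)
try:
    conn.cursor().execute(MERGE_SQL)
finally:
    conn.close()
```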
- Robust data transfer mechanism?
What are some alternatives?
PyAthena - PyAthena is a Python DB API 2.0 (PEP 249) client for Amazon Athena.
PyMySQL - MySQL client library for Python
Optimus - :truck: Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
PonyORM - Pony Object Relational Mapper
ga-extractor - Tool for extracting Google Analytics data suitable for migrating to other platforms/databases
sparc-curation - code and files for SPARC curation workflows
gonum - Gonum is a set of numeric libraries for the Go programming language. It contains libraries for matrices, statistics, optimization, and more
preshed - 💥 Cython hash tables that assume keys are pre-hashed
Redash - Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data.
mycli - A Terminal Client for MySQL with AutoCompletion and Syntax Highlighting.
zef - Toolkit for graph-relational data across space and time
psycopg2cffi - Port to cffi with some speed improvements