python-mysql-replication VS psycopg2cffi

Compare python-mysql-replication and psycopg2cffi to see how they differ.

python-mysql-replication

Pure-Python implementation of the MySQL replication protocol, built on top of PyMySQL (by julien-duponchelle)

psycopg2cffi

Port to cffi with some speed improvements (by chtd)
                  python-mysql-replication    psycopg2cffi
Mentions          5                           2
Stars             2,255                       177
Growth            -                           1.1%
Activity          9.1                         0.0
Latest commit     about 1 month ago           almost 2 years ago
Language          Python                      Python
License           -                           GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

python-mysql-replication

Posts with mentions or reviews of python-mysql-replication. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-31.
  • Is anyone using PyPy for real work?
    13 projects | news.ycombinator.com | 31 Jul 2023
    I'm maintaining an internal change-data-capture application that uses a Python library to decode the MySQL binlog and store the change records as JSON in the data lake (like Debezium). For our busiest databases a single CPython process couldn't process the amount of incoming changes in real time (thousands of events per second). It's not something that can be easily parallelized, as the bulk of the work happens in the binlog decoding library (https://github.com/julien-duponchelle/python-mysql-replicati...).

    So we've made it configurable to run some instances with PyPy - which was able to work through the data in real time, i.e. without generating a lag in the data stream. The downside of using PyPy was increased memory usage (4-8x) - which isn't really a problem. An actual problem that I didn't really track down was that the test suite (running pytest) was taking 2-3 times longer with PyPy than with CPython.

    A few months ago I upgraded the system to run with CPython 3.11, and the 10-20% performance improvements that come with that version allowed us to drop PyPy and run only CPython, which is more convenient and makes the deployment and configuration less complex.

  • Why Binlog size grows drastically when isolation level set to "Repeatable Read" & When isolation level set to "Read Committed" the size of Binlog file reduces ?
    1 project | /r/mysql | 21 Apr 2023
    Doing this using Python with https://github.com/julien-duponchelle/python-mysql-replication is the recommended way of doing this.
  • How to Use BinLogs to Make an Aurora MySQL Event Stream
    3 projects | dev.to | 3 Oct 2022
    The BinLogStreamReader has several inputs that we need to retrieve. First we'll retrieve the cluster's secret with the database host/username/password and then we'll fetch the serverId we stored in S3.
  • How is everyone ingesting backend relational data?
    1 project | /r/dataengineering | 28 Jul 2021
    From backend relational tables to data warehouses my team has mostly relied on change data capture replication. We use MySQL upstream, and historically used AWS DMS or Attunity Replicate to replicate directly to SQL Server. Recently we made the switch to Snowflake, and used mostly AWS DMS to replicate CDC data to S3 (lists individual inserts, updates, deletes), and from there use Snowpipe to copy into Snowflake, and then a job to merge that data into the target table to get the latest state. In addition we've used this library in production https://github.com/noplay/python-mysql-replication, and still use it today for one high-volume, critical data source. Generally we see data go end to end in a matter of minutes, but occasionally there are spikes in latency.
  • Robust data transfer mechanism?
    1 project | /r/learnpython | 24 Apr 2021

psycopg2cffi

Posts with mentions or reviews of psycopg2cffi. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-31.
  • Is anyone using PyPy for real work?
    13 projects | news.ycombinator.com | 31 Jul 2023
    The only compatibility issue I've run into is database drivers.

    For PostgreSQL, psycopg2 is not supported. psycopg2cffi is largely unmaintained, and the 2.9.0 version on PyPI lacks some newer features of psycopg2: the `psycopg2.sql` module is missing, and empty result sets raise a RuntimeError in Python 3.7+. The latest commit on GitHub does have these changes [1]. Psycopg 3 [2] and pg8000 [3] (as user tlocke mentioned elsewhere) are viable alternatives provided you aren't stuck with older versions of PostgreSQL. I'm going to continue to use psycopg2cffi until I can upgrade an old PostgreSQL 9.4 database.

    For Microsoft SQL Server, pymssql does not support PyPy [4]. It's under new maintainership so it might gain support in the future. pypyodbc hasn't had any activity since 2022, and no new PyPI release since 2021 [5]. The datatypes returned can differ between libodbc1 versions. On Ubuntu 18.04 in particular: empty string columns are returned as a single space, integer columns are returned as a Decimal. Also, if you encounter a mysterious HY010 error ("Function sequence error"), you may need to upgrade libodbc1 to v2.3.7+ from v2.3.4 using the Microsoft repos.

    [1]: https://github.com/chtd/psycopg2cffi

  • Microsoft is hiring, looking to speed up cpython
    4 projects | /r/Python | 10 Jun 2021
    From time to time, I use pgcopy coupled with psycopg2cffi to feed large volumes of data processed by custom parsers written in Python for several formats. The whole process is 4-5x faster with PyPy.
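The drop-in usage these posts rely on can be sketched as follows: psycopg2cffi's `compat.register()` installs the library under the name `psycopg2`, so code written against psycopg2 works unchanged under PyPy. The `dsn` helper and the connection parameters below are illustrative placeholders, not from the posts:

```python
def dsn(**params):
    """Build a libpq-style connection string from keyword arguments.
    Pure helper; keys are sorted only to make the output deterministic."""
    return " ".join(f"{key}={value}" for key, value in sorted(params.items()))

def connect_with_cffi(**params):
    """Register psycopg2cffi as psycopg2, then open a connection.

    Requires psycopg2cffi installed and a reachable PostgreSQL server.
    """
    from psycopg2cffi import compat
    compat.register()  # after this, "import psycopg2" resolves to psycopg2cffi
    import psycopg2
    return psycopg2.connect(dsn(**params))
```

Usage would look like `conn = connect_with_cffi(host="127.0.0.1", dbname="app", user="app", password="secret")` (hypothetical credentials); everything after that is the ordinary psycopg2 cursor API.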

What are some alternatives?

When comparing python-mysql-replication and psycopg2cffi you can also consider the following projects:

AWS Data Wrangler - pandas on AWS - Easy integration with Athena, Glue, Redshift, Timestream, Neptune, OpenSearch, QuickSight, Chime, CloudWatchLogs, DynamoDB, EMR, SecretManager, PostgreSQL, MySQL, SQLServer and S3 (Parquet, CSV, JSON and EXCEL).

pgcopy - fast data loading with binary copy

PyMySQL - MySQL client library for Python

hpy - HPy: a better API for Python

PonyORM - Pony Object Relational Mapper

sparc-curation - code and files for SPARC curation workflows

preshed - 💥 Cython hash tables that assume keys are pre-hashed

murmurhash - 💥 Cython bindings for MurmurHash2

mycli - A Terminal Client for MySQL with AutoCompletion and Syntax Highlighting.

Pyjion - Pyjion - A JIT for Python based upon CoreCLR