murmurhash
python-mysql-replication
| | murmurhash | python-mysql-replication |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 42 | - |
| Growth | - | - |
| Activity | 5.0 | - |
| Latest Commit | 6 months ago | - |
| Language | C++ | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
murmurhash
-
Is anyone using PyPy for real work?
If you have very large dicts, you might find this hash table I wrote for spaCy helpful: https://github.com/explosion/preshed . You need to key the data with 64-bit keys. We use this wrapper around murmurhash for it: https://github.com/explosion/murmurhash
There's no docs so obviously this might not be for you. But the software does work, and is efficient. It's been executed many many millions of times now.
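The pattern described above — keying data with 64-bit hashes instead of the strings themselves — can be sketched in plain Python. This is not preshed's API (which, as the comment notes, is undocumented); a plain dict and an FNV-1a 64-bit hash stand in for the Cython table and the MurmurHash wrapper, purely to illustrate the pre-hashing idea:

```python
def fnv1a_64(data: bytes) -> int:
    """FNV-1a 64-bit hash; a stand-in here for a MurmurHash 64-bit key."""
    h = 0xCBF29CE484222325  # FNV offset basis
    for byte in data:
        h ^= byte
        h = (h * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF  # FNV prime, mod 2^64
    return h

# Store values under 64-bit integer keys rather than the strings themselves;
# once everything is keyed this way, the original strings need not be kept.
table = {}
table[fnv1a_64(b"some very long token")] = 42

def lookup(token: bytes):
    return table.get(fnv1a_64(token))

print(lookup(b"some very long token"))  # 42
```

The win in a real implementation like preshed is that fixed-width integer keys avoid per-key string storage and string comparison, at the cost of a (vanishingly small) chance of hash collisions.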
-
Data Ingestion - Build Your Own "Map Reduce"?
Some notes: we don't need SHA-256, and not even Base64; nothing bad happens if the keys aren't distributed perfectly evenly, so we could take MurmurHash3. Googling "python murmurhash" gives two interesting results, and since both wrap the same C++ code, let's take the one with the most stars. Other options would be to simply do (% NUM_SHARDS), or even a right shift (though that requires the shard count to be a power of 2).
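The sharding scheme sketched in that comment can be shown with a pure-Python MurmurHash3 (x86, 32-bit variant); in practice you would use the mmh3 package or a C++ wrapper, but the routing logic is identical. `NUM_SHARDS` and the key format are made up for the example:

```python
def murmur3_32(data: bytes, seed: int = 0) -> int:
    """Pure-Python MurmurHash3 x86 32-bit (reference algorithm by Austin Appleby)."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    h = seed
    length = len(data)
    rounded_end = length & ~3
    for i in range(0, rounded_end, 4):          # body: 4-byte little-endian blocks
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF
    tail = data[rounded_end:]                   # 0-3 leftover bytes
    k = 0
    if len(tail) == 3:
        k ^= tail[2] << 16
    if len(tail) >= 2:
        k ^= tail[1] << 8
    if len(tail) >= 1:
        k ^= tail[0]
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
    h ^= length                                 # finalization (fmix32)
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h

NUM_SHARDS = 16  # assumed shard count for the example

def shard_for(key: bytes) -> int:
    # Modulo works for any shard count; with a power of 2 you could
    # equally use a bitmask: murmur3_32(key) & (NUM_SHARDS - 1).
    return murmur3_32(key) % NUM_SHARDS
```

The comment's point about distribution holds here: MurmurHash3 mixes bits well enough that `% NUM_SHARDS` spreads keys roughly evenly, without the cost of a cryptographic hash like SHA-256.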
python-mysql-replication
-
Is anyone using PyPy for real work?
I'm maintaining an internal change-data-capture application that uses a Python library to decode the MySQL binlog and store the change records as JSON in the data lake (like Debezium). For our busiest databases, a single CPython process couldn't process the volume of incoming changes in real time (thousands of events per second). It's not something that can be easily parallelized, as the bulk of the work happens in the binlog decoding library (https://github.com/julien-duponchelle/python-mysql-replicati...).
So we made it configurable to run some instances with PyPy, which was able to work through the data in real time, i.e. without building up lag in the data stream. The downside of using PyPy was increased memory usage (4-8x), which isn't really a problem. An actual problem that I never tracked down was that the test suite (running pytest) took 2-3 times longer under PyPy than under CPython.
A few months ago I upgraded the system to CPython 3.11, and the 10-20% performance improvements that come with that version allowed us to drop PyPy and run CPython only, which is more convenient and makes the deployment and configuration less complex.
What are some alternatives?
mmh3 - Python extension for MurmurHash (MurmurHash3), a set of fast and robust hash functions.
preshed - 💥 Cython hash tables that assume keys are pre-hashed
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
python-mysql-replication - Pure Python implementation of the MySQL replication protocol built on top of PyMYSQL
psycopg2cffi - Port to cffi with some speed improvements
sparc-curation - code and files for SPARC curation workflows
MurMurHash - This little tool is to calculate a MurmurHash value of a favicon to hunt phishing websites on the Shodan platform.
pymssql - Official home for the pymssql source code.