dpark VS tdigest

Compare dpark and tdigest to see how they differ.

dpark

A Python clone of Spark: a MapReduce-like framework in Python (by douban)
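Since dpark's RDD API is modeled on Spark's (operations such as map, flatMap, and reduceByKey), the MapReduce pattern it expresses can be sketched in plain Python. This is an illustration of the programming model, not dpark's actual code or API:

```python
from collections import defaultdict

# Word count in the MapReduce style that Spark-like RDD APIs
# (dpark included) express with flatMap + reduceByKey.

def map_phase(lines):
    # map: emit a (word, 1) pair for every word
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # reduce-by-key: sum the counts for each word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["to be or not to be"]
result = reduce_phase(map_phase(lines))
print(result)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In dpark itself the same computation would be written against its Spark-style RDD operations rather than these hand-rolled phases.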

tdigest

t-Digest data structure in Python. Useful for percentiles and quantiles, including in distributed environments like PySpark (by CamDavidsonPilon)
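The t-digest summarizes a stream as a set of weighted centroids, so quantiles can be estimated and partial digests from different workers can be merged. A minimal, self-contained sketch of that centroid idea follows; it is a toy illustration with a hypothetical TinyDigest class, not the real t-digest algorithm or the tdigest package's implementation:

```python
import bisect
import random

class TinyDigest:
    """Toy centroid sketch (illustration only, not the real t-digest)."""

    def __init__(self, max_centroids=50):
        self.max_centroids = max_centroids
        self.centroids = []  # sorted list of [mean, weight]

    def update(self, x):
        bisect.insort(self.centroids, [x, 1])
        if len(self.centroids) > self.max_centroids:
            self._compress()

    def _compress(self):
        # merge the adjacent pair of centroids with the smallest gap
        _, i = min((self.centroids[j + 1][0] - self.centroids[j][0], j)
                   for j in range(len(self.centroids) - 1))
        (m1, w1), (m2, w2) = self.centroids[i], self.centroids[i + 1]
        self.centroids[i:i + 2] = [[(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2]]

    def merge(self, other):
        # mergeability is what makes the structure useful on e.g. PySpark workers:
        # each partition builds its own digest, then digests are combined
        for mean, w in other.centroids:
            bisect.insort(self.centroids, [mean, w])
            if len(self.centroids) > self.max_centroids:
                self._compress()

    def percentile(self, p):
        total = sum(w for _, w in self.centroids)
        target = p / 100 * total
        cum = 0
        for mean, w in self.centroids:
            cum += w
            if cum >= target:
                return mean
        return self.centroids[-1][0]

# demo: estimate the median of 1,000 shuffled integers with only 50 centroids
random.seed(0)
data = list(range(1000))
random.shuffle(data)
digest = TinyDigest()
for x in data:
    digest.update(x)
print(digest.percentile(50))  # roughly 500
```

The tdigest package itself exposes a similar surface (a TDigest object with update and percentile methods) but uses the full algorithm, which sizes centroids adaptively so that tail quantiles stay accurate.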
                 dpark                                     tdigest
Mentions         -                                         -
Stars            2,691                                     375
Stars growth     -0.1%                                     -
Activity         0.0                                       0.0
Latest commit    over 3 years ago                          12 months ago
Language         Python                                    Python
License          BSD 3-clause "New" or "Revised" License   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

dpark

Posts with mentions or reviews of dpark. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning dpark yet.
Tracking mentions began in Dec 2020.

tdigest

Posts with mentions or reviews of tdigest.

We haven't tracked posts mentioning tdigest yet.
Tracking mentions began in Dec 2020.

What are some alternatives?

When comparing dpark and tdigest you can also consider the following projects:

Apache Spark - A unified analytics engine for large-scale data processing

t-digest - A new data structure for accurate on-line accumulation of rank-based statistics such as quantiles and trimmed means

mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services

distributed - A distributed task scheduler for Dask

streamparse - Run Python in Apache Storm topologies. Pythonic API, CLI tooling, and a topology DSL.

PySpark-Boilerplate - A boilerplate for writing PySpark Jobs

dumbo - Python module that allows one to easily write and run Hadoop programs.

etl-markup-toolkit - ETL Markup Toolkit is a spark-native tool for expressing ETL transformations as configuration

luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

data-science-ipython-notebooks - Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.