Top 18 Python Dask Projects
Parallel computing with task scheduling.
Project mention: File format for large data with many columns | reddit.com/r/Python | 2022-05-15
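Dask's task scheduling can be sketched with `dask.delayed`, which wraps ordinary functions into a lazy task graph and only executes when `.compute()` is called (a minimal sketch, assuming `dask` is installed; the functions are made up for illustration):

```python
from dask import delayed

@delayed
def double(x):
    # An ordinary function, deferred into a task
    return 2 * x

@delayed
def add(a, b):
    return a + b

# Calling delayed functions builds a graph; nothing runs yet
total = add(double(3), double(4))

# .compute() hands the graph to a scheduler and executes it
result = total.compute()
```

Because the graph is built before execution, Dask can parallelize independent tasks (here, the two `double` calls) and scale the same code from a laptop thread pool to a distributed cluster.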
N-D labeled arrays and datasets in Python.
Project mention: Python for Data Analysis, 3rd Edition – The Open Access Version Online | news.ycombinator.com | 2022-07-02
Does polars have N-D labelled arrays, and if so can it perform computations on them quickly? I've been thinking of moving from pandas to xarray, but might consider polars too if it has some of that functionality.
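What "N-D labeled arrays" means in xarray can be sketched in a few lines: dimensions get names and coordinate labels, so reductions and selections are written by name rather than by positional axis (a minimal sketch, assuming `xarray` and `numpy` are installed; the data and labels are made up):

```python
import numpy as np
import xarray as xr

# A 2-D array with named dimensions and coordinate labels
temps = xr.DataArray(
    np.arange(6.0).reshape(2, 3),
    dims=("time", "city"),
    coords={"time": [2021, 2022], "city": ["a", "b", "c"]},
)

# Reduce over a dimension by name, not by axis number
mean_by_city = temps.mean(dim="time")
```

Selections work the same way, e.g. `temps.sel(city="a")`, which is the kind of label-based N-D indexing that pandas and polars (both fundamentally 2-D/tabular) don't provide directly.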
Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions.
STUMPY is a powerful and scalable Python library for modern time series analysis.
Project mention: Time Series Analysis for air pollution data not aligned [R] [P] | reddit.com/r/MachineLearning | 2022-04-23
Have you tried using STUMPY (https://github.com/TDAmeritrade/stumpy)?
A package which efficiently applies any function to a pandas dataframe or series in the fastest available manner (by jmcarpenter2).
Project mention: Tidyverse equivalent in Python? | reddit.com/r/datascience | 2021-09-12
With concat, merge, melt, and pivot_table, that may cover everything I have ever needed. There may be more efficient ways at times, but swifter promises to pick the fastest approach for you, and that may well be true.
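The gap swifter tries to close is illustrated below in plain pandas: a row-wise `apply` and its vectorized equivalent produce the same result, but the vectorized form is far faster on large frames. Swifter's own entry point is `df.swifter.apply(...)`, which attempts the vectorized route first and falls back when it can't (a minimal sketch using only pandas; the data is made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": np.arange(5)})

# Row-wise apply: flexible, but each element goes through Python
applied = df["x"].apply(lambda v: v ** 2)

# Vectorized equivalent: the kind of path swifter tries to select for you
vectorized = df["x"] ** 2
```

On a frame this small the difference is invisible, but on millions of rows the vectorized path avoids per-element Python overhead entirely.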
Expressive analytics in Python at any scale (by ibis-project).
Project mention: This Week in Python | dev.to | 2022-03-25
ibis – Python data analysis framework for Hadoop and SQL engines
A distributed task scheduler for Dask.
Project mention: Great forward progress on squashing cluster deadlocks | reddit.com/r/dask | 2021-12-15
Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark (by ironmussa)
Eliot: the logging system that tells you *why* it happened
A unified interface for distributed computing. Fugue executes SQL, Python, and Pandas code on Spark and Dask without any rewrites.
Project mention: [P] Open data transformations in Python, no SQL required | reddit.com/r/MachineLearning | 2022-03-01
This looks similar to fugue, am I right? How do they compare?
Fast data store for Pandas time-series data.
Project mention: pystore: NEW Data - star count:406.0 | reddit.com/r/algoprojects | 2022-02-26
Image Reading, Metadata Conversion, and Image Writing for Microscopy Images in Python
Distributed XGBoost on Ray.
Project mention: Distributed XGBoost on Ray | news.ycombinator.com | 2021-08-16
ByteHub: making feature stores simple.
Project mention: [D] Your Preferred Feature Stores? | reddit.com/r/datascience | 2022-07-03
Native Dask collection for awkward arrays, and the library to use it.
Project mention: Awkward: Nested, jagged, differentiable, mixed type, GPU-enabled, JIT'd NumPy | news.ycombinator.com | 2021-12-16
Hi! I'm the original author of Awkward Array (Jim Pivarski), though there are now many contributors with about five regulars. Two of my colleagues just pointed me here—I'm glad you're interested! I can answer any questions you have about it.
First, sorry about all the TODOs in the documentation: I laid out a table of contents structure as a reminder to myself of what ought to be written, but haven't had a chance to fill in all of the topics. From the front page (https://awkward-array.org/), if you click through to the Python API reference (https://awkward-array.readthedocs.io/), that site is 100% filled in. Like NumPy, the library consists of one basic data type, `ak.Array`, and a suite of functions that act on it, `ak.this` and `ak.that`. All of those functions are individually documented, and many have examples.
The basic idea starts with a data structure like Apache Arrow (https://arrow.apache.org/)—a tree of general, variable-length types, organized in memory as a collection of columnar arrays—but performs operations on the data without ever taking it out of its columnar form. (3.5 minute explanation here: https://youtu.be/2NxWpU7NArk?t=661) Those columnar operations are compiled (in C++); there's a core of structure-manipulation functions suggestively named "cpu-kernels" that will also be implemented in CUDA (some already have, but that's in an experimental stage).
A key aspect of this is that structure can be manipulated just by changing values in some internal arrays and rearranging the single tree organizing those arrays. If, for instance, you want to replace a bunch of objects in variable-length lists with another structure, it never needs to instantiate those objects or lists as explicit types (e.g. `struct` or `std::vector`), and so the functions don't need to be compiled for specific data types. You can define any new data types at runtime and the same compiled functions apply. Therefore, JIT compilation is not necessary.
We do have Numba extensions so that you can iterate over runtime-defined data types in JIT-compiled Numba, but that's a second way to manipulate the same data. By analogy with NumPy, you can compute many things using NumPy's precompiled functions, as long as you express your workflow in NumPy's vectorized way. Numba additionally allows you to express your workflow in imperative loops without losing performance. It's the same way with Awkward Array: unpacking a million record structures or slicing a million variable-length lists in a single function call makes use of some precompiled functions (no JIT), but iterating over them at scale with imperative for loops requires JIT-compilation in Numba.
Just as we work with Numba to provide both of these programming styles—array-oriented and imperative—we'll also be working with JAX to add autodifferentiation (Anish Biswas will be starting on this in January; he's actually continuing work from last spring, but in a different direction). We're also working with Martin Durant and Doug Davis to replace our homegrown lazy arrays with industry-standard Dask, as a new collection type (https://github.com/ContinuumIO/dask-awkward/). A lot of my time, with Ianna Osborne and Ioana Ifrim at my university, is being spent refactoring the internals to make these kinds of integrations easier (https://indico.cern.ch/event/855454/contributions/4605044/). We found that we had implemented too much in C++ and need more, but not all, of the code to be in Python to be able to interact with third-party libraries.
If you have any other questions, I'd be happy to answer them!
A low-impact profiler to figure out how much memory each task in Dask is using
Pangeo + Binder (dev repo for a binder/pangeo fusion concept).
Project mention: Binder.pangeo.io shut down due to crypto mining | news.ycombinator.com | 2021-12-09
Dozent is a powerful downloader that is used to collect large amounts of Twitter data from the internet archive.
Python Dask related posts
File format for large data with many columns
2 projects | reddit.com/r/Python | 15 May 2022
Time Series Analysis for air pollution data not aligned [R] [P]
1 project | reddit.com/r/MachineLearning | 23 Apr 2022
What is the best way to save a csv.file in number only ? PC hangs when my file is more than 2GB
2 projects | reddit.com/r/learnpython | 4 Apr 2022
[D] STUMPY v1.11.0 Released for Modern Time Series Analysis
2 projects | reddit.com/r/MachineLearning | 22 Mar 2022
pystore: NEW Data - star count:406.0
1 project | reddit.com/r/algoprojects | 26 Feb 2022
What are some of the best open-source Dask projects in Python? This list will help you: