Dask
NetworkX
| | Dask | NetworkX |
|---|---|---|
| Mentions | 32 | 61 |
| Stars | 11,982 | 14,153 |
| Growth | 1.5% | 1.4% |
| Activity | 9.7 | 9.6 |
| Latest commit | 5 days ago | 7 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Dask
- The Distributed Tensor Algebra Compiler (2022)
- A peek into Location Data Science at Ola
Data scientists work on phenomenally large datasets, and Dask is a handy tool for exploration within the confines of a single cloud VM or their local PCs. Location data visualization is an essential part of deciding further algorithm development and roadmap for projects. This lays the foundation for data engineering and science to work at scale, with petabytes of data.
- File format for large data with many columns
- What is the best way to save a CSV file as numbers only? My PC hangs when the file is more than 2GB
Dask
- Large Scale Hydrology: Geocomputational tools that you use
We're using a lot of Python. In addition to these, gridMET, Dask, HoloViz, and kerchunk.
- msgspec - a fast & friendly JSON/MessagePack library
I wrote this for speeding up the RPC messaging in Dask, but figured it might be useful for others as well. The source is available on GitHub here: https://github.com/jcrist/msgspec.
- What does it mean to scale your Python-powered pipeline?
Dask: Distributed data frames, machine learning and more
- Data pipelines with Luigi
To do that, we are using Dask efficiently, simply creating on-demand local (or remote) clusters in the task's run() method.
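A hedged sketch of that pattern — the `run()` function below is an illustrative stand-in for a Luigi task's run() method, not the post's actual code:

```python
from dask.distributed import Client, LocalCluster

def run():
    # Spin up a throwaway in-process cluster for this task only;
    # both context managers tear everything down when the task ends.
    with LocalCluster(n_workers=2, threads_per_worker=1, processes=False) as cluster:
        with Client(cluster) as client:
            futures = client.map(lambda x: x * x, range(10))
            return sum(client.gather(futures))

result = run()
print(result)  # 285
```

Swapping `LocalCluster` for a remote cluster class (or a `Client` pointed at a scheduler address) gives the "remote" variant the post mentions without changing the task body.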
- Is Numpy always more efficient than Pandas? And how much should we rely on Python anyway?
Look into Dask, see: https://dask.org/
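For the NumPy side of that question, Dask's array collection applies the same chunking idea: a familiar NumPy-style API over blocks that are computed lazily. A minimal sketch:

```python
import numpy as np
import dask.array as da

# Wrap a NumPy array in ten 100k-element chunks; nothing is computed yet.
x = da.from_array(np.arange(1_000_000), chunks=100_000)

# The mean is computed blockwise, so no single step needs the whole array.
m = x.mean().compute()
print(m)  # 499999.5
```

The same code works when the chunks come from many files or a cluster, which is the scaling path the comment is pointing at.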
- Ask HN: Is PySpark a Dead-End?
[1] https://dask.org/
NetworkX
- Routes to LANL from 186 sites on the Internet
Built from this data... https://github.com/networkx/networkx/blob/main/examples/grap...
- The Hunt for the Missing Data Type
I think one of the elements the author is missing here is that graphs are sparse matrices, and thus can be expressed with Linear Algebra. They mention adjacency matrices, but not sparse adjacency matrices or incidence matrices (which can express multi- and hypergraphs).
Linear Algebra is how almost all academic graph theory is expressed, and large chunks of machine learning and AI research are expressed in this language as well. There was a recent thread here about PageRank and how it's really an eigenvector problem over a matrix, and the reality is that all graphs are matrices — typically sparse ones.
One question you might ask is, why would I do this? Why not just write my graph algorithms as a function that traverses nodes and edges? And one of the big answers is, parallelism. How are you going to do it? Fork a thread at each edge? Use a thread pool? What if you want to do it on CUDA too? Now you have many problems. How do you know how to efficiently schedule work? By treating graph traversal as a matrix multiplication, you just say Ax = b, and let the library figure it out on the specific hardware you want to target.
Here for example is a recent question on the NetworkX repo for how to find the boundary of a triangular mesh, it's one single line of GraphBLAS if you consider the graph as a matrix:
https://github.com/networkx/networkx/discussions/7326
This brings a very powerful language to the table: Linear Algebra, a language spoken by every scientist, engineer, mathematician and researcher on the planet. By treating graphs like matrices, graph algorithms become expressible as mathematical formulas. For example, neural networks are graphs of adjacent layers, and the operation used to traverse from layer to layer is matrix multiplication. This generalizes to all matrices.
There is a lot of very new and powerful research and development going on around sparse graphs with linear algebra in the GraphBLAS API standard and its best-known reference implementation, SuiteSparse:GraphBLAS:
https://github.com/DrTimothyAldenDavis/GraphBLAS
SuiteSparse provides highly optimized, parallel, CPU- and GPU-supported sparse matrix multiplication. This is relevant because traversing graph edges IS matrix multiplication, once you realize that graphs are matrices.
Recently NetworkX has grown the ability to use different "graph engine" backends, and one of the first to be developed uses the python-graphblas library, which binds to SuiteSparse. I'm not a direct contributor to that particular work, but as I understand it there have been great results.
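The "Ax = b" traversal idea above can be sketched with scipy.sparse standing in for a GraphBLAS implementation (GraphBLAS adds arbitrary semirings, masking, and more; this is only the plain matrix-vector special case, on a made-up four-node graph):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed graph: 0->1, 0->2, 1->3, 2->3.
A = csr_matrix(np.array([[0, 1, 1, 0],
                         [0, 0, 0, 1],
                         [0, 0, 0, 1],
                         [0, 0, 0, 0]]))

x = np.array([1, 0, 0, 0])         # frontier vector: start at node 0
f1 = ((A.T @ x) > 0).astype(int)   # one matvec = one BFS step
f2 = ((A.T @ f1) > 0).astype(int)  # second step
print(f1.nonzero()[0])  # [1 2]
print(f2.nonzero()[0])  # [3]
```

Each multiplication advances the frontier one hop, and the library — not hand-written traversal code — decides how to parallelize it.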
- Build the dependency graph of your BigQuery pipelines at no cost: a Python implementation
In the project we used the Python library networkx and a DiGraph object (directed graph). To detect a table reference in a query, we use sqlglot, a SQL parser (among other things) that works well with BigQuery.
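A hedged sketch of the DiGraph half of that approach (the table names are made up and the sqlglot parsing step is omitted):

```python
import networkx as nx

# One edge per detected reference: from the table a query reads
# to the table it builds.
g = nx.DiGraph()
g.add_edge("raw_events", "sessions")
g.add_edge("sessions", "daily_kpis")

# A topological sort of the dependency graph is a valid build order.
order = list(nx.topological_sort(g))
print(order)  # ['raw_events', 'sessions', 'daily_kpis']
```

With the real graph in hand, networkx also gives ancestor/descendant queries for free, e.g. "which downstream tables break if raw_events changes".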
- NetworkX – Network Analysis in Python
- Custom libraries and utility tools for challenges
If you program in Python, you can use NetworkX for that. But it's probably a good idea to implement the basic algorithms yourself at least once.
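In that spirit, here is a hand-rolled breadth-first shortest path over a plain adjacency dict — the kind of basic algorithm worth writing once before reaching for `nx.shortest_path` (the graph is an invented example):

```python
from collections import deque

def bfs_shortest_path(adj, start, goal):
    """Return the shortest path from start to goal, or None if unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()       # FIFO order guarantees shortest-first
        node = path[-1]
        if node == goal:
            return path
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
path = bfs_shortest_path(adj, "a", "d")
print(path)  # ['a', 'b', 'd']
```

Once you've written this by hand, the NetworkX equivalents (and their complexity trade-offs) are much easier to reason about.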
- Google open-sources their graph mining library
For those wanting to play with graphs and ML I was browsing the arangodb docs recently and I saw that it includes integrations to various graph libraries and machine learning frameworks [1]. I also saw a few jupyter notebooks dealing with machine learning from graphs [2].
Integrations include:
* NetworkX -- https://networkx.org/
* DeepGraphLibrary -- https://www.dgl.ai/
* cuGraph (Rapids.ai Graph) -- https://docs.rapids.ai/api/cugraph/stable/
* PyG (PyTorch Geometric) -- https://pytorch-geometric.readthedocs.io/en/latest/
--
1: https://docs.arangodb.com/3.11/data-science/adapters/
2: https://github.com/arangodb/interactive_tutorials#machine-le...
- org-roam-pygraph: Build a graph of your org-roam collection for use in Python
org-roam-ui is a great interactive visualization tool, but its main use is visualization. The hope of this library is that it could be part of a larger graph analysis pipeline. The demo provides an example graph visualization, but what you choose to do with the resulting graph certainly isn't limited to that. See for example networkx.
What are some alternatives?
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Numba - NumPy aware dynamic Python compiler using LLVM
julia - The Julia Programming Language
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
RDKit - The official sources for the RDKit library
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
snap - Stanford Network Analysis Platform (SNAP) is a general purpose network analysis and graph mining library.
Interactive Parallel Computing with IPython - IPython Parallel: Interactive Parallel Computing in Python
SymPy - A computer algebra system written in pure Python
statsmodels - Statsmodels: statistical modeling and econometrics in Python