| | AdaptiveCpp | NetworkX |
|---|---|---|
| Mentions | 19 | 61 |
| Stars | 1,046 | 14,225 |
| Growth | 2.8% | 1.1% |
| Activity | 9.7 | 9.6 |
| Last commit | 1 day ago | 1 day ago |
| Language | C++ | Python |
| License | BSD 2-clause "Simplified" License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AdaptiveCpp
-
What Every Developer Should Know About GPU Computing
Sapphire Rapids is a CPU.
AMD's primary focus for a GPU software ecosystem these days seems to be implementing CUDA with s/cuda/hip, so AMD directly supports and encourages running GPU software written in CUDA on AMD GPUs.
The only implementation of SYCL for AMD GPUs that I can find is a hobby project that apparently is not allowed to use either the 'hip' or 'sycl' names. https://github.com/AdaptiveCpp/AdaptiveCpp
-
AMD May Get Across the CUDA Moat
Not natively, but AdaptiveCpp (previously hipSYCL, then Open SYCL) has a single-source, single-compiler-pass design, where they basically store LLVM IR as an intermediate representation.
https://github.com/AdaptiveCpp/AdaptiveCpp/blob/develop/doc/...
The performance penalty was within a few percent, at least according to the paper (figures 9 and 10).
-
Offloading standard C++ PSTL to Intel, NVIDIA and AMD GPUs with AdaptiveCpp
AdaptiveCpp (formerly known as hipSYCL) is an independent, open source, clang-based heterogeneous C++ compiler project. I thought some of you might be interested in knowing that we recently added support to offload standard C++ parallel STL algorithms to GPUs from all major vendors.
-
AMD's HIPRT Working Its Way To Blender With ~25% Faster Rendering
In fact AdaptiveCpp was initially called hipSYCL because it was based on AMD's ROCm/HIP. AMD had hipSYCL code running on the Frontier supercomputer at least four years ago and continues to support it.
-
hipSYCL can now generate a binary that runs on any Intel/NVIDIA/AMD GPU - in a single compiler pass. It is now the first single-pass SYCL compiler, and the first with unified code representation across backends.
Apple Silicon support through Metal is something that is actively discussed in hipSYCL. See https://github.com/illuhad/hipSYCL/issues/864 https://github.com/illuhad/hipSYCL/issues/460 (loooong discussion)
-
Bringing Nvidia® and AMD support to oneAPI
But really, the DPC++ part of oneAPI (which is many APIs) is just SYCL + extensions, and there are several other SYCL implementations which have already featured CUDA and HIP (AMD) support for a long time. The most popular and widely-used is hipSYCL, which we've been using in an HPC context on NV hardware for over 4 years now.
-
Intel oneAPI 2023 Released - AMD & NVIDIA Plugins Available
Unfortunately, the AMD and Nvidia plugins are proprietary. AMD users are probably better served with hipSYCL, if they somehow find an application using SYCL...
-
There is a framework for everything.
Also, you might want to take a look at an implementation like hipSYCL :)
-
The Next Platform: "Intel Takes The SYCL To Nvidia's CUDA With Migration Tool"
Yup. SYCL is the future: https://github.com/illuhad/hipSYCL
-
Phoronix: "Intel's Vulkan Linux Driver Adds Experimental Mesh Shader Support For DG2/Alchemist"
ROCm is completely independent from these. It's a compute stack containing an OpenCL implementation for Radeon GPUs, plus a CUDA-like language called HIP which can be compiled to either device code for Radeon GPUs or to PTX to work with Nvidia GPUs. However, some researchers also created hipSYCL that allows SYCL to run atop HIP; you can think of it like DXVK - the program contains the DirectX/SYCL API, and DXVK/hipSYCL converts it to Vulkan/HIP (with one difference - DXVK does the conversion at runtime, while hipSYCL does it at compile time).
NetworkX
-
Routes to LANL from 186 sites on the Internet
Built from this data... https://github.com/networkx/networkx/blob/main/examples/grap...
-
The Hunt for the Missing Data Type
I think one of the elements the author is missing here is that graphs are sparse matrices, and thus can be expressed with Linear Algebra. They mention adjacency matrices, but not sparse adjacency matrices, or incidence matrices (which can express multi- and hypergraphs).
Linear Algebra is how almost all academic graph theory is expressed, and large chunks of machine learning and AI research are expressed in this language as well. There was a recent thread here about PageRank and how it's really an eigenvector problem over a matrix, and the reality is, all graphs are matrices; they're typically sparse ones.
One question you might ask is, why would I do this? Why not just write my graph algorithms as a function that traverses nodes and edges? And one of the big answers is, parallelism. How are you going to do it? Fork a thread at each edge? Use a thread pool? What if you want to do it on CUDA too? Now you have many problems. How do you know how to efficiently schedule work? By treating graph traversal as a matrix multiplication, you just say Ax = b, and let the library figure it out on the specific hardware you want to target.
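As a minimal pure-Python sketch of that idea (the toy graph and helper function below are illustrative, not from the thread): one BFS step over a graph stored as an adjacency matrix is exactly a boolean matrix-vector product, and a real GraphBLAS implementation would do the same thing with sparse matrices on whatever hardware you target.

```python
# Toy directed graph as a dense adjacency matrix: A[i][j] = 1 means edge i -> j.
# (Illustrative only; GraphBLAS would use a sparse representation.)
A = [
    [0, 1, 1, 0],   # node 0 -> nodes 1, 2
    [0, 0, 0, 1],   # node 1 -> node 3
    [0, 0, 0, 1],   # node 2 -> node 3
    [0, 0, 0, 0],   # node 3 (sink)
]

def step(A, x):
    """One BFS step as a boolean mat-vec: y[j] = OR_i (x[i] AND A[i][j])."""
    n = len(A)
    return [int(any(x[i] and A[i][j] for i in range(n))) for j in range(n)]

frontier = [1, 0, 0, 0]        # start the traversal at node 0
level1 = step(A, frontier)     # [0, 1, 1, 0]: one hop reaches nodes 1 and 2
level2 = step(A, level1)       # [0, 0, 0, 1]: two hops reach node 3
```

The point of the "Ax = b" framing is that `step` becomes a single library call, and the library chooses the parallel schedule for you.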
Here for example is a recent question on the NetworkX repo for how to find the boundary of a triangular mesh, it's one single line of GraphBLAS if you consider the graph as a matrix:
https://github.com/networkx/networkx/discussions/7326
This brings a very powerful language to the table, Linear Algebra. A language spoken by every scientist, engineer, mathematician and researcher on the planet. By treating graphs like matrices, graph algorithms become expressible as mathematical formulas. For example, neural networks are graphs of adjacent layers, and the operation used to traverse from layer to layer is matrix multiplication. This generalizes to all matrices.
There is a lot of very new and powerful research and development going on around sparse graphs with linear algebra in the GraphBLAS API standard, and its best reference implementation, SuiteSparse:GraphBLAS:
https://github.com/DrTimothyAldenDavis/GraphBLAS
SuiteSparse provides a highly optimized, parallel and CPU/GPU supported sparse Matrix Multiplication. This is relevant because traversing graph edges IS matrix multiplication when you realize that graphs are matrices.
Recently NetworkX has grown the ability to have different "graph engine" backends, and one of the first to be developed uses the python-graphblas library that binds to SuiteSparse. I'm not a direct contributor to that particular work, but as I understand it there have been great results.
-
Build the dependency graph of your BigQuery pipelines at no cost: a Python implementation
In the project we used the Python library networkx and a DiGraph object (directed graph). To detect a table reference in a query, we use sqlglot, a SQL parser (among other things) that works well with BigQuery.
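A minimal sketch of that approach, assuming the table references have already been extracted from the SQL (the pipeline names below are made up; in the post, sqlglot does the extraction):

```python
import networkx as nx

# Hypothetical pipeline: an edge (a, b) means the query producing table b
# reads from table a. In the original setup these edges come from parsing
# each query's SQL with sqlglot and collecting the referenced tables.
deps = nx.DiGraph()
deps.add_edges_from([
    ("raw_events", "staging_events"),
    ("staging_events", "daily_agg"),
    ("raw_users", "daily_agg"),
])

# A topological sort of the DiGraph gives a valid build order for the pipeline.
order = list(nx.topological_sort(deps))
```

Because the dependencies form a DAG, `topological_sort` is guaranteed to place every table after all of the tables it reads from.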
- NetworkX – Network Analysis in Python
-
Custom libraries and utility tools for challenges
If you program in Python, you can use NetworkX for that. But it's probably a good idea to implement the basic algorithms yourself at least one time.
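For instance, Dijkstra's shortest path — which NetworkX provides ready-made as `nx.dijkstra_path` — is worth writing once by hand (the graph below is made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a weighted digraph given as
    {node: [(neighbour, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
assert dijkstra(g, "a")["c"] == 3   # a -> b -> c beats the direct a -> c edge
```

Writing it once makes it much clearer what the library's `weight=` and heap-based internals are actually doing.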
-
Google open-sources their graph mining library
For those wanting to play with graphs and ML I was browsing the arangodb docs recently and I saw that it includes integrations to various graph libraries and machine learning frameworks [1]. I also saw a few jupyter notebooks dealing with machine learning from graphs [2].
Integrations include:
* NetworkX -- https://networkx.org/
* DeepGraphLibrary -- https://www.dgl.ai/
* cuGraph (Rapids.ai Graph) -- https://docs.rapids.ai/api/cugraph/stable/
* PyG (PyTorch Geometric) -- https://pytorch-geometric.readthedocs.io/en/latest/
--
1: https://docs.arangodb.com/3.11/data-science/adapters/
2: https://github.com/arangodb/interactive_tutorials#machine-le...
-
org-roam-pygraph: Build a graph of your org-roam collection for use in Python
org-roam-ui is a great interactive visualization tool, but its main use is visualization. The hope of this library is that it could be part of a larger graph analysis pipeline. The demo provides an example graph visualization, but what you choose to do with the resulting graph certainly isn't limited to that. See for example networkx.
What are some alternatives?
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
Numba - NumPy aware dynamic Python compiler using LLVM
HIP-CPU - An implementation of HIP that works on CPUs, across OSes.
Dask - Parallel computing with task scheduling
triSYCL - Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group
julia - The Julia Programming Language
HIP - HIP: C++ Heterogeneous-Compute Interface for Portability
RDKit - The official sources for the RDKit library
cuda-api-wrappers - Thin C++-flavored header-only wrappers for core CUDA APIs: Runtime, Driver, NVRTC, NVTX.
snap - Stanford Network Analysis Platform (SNAP) is a general purpose network analysis and graph mining library.
cuda_memtest - Fork of CUDA GPU memtest :eyeglasses:
SymPy - A computer algebra system written in pure Python