Kernels vs ompi
| | Kernels | ompi |
|---|---|---|
| Mentions | 6 | 10 |
| Stars | 401 | 2,008 |
| Growth | 0.5% | 2.9% |
| Activity | 7.2 | 9.7 |
| Latest commit | 11 days ago | 6 days ago |
| Language | C | C |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Kernels
-
Can you give me some proof that storing multidimensional data in a 1D array is the standard and best way to do it?
https://github.com/ParRes/Kernels/tree/default/C1z has some examples I’ve tested in the past. 2d is in the filenames of the relevant ones.
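For concreteness, here is a minimal sketch of the idiom (my own example, not code from the repo): one contiguous allocation, with element (i, j) of an M×N array stored at index i*N + j in row-major order. This is the same layout C uses for built-in 2D arrays, and it keeps the whole array contiguous for the cache.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t M = 3, N = 4;

    /* one contiguous block instead of an array of row pointers */
    double *a = malloc(M * N * sizeof(double));
    if (a == NULL) return 1;

    /* element (i, j) lives at index i*N + j (row-major) */
    for (size_t i = 0; i < M; i++)
        for (size_t j = 0; j < N; j++)
            a[i * N + j] = (double)(10 * i + j);

    printf("a(2,3) = %g\n", a[2 * N + 3]); /* prints 23 */

    free(a);
    return 0;
}
```

The column-major convention (index j*M + i, as in Fortran) is equally standard; the point is that a single flat block plus an index formula is what numerical libraries generally use under the hood.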
-
Fortran on GPU
I've evaluated all of these against each other. One presentation is https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41620/ (sorry, you have to register - it's not my preference). The performance numbers there are based on code derived from https://github.com/ParRes/Kernels/tree/default/FORTRAN (the code differences are not interesting). Another comparison is found in https://github.com/jeffhammond/nwchem-tce-triples-kernels, which is more complicated in some ways.
-
Cross Platform Computing Framework?
If you want to learn by viewing code side by side, https://github.com/ParRes/Kernels/tree/default/Cxx11 might be useful. I haven’t kept up with my RAJA ports because they kept making breaking changes in the API a few years ago (should be stable now).
-
Co-Array MPI issue.
Try the coarray programs in https://github.com/ParRes/Kernels/tree/default/FORTRAN. Those were written by people who know what they're doing and have been shown to execute correctly, so they might help you work out whether your implementation is broken.
-
I am in grad school and starting a CFD class soon. I am proficient in Python and MATLAB, but the course requires Fortran. How rough a time will I have coding difficult concepts in a new language? I'm hoping my logic skills will overcome any syntax issues I run into, but wanted to ask.
https://github.com/ParRes/Kernels has examples of the same thing written in Fortran, MATLAB/Octave and Numpy, if it helps.
-
Small Open Source HPC Code Recommendations
You absolutely want to take a look at the Parallel Research Kernels (PRK) repo at https://github.com/ParRes/Kernels.
ompi
-
Ask HN: Does anyone care about OpenPOWER?
The commercial Linux world (see https://github.com/open-mpi/ompi/issues/4349) and other open source OSes (eg FreeBSD) seem to have lined up behind little-endian PowerPC. IBM still has a big-endian problem with AIX, IBM i, and Linux on Z.
-
Announcing Chapel 1.32
Roughly, the sets of computational problems that people used (use?) MPI for. Things like numerical solvers for sparse matrices that are so big that you need to split them across your entire cluster. These still require a lot of node-to-node communication, and on top of it, the pattern is dependent on each problem (so easy solutions like map-reduce are effectively out). See eg https://www.open-mpi.org/, and https://courses.csail.mit.edu/18.337/2005/book/Lecture_08-Do... for the prototypical use case.
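To make that node-to-node pattern concrete, here is a toy halo-exchange sketch in C (my own example, not taken from either link): each rank owns a slice of a grid plus one ghost cell at each end, and swaps boundary values with its neighbors every step. This exchange is the communication skeleton under many distributed stencil and sparse-solver codes.

```c
#include <mpi.h>
#include <stdio.h>

#define N 1024 /* local slice size per rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* local data plus one ghost cell at each end */
    double u[N + 2];
    for (int i = 0; i < N + 2; i++)
        u[i] = (double)rank;

    /* MPI_PROC_NULL turns the edge ranks' exchanges into no-ops */
    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* send my first real cell left, receive right neighbor's into my ghost */
    MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                 &u[N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* send my last real cell right, receive left neighbor's into my ghost */
    MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d: ghost cells = %.0f / %.0f\n", rank, u[0], u[N + 1]);
    MPI_Finalize();
    return 0;
}
```

In a real solver this exchange runs once per iteration, which is why the communication pattern (and the interconnect) matters so much more here than in map-reduce-style workloads.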
-
How much are you meant to comment on a code?
One of the guys at the local LUG is one of the lead maintainers of Open MPI. He told us about a comment that ran into the hundreds of lines, all for a one-line change in the code.
-
Which license to choose when you want credit
But it would be very inconvenient to have to keep crediting everyone who's ever worked on it. If you look at old projects, their licenses can have like 10-20 of those lines (here's one I was recently looking into).
-
First True Exascale Supercomputer
I have a bit of experience programming for a highly-parallel supercomputer, specifically in my case an IBM BlueGene/Q. In that case, the answer is a lot of message passing (we used Open MPI [0]). Since the nodes are discrete and don't have any shared memory, you end up with something kinda reminiscent of the actor model as popularized by Erlang and co -- but in C for number-crunching performance.
That said, each of the nodes is itself composed of multiple cores with shared memory. So in cases where you really want to grind out performance, you actually end up using message passing to divvy up chunks of work, and then use classic pthreads to parallelize things further, with lower latency.
Debugging is a bit of a nightmare, though, since some bugs inevitably only come up once you have a large number of nodes running the algorithm in parallel. But you'll probably be in a mainframe-style time-sharing setup, so you may have to wait hours or more to rerun things.
This applies less to some of the newer supercomputers, which are more or less clusters of GPUs instead of clusters of CPUs. I imagine there's some commonality, but I haven't worked with any of them so I can't really say.
[0] https://www.open-mpi.org/
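A minimal sketch of that hybrid pattern (my own example, not BlueGene/Q code): MPI splits the work across ranks, each rank fans its chunk out to pthreads over shared memory, and a single MPI_Reduce combines the per-node results. Build with something like `mpicc -pthread hybrid.c`.

```c
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define CHUNK    1000000L

static double partial[NTHREADS]; /* one slot per thread: no locking needed */

static void *worker(void *arg)
{
    long t = (long)arg;
    double sum = 0.0;
    /* each thread strides over a share of this rank's chunk */
    for (long i = t; i < CHUNK; i += NTHREADS)
        sum += (double)i;
    partial[t] = sum;
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    /* FUNNELED: only the main thread makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    double local = 0.0, global = 0.0;
    for (int t = 0; t < NTHREADS; t++)
        local += partial[t];

    /* message passing combines the per-node results across the cluster */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %f\n", global);

    MPI_Finalize();
    return 0;
}
```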
-
Managing parallelism by process vs by machine
-
MPI + CUDA Program for thermal conductivity problem
I would suggest using OpenMPI because it's pretty easy to get started with. You can build OpenMPI with CUDA support, then you can pass device pointers directly to MPI_Send and MPI_Recv. Then you don't have to deal with transfers and synchronization issues.
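A minimal sketch of what that looks like (my own example; it assumes an Open MPI build configured with --with-cuda, since only a CUDA-aware MPI will accept device pointers). Build with something like `mpicc cuda_mpi.c -lcudart` plus the CUDA include and library paths.

```c
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    double *d_buf; /* device pointer */
    cudaMalloc((void **)&d_buf, n * sizeof(double));

    /* run with at least two ranks, e.g. mpiexec -n 2 ./a.out */
    if (rank == 0) {
        cudaMemset(d_buf, 0, n * sizeof(double));
        /* CUDA-aware MPI: the device buffer goes straight into MPI_Send,
         * with no explicit cudaMemcpy to a host staging buffer */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

Without a CUDA-aware build, the same program needs host staging buffers and explicit cudaMemcpy calls around each send and receive.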
-
Distributed Training Made Easy with PyTorch-Ignite
backends from native torch distributed configuration: nccl, gloo, mpi.
-
FEA computer simulation question
I use a Linux (Ubuntu) machine with MPI (https://www.open-mpi.org/). I had a question about making my computer simulations faster: would it be better to get an older AMD 9590 machine clocked at 4.7 GHz, or continue using my Ryzen 7 1700 machine clocked at something like 3.5 GHz?
-
C Deep
OpenMPI - Message passing interface implementation. BSD-3-Clause
What are some alternatives?
grbl-L-Mega - An open source, embedded, high performance g-code-parser and CNC milling controller written in optimized C that will run on an Arduino Mega2560. Forked from GRBL modified for use on a lathe with spindle sync threading
gloo - Collective communications library with various primitives for multi-machine training.
analisis-numerico-computo-cientifico - Numerical analysis and scientific computing
Redis - Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.
miniMD - MiniMD Molecular Dynamics Mini-App
NCCL - Optimized primitives for collective multi-GPU communication
john - John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs
FlatBuffers - FlatBuffers: Memory Efficient Serialization Library
computecpp-sdk - Collection of samples and utilities for using ComputeCpp, Codeplay's SYCL implementation
libvips - A fast image processing library with low memory needs.
JohnTheRipper - John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs [Moved to: https://github.com/openwall/john]
SWIFT - Modern astrophysics and cosmology particle-based code. Mirror of gitlab developments at https://gitlab.cosma.dur.ac.uk/swift/swiftsim