ispc
micro-profiler
| | ispc | micro-profiler |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 2,386 | 227 |
| Growth | 0.8% | - |
| Activity | 9.5 | 0.0 |
| Last commit | 7 days ago | 7 months ago |
| Language | C++ | C++ |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ispc
- Implementing a GPU's Programming Model on a CPU
This so-called GPU programming model existed for many decades before the first GPUs appeared, but at that time the compilers were not as good as the CUDA compilers, so the burden on the programmer was greater.
As another poster has already mentioned, a CUDA-inspired compiler for CPUs has been available for many years: ISPC (Intel SPMD Program Compiler), at https://github.com/ispc/ispc .
NVIDIA has the very annoying habit of using many terms that differ from those used in computer science for decades. Worse still, NVIDIA has not invented new words, but has frequently reused words that were already widely used with other meanings.
SIMT (Single Instruction, Multiple Threads) is not the worst term coined by NVIDIA, but there was no need for yet another acronym. For instance, they could have used SPMD (Single Program, Multiple Data streams), which dates from 1988, two decades before CUDA.
Moreover, SIMT is the same thing that was called an "array of processes" by C.A.R. Hoare in August 1978 (in "Communicating Sequential Processes"), "replicated parallel" in occam in 1985, "PARALLEL DO" in OpenMP Fortran in October 1997, and "parallel for" in OpenMP C and C++ in October 1998.
The only (but extremely important) innovation brought by CUDA is that the compiler is smart enough that the programmer does not need to know the structure of the processor, i.e. how many cores it has and how many SIMD lanes each core has. The CUDA compiler automatically distributes the work over the available SIMD lanes and cores, and in most cases the programmer does not care whether two executions of the per-data-item function happen on two different cores or on two different SIMD lanes of the same core.
- SIMD intrinsics and the possibility of a standard library solution
ISPC: https://github.com/ispc/ispc
- Prefix Sum with SIMD
Have you looked at [ISPC - Intel SPMD Program Compiler][0]?
[0]: https://github.com/ispc/ispc
- Duff’s Device in 2021
micro-profiler
We haven't tracked posts mentioning micro-profiler yet.
Tracking mentions began in Dec 2020.
What are some alternatives?
highway - Performance-portable, length-agnostic SIMD with runtime dispatch
Beef - Beef Programming Language
waifu2x-converter-cpp - Improved fork of Waifu2X C++ using OpenCL and OpenCV
ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!
HashTableBenchmark - A simple cross-platform speed & memory-efficiency benchmark for the most common hash-table implementations in the C++ world
elena-lang - ELENA is a general-purpose language with late binding. It is multi-paradigm, combining features of functional and object-oriented programming. A rich set of tools is provided for message dispatching: multi-methods, message qualifying, generic message handlers, and run-time interfaces
compute-runtime - Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
lunix - Lua Unix Module.
pcm - Intel® Performance Counter Monitor (Intel® PCM)
thor-os - Simple operating system in C++, written from scratch
eve - Expressive Vector Engine - SIMD in C++ Goes Brrrr
simde - Implementations of SIMD instruction sets for systems which don't natively support them.