ispc vs simde
| | ispc | simde |
|---|---|---|
| Mentions | 4 | 7 |
| Stars | 2,405 | 2,171 |
| Growth | 1.2% | 3.6% |
| Activity | 9.5 | 9.1 |
| Latest commit | 4 days ago | 4 days ago |
| Language | C++ | C |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ispc
- Implementing a GPU's Programming Model on a CPU
This so-called GPU programming model existed for decades before the first GPUs appeared, but compilers at the time were not as good as the CUDA compilers, so the burden on the programmer was greater.
As another poster has already mentioned, a CUDA-inspired compiler for CPUs has been available for many years: ISPC (the Implicit SPMD Program Compiler), at https://github.com/ispc/ispc .
NVIDIA has the very annoying habit of using many terms that differ from those previously used in computer science for decades. The worst part is that NVIDIA has not invented new words, but has frequently reused words that were already widely used with other meanings.
SIMT (Single-Instruction, Multiple Thread) is not the worst term coined by NVIDIA, but there was no need for yet another acronym. For instance, they could have used SPMD (Single Program, Multiple Data), which dates from 1988, two decades before CUDA.
Moreover, SIMT is the same thing that was called an "array of processes" by C.A.R. Hoare in August 1978 (in "Communicating Sequential Processes"), "replicated parallel" by Occam in 1985, "PARALLEL DO" by OpenMP Fortran in October 1997, and "parallel for" by OpenMP C/C++ in October 1998.
The only (but extremely important) innovation brought by CUDA is that the compiler is smart enough that the programmer does not need to know the structure of the processor, i.e. how many cores it has and how many SIMD lanes each core has. The CUDA compiler automatically distributes the work over the available SIMD lanes and cores, and in most cases the programmer does not care whether two executions of the per-data-item function happen on two different cores or on two different SIMD lanes of the same core.
- SIMD intrinsics and the possibility of a standard library solution
ISPC: https://github.com/ispc/ispc
- Prefix Sum with SIMD
Have you looked at [ISPC - Intel SPMD Program Compiler][0]?
[0]: https://github.com/ispc/ispc
- Duff’s Device in 2021
simde
- The Case of the Missing SIMD Code
I was curious about these libraries a few weeks ago and did some searching. Is there one that's got a clearly dominating set of users or contributors?
I don't know what a good way to compare these might be, other than perhaps activity/contributor count.
[1] https://github.com/simd-everywhere/simde
[2] https://github.com/ermig1979/Simd
[3] https://github.com/google/highway
[4] https://gitlab.com/libeigen/eigen
[5] https://github.com/shibatch/sleef
- Rise: Accelerate the Development of Open Source Software for RISC-V
I note that SIMDe doesn't have RISC-V support yet (but it does support Loongson LoongArch):
https://github.com/simd-everywhere/simde/
There are still a ton of things to do to get the Debian riscv64 port going too:
https://wiki.debian.org/PortsDocs/New
- SIMD intrinsics and the possibility of a standard library solution
- Portable SIMD library
SIMDe is everything you're after: https://github.com/simd-everywhere/simde
- SIMD Everywhere – SIMD intrinsics on hardware which doesn't support them
- Making Your Own Tools
> low level code that can run on multiple hardware architectures
I thought SIMD Everywhere was a pretty interesting project for that; it lets you write x86 SSE/AVX code and run it on non-x86 architectures:
https://github.com/simd-everywhere/simde
- Adobe Photoshop Ships on Macs Apple Silicon/M1 – 50% Faster
> architecture-specific features such as SSE/AVX which is not portable.
I don’t have hands-on experience, but somewhere on HN I saw this: https://github.com/simd-everywhere/simde If starting a new cross-platform project today, I would try that library first, before doing the usual intrinsics.
What are some alternatives?
highway - Performance-portable, length-agnostic SIMD with runtime dispatch
nsimd - Agenium Scale vectorization library for CPUs and GPUs
Beef - Beef Programming Language
sse2neon - A translator from Intel SSE intrinsics to Arm/Aarch64 NEON implementation
ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!
android-inline-hook - ShadowHook is an Android inline hook library which supports thumb, arm32 and arm64.
micro-profiler - Cross-platform low-footprint realtime C/C++ Profiler
libsimdpp - Portable header-only C++ low level SIMD library
elena-lang - ELENA is a general-purpose language with late binding. It is multi-paradigm, combining features of functional and object-oriented programming. A rich set of tools is provided to deal with message dispatching: multi-methods, message qualifying, generic message handlers, run-time interfaces
Sparkle - A software update framework for macOS
lunix - Lua Unix Module.
picoRTOS - Very small, lightning fast, yet portable RTOS with SMP support