glmark2 VS ffi-overhead

Compare glmark2 and ffi-overhead to see how they differ.

glmark2

glmark2 is an OpenGL 2.0 and ES 2.0 benchmark (by glmark2)

ffi-overhead

Comparing the C FFI (foreign function interface) overhead on various programming languages (by dyu)
                 glmark2                          ffi-overhead
Mentions         1                                18
Stars            386                              635
Growth           2.3%                             -
Activity         5.5                              0.0
Last commit      about 1 month ago                9 months ago
Language         C                                C
License          GNU General Public License v3.0  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

glmark2

Posts with mentions or reviews of glmark2. We have used some of these posts to build our list of alternatives and similar projects.

We haven't tracked posts mentioning glmark2 yet.
Tracking mentions began in Dec 2020.

ffi-overhead

Posts with mentions or reviews of ffi-overhead. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-01.
  • Can Fortran survive another 15 years?
    7 projects | news.ycombinator.com | 1 May 2023
    What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biological models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models? I mean.

    > If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.

    It tests with and without BLAS/LAPACK (which isn't always helpful, which of course you'd see from the benchmarks if you read them). One of the key differences of course though is that there are some pure Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalent in many scenarios, and that's one noted factor for the performance boost (and is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases, for example https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead showing the SciPyDiffEq handling with the Julia JIT optimizations gives a lower overhead than direct SciPy+Numba, so we use the lower overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....

    > you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations

    You do realize that a .so has lower overhead to call from a JIT-compiled language than from a statically compiled language like C, because you can optimize away some of the bindings at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia as faster than C and Fortran here. This shouldn't be surprising because it's pretty clear how that works?

    I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto updating tons and tons of ODE benchmarks with ODE systems ranging from size 2 to the thousands, with as many things as we can wrap in as many scenarios as we can wrap. And we don't even "win" all of our benchmarks because unlike for you, these benchmarks aren't for winning but for tracking development (somehow for Hacker News folks they ignore the utility part and go straight to language wars...).

    If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.

  • Understanding N and 1 queries problem
    3 projects | news.ycombinator.com | 2 Jan 2023
    Piling on about overhead (and SQLite), many high-level languages take some hit for using an FFI. So you're still incentivized to avoid tons of SQLite calls.

    https://github.com/dyu/ffi-overhead

  • Are there plans to improve concurrency in Rust?
    8 projects | /r/rust | 26 Dec 2022
    Go doesn't even have native thread stacks. When calling any FFI function, Go has to switch over to an on-demand stack and coordinate the goroutine and the runtime to avoid preemption and starvation. This is part of why Go's calling overhead is over 30x slower than C/C++/Rust (source). It's understandably become Go community culture to act like FFI is just not even an option and reinvent everything in Go, but that reinvented Go suffers from these other problems plus many more (such as optimizing far worse than GCC or LLVM).
  • Comparing the C FFI overhead on various languages
    4 projects | news.ycombinator.com | 14 May 2022
    Some of the results look outdated. The Dart results look bad (25x slower than C), but looking at the code (https://github.com/dyu/ffi-overhead/tree/master/dart) it appears to be five years old. Dart has a new FFI as of Dart 2.5 (2019): https://medium.com/dartlang/announcing-dart-2-5-super-charge... I'm curious how the new FFI would fare in these benchmarks.
    4 projects | news.ycombinator.com | 14 May 2022
    There is no Python benchmark, but you can find a PR claiming it takes 123,198ms. That would be the worst one by a wide margin.

    https://github.com/dyu/ffi-overhead/pull/18

  • Would docker be faster if it were written in rust?
    3 projects | /r/rust | 18 Feb 2022
    In that case, the libcontainer library would be faster if written in most other languages seeing as Go has unfortunate C-calling performance. In this FFI benchmark Rust is on par with C with 1193ms (total benchmarking time), while Go took 37975ms doing the same.
  • Using Windows API in Julia?
    3 projects | /r/Julia | 1 Feb 2022
    Hi there folks! I'm going to call the Windows API as rapidly as possible and will be doing some calculations with the results, and I thought Julia might be perfect for this task as its FFI is impressively fast, and of course, Julia is fast regarding numbers as well :).
  • What is Haskell's secret sauce for a fast FFI?
    2 projects | /r/haskell | 16 Jan 2022
    If you look at the benchmark, the overhead of calling into C is higher for both Go and OCaml. You can see in the header that only integers are being exchanged, which should be passed in registers with no conversion being applied. https://github.com/dyu/ffi-overhead/blob/master/newplus/plus.h
    2 projects | /r/haskell | 16 Jan 2022
    If you look at the numbers in the FFI overhead benchmarks, Haskell's overhead is almost the same as other systems languages, and much lower than other GCed languages.

What are some alternatives?

When comparing glmark2 and ffi-overhead you can also consider the following projects:

sqlite

go - The Go programming language

glslViewer - Console-based GLSL Sandbox for 2D/3D shaders

kodi-standalone-service - Use systemd to allow for standalone operation of kodi.

krustlet - Kubernetes Rust Kubelet

kutil - Go Utilities

kms-glsl - CLI that runs OpenGL fragment shaders using the DRM/KMS Linux kernel subsystem

lzbench - lzbench is an in-memory benchmark of open-source LZ77/LZSS/LZMA compressors

CheeseShop - Examples of using PyO3 Rust bindings for Python with little to no silliness.

drminfo - Some info / test tools for linux drm drivers (also fbdev).

skynet - Skynet 1M threads microbenchmark

JuliaWin32API - Mechanically generated header files for using the WIN32 API from Julia