ffi-overhead VS CheeseShop

Compare ffi-overhead vs CheeseShop and see how they differ.

ffi-overhead

Comparing the C FFI (foreign function interface) overhead on various programming languages (by dyu)

CheeseShop

Examples of using PyO3 Rust bindings for Python with little to no silliness. (by aeshirey)
                 ffi-overhead        CheeseShop
Mentions         19                  2
Stars            639                 1
Growth           -                   -
Activity         0.0                 3.8
Latest commit    10 months ago       7 months ago
Language         C                   Rust
License          Apache License 2.0  MIT License
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ffi-overhead

Posts with mentions or reviews of ffi-overhead. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-26.
  • 3 years of fulltime Rust game development, and why we're leaving Rust behind
    21 projects | news.ycombinator.com | 26 Apr 2024
    The overhead for Go in benchmarks is insane in contrast to other languages - https://github.com/dyu/ffi-overhead Are there reasons why Go does not copy what Julia does?
  • Can Fortran survive another 15 years?
    7 projects | news.ycombinator.com | 1 May 2023
    What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biological models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models? I mean.

    > If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.

    It tests with and without BLAS/LAPACK (which isn't always helpful, which of course you'd see from the benchmarks if you read them). One of the key differences of course though is that there are some pure Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalent in many scenarios, and that's one noted factor for the performance boost (and is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases, for example https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead showing the SciPyDiffEq handling with the Julia JIT optimizations gives a lower overhead than direct SciPy+Numba, so we use the lower overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....

    > you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations

    You do realize that a .so has lower overhead to call from a JIT-compiled language than from a statically compiled language like C, because you can optimize away some of the bindings at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia as faster than C and Fortran here. This shouldn't be surprising because it's pretty clear how that works?

    I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto updating tons and tons of ODE benchmarks with ODE systems ranging from size 2 to the thousands, with as many things as we can wrap in as many scenarios as we can wrap. And we don't even "win" all of our benchmarks because unlike for you, these benchmarks aren't for winning but for tracking development (somehow for Hacker News folks they ignore the utility part and go straight to language wars...).

    If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.

  • When dealing with C, when is Go slow?
    1 project | /r/golang | 16 Apr 2023
    If you're calling back and forth between C and Go in a performance critical way. It's one of the slowest languages for wrapping C that there is. I've personally hit this bottleneck in numerous projects, wrapping things like libutp and sqlite. See also https://github.com/dyu/ffi-overhead
  • Understanding N and 1 queries problem
    3 projects | news.ycombinator.com | 2 Jan 2023
    Piling on about overhead (and SQLite), many high-level languages take some hit for using an FFI. So you're still incentivized to avoid tons of SQLite calls.

    https://github.com/dyu/ffi-overhead

  • Are there plans to improve concurrency in Rust?
    8 projects | /r/rust | 26 Dec 2022
    Go doesn't even have native thread stacks. When calling any FFI function, Go has to switch over to an on-demand stack and coordinate the goroutine and the runtime to avoid preemption and starvation. This is part of why Go's calling overhead is over 30x slower than C/C++/Rust (source). It's understandably become Go community culture to act like FFI is just not even an option and to reinvent everything in Go, but that reinvented Go suffers from these other problems plus many more (such as optimizing far worse than GCC or LLVM).
  • Comparing the C FFI overhead on various languages
    1 project | /r/patient_hackernews | 14 May 2022
    1 project | /r/hackernews | 14 May 2022
    4 projects | news.ycombinator.com | 14 May 2022
    Some of the results look outdated. The Dart results look bad (25x slower than C), but looking at the code (https://github.com/dyu/ffi-overhead/tree/master/dart) it appears to be five years old. Dart has a new FFI as of Dart 2.5 (2019): https://medium.com/dartlang/announcing-dart-2-5-super-charge... I'm curious how the new FFI would fare in these benchmarks.
  • Would docker be faster if it were written in rust?
    3 projects | /r/rust | 18 Feb 2022
    In that case, the libcontainer library would be faster if written in most other languages, seeing as Go has unfortunate C-calling performance. In this FFI benchmark Rust is on par with C at 1193ms (total benchmarking time), while Go took 37975ms doing the same. (A minimal sketch of the measured call pattern follows this list of posts.)
  • Using Windows API in Julia?
    3 projects | /r/Julia | 1 Feb 2022
    Hi there folks! I'm going to call the Windows API as rapidly as possible and will be doing some calculations with the results, and I thought Julia might be perfect for this task as its FFI is impressively fast, and of course, Julia is fast regarding numbers as well :).
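
The call pattern these posts keep citing can be sketched in a few lines. What follows is a hypothetical Rust equivalent of the benchmark's shape, not the dyu/ffi-overhead repo's actual harness: the `plusone` function and the iteration count are illustrative, and the C side (e.g. `int plusone(int x) { return x + 1; }`) must be compiled and linked separately.

    use std::time::Instant;

    // Declared on the C side, e.g.: int plusone(int x) { return x + 1; }
    extern "C" {
        fn plusone(x: i32) -> i32;
    }

    fn main() {
        const N: i32 = 500_000_000; // iteration count is illustrative
        let mut x: i32 = 0;
        let start = Instant::now();
        while x < N {
            // `unsafe` is required for any extern call, but in Rust the
            // call lowers to a plain C-ABI function call: no stack switch,
            // no argument marshalling.
            x = unsafe { plusone(x) };
        }
        println!("{} calls in {:?}", N, start.elapsed());
    }

Because the function body does almost nothing, the timing is dominated by whatever each language's FFI adds per call, which is exactly what the numbers quoted above compare; Go's stack switch and runtime coordination are paid on every one of those iterations.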

CheeseShop

Posts with mentions or reviews of CheeseShop. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-06-11.
  • Apache Spark UDFs in Rust
    2 projects | /r/rust | 11 Jun 2021
    By comparison, PyO3 handles virtually all that boilerplate, so your Rust functions can accept and return many native Rust types and everything just works (for example). Or maybe I'm missing some fundamental difference with how JVM data are handled versus Python. (A minimal PyO3 sketch follows these posts.)
  • PyO3: Rust Bindings for the Python Interpreter
    18 projects | news.ycombinator.com | 29 Jan 2021
    At work, I'm using PyO3 for a project that churns through a lot of data (step 1) and does some pattern mining (step 2). This is the second generation of the project and is on-demand, compared with the large batch project in Spark that it is replacing. The Rust+Python project has really good performance, and using Rust for the core logic is such a joy compared with the Scala or Python that a lot of the other pieces are written in.

    Learning PyO3, I cobbled together a sample project[0] to demonstrate how some functionality works. It's a little outdated (uses PyO3 0.11.0 compared with the current 0.13.1) and doesn't show everything, but I think it's reasonably clear.

    One thing I noticed is that passing very large data from Rust into Python's memory space is a bit of a challenge. I haven't quite grokked who owns what when and how memory gets correctly dropped, but I think the issues I've had are with the amount of RAM used at any moment and not with any memory leaks. (The second sketch after these posts illustrates this.)

    [0] https://github.com/aeshirey/CheeseShop
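
To make the "handles virtually all that boilerplate" point concrete, here is a minimal sketch of a PyO3 extension in the style of the 0.13-era API the post mentions (current PyO3 releases have since changed the module signature); the function and module names are invented for illustration.

    use pyo3::prelude::*;
    use pyo3::wrap_pyfunction;

    /// Callable from Python as demo.count_vowels("cheese") -> 3.
    /// PyO3 converts &str/usize to and from Python str/int automatically.
    #[pyfunction]
    fn count_vowels(s: &str) -> usize {
        s.chars().filter(|c| "aeiou".contains(*c)).count()
    }

    /// The function name must match the compiled library name ("demo").
    #[pymodule]
    fn demo(_py: Python, m: &PyModule) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(count_vowels, m)?)?;
        Ok(())
    }

Built as a cdylib (typically via maturin or setuptools-rust), this imports as a regular Python module with no hand-written C glue.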
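On the large-data point in the second post, the likely subtlety is that returning an ordinary Rust collection converts it into a new Python object, so both copies are briefly alive. A hypothetical example:

    use pyo3::prelude::*;

    /// Returning a Vec is convenient, but PyO3 converts it into a new
    /// Python list on return, so peak memory is roughly the Rust buffer
    /// plus the Python list at once. (Name and shape are illustrative.)
    #[pyfunction]
    fn make_big(n: usize) -> Vec<u64> {
        (0..n as u64).collect()
    }

That matches the post's observation that the symptom looks like momentary RAM usage rather than a leak; a common mitigation is handing Python a buffer it can own directly, which is the niche the rust-numpy project listed below fills.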

What are some alternatives?

When comparing ffi-overhead and CheeseShop you can also consider the following projects:

go - The Go programming language

whatlang-pyo3 - Python Binding for Rust WhatLang, a language detection library

sqlite

dtparse - Fast datetime parser for Python written in Rust

krustlet - Kubernetes Rust Kubelet

rust-numpy - PyO3-based Rust bindings of the NumPy C-API

glmark2 - glmark2 is an OpenGL 2.0 and ES 2.0 benchmark

pythran - Ahead of Time compiler for numeric kernels

kutil - Go Utilities

rayon - Rayon: A data parallelism library for Rust

lzbench - lzbench is an in-memory benchmark of open-source LZ77/LZSS/LZMA compressors

py2many - Transpiler of Python to many other languages