Top 16 C FFI Projects
-
An alternative is MetaCall. The example in the readme shows calling Python from JavaScript, but it also works with other languages such as Ruby, C#, and Java.
https://github.com/metacall/core
List of supported languages here https://github.com/metacall/core/blob/develop/docs/README.md...
In the future, maybe WebIDL (or extensions of it) will bring interoperability between languages too. At the moment there is https://mozilla.github.io/uniffi-rs/ for interoperability between Rust and a number of languages (basically the ones Mozilla needs: Swift, Kotlin, JavaScript).
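To make the cross-language call concrete, here is a minimal C sketch of loading a Python script and calling one of its functions through MetaCall's C API. This is only a sketch: the header path, the "py" loader tag, and the value helpers are assumptions based on metacall/core's documented C interface and may differ between versions, and the `sum.py` script (defining `def sum(a, b): return a + b`) is hypothetical.

```c
/* Hypothetical sketch: call a Python function from C via MetaCall.
 * Assumes sum.py next to the binary with: def sum(a, b): return a + b */
#include <metacall/metacall.h>
#include <stdio.h>

int main(void)
{
	if (metacall_initialize() != 0)
		return 1;

	const char *scripts[] = { "sum.py" };

	/* Load the Python script through the "py" loader */
	if (metacall_load_from_file("py", scripts, 1, NULL) != 0)
		return 1;

	/* Box the arguments, call sum(3, 4), then unwrap the result */
	void *args[] = {
		metacall_value_create_long(3),
		metacall_value_create_long(4)
	};
	void *ret = metacallv("sum", args);

	printf("sum(3, 4) = %ld\n", metacall_value_to_long(ret));

	metacall_value_destroy(ret);
	metacall_value_destroy(args[0]);
	metacall_value_destroy(args[1]);

	return metacall_destroy();
}
```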
-
dart_native
Write iOS, macOS, and Android code using Dart. This package frees you from writing redundant glue code and from the low performance of Flutter channels.
-
ffi-overhead
Comparing the C FFI (foreign function interface) overhead on various programming languages. A minimal sketch of the kind of call such a benchmark measures appears after the discussion below.
What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biological models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models? I mean.
> If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.
It tests with and without BLAS/LAPACK (which isn't always helpful, as you'd see from the benchmarks if you read them). One of the key differences, of course, is that there are some pure Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalent in many scenarios, and that's one noted factor in the performance boost (and it is not trivial to wrap into the interface of the other solvers, so it's not done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases, for example https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead showing that the SciPyDiffEq handling with the Julia JIT optimizations gives a lower overhead than direct SciPy+Numba, so we use the lower-overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....
> you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations
You do realize that a .so has lower call overhead from a JIT-compiled language than from a statically compiled language like C, because some of the binding work can be optimized away at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia come out faster than C and Fortran there. This shouldn't be surprising once you see how that works.
I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto-updating tons and tons of ODE benchmarks, with ODE systems ranging from size 2 to the thousands, wrapping as many solvers as we can in as many scenarios as we can. And we don't even "win" all of our benchmarks, because these benchmarks aren't for winning but for tracking development (somehow Hacker News folks ignore the utility part and go straight to language wars...).
If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.
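For readers unfamiliar with what an FFI overhead benchmark actually exercises, here is a minimal C sketch of the pattern: a trivial function exported from a shared library, and a caller that invokes it in a tight loop so that per-call binding cost dominates. The function name, loop count, and build commands are illustrative and not taken from the dyu/ffi-overhead sources; a JIT-compiled caller (LuaJIT, Julia) can specialize this call path at runtime, which is the effect described in the comment above.

```c
/* plus.c -- a trivial exported function; build the shared library with:
 *   cc -shared -fPIC -O2 plus.c -o libplus.so
 */
int plus(int a, int b)
{
	return a + b;
}
```

```c
/* main.c -- the C baseline: load the library and hammer the call in a loop.
 *   cc -O2 main.c -ldl -o bench
 * Everything outside the loop is setup; the benchmark isolates the cost of
 * each cross-boundary call.
 */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
	void *lib = dlopen("./libplus.so", RTLD_NOW);
	if (!lib)
		return 1;

	int (*plus)(int, int) = (int (*)(int, int))dlsym(lib, "plus");
	if (!plus)
		return 1;

	long sum = 0;
	for (int i = 0; i < 100000000; i++)
		sum += plus(i, 1);   /* the per-call overhead being measured */

	printf("%ld\n", sum);
	dlclose(lib);
	return 0;
}
```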
-
lua-resty-ffi
lua-resty-ffi provides an efficient and generic API for hybrid programming in OpenResty/Envoy with mainstream languages (Go, Python, Java, Rust, Node.js, etc.).
The main goal of the lua-resty-ffi project is to reuse ecosystems from other mainstream programming languages, since the C/Lua ecosystem is comparatively weak. lua-resty-ffi already supports Go, Java, Python, Rust, and Node.js. Please refer to https://github.com/kingluo/lua-resty-ffi for the rationale and design of this project. Thanks.
-
python-c-io_uring-example
Using the io_uring Linux kernel interface from Python by JIT-compiling C code with MetaCall.
The other way around is also possible: importing Zig into Python. I have an example doing this with C: https://github.com/metacall/python-c-io_uring-example
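As context for what that example JIT-compiles, here is a minimal, self-contained C sketch of the basic io_uring pattern via liburing (queue a read, submit, wait for the completion). It is illustrative only and not taken from the linked repository, which drives equivalent C through MetaCall from Python.

```c
/* uring_read.c -- read the start of a file asynchronously via liburing.
 *   cc -O2 uring_read.c -luring -o uring_read
 */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		return 1;

	struct io_uring ring;
	io_uring_queue_init(8, &ring, 0);          /* submission queue of 8 entries */

	char buf[4096];
	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);  /* queue an async read */
	io_uring_submit(&ring);

	struct io_uring_cqe *cqe;
	io_uring_wait_cqe(&ring, &cqe);            /* block until completion */
	if (cqe->res > 0)
		fwrite(buf, 1, (size_t)cqe->res, stdout);
	io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}
```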
-
Project mention: Wrote a new module and want to setup CI to run the test suite for each commit (Github) | /r/perl | 2023-01-14
I started with that too, but the dev was unable or unwilling to provide perls with different build types, so I wrote a custom workflow which builds perl from source (as required) with different flags (quadmath, threads, etc.) and uses GitHub's caching system to prevent rebuilding them without cause. It works on Windows, Ubuntu, and macOS, and a cached perl built by one workflow can be reused by any of the repo's other workflows. I might make it a shared Action if it ever seems useful for anyone besides myself.
-
C FFI related posts
- kirby.nvim: design update
- When dealing with C, when is Go slow?
- C Strings and my slow descent to madness
- Python frontend with Zig backend
- Use Nodejs to extend Openresty/Nginx
- Comparing the C FFI overhead on various languages
Index
What are some of the best open-source FFI projects in C? This list will help you:
# | Project | Stars |
---|---|---|
1 | core | 1,379 |
2 | dart_native | 931 |
3 | ffi-overhead | 615 |
4 | hlua | 493 |
5 | rust-in-flutter | 414 |
6 | rust-lua53 | 152 |
7 | lua-resty-ffi | 57 |
8 | td_rlua | 52 |
9 | python-c-io_uring-example | 24 |
10 | php-iup | 21 |
11 | gtk | 9 |
12 | php_opencv | 2 |
13 | bindings-levmar | 2 |
14 | bindings-sc3 | 1 |
15 | heatshrink | 0 |
16 | Affix.pm | 0 |