django-readers vs ffi-overhead
| | django-readers | ffi-overhead |
|---|---|---|
| Mentions | 3 | 19 |
| Stars | 180 | 639 |
| Growth | 0.6% | - |
| Activity | 6.7 | 0.0 |
| Last commit | 13 days ago | 10 months ago |
| Language | Python | C |
| License | BSD 2-clause "Simplified" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
django-readers
-
Django Styleguide
* For read endpoints and associated business logic, I'd encourage https://www.django-readers.org/ (disclaimer: I'm the author).
-
Understanding N and 1 queries problem
We solved the N+1 queries problem where I work by raising the level of abstraction from "queries plus serialisation" to "what shape data is required". We open sourced the solution at https://www.django-readers.org/.
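A minimal sketch of what that looks like in practice, based on the examples in the django-readers docs (the Book model with an author relation is an assumption here, not part of the original post):

```python
# A "spec" declares the shape of the data you need. django-readers
# derives both the efficient queryset (the right prefetches and
# field restrictions) and the projection to plain dicts from it.
from django_readers import specs

spec = [
    "id",
    "title",
    {"author": ["name"]},  # related data: one prefetch, not N extra queries
]

prepare, project = specs.process(spec)

queryset = prepare(Book.objects.all())  # Book is an assumed example model
data = [project(book) for book in queryset]
```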
-
This Week in Python
django-readers – function-oriented toolkit for better organisation of business logic and efficient selection and projection of data in Django projects
ffi-overhead
-
3 years of fulltime Rust game development, and why we're leaving Rust behind
The overhead for Go in these benchmarks is insane compared to other languages - https://github.com/dyu/ffi-overhead Are there reasons why Go doesn't copy what Julia does?
-
Can Fortran survive another 15 years?
What about the other benchmarks on the same site? https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Bio/BCR/ BCR takes about a hundred seconds and is pretty indicative of systems biology models, coming from 1122 ODEs with 24388 terms that describe a stiff chemical reaction network modeling the BCR signaling network from Barua et al. Or the discrete diffusion models https://docs.sciml.ai/SciMLBenchmarksOutput/stable/Jumps/Dif... which are the justification behind the claims in https://www.biorxiv.org/content/10.1101/2022.07.30.502135v1 that the O(1) scaling methods scale better than O(log n) scaling for large enough models?
> If you use special routines (BLAS/LAPACK, ...), use them everywhere as the respective community does.
It tests with and without BLAS/LAPACK (which isn't always helpful, as you'd of course see from the benchmarks if you read them). One of the key differences, though, is that there are some pure Julia tools like https://github.com/JuliaLinearAlgebra/RecursiveFactorization... which outperform the respective OpenBLAS/MKL equivalents in many scenarios, and that's one noted factor in the performance boost (and it's not trivial to wrap into the interface of the other solvers, so it isn't done). There are other benchmarks showing that it's not apples to apples and is instead conservative in many cases: for example, https://github.com/SciML/SciPyDiffEq.jl#measuring-overhead shows that handling SciPyDiffEq with the Julia JIT optimizations gives lower overhead than direct SciPy+Numba, so we use the lower overhead numbers in https://docs.sciml.ai/SciMLBenchmarksOutput/stable/MultiLang....
> you must compile/write whole programs in each of the respective languages to enable full compiler/interpreter optimizations
You do realize that a .so can have lower call overhead from a JIT-compiled language than from a statically compiled language like C, because some of the binding logic can be optimized away at runtime, right? https://github.com/dyu/ffi-overhead is a measurement of that, and you see LuaJIT and Julia coming out faster than C and Fortran there. This shouldn't be surprising, because it's pretty clear how that works.
I mean yes, someone can always ask for more benchmarks, but now we have a site that's auto-updating tons and tons of ODE benchmarks, with ODE systems ranging from size 2 to the thousands, wrapping as many things as we can in as many scenarios as we can. And we don't even "win" all of our benchmarks, because unlike for you, these benchmarks aren't for winning but for tracking development (somehow Hacker News folks ignore the utility part and go straight to language wars...).
If you have a concrete change you think can improve the benchmarks, then please share it at https://github.com/SciML/SciMLBenchmarks.jl. We'll be happy to make and maintain another.
-
When dealing with C, when is Go slow?
If you're calling back and forth between C and Go in a performance critical way. It's one of the slowest languages for wrapping C that there is. I've personally hit this bottleneck in numerous projects, wrapping things like libutp and sqlite. See also https://github.com/dyu/ffi-overhead
-
Understanding N and 1 queries problem
Piling on about overhead (and SQLite), many high-level languages take some hit for using an FFI. So you're still incentivized to avoid tons of SQLite calls.
https://github.com/dyu/ffi-overhead
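For a rough feel of the per-call cost that benchmark measures, here's a hedged sketch using Python's ctypes (the real benchmark calls a tiny C function hundreds of millions of times per language; libc's labs stands in for it here, so this is illustrative only, Linux/macOS):

```python
# Illustrative only: times many small FFI calls through ctypes against
# the equivalent built-in, showing how the FFI boundary dominates when
# each call does almost no work (the situation with chatty SQLite use).
import ctypes
import ctypes.util
import time

libc = ctypes.CDLL(ctypes.util.find_library("c"))  # Linux/macOS
libc.labs.argtypes = [ctypes.c_long]
libc.labs.restype = ctypes.c_long

N = 1_000_000

start = time.perf_counter()
for i in range(N):
    libc.labs(-i)  # pays argument marshalling + dispatch on every call
ffi_seconds = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    abs(-i)  # built-in comparison: no FFI boundary
native_seconds = time.perf_counter() - start

print(f"ctypes: {ffi_seconds:.2f}s  built-in: {native_seconds:.2f}s")
```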
-
Are there plans to improve concurrency in Rust?
Go doesn't even have native thread stacks. When calling any FFI function, Go has to switch to an on-demand stack and coordinate between the goroutine and the runtime to avoid preemption and starvation. This is part of why Go's calling overhead is over 30x that of C/C++/Rust (source). It's understandably become Go community culture to act like FFI is just not even an option and to reinvent everything in Go, but that reinvented Go code suffers from these other problems plus many more (such as optimizing far worse than GCC or LLVM).
-
Comparing the C FFI overhead on various languages
Some of the results look outdated. The Dart results look bad (25x slower than C), but looking at the code (https://github.com/dyu/ffi-overhead/tree/master/dart) it appears to be five years old. Dart has a new FFI as of Dart 2.5 (2019): https://medium.com/dartlang/announcing-dart-2-5-super-charge... I'm curious how the new FFI would fare in these benchmarks.
-
Would docker be faster if it were written in rust?
In that case, the libcontainer library would be faster if written in most other languages, seeing as Go has unfortunate C-calling performance. In this FFI benchmark, Rust is on par with C at 1193ms (total benchmarking time), while Go took 37975ms doing the same.
-
Using Windows API in Julia?
Hi there folks! I'm going to call the Windows API as rapidly as possible and will be doing some calculations with the results, and I thought Julia might be perfect for this task as its FFI is impressively fast, and of course, Julia is fast regarding numbers as well :).
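For comparison, the same kind of tight Windows API loop through Python's ctypes looks like this (a hedged, Windows-only sketch; per the ffi-overhead numbers above, Julia's ccall should spend far less of this time on call overhead):

```python
# Windows-only sketch: hammer a cheap Win32 call and time it. Each
# iteration pays one full ctypes FFI round-trip, which is exactly the
# per-call overhead a fast FFI like Julia's minimizes.
import ctypes
import time

kernel32 = ctypes.windll.kernel32
kernel32.GetTickCount64.restype = ctypes.c_uint64  # milliseconds since boot

N = 1_000_000
start = time.perf_counter()
for _ in range(N):
    kernel32.GetTickCount64()
print(f"{N} Win32 calls via ctypes: {time.perf_counter() - start:.2f}s")
```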
What are some alternatives?
pyscript - Try PyScript: https://pyscript.com Examples: https://tinyurl.com/pyscript-examples Community: https://discord.gg/HxvBtukrg2
go - The Go programming language
django_channels_bingo_game - Real Time Multiplayer Bingo Game Using Django Channels and Javascript
sqlite
rapidpro - TextIt is a hosted service allowing organizations to visually build scalable interactive messaging applications.
krustlet - Kubernetes Rust Kubelet
Django-Styleguide - Django styleguide used in HackSoft projects
glmark2 - glmark2 is an OpenGL 2.0 and ES 2.0 benchmark
spinach - Modern Redis task queue for Python 3
kutil - Go Utilities
datasette-lite - Datasette running in your browser using WebAssembly and Pyodide
lzbench - lzbench is an in-memory benchmark of open-source LZ77/LZSS/LZMA compressors