| | numexpr | StaticCompiler.jl |
|---|---|---|
| Mentions | 4 | 16 |
| Stars | 2,143 | 474 |
| Growth | 0.7% | - |
| Activity | 8.2 | 6.9 |
| Latest commit | about 1 month ago | about 1 month ago |
| Language | Python | Julia |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
numexpr
- Making Python 100x faster with less than 100 lines of Rust
You can just slap numexpr on top of it to compile this line on the fly.
https://github.com/pydata/numexpr
- Extending Python with Rust
- [D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?
Are you doing any costly chained NumPy operations in your preprocessing? For example, max(abs(large_ary)) produces multiple intermediate copies of your data; https://github.com/pydata/numexpr can greatly reduce the time spent on such operations.
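As a minimal sketch of the kind of fusion being suggested (the arrays and the expression here are illustrative, not from the thread):

```python
import numpy as np
import numexpr as ne

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Plain NumPy materializes a full temporary array for each
# intermediate step of the expression.
plain = 2 * a + 3 * b

# numexpr compiles the whole expression string and evaluates it
# in cache-sized chunks, avoiding the intermediate temporaries.
fused = ne.evaluate("2 * a + 3 * b")

assert np.allclose(plain, fused)
```

For long chained expressions over large arrays, the savings come from fewer allocations and better cache locality, not from a faster algorithm.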
- Selection in pandas using query
What is not entirely obvious here is that under the hood you can install a nice library called numexpr (docs, src) that exists to make calculations with large NumPy (and pandas) objects potentially much faster. When you use query or eval, this expression is passed into numexpr and optimized using its bag of tricks. The expected performance change ranges from about 0.95x (slightly slower) up to 20x, with typical speedups around 3-4x for common use cases. You can read the details in the docs, but essentially numexpr takes vectorized operations and evaluates them in chunks sized to suit the CPU cache and branch prediction. If your arrays are really large, your cache will not be hit as often; if you break your large arrays into very small pieces, your CPU won't be as efficient. numexpr picks chunk sizes in between.
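A small sketch of the two selection styles being compared (the DataFrame and the condition are illustrative; pandas uses the numexpr engine for query when the library is installed, and falls back to pure Python otherwise):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.arange(10), "b": np.arange(10) * 2})

# Boolean-mask selection: each comparison builds a full temporary
# mask array before the final indexing step.
masked = df[(df["a"] > 3) & (df["b"] < 16)]

# query() hands the whole expression string to the engine, which
# can evaluate it in one chunked pass over the data.
queried = df.query("a > 3 and b < 16")

assert masked.equals(queried)
```

On a 10-row frame the two are indistinguishable; the chunked evaluation only starts to pay off once the columns are large enough that the temporary masks no longer fit in cache.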
StaticCompiler.jl
- Potential of the Julia programming language for high energy physics computing
Yes, Julia can be called from other languages rather easily. Julia functions can be exposed and called with a C-like ABI [1], and there are also packages for languages like Python [2] or R [3] to call Julia code.
With PackageCompiler.jl [4] you can even make AOT-compiled standalone binaries, though these are rather large. They've shrunk a fair amount in recent releases, but there's still a lot of low-hanging fruit for making the compiled binaries smaller, plus some manual work you can do, like removing LLVM and filtering stdlibs when they're not needed.
Work is also happening on a more stable and mature system that acts like StaticCompiler.jl [5], except provided by the base language and by people more experienced with the compiler (i.e., not a janky prototype).
[1] https://docs.julialang.org/en/v1/manual/embedding/
[2] https://pypi.org/project/juliacall/
[3] https://www.rdocumentation.org/packages/JuliaCall/
[4] https://github.com/JuliaLang/PackageCompiler.jl
[5] https://github.com/tshort/StaticCompiler.jl
- Julia App Deployment
PackageCompiler, but it ships a fat runtime and doesn't cross-compile. A thin runtime is currently not possible without sacrificing features, as with https://github.com/tshort/StaticCompiler.jl.
- JuLox: What I Learned Building a Lox Interpreter in Julia
https://github.com/tshort/StaticCompiler.jl/issues/59 Would working on this be feasible?
- Making Python 100x faster with less than 100 lines of Rust
- What's Julia's biggest weakness?
- Size of a "hello world" application
I just read the project's documentation at https://github.com/tshort/StaticCompiler.jl. It does produce a "hello world" application that is only 8.4k in size 👍. I do like that it can work on Mac OS. Hopefully Windows support will come soon.
- Why Julia 2.0 isn’t coming anytime soon (and why that is a good thing)
See https://github.com/tshort/StaticCompiler.jl
- My Experiences with Julia
- Julia for health physics/radiation detection
You're probably dancing around the edges of what [PackageCompiler.jl](https://github.com/JuliaLang/PackageCompiler.jl) is capable of targeting. There are a few new capabilities coming online, namely [separating codegen from runtime](https://github.com/JuliaLang/julia/pull/41936) and [compiling small static binaries](https://github.com/tshort/StaticCompiler.jl), but you're likely to hit some snags on the bleeding edge.
- We Use Julia, 10 Years Later
using StaticCompiler # `] add https://github.com/tshort/StaticCompiler.jl` to get latest master
What are some alternatives?
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
julia - The Julia Programming Language
pygfx - A python render engine running on wgpu.
PackageCompiler.jl - Compile your Julia Package
greptimedb - An open-source, cloud-native, distributed time-series database with PromQL/SQL/Python supported. Available on GreptimeCloud.
acados - Fast and embedded solvers for nonlinear optimal control
jnumpy - Writing Python C extensions in Julia within 5 minutes.
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.
jsmpeg - MPEG1 Video Decoder in JavaScript
oneAPI.jl - Julia support for the oneAPI programming toolkit.
poly-match - Source for the "Making Python 100x faster with less than 100 lines of Rust" blog post
LoopVectorization.jl - Macro(s) for vectorizing loops.