prometeo
An experimental Python-to-C transpiler and domain specific language for embedded high-performance computing
This is awesome! The direction of using a subset of Python, while leveraging the existing user base and static typing, to accomplish everyday tasks in a different language is very legit IMO.
I took a cursory look at:
https://github.com/zanellia/prometeo/blob/master/prometeo/cg...
It seems quite similar in spirit to
https://github.com/adsharma/py2many/blob/main/pyrs/transpile...
I haven't spent much time on py2many in the last few months (I started a new job). Let me know if any of it sounds useful, especially the ability to transpile to 7-8 languages, including Julia, C++ and Rust.
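To make the shared idea behind these tools concrete, here is a toy sketch of a Python-to-C transpiler: walk the Python AST and emit C for a tiny, statically-typed subset (int-only expressions). The subset and the emitted C are invented for illustration; neither prometeo's nor py2many's actual code generator looks like this.

```python
import ast

C_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def emit_expr(node: ast.expr) -> str:
    """Emit a C expression for an int constant, a name, or a binary op."""
    if isinstance(node, ast.Constant) and isinstance(node.value, int):
        return str(node.value)
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.BinOp) and type(node.op) in C_OPS:
        return f"({emit_expr(node.left)} {C_OPS[type(node.op)]} {emit_expr(node.right)})"
    raise NotImplementedError(ast.dump(node))

def transpile(src: str) -> str:
    """Transpile `def f(a: int, ...) -> int: return <expr>` to a C function."""
    fn = ast.parse(src).body[0]
    assert isinstance(fn, ast.FunctionDef)
    args = ", ".join(f"int {a.arg}" for a in fn.args.args)
    ret = fn.body[0]
    assert isinstance(ret, ast.Return) and ret.value is not None
    return f"int {fn.name}({args}) {{ return {emit_expr(ret.value)}; }}"

print(transpile("def axpy(a: int, x: int, y: int) -> int:\n    return a * x + y"))
# -> int axpy(int a, int x, int y) { return ((a * x) + y); }
```

The type annotations are what make this tractable: with `int` declared on every argument, the emitter never has to infer a C type at runtime.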
-
> Soo... it takes Python syntax and produces a C program, with no links back to Python - is that right? It uses a strict subset of Python, so that Prometeo programs are valid Python, but not necessarily the opposite. Is that fair?
yep
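The "strict subset" relationship means acceptance can be checked by parsing with Python's own `ast` module and then rejecting any node outside an allowed set. A hedged sketch of that check follows; the allowed-node list here is invented for illustration, and prometeo's real subset is considerably larger.

```python
import ast

# Toy whitelist: typed functions returning arithmetic expressions.
ALLOWED = (ast.Module, ast.FunctionDef, ast.arguments, ast.arg,
           ast.Return, ast.BinOp, ast.Add, ast.Mult, ast.Name,
           ast.Load, ast.Constant)

def in_subset(src: str) -> bool:
    """True iff src is valid Python AND uses only whitelisted AST nodes."""
    try:
        tree = ast.parse(src)  # must be valid Python first
    except SyntaxError:
        return False
    return all(isinstance(n, ALLOWED) for n in ast.walk(tree))

print(in_subset("def f(x: int) -> int:\n    return x * 2"))  # True
print(in_subset("def f(x):\n    yield x"))                   # False: generators excluded
```

Because the check starts from `ast.parse`, anything it accepts is valid Python by construction, which is exactly the one-way containment described above.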
> Do you envisage this being a conduit for tight loop optimisation in Python? Or is it rather "you'd like a C program but can't write C good"?
There are already plenty of options for calling high-performance libraries from Python. However: 1) interpreting Python programs that use, e.g., NumPy can be slow; 2) compiling these programs with, e.g., Cython or Nuitka can speed up the code between calls to high-performance libraries, but the resulting code will still rely on the Python runtime library, which can be slow/unreliable in an embedded context.
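Point (1) can be seen even without NumPy: when each operation does little work, interpreter dispatch dominates no matter how fast the underlying C routine is. A rough stdlib-only illustration, comparing many interpreted steps against one batched call into C (absolute timings are machine-dependent; the ratio is the point):

```python
import timeit

data = list(range(1000))

# 1000 interpreted iterations per run: dispatch overhead on every step.
per_element = timeit.timeit(
    "t = 0\nfor v in data: t += v",
    globals=globals(), number=2000)

# One call into CPython's C-implemented sum() per run.
batched = timeit.timeit(
    "sum(data)",
    globals=globals(), number=2000)

print(f"interpreted loop vs. batched C call: {per_element / batched:.1f}x slower")
```

The same effect is why chains of small NumPy operations on small matrices can be slow: the per-call Python overhead swamps the optimized kernel underneath.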
Coming to the second part of the question, writing C code directly is definitely an option, but, after doing a bit of that, I realized how tedious/error-prone it is to develop/maintain/extend relatively complex code bases for embedded scientific computing (e.g. this one: https://github.com/acados/acados). Or, to put it as Bjarne Stroustrup once said, "fiddling with machine addresses and memory is rather unpleasant and not..."
-
Intrinsics (or assembly directly) are used in BLASFEO (https://github.com/giaf/blasfeo), the linear algebra package used by prometeo. It would be cool to generate assembly directly for a few things, but that would require quite a bit of work!
-
If somebody wanted to use this prometeo tool to implement an MPC controller, would it be possible to interface it directly with HPIPM (https://github.com/giaf/hpipm) or a similar standalone, well-optimized QP solver?
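For readers unfamiliar with what such a solver does at its core: an equality-constrained QP, min 0.5 x'Qx + q'x subject to Ax = b, reduces to a single linear solve of the KKT system. The sketch below shows only that textbook core in pure Python; real solvers like HPIPM additionally handle inequality constraints, sparsity, and MPC structure, and none of their actual API is represented here.

```python
def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting on a small dense system."""
    n = len(M)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c] for c in range(r + 1, n))) / aug[r][r]
    return x

def eq_qp(Q, q, A, b):
    """Solve min 0.5 x'Qx + q'x  s.t. Ax = b  via the KKT system
    [[Q, A'], [A, 0]] [x; lam] = [-q; b]; returns the primal x."""
    n, m = len(Q), len(A)
    K = [Q[i] + [A[j][i] for j in range(m)] for i in range(n)]
    K += [A[j] + [0.0] * m for j in range(m)]
    sol = solve_linear(K, [-qi for qi in q] + b)
    return sol[:n]

# min 0.5*(x1^2 + x2^2)  s.t.  x1 + x2 = 2   ->   x = (1, 1)
print(eq_qp([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], [[1.0, 1.0]], [2.0]))
```

In an MPC setting the QP above would be re-solved at every sampling instant with updated state data, which is exactly why an embedded-friendly, runtime-free solver interface matters.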
-
I think most HPC people would disagree with this statement. State-of-the-art HPC code is still written in ASM (see e.g., https://github.com/xianyi/OpenBLAS) [that's what Intel is doing too]
-
Regarding all the questions about Julia:
There's ongoing work to reduce runtime dependencies of Julia (for example in 1.8, you can strip out the compiler and metadata), but then it's only approaching Go/Swift and other static languages with runtimes.
Generating standalone, runtime-free LLVM is another path, and it is actually already pretty mature, since that is what is done for the GPU stack.
Someone just has to retarget that to CPU LLVM, and there's a start here: https://github.com/tshort/StaticCompiler.jl/issues/43
-
Well IMO it can definitely be rewritten in Julia, and more easily than in Python, since Julia allows hooking into the compiler pipeline at many points in the stack. It's lispy and built from the ground up for codegen, with libraries like Metatheory.jl (https://github.com/JuliaSymbolics/Metatheory.jl) that provide high-level pattern matching with e-graphs. The question is whether it's worth your time to learn Julia to do so.
You could also do it at the LLVM level: https://github.com/JuliaComputingOSS/llvm-cbe
For interesting takes on that, see https://github.com/JuliaLinearAlgebra/Octavian.jl, which relies on LoopVectorization.jl to do transforms on the Julia AST beyond what LLVM does. Because of that, Octavian.jl beats OpenBLAS on many linear algebra benchmarks.
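The e-graph machinery in Metatheory.jl generalizes plain term rewriting. A minimal Python analogue of the simpler idea (no equality saturation) can be sketched with terms as nested tuples and rules as match/build pairs applied bottom-up to a fixpoint; the rule set below is a toy example, not anything from Metatheory.jl itself.

```python
def rewrite(term, rules):
    """Rewrite children first, then try each rule at the root; repeat
    on the rule's output until no rule fires."""
    if not isinstance(term, tuple):
        return term
    term = (term[0],) + tuple(rewrite(a, rules) for a in term[1:])
    for match, build in rules:
        m = match(term)
        if m is not None:
            return rewrite(build(m), rules)
    return term

RULES = [
    # x * 1 -> x
    (lambda t: t[1] if t[0] == "*" and t[2] == 1 else None, lambda x: x),
    # x + 0 -> x
    (lambda t: t[1] if t[0] == "+" and t[2] == 0 else None, lambda x: x),
]

expr = ("+", ("*", "a", 1), 0)   # represents (a * 1) + 0
print(rewrite(expr, RULES))      # -> 'a'
```

What e-graphs add on top of this is keeping all equivalent forms simultaneously instead of committing to one rewrite order, then extracting the best form at the end, which is what makes them attractive for compiler-style optimization.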
-
No, I mean nanosecond and picosecond precision real-time systems. Exhibit A: https://github.com/m-labs/artiq
Related posts
-
Common Misconceptions about Compilers
-
SIMD in Pure Python
-
Codon: Python Compiler
-
The father of Swift made another baby: Mojo: looks to be based on Python using MLIR
-
Hey guys, have any of you tried creating your own language using Python? I'm interested in giving it a shot and was wondering if anyone has any tips or resources to recommend. Thanks in advance!