-
I can't speak for the parent commenter, but there is often code 'around' the machine learning code that benefits from high-performance implementations. To give two examples:
1. We recently implemented an edit tree lemmatizer for spaCy. The machine learning model predicts labels that map to edit trees. However, in order to lemmatize tokens, the trees need to be applied. I implemented all the tree wrangling in Cython to speed up processing and save memory, since trees can be encoded as compact C unions (a simplified sketch of edit-tree application follows after this comment):
https://github.com/explosion/spaCy/blob/master/spacy/pipelin...
2. I am working on a biaffine parser for spaCy. Most implementations of biaffine parsing use a Python implementation of MST decoding, which is unfortunately quite slow. Some people have reported that it dominates parsing time, even compared to a fairly expensive transformer + biaffine layer. I have implemented MST decoding in Cython and it barely shows up in profiles:
https://github.com/explosion/spacy-experimental/blob/master/...
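To make the edit-tree idea above concrete, here is a minimal plain-Python sketch of applying an edit tree to a token's form. The node layout, class names, and the example tree are illustrative assumptions, not spaCy's actual implementation; the real code is the Cython linked above, with trees packed into compact C unions.

    # Minimal, illustrative sketch of applying an edit tree to a token's form.
    # Not spaCy's implementation; node layout and names are assumptions.
    from dataclasses import dataclass
    from typing import Optional, Union

    @dataclass
    class SubstNode:
        # Leaf: replace one substring with another.
        orig: str
        subst: str

    @dataclass
    class MatchNode:
        # Interior node: keep the matched middle span, edit prefix and suffix.
        prefix_len: int
        suffix_len: int
        left: Optional["EditTree"]   # applied to form[:prefix_len]
        right: Optional["EditTree"]  # applied to form[len(form) - suffix_len:]

    EditTree = Union[MatchNode, SubstNode]

    def apply_tree(tree: Optional[EditTree], form: str) -> str:
        if tree is None:
            return form
        if isinstance(tree, SubstNode):
            if form != tree.orig:
                raise ValueError("edit tree does not apply to this form")
            return tree.subst
        prefix = form[:tree.prefix_len]
        middle = form[tree.prefix_len:len(form) - tree.suffix_len]
        suffix = form[len(form) - tree.suffix_len:] if tree.suffix_len else ""
        return apply_tree(tree.left, prefix) + middle + apply_tree(tree.right, suffix)

    # A tree that keeps the stem and rewrites the "ing" suffix to "e":
    tree = MatchNode(prefix_len=0, suffix_len=3, left=None,
                     right=SubstNode(orig="ing", subst="e"))
    assert apply_tree(tree, "giving") == "give"

Roughly speaking, the predicted label selects one of these trees, and applying the tree to the token's form yields the lemma.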
-
I would recommend using nanobind, the follow-up to pybind11 by the same author (Wenzel Jakob), and moving as much performance-critical code as possible to C or C++. https://github.com/wjakob/nanobind
If you really care about the performance of code called from Python, consider something like NVIDIA Warp (currently in preview). Warp JIT-compiles and runs your code on CUDA or the CPU. Although Warp targets physics simulation, geometry processing, and procedural animation, it can be used for other tasks as well. https://github.com/NVIDIA/warp
JAX, by Google, is another option: it JITs and vectorizes code for TPU, GPU, or CPU (a minimal example follows below). https://github.com/google/jax
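As a small, hedged illustration of the JAX suggestion above (the toy function, shapes, and names are invented for this example, not taken from any of the projects mentioned):

    # Minimal JAX example: JIT-compile and vectorize a toy scoring function.
    import jax

    def pairwise_scores(x, w):
        # Toy biaffine-style scoring between all pairs of rows of x.
        return x @ w @ x.T

    # jit traces the function once and compiles it with XLA for CPU/GPU/TPU.
    fast_scores = jax.jit(pairwise_scores)

    # vmap maps over a leading batch dimension without a Python loop.
    batched_scores = jax.jit(jax.vmap(pairwise_scores, in_axes=(0, None)))

    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (8, 16))      # 8 tokens, 16-dim features
    w = jax.random.normal(key, (16, 16))
    xb = jax.random.normal(key, (4, 8, 16))  # batch of 4 "sentences"

    print(fast_scores(x, w).shape)      # (8, 8)
    print(batched_scores(xb, w).shape)  # (4, 8, 8)

The trade-off versus nanobind or Cython is that you stay entirely in Python, at the cost of expressing the hot path as array operations that XLA can compile.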
-
Nuitka
Nuitka is a Python compiler written in Python. It's fully compatible with Python 2.6, 2.7, 3.4-3.13. You feed it your Python app, it does a lot of clever things, and spits out an executable or extension module.
-
If the object you want to bind fits the mold of "an algorithm with inputs and outputs, and some helper methods", I've got automatic binding of a limited set of C++ features working in https://github.com/celtera/avendish ; so far I've been using pybind11, but I guess everything I need is supported by nanobind, so maybe I'll do the port...
-
epython
EPython is a typed subset of Python for extending the language with new builtin types and methods
This is related to the idea of EPython that we are working on (as we have funding): https://github.com/epython-dev/epython
It currently emits Cython for the C backend (and Pyodide). It is still very alpha, but if people are interested in helping, get in touch.