Most non-deep ML techniques aren't built on a crapload of matmul/add operations, which is what GPUs are good at and why we use them for deep learning. So relatively few components of sklearn would benefit from GPU acceleration, and I'd be deeply surprised if those parts weren't already implemented for accelerators in other libraries (or transformable via Hummingbird). Contributing to those projects would be more valuable than yet another reimplementation, lest you fall into the 15-standards problem.
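To make the "transformable via Hummingbird" point concrete: Hummingbird's GEMM strategy turns a trained decision tree into a handful of matrix multiplications. Below is a rough numpy sketch of that idea for one tiny hard-coded tree — the matrices `A`/`B`/`C`/`D`/`E` follow the construction described in the Hummingbird paper, but this is an illustration, not Hummingbird's actual API (which is just `hummingbird.ml.convert(model, "pytorch")`).

```python
import numpy as np

# Toy decision tree (regression values at the leaves):
#   node0: x0 < 0.5 ? go to node1 : go to node2
#   node1: x1 < 0.3 ? leaf0 : leaf1
#   node2: x1 < 0.7 ? leaf2 : leaf3
A = np.array([[1.0, 0.0, 0.0],      # one-hot: which feature each internal
              [0.0, 1.0, 1.0]])     # node tests (n_features x n_nodes)
B = np.array([0.5, 0.3, 0.7])       # per-node thresholds
C = np.array([[ 1,  1, -1, -1],     # +1: leaf is in node's left subtree,
              [ 1, -1,  0,  0],     # -1: right subtree, 0: node not on
              [ 0,  0,  1, -1]])    # the path to that leaf
D = np.array([2, 1, 1, 0])          # number of left turns on each leaf's path
E = np.array([10.0, 20.0, 30.0, 40.0])  # output value stored at each leaf

def tree_predict_gemm(X):
    """Evaluate every sample against every node with dense linear algebra."""
    T = (X @ A < B).astype(np.int64)  # which node conditions are true (n x nodes)
    R = (T @ C == D)                  # exactly one True per row: the leaf reached
    return R.astype(np.float64) @ E   # gather each row's leaf value
```

A single if/else tree walk is branchy and sequential; rewritten this way the whole batch is three dense products, which is exactly the shape of workload a GPU can chew through.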
From direct discussions with the sklearn team, note that this may change relatively soon: a GPU engineer funded by Intel was recently added to the core development team. When I last met with the team in person (six months ago), the plan was to factor the most GPU-friendly computations out of the sklearn code base, such as K-nearest-neighbor search and kernel-related computations, and to document an internal API that lets external developers easily build accelerated backends. As our KeOps library shows, GPUs are extremely well suited to classical ML, and sklearn is the perfect platform to let users take full advantage of their hardware. Let's hope OP's question becomes moot at some point in 2023-24 :-)
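Why is K-nearest-neighbor search singled out as GPU-friendly? Because brute-force KNN reduces to one big matrix product: expanding squared Euclidean distances as ||q||² + ||p||² − 2 q·p makes the dominant cost a single (n_queries × n_points) GEMM, which is the workload KeOps-style backends accelerate. A minimal numpy sketch (function name and layout are my own, not sklearn's or KeOps's API):

```python
import numpy as np

def knn_indices(queries, points, k):
    """Brute-force k-nearest-neighbor search driven by one matmul.

    queries: (n_queries, dim), points: (n_points, dim).
    Returns the indices of the k closest points per query, nearest first.
    """
    sq_q = (queries ** 2).sum(axis=1, keepdims=True)   # (n_queries, 1)
    sq_p = (points ** 2).sum(axis=1)                   # (n_points,)
    # Full pairwise squared-distance matrix; the matmul dominates the cost.
    d2 = sq_q + sq_p - 2.0 * queries @ points.T        # (n_queries, n_points)
    # argpartition finds the k smallest per row without a full sort...
    idx = np.argpartition(d2, kth=k - 1, axis=1)[:, :k]
    # ...then only those k candidates are sorted by actual distance.
    order = np.take_along_axis(d2, idx, axis=1).argsort(axis=1)
    return np.take_along_axis(idx, order, axis=1)
```

On a GPU the same expansion runs as one fused kernel; KeOps additionally avoids ever materializing the full `d2` matrix, which is what makes it scale to millions of points.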
Related posts
- Treebomination: Convert a scikit-learn decision tree into a Keras model
- Export and run models with ONNX
- I learned about Microsoft's Hummingbird library today. 1000x performance??
- [D] Microsoft library, Hummingbird, compiles trained ML models into tensor computation for faster inference.
- Implementing a ChatGPT-like LLM from scratch, step by step