scikit-cuda VS kernel_tuner

Compare scikit-cuda vs kernel_tuner and see how they differ.

                scikit-cuda                            kernel_tuner
Mentions        1                                      4
Stars           967                                    243
Growth          -                                      9.9%
Activity        2.5                                    9.1
Latest commit   7 months ago                           3 days ago
Language        Python                                 Python
License         GNU General Public License v3.0+       Apache License 2.0
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

scikit-cuda

Posts with mentions or reviews of scikit-cuda. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-01-22.

kernel_tuner

Posts with mentions or reviews of kernel_tuner. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-12.
  • Ask HN: What apps have you created for your own use?
    212 projects | news.ycombinator.com | 12 Dec 2023
I've created Kernel Tuner (https://github.com/KernelTuner/kernel_tuner) as a small software development tool, because I was writing a lot of CUDA and OpenCL kernels at the time. I didn't want to manually figure out the best thread block dimensions and work division among threads on every GPU, over and over again.

    The tool has evolved quite a bit since the first versions. I'm also using it for testing GPU code and for teaching, and it has become one of the main drivers behind a lot of the research that I do.

  • PhD'ers, what are you working on? What CS topics excite you?
    2 projects | /r/computerscience | 17 Jan 2023
    We have an open science policy, so anyone can use our framework to optimize their own code, if they want! The original paper is linked at the bottom of the GitHub page.
  • How to Optimize a CUDA Matmul Kernel for CuBLAS-Like Performance: A Worklog
    5 projects | news.ycombinator.com | 4 Jan 2023
    This is a great post for people who are new to optimizing GPU code.

    It is interesting to see that the author got this far without interchanging the innermost loop over k to the outermost loop, as is done in CUTLASS (https://github.com/NVIDIA/cutlass).
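    The loop interchange mentioned above can be sketched in plain Python (an editorial illustration, not code from the post or from CUTLASS): moving the reduction loop over k from the innermost to the outermost position turns each k step into a rank-1 update, so a loaded element of A is reused across an entire row of C.

    ```python
    # Illustrative sketch of the k-loop interchange described above.
    # Both orderings compute the same matrix product; only the loop
    # nesting (and hence the data-reuse pattern) differs.

    def matmul_ijk(A, B):
        """Naive order: k is the innermost (reduction) loop."""
        n, m, p = len(A), len(B[0]), len(B)
        C = [[0.0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                for k in range(p):
                    C[i][j] += A[i][k] * B[k][j]
        return C

    def matmul_kij(A, B):
        """Interchanged order: k outermost; each k step is a rank-1 update."""
        n, m, p = len(A), len(B[0]), len(B)
        C = [[0.0] * m for _ in range(n)]
        for k in range(p):
            for i in range(n):
                a = A[i][k]  # loaded once, reused across the whole row of C
                for j in range(m):
                    C[i][j] += a * B[k][j]
        return C
    ```

    On a GPU the same idea applies at the tile level: streaming over k in the outer loop lets a kernel keep tiles of A and B in shared memory or registers while accumulating into C.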

    As you can see in this blog post, the code ends up with a lot of compile-time constants (e.g. BLOCKSIZE, BM, BN, BK, TM, TN). One way to optimize the code further is to use an auto-tuner, for example Kernel Tuner (https://github.com/KernelTuner/kernel_tuner), to find the optimal values of all these parameters for your GPU and problem size.
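    A GPU-free sketch of the search space such an auto-tuner enumerates (the parameter names come from the post above; the value lists and the restriction are illustrative assumptions, not Kernel Tuner's API):

    ```python
    # Enumerate the Cartesian product of tuning-parameter values and
    # filter out configurations that violate a hardware constraint,
    # which is essentially what an auto-tuner does before benchmarking.
    from itertools import product

    # Hypothetical value lists for the constants named in the blog post.
    tune_params = {
        "BM": [64, 128],   # tile rows of C per thread block
        "BN": [64, 128],   # tile columns of C per thread block
        "BK": [8, 16],     # k-slice width per iteration
        "TM": [2, 4, 8],   # rows of C per thread
        "TN": [2, 4, 8],   # columns of C per thread
    }

    def configurations(params, restriction):
        """Yield every parameter combination that passes the restriction."""
        names = list(params)
        for values in product(*(params[n] for n in names)):
            cfg = dict(zip(names, values))
            if restriction(cfg):
                yield cfg

    # CUDA limits a thread block to 1024 threads; configurations that
    # would need more threads are discarded before any benchmarking.
    valid = list(configurations(
        tune_params,
        lambda c: (c["BM"] // c["TM"]) * (c["BN"] // c["TN"]) <= 1024,
    ))
    print(len(valid))  # number of configurations left to benchmark
    ```

    The auto-tuner then compiles and times the kernel once per surviving configuration and reports the fastest one for the given GPU and problem size.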

  • Kernel Tuner
    1 project | news.ycombinator.com | 30 Apr 2021

What are some alternatives?

When comparing scikit-cuda and kernel_tuner you can also consider the following projects:

cupy - NumPy & SciPy for GPU

halutmatmul - Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator

cuml - cuML - RAPIDS Machine Learning Library

pyopencl - OpenCL integration for Python, plus shiny features

PyCUDA - CUDA integration for Python, plus shiny features

tf-quant-finance - High-performance TensorFlow library for quantitative finance.

arrayfire-python - Python bindings for ArrayFire: A general purpose GPU library.

cusim - Superfast CUDA implementation of Word2Vec and Latent Dirichlet Allocation (LDA)

BlendLuxCore - Blender Integration for LuxCore

tmu - Implements the Tsetlin Machine, Coalesced Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, and Weighted Tsetlin Machine, with support for continuous features, drop clause, Type III Feedback, focused negative sampling, multi-task classifier, autoencoder, literal budget, and one-vs-one multi-class classifier. TMU is written in Python with wrappers for C and CUDA-based clause evaluation and updating.

catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.