| | myADMonitor | kernel_tuner |
|---|---|---|
| Mentions | 4 | 4 |
| Stars | 36 | 243 |
| Growth | - | 3.7% |
| Activity | 10.0 | 9.1 |
| Last Commit | over 1 year ago | 5 days ago |
| Language | C# | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
myADMonitor
- Ask HN: What apps have you created for your own use?
I often have to commit large changes to Active Directory in production environments, to accommodate mergers, splits, or acquisitions. To see whether anything wrong or unexpected is happening during those changes, I built a tool that monitors AD changes in real time, and I open sourced it:
https://github.com/mihemihe/myADMonitor
- Best way to track changes to an AD Attribute?
- myADMonitor - Open-Source Live changes tracking for Active Directory.
kernel_tuner
- Ask HN: What apps have you created for your own use?
I created Kernel Tuner (https://github.com/KernelTuner/kernel_tuner) as a small software development tool, because I was writing a lot of CUDA and OpenCL kernels at the time and didn't want to manually figure out the best thread block dimensions and work division among threads on every GPU, over and over again.
The tool has evolved quite a bit since the first versions. I also use it for testing GPU code and for teaching, and it has become one of the main drivers behind a lot of the research that I do.
- PhD'ers, what are you working on? What CS topics excite you?
We have an open science policy, so anyone can use our framework themselves to optimize things, if they want! The original paper is linked at the bottom of the GitHub page.
- How to Optimize a CUDA Matmul Kernel for CuBLAS-Like Performance: A Worklog
This is a great post for people who are new to optimizing GPU code.
It is interesting to see that the author got this far without interchanging the innermost loop over k to the outermost loop, as is done in CUTLASS (https://github.com/NVIDIA/cutlass).
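For readers unfamiliar with the interchange being described, here is a minimal plain-Python sketch (illustrative only, not the CUTLASS CUDA code) of moving the k loop from innermost to outermost. With k outermost, each iteration performs a rank-1 update of C, so a loaded element of A can be reused across an entire row of C:

```python
def matmul_naive(A, B, n):
    # Textbook order: C[i][j] = sum over k of A[i][k] * B[k][j], k innermost.
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_k_outer(A, B, n):
    # Same arithmetic, but k hoisted to the outermost loop: each k step is a
    # rank-1 update of C, so A[i][k] is loaded once and reused for all j.
    C = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            a_ik = A[i][k]  # loaded once per (k, i), reused across the row
            for j in range(n):
                C[i][j] += a_ik * B[k][j]
    return C
```

Both orderings compute identical results; the payoff of the interchange only materializes on real hardware, where it enables register-level reuse and coalesced accesses.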
As you can see in this blog post, the code ends up with a lot of compile-time constants (e.g. BLOCKSIZE, BM, BN, BK, TM, TN). One way to optimize the code further is to use an auto-tuner, for example Kernel Tuner (https://github.com/KernelTuner/kernel_tuner), to find the optimal values of all of these parameters for your GPU and problem size.
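To make the size of that parameter space concrete, here is a small sketch of enumerating candidate configurations. The parameter names echo the blog post's constants, but the value ranges are made up for illustration, and this only counts configurations rather than benchmarking them:

```python
from itertools import product

# Hypothetical tunable parameters; the value ranges are invented for
# illustration and would depend on the kernel and GPU.
tune_params = {
    "BM": [64, 128],
    "BN": [64, 128],
    "BK": [8, 16],
    "TM": [4, 8],
    "TN": [4, 8],
}

# An auto-tuner searches (a subset of) the Cartesian product of these
# values, compiling and benchmarking a kernel variant per configuration.
configs = [dict(zip(tune_params, values))
           for values in product(*tune_params.values())]
print(len(configs))  # 2*2*2*2*2 = 32 candidate configurations
```

With Kernel Tuner, a dict of this shape is what you would pass as the tune_params argument to kernel_tuner.tune_kernel along with the kernel source; the tuner then benchmarks the configurations on the actual GPU, optionally using a search strategy instead of brute force.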
- Kernel Tuner
What are some alternatives?
PowerShell-Watch - A PowerShell Watch-Command cmdlet for repeatedly running a command or block of code until a change in the output occurs.
halutmatmul - Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator
access-manager - Access Manager provides web-based access to local admin (LAPS) passwords, BitLocker recovery keys, and just-in-time administrative access to Windows computers in a modern, secure, and user-friendly way.
pyopencl - OpenCL integration for Python, plus shiny features
passcore - A self-service password management tool for Active Directory
tf-quant-finance - High-performance TensorFlow library for quantitative finance.
ADCollector - A lightweight tool to quickly extract valuable information from the Active Directory environment for both attacking and defending.
arrayfire-python - Python bindings for ArrayFire: A general purpose GPU library.
winsw - A wrapper executable that can run any executable as a Windows service, in a permissive license.
scikit-cuda - Python interface to GPU-powered libraries
BlendLuxCore - Blender Integration for LuxCore
catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.