| | tmu | pyopencl |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 109 | 1,029 |
| Growth | 2.8% | - |
| Activity | 9.2 | 8.1 |
| Latest commit | about 1 month ago | 11 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tmu
- Tsetlin machine – the other AI toolbox
- Tsetlin Machine Unified (TMU) - One Codebase to Rule Them All
- [R] New Tsetlin machine learning scheme creates up to 80x smaller logical rules, benefitting hardware efficiency and interpretability.
  Code: https://github.com/cair/tmu
- This Artificial Intelligence (AI) Research From Norway Introduces Tsetlin Machine-Based Autoencoder For Representing Words Using Logical Expressions
  Quick read: https://www.marktechpost.com/2023/01/10/this-artificial-intelligence-ai-research-from-norway-introduces-tsetlin-machine-based-autoencoder-for-representing-words-using-logical-expressions/
  Paper: https://arxiv.org/pdf/2301.00709.pdf
  GitHub: https://github.com/cair/tmu
- Do we really need 300 floats to represent the meaning of a word? Representing words with words - a logical approach to word embedding using a self-supervised Tsetlin Machine Autoencoder.
  Here is a new self-supervised machine learning approach that captures word meaning with concise logical expressions. The logical expressions consist of contextual words like “black,” “cup,” and “hot” that define other words like “coffee,” and are thus human-understandable. I raise the question in the heading because our logical embedding performs competitively on several intrinsic and extrinsic benchmarks, matching pre-trained GloVe embeddings on six downstream classification tasks. Thanks to my clever PhD student Bimal, we now have even more fun and exciting research ahead of us. Our long-term research goal is, of course, to provide an energy-efficient and transparent alternative to deep learning.
  Paper: https://arxiv.org/abs/2301.00709
  Tsetlin Machine Autoencoder implementation: https://github.com/cair/tmu
  Word embedding demo: https://github.com/cair/tmu/blob/main/examples/IMDbAutoEncoderDemo.py
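  The posts above describe word embeddings built from conjunctions of context words. As a purely illustrative pure-Python sketch (not tmu's actual API; the vocabulary, `clause_matches` helper, and example literals are hypothetical), this is the core idea of evaluating one Tsetlin Machine clause: an AND over included literals, some of them negated:

  ```python
  def clause_matches(context, include, exclude):
      """A clause fires when every included word is present in the
      context and every negated (excluded) word is absent."""
      return (all(w in context for w in include)
              and all(w not in context for w in exclude))

  # A hypothetical learned clause describing the word "coffee":
  # "hot" AND "cup" AND NOT "tea"
  include = {"hot", "cup"}
  exclude = {"tea"}

  print(clause_matches({"hot", "cup", "black"}, include, exclude))  # True
  print(clause_matches({"hot", "cup", "tea"}, include, exclude))    # False
  ```

  Because each clause is just a short list of (possibly negated) words, a human can read the learned representation directly, which is the interpretability advantage the posts emphasize.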
pyopencl
- An example for OpenCL 3.0?
  Please note that OpenCL consists of two parts: the host API and a separate language used to write kernels (the code that is offloaded to devices). The OpenCL specification describes the host API as a C-style API, and that is what implementors have to provide. However, there are a number of libraries that provide bindings for other languages:
  - C++
  - Python
  - Go
  - Rust
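  The host-API/kernel-language split described above is visible in a minimal pyopencl vector-add sketch: the kernel is a string of OpenCL C, and the Python host code compiles and launches it. This assumes pyopencl and a working OpenCL driver are installed; the imports are deferred into the function so the file itself parses without either.

  ```python
  # The device-side part: a kernel written in OpenCL C.
  KERNEL_SRC = """
  __kernel void add(__global const float *a,
                    __global const float *b,
                    __global float *out)
  {
      int gid = get_global_id(0);
      out[gid] = a[gid] + b[gid];
  }
  """

  def vector_add(a, b):
      """Add two float32 NumPy arrays on whatever device
      create_some_context() selects. Requires pyopencl plus an
      OpenCL runtime, so imports happen lazily here."""
      import numpy as np
      import pyopencl as cl

      ctx = cl.create_some_context()
      queue = cl.CommandQueue(ctx)
      mf = cl.mem_flags
      a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
      b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
      out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

      # The host side compiles the kernel source and enqueues it.
      prog = cl.Program(ctx, KERNEL_SRC).build()
      prog.add(queue, a.shape, None, a_buf, b_buf, out_buf)

      out = np.empty_like(a)
      cl.enqueue_copy(queue, out, out_buf)
      return out
  ```

  The same kernel string could be launched from C, C++, Rust, or Go hosts; only the binding layer changes.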
- Doubts on pyopencl
  I thought the project might be dead, but then I looked at the latest commits to the repository, and it is certainly not dead as a project.
What are some alternatives?
nvitop - An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.
PyCUDA - CUDA integration for Python, plus shiny features
chainer - A flexible framework of neural networks for deep learning
python-performance - Repository for the book Fast Python - published by Manning
scikit-cuda - Python interface to GPU-powered libraries
arrayfire-python - Python bindings for ArrayFire: A general purpose GPU library.
inventory-hunter - ⚡️ Get notified as soon as your next CPU, GPU, or game console is in stock
TsetlinMachine - Code and datasets for the Tsetlin Machine
plotoptix - Data visualisation and ray tracing in Python based on OptiX 7.7 framework.
catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
LSQR-CUDA - An LSQR-CUDA implementation written by Lawrence Ayers under the supervision of Stefan Guthe of the GRIS institute at the Technische Universität Darmstadt. The LSQR library was authored by Chris Paige and Michael Saunders.