DOKSparse vs virtuoso-opensource

| | DOKSparse | virtuoso-opensource |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 2 | 844 |
| Growth | - | - |
| Activity | 4.2 | 8.9 |
| Last commit | 10 months ago | 7 days ago |
| Language | Cuda | C |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DOKSparse
- GDlog: A GPU-Accelerated Deductive Engine
tensor.to_sparse() Memory Allocation
If using sparse tensors is a must, you can look into the DOK sparse format, which SciPy supports for 2-D matrices. It allows you to access any element of the sparse tensor in (amortized) constant time, which makes it possible to create your tensor directly in sparse format, skipping the need to build a dense NumPy array first. In case you need a GPU version of this, I have a library that implements a sparse DOK tensor in PyTorch and CUDA; currently it's GPU-only.
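The SciPy side of the comment above can be sketched as follows: a minimal example of constructing a matrix element by element in DOK format and converting it for arithmetic (the sizes and values here are illustrative, not from the original post).

```python
# Sketch: building a sparse matrix incrementally in SciPy's DOK
# (dictionary-of-keys) format, then converting for fast arithmetic.
import numpy as np
from scipy.sparse import dok_matrix

m = dok_matrix((1000, 1000), dtype=np.float64)

# DOK supports amortized O(1) random writes, so the matrix can be
# filled entry by entry without ever materializing a dense array.
for i in range(0, 1000, 10):
    m[i, i] = 1.0
    m[i, (i + 1) % 1000] = 0.5

print(m.nnz)  # 200 stored (nonzero) entries

# Convert to CSR before doing linear algebra; DOK is for construction.
csr = m.tocsr()
print(csr.sum())  # 150.0
```

DOK is the construction format; CSR/CSC are the formats you actually compute with, which is why the conversion step matters.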
virtuoso-opensource
GDlog: A GPU-Accelerated Deductive Engine
https://en.wikipedia.org/wiki/Datalog#Evaluation
...
VMware/ddlog: Differential datalog
> Bottom-up: DDlog starts from a set of input facts and computes all possible derived facts by following user-defined rules, in a bottom-up fashion. In contrast, top-down engines are optimized to answer individual user queries without computing all possible facts ahead of time. For example, given a Datalog program that computes pairs of connected vertices in a graph, a bottom-up engine maintains the set of all such pairs. A top-down engine, on the other hand, is triggered by a user query to determine whether a pair of vertices is connected and handles the query by searching for a derivation chain back to ground facts. The bottom-up approach is preferable in applications where all derived facts must be computed ahead of time and in applications where the cost of initial computation is amortized across a large number of queries.
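The bottom-up evaluation the DDlog quote describes can be sketched in a few lines: iterate the rules over the ground facts until a fixpoint, materializing every derivable fact. This is an illustrative naive-evaluation sketch, not DDlog's (incremental, differential) implementation.

```python
# Bottom-up Datalog evaluation of the connected-vertices example:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
# Derive ALL reachable pairs ahead of any query.

edges = {("a", "b"), ("b", "c"), ("c", "d")}  # ground edge/2 facts

path = set(edges)
changed = True
while changed:  # iterate to a fixpoint: stop when no new facts appear
    changed = False
    new = {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
    if not new <= path:
        path |= new
        changed = True

print(sorted(path))
# All 6 connected pairs are now materialized, including ("a", "d").
```

A production engine avoids re-deriving known facts (semi-naive evaluation) and, in DDlog's case, updates the materialized set incrementally as input facts change.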
From https://community.openlinksw.com/t/virtuoso-openlink-reasoni... https://github.com/openlink/virtuoso-opensource/issues/660 :
> The Virtuoso built-in (rule sets) and custom inferencing and reasoning is backward chaining, where the inferred results are materialised at query runtime. This results in fewer physical triples having to exist in the database, saving space and ultimately cost of ownership, i.e., less physical resources are required, compared to forward chaining where the inferred data is pre-generated as physical triples, requiring more physical resources for hosting the data.
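For contrast with the forward-chaining sketch of bottom-up evaluation, the backward-chaining strategy the Virtuoso comment describes can be sketched as a query-time search: nothing is materialized up front, and a derivation is sought only when a specific question is asked. Illustrative code under assumed facts, not Virtuoso's implementation.

```python
# Backward chaining: answer path(x, y) on demand by searching for a
# derivation back to the ground edge/2 facts.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def connected(x, y, seen=frozenset()):
    """Query-time proof search for path(x, y)."""
    if (x, y) in edges:          # path(X, Y) :- edge(X, Y).
        return True
    # path(X, Y) :- edge(X, M), path(M, Y).
    for (a, m) in edges:
        if a == x and m not in seen:
            if connected(m, y, seen | {m}):  # 'seen' guards against cycles
                return True
    return False

print(connected("a", "d"))  # True: derived at query time, never stored
print(connected("d", "a"))  # False: no derivation exists
```

The trade-off is exactly the one quoted above: no storage cost for inferred triples, but every query pays the cost of the search.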
FWIU it's called ShaclSail, and there's a NotifyingSail: org.eclipse.rdf4j.sail.shacl.ShaclSail: https://rdf4j.org/javadoc/3.2.0/org/eclipse/rdf4j/sail/shacl...
What are some alternatives?
cub - [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
treeedb - Generate Soufflé Datalog types, relations, and facts that represent ASTs from a variety of programming languages.
MegBA - MegBA: A GPU-Based Distributed Library for Large-Scale Bundle Adjustment
FuXi - Chimezie Ogbuji's FuXi reasoner. NON-FUNCTIONING, RETAINED FOR ARCHIVAL PURPOSES. For working code plus version and associated support requirements see:
CUDA-Guide - CUDA Guide
pydatalog - Fork of pyDatalog https://sites.google.com/site/pydatalog/
cuhnsw - CUDA implementation of Hierarchical Navigable Small World Graph algorithm
highway - Performance-portable, length-agnostic SIMD with runtime dispatch
TorchPQ - Approximate nearest neighbor search with product quantization on GPU in pytorch and cuda
roxi - Reactive Reasoning
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
NMT4RDFS - Neural Machine Translation for RDFS reasoning: code and datasets for "Deep learning for noise-tolerant RDFS reasoning" http://www.semantic-web-journal.net/content/deep-learning-noise-tolerant-rdfs-reasoning-4