MegBA vs DOKSparse
| | MegBA | DOKSparse |
| --- | --- | --- |
| Mentions | 1 | 2 |
| Stars | 431 | 2 |
| Growth | 1.2% | - |
| Activity | 4.5 | 4.2 |
| Last commit | 5 months ago | 10 months ago |
| Language | Cuda | Cuda |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MegBA
- GDlog: A GPU-Accelerated Deductive Engine

DOKSparse
- tensor.to_sparse() Memory Allocation
If using sparse tensors is a must, you can look into the DOK sparse format, which scipy supports for 2-D matrices. It lets you access any element of the sparse tensor in constant time, which makes it possible to create your tensor directly in sparse format, skipping the need to build a dense numpy array first. If you need a GPU version of this, I have a library that implements a sparse DOK tensor in PyTorch and CUDA; currently it's GPU-only.
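A minimal sketch of the workflow described above, assuming scipy and PyTorch are installed; the matrix shape, indices, and values are made up for illustration, and nothing here is taken from the DOKSparse codebase:

```python
import numpy as np
import scipy.sparse as sp
import torch

# Build the matrix entry by entry in DOK (dictionary-of-keys) format;
# element access is O(1) on average, like a Python dict, so no dense
# intermediate array is ever allocated.
m = sp.dok_matrix((10_000, 10_000), dtype=np.float32)
m[12, 345] = 1.5
m[9_999, 0] = -2.0

# Convert to COO once construction is done, then hand the index/value
# arrays to torch.sparse_coo_tensor.
coo = m.tocoo()
indices = torch.from_numpy(np.vstack((coo.row, coo.col))).long()
values = torch.from_numpy(coo.data)
t = torch.sparse_coo_tensor(indices, values, size=coo.shape)

print(t)  # sparse COO tensor with 2 stored elements; .cuda() moves it to GPU
```

Building in DOK and converting once at the end avoids the dense intermediate that calling `tensor.to_sparse()` on a full tensor would require, which is the memory-allocation issue the post is about.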
What are some alternatives?
PBA - Photometric Bundle Adjustment for Dense Multi-View Stereo
cub - [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
pixel-perfect-sfm - Pixel-Perfect Structure-from-Motion with Featuremetric Refinement (ICCV 2021, Best Student Paper Award)
CUDA-Guide - CUDA Guide
FirstCollisionTimestepRarefiedGasSimulator - This simulator computes all possible intersections for a very small timestep for a particle model
cuhnsw - CUDA implementation of Hierarchical Navigable Small World Graph algorithm
TornadoVM - TornadoVM: A practical and efficient heterogeneous programming framework for managed languages
TorchPQ - Approximate nearest neighbor search with product quantization on GPU in pytorch and cuda
ceres-solver - A large scale non-linear optimization library
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
Scalix - Scalix is a data parallel compute library that automatically scales to the available compute resources.
cccl - CUDA C++ Core Libraries