

-
Is Eigen still alive? There's been no release in 3 years, and no news about it: https://gitlab.com/libeigen/eigen/-/issues/2699
-
If you want something similar, but for games: https://github.com/EricLengyel/Terathon-Math-Library
Terathon-Math-Library: C++ math library for 2D/3D/4D vector, matrix, quaternion, and geometric algebra.
-
For typical game physics engines... not that much. Math libraries like Eigen or Blaze use a lot of template metaprogramming under the hood, which helps when you're doing large batched matrix multiplications (the expression templates can remove temporary allocations at compile time, fuse operations, and apply various SIMD optimizations), but it doesn't really help when you need lots of small operations (mat3 / mat4 / vec3 / quat / etc.). Typical game physics engines use iterative algorithms for their solvers (Gauss-Seidel, PBD, etc.) instead of batched matrix-oriented ones, so you get less benefit out of Eigen / Blaze than you typically see in deep learning / scientific computing workloads.
The game physics codebases I've seen all roll their own math libraries for this, or even use SIMD (SSE / AVX) intrinsics directly. Examples: PhysX (https://github.com/NVIDIA-Omniverse/PhysX), Box2D (https://github.com/erincatto/box2d), Bullet (https://github.com/bulletphysics/bullet3)...
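To make the contrast concrete, here is a minimal sketch of the kind of projected Gauss-Seidel contact loop described above: everything is small fixed-size vector math applied one constraint at a time, with nothing for an expression-template library to fuse. The types and names (Vec3, Body, Contact, solveIteration) are hypothetical illustrations, not taken from any of the engines linked above.

    // Minimal sketch of a projected Gauss-Seidel velocity solver iteration.
    // All types here are hypothetical, not from PhysX / Box2D / Bullet.
    #include <algorithm>
    #include <vector>

    struct Vec3 {
        float x = 0, y = 0, z = 0;
        Vec3  operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3  operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3  operator*(float s) const { return {x * s, y * s, z * s}; }
        float dot(Vec3 o) const { return x * o.x + y * o.y + z * o.z; }
    };

    struct Body {
        Vec3  velocity;
        float invMass = 1.0f;            // 0 for static bodies
    };

    struct Contact {
        int   a, b;                      // indices of the two bodies
        Vec3  normal;                    // contact normal, pointing from a to b
        float bias = 0.0f;               // restitution / Baumgarte term
        float accumulatedImpulse = 0.0f;
    };

    // One Gauss-Seidel sweep: compute a scalar impulse per contact, clamp the
    // accumulated impulse so contacts only push, and apply it immediately so
    // later contacts see the updated velocities. Note: no large matrices,
    // just many tiny dot products and vec3 updates.
    void solveIteration(std::vector<Body>& bodies, std::vector<Contact>& contacts) {
        for (Contact& c : contacts) {
            Body& A = bodies[c.a];
            Body& B = bodies[c.b];

            float effMass = A.invMass + B.invMass;
            if (effMass <= 0.0f) continue;

            float relVelN = (B.velocity - A.velocity).dot(c.normal);
            float lambda  = -(relVelN + c.bias) / effMass;

            float oldImpulse     = c.accumulatedImpulse;
            c.accumulatedImpulse = std::max(oldImpulse + lambda, 0.0f);
            lambda               = c.accumulatedImpulse - oldImpulse;

            A.velocity = A.velocity - c.normal * (lambda * A.invMass);
            B.velocity = B.velocity + c.normal * (lambda * B.invMass);
        }
    }

A real engine adds friction, rotational terms, warm starting, and SIMD over batches of contacts, but the shape of the hot loop stays like this, which is why hand-rolled vec3/mat3 types (or raw intrinsics) tend to win over general-purpose expression-template libraries here.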
-
If you are talking about non-small matrix multiplication, MKL's version of it is now open source as part of oneDNN. It is literally the same code as in MKL (you can verify this by inspecting constants or doing high-precision benchmarks).
For small matmuls there is libxsmm. It would take tremendous effort to make something faster than oneDNN and libxsmm, as the JIT-based approach of https://github.com/oneapi-src/oneDNN/blob/main/src/gpu/jit/g... is very flexible: if someone finds a better instruction sequence, oneDNN can reuse it without a major change of design.
But MKL is not limited to matmul, I understand that...
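For reference, here is a rough sketch of what driving that open-source matmul through oneDNN's C++ API looks like. It assumes the v3.x-style matmul::primitive_desc constructor; the API has changed between major versions, so check the docs of the version you build against.

    // Hedged sketch: single-precision GEMM via oneDNN's matmul primitive.
    // Assumes a v3.x-style API; verify against your oneDNN version's docs.
    #include <dnnl.hpp>
    #include <vector>

    int main() {
        using namespace dnnl;
        const memory::dim M = 256, K = 512, N = 128;

        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        // Plain row-major ("ab") f32 descriptors: A is MxK, B is KxN, C is MxN.
        memory::desc a_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
        memory::desc b_md({K, N}, memory::data_type::f32, memory::format_tag::ab);
        memory::desc c_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

        std::vector<float> A(M * K, 1.0f), B(K * N, 1.0f), C(M * N, 0.0f);
        memory a_mem(a_md, eng, A.data());
        memory b_mem(b_md, eng, B.data());
        memory c_mem(c_md, eng, C.data());

        // oneDNN JIT-generates a kernel specialized for these shapes and the
        // host ISA, which is the flexibility referred to above.
        matmul::primitive_desc pd(eng, a_md, b_md, c_md);
        matmul(pd).execute(strm, {{DNNL_ARG_SRC, a_mem},
                                  {DNNL_ARG_WEIGHTS, b_mem},
                                  {DNNL_ARG_DST, c_mem}});
        strm.wait();
        return 0;
    }

libxsmm sits at the other end of the size spectrum with a similar JIT philosophy: you dispatch a kernel for a fixed small (M, N, K) and get back a function pointer specialized for exactly that shape.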