mmperf vs Flops

| | mmperf | Flops |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 121 | 277 |
| Stars growth | 3.3% | - |
| Activity | 4.3 | 3.2 |
| Latest commit | 7 months ago | 8 months ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mmperf
-
PyTorch on Apple M1 Faster Than TensorFlow-Metal
Here are the matmul sizes for the MiniLM model used for inference: https://github.com/mmperf/mmperf/blob/main/benchmark_sizes/b...
These are the matmul sizes for the BERT training workload https://github.com/mmperf/mmperf/blob/main/benchmark_sizes/b...
Yes, we use the latest MoltenVK (1.3.204.0) installed on the system.
I will let @noxa and other IREE devs chime in on the SPIR-V path but we do support prefix sums etc in the GPU path.
//part of nod.ai team.
-
M1 Pro First Impressions: Core Management and CPU Performance
Could you give me a benchmark in particular? Or maybe this one works: https://github.com/mmperf/mmperf. I'll run it in an hour.
Flops
-
Threadripper 7000 Storm Peak CPU Surfaces with 64 Zen 4 Cores
Go and measure it yourself, if you have one :)
https://github.com/Mysticial/Flops/
You can also compute a theoretical FLOPS figure, which matches the experimental measurement nicely. You have to take into account:
- the clock frequency (~3.9 GHz on multithreaded workloads on my machine)
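The theoretical-peak reasoning above can be sketched as a product of clock, SIMD width, and FMA throughput. The values below are assumed example figures for an AVX2 chip (only the ~3.9 GHz clock comes from the comment); swap in your own core count, SIMD width, and port count.

```python
# Hedged sketch: theoretical peak double-precision FLOPS for one CPU.
# Only freq_ghz comes from the comment above; the rest are example assumptions.
cores = 8               # assumed core count
freq_ghz = 3.9          # sustained multithreaded clock (from the comment)
simd_lanes_dp = 4       # AVX2: 256 bits / 64-bit doubles
fma_flops = 2           # a fused multiply-add counts as 2 FLOPs
fma_units = 2           # assumed FMA execution ports per core

peak_gflops = cores * freq_ghz * simd_lanes_dp * fma_flops * fma_units
print(f"theoretical peak: {peak_gflops} GFLOPS DP")  # 499.2 for this config
```

Benchmarks like Mysticial/Flops typically land within a few percent of this number, which is why the experimental and theoretical figures agree so well.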
-
Is there any way to update firmware (for a Kraken x62) on Linux? If so, is doing so worthwhile?
In synthetic tests (which are probably 1-2% below theoretical max FLOPS), my i9-7940X consistently pulls around 1525 GFLOPS DP / 3050 GFLOPS SP (14 cores, 3.6 GHz, AVX-512). Admittedly, my CPU is overclocked, but not by much: for AVX-512 workloads mine runs at 3.6 GHz vs the 3.1 GHz base speed. Extrapolating, at base clock this is 1313 GFLOPS DP / 2626 GFLOPS SP, which is ~3.2024x (a +220.24% increase) over Broadwell-E.
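The base-clock extrapolation in that comment is just linear frequency scaling, since FLOPS scales proportionally with clock for a fixed core/SIMD configuration. A minimal check of the arithmetic:

```python
# Linear frequency scaling of the measured FLOPS figures from the comment.
measured_dp = 1525   # GFLOPS DP at the overclocked AVX-512 clock
measured_sp = 3050   # GFLOPS SP at the overclocked AVX-512 clock
oc_freq = 3.6        # GHz, overclocked AVX-512 speed
base_freq = 3.1      # GHz, base AVX-512 speed

scale = base_freq / oc_freq
print(round(measured_dp * scale))  # 1313 GFLOPS DP
print(round(measured_sp * scale))  # 2626 GFLOPS SP
```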
-
Any suggestions for improving my memory overclock?
That said, I am pushing nearly 2 TFLOPS compute (this is real world compute - synthetic benchmarks like this top out at ~3.3 TFLOPS), so I suppose that is fair lol.
What are some alternatives?
shark-samples
aws-graviton-getting-started - Helping developers to use AWS Graviton2 and Graviton3 processors which power the 6th and 7th generation of Amazon EC2 instances (C6g[d], M6g[d], R6g[d], T4g, X2gd, C6gn, I4g, Im4gn, Is4gen, G5g, C7g[d][n], M7g[d], R7g[d]).
flops - Tiny CPU benchmark
rust-crc32fast - Fast, SIMD-accelerated CRC32 (IEEE) checksum computation in Rust
cutlass - CUDA Templates for Linear Algebra Subroutines
iree - A retargetable MLIR-based machine learning compiler and runtime toolkit.
performance_results - performance results/benchmarks for a variety of machines
Rectangle - Move and resize windows on macOS with keyboard shortcuts and snap areas
xcode-hardware-performance - Results from running Xcode on a non-trivial open source project using various Macs