tflite-micro VS pi-gemm

Compare tflite-micro vs pi-gemm and see how they differ.

tflite-micro

Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors). (by tensorflow)
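
For context, here is a minimal sketch of what inference with the tflite-micro C++ API can look like. It assumes a model flatbuffer compiled into the firmware as g_model_data (a hypothetical name) and a model that uses a single FullyConnected op; real arena sizing and op registration depend entirely on the model.

    #include <cstdint>

    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    // Hypothetical model flatbuffer, linked into the firmware image.
    extern const unsigned char g_model_data[];

    // Static scratch memory for all tensors; the size is model-dependent.
    constexpr int kArenaSize = 10 * 1024;
    alignas(16) static uint8_t tensor_arena[kArenaSize];

    int main() {
      const tflite::Model* model = tflite::GetModel(g_model_data);

      // Register only the ops the model actually uses, to save flash.
      tflite::MicroMutableOpResolver<1> resolver;
      resolver.AddFullyConnected();

      tflite::MicroInterpreter interpreter(model, resolver,
                                           tensor_arena, kArenaSize);
      interpreter.AllocateTensors();  // plans tensor memory inside the arena

      interpreter.input(0)->data.f[0] = 0.5f;  // feed one float input
      interpreter.Invoke();
      float y = interpreter.output(0)->data.f[0];
      (void)y;  // use the result
      return 0;
    }

Note the design constraint this reflects: there is no heap allocation and no filesystem; the model and all working memory live in statically allocated buffers, which is what makes the runtime fit on microcontrollers.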

pi-gemm

A Raspberry Pi GPU-accelerated implementation of the GEMM matrix-multiply function (by jetpacapp)
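
GEMM here is the standard BLAS-style operation C = alpha * A * B + beta * C. The naive CPU reference below is only a sketch of that contract, not pi-gemm's code; the actual project offloads this loop nest to the Raspberry Pi's QPUs.

    #include <cstddef>

    // Naive reference GEMM: C = alpha * A * B + beta * C,
    // with row-major MxK matrix A, KxN matrix B, and MxN matrix C.
    void gemm(std::size_t m, std::size_t n, std::size_t k,
              float alpha, const float* a, const float* b,
              float beta, float* c) {
      for (std::size_t i = 0; i < m; ++i) {
        for (std::size_t j = 0; j < n; ++j) {
          float acc = 0.0f;
          for (std::size_t p = 0; p < k; ++p) {
            acc += a[i * k + p] * b[p * n + j];
          }
          c[i * n + j] = alpha * acc + beta * c[i * n + j];
        }
      }
    }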
             tflite-micro          pi-gemm
Mentions     2                     1
Stars        1,654                 87
Growth       6.3%                  -
Activity     9.4                   10.0
Last commit  2 days ago            almost 10 years ago
Language     C++                   C++
License      Apache License 2.0    GNU General Public License v3.0 or later
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

tflite-micro

Posts with mentions or reviews of tflite-micro. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2022-07-26.

pi-gemm

Posts with mentions or reviews of pi-gemm. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2022-01-10.

What are some alternatives?

When comparing tflite-micro and pi-gemm, you can also consider the following projects:

onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

tvm - Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators

TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

nebuly - The user analytics platform for LLMs

neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime

deepsparse - Sparsity-aware deep learning inference runtime for CPUs

sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models

tensorflow - An Open Source Machine Learning Framework for Everyone