onnx-simplifier
PaddleOCR2Pytorch
| | onnx-simplifier | PaddleOCR2Pytorch |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 3,546 | 760 |
| Growth | - | - |
| Activity | 7.1 | 4.4 |
| Latest commit | 16 days ago | 2 months ago |
| Language | C++ | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
onnx-simplifier
-
Show: Cross-platform Image segmentation on video using eGUI, onnxruntime and ffmpeg
onnx-simplifier can shed some of the incompatibilities seen in widespread use, but it is itself bug-ridden and lags behind the standard. For any serious model, or when you aren't lucky enough to simplify the model upstream, you generally want good support for opset 11.
-
[Technical Article] OCR Upgrade
ONNX Simplifier: https://github.com/daquexian/onnx-simplifier
-
PyTorch 1.10
As far as I know, the ONNX format won't give you a performance boost on its own. However, there are ONNX optimizers for the ONNX runtime which will speed up your inference.
But if you are using Nvidia hardware, then TensorRT should give you the best performance possible, especially if you lower the precision level. Don't forget to simplify your ONNX model before converting it to TensorRT, though: https://github.com/daquexian/onnx-simplifier
PaddleOCR2Pytorch
-
[Technical Article] OCR Upgrade
Path №2: use the tools provided by the project https://github.com/frotms/PaddleOCR2Pytorch to convert the Paddle Inference models to PyTorch .pth files, then use PyTorch's own tooling to convert the .pth files to TorchScript files in trace mode, and finally use PNNX from the NCNN toolbox to export them to the PNNX and NCNN formats, taking the NCNN-format model files for deployment.
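The middle step of that path (PyTorch model to TorchScript via trace mode) can be sketched as below. `TinyNet` and the input shape are placeholder assumptions standing in for the model rebuilt by PaddleOCR2Pytorch with converted .pth weights:

```python
import torch
import torch.nn as nn

# Placeholder network; in practice this would be the detection/recognition
# model reconstructed by PaddleOCR2Pytorch with its converted .pth weights.
class TinyNet(nn.Module):
    def forward(self, x):
        return torch.relu(x)

model = TinyNet().eval()
example = torch.rand(1, 3, 32, 320)  # assumed OCR input shape

# Trace mode records the ops executed on the example input.
traced = torch.jit.trace(model, example)
traced.save("model.pt")
# The saved TorchScript file can then be fed to PNNX, e.g.:
#   pnnx model.pt inputshape=[1,3,32,320]
```

Trace mode only captures the path taken for the example input, so models with data-dependent control flow may need scripting instead.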
What are some alternatives?
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
opencv-mobile - The minimal opencv for Android, iOS, ARM Linux, Windows, Linux, MacOS, WebAssembly
torch2trt - An easy to use PyTorch to TensorRT converter
Paddle2ONNX - ONNX Model Exporter for PaddlePaddle
PaddleOCR - Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra lightweight OCR system, support 80+ languages recognition, provide data annotation and synthesis tools, support training and deployment among server, mobile, embedded and IoT devices)
functorch - functorch is JAX-like composable function transforms for PyTorch.
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
OpenCV - Open Source Computer Vision Library
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
deepin-ocr