| | mmselfsup | mmdeploy |
|---|---|---|
| Mentions | 5 | 4 |
| Stars | 3,089 | 2,524 |
| Stars growth (month over month) | 0.8% | 2.5% |
| Activity | 5.3 | 7.9 |
| Latest commit | 11 months ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mmselfsup

MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.

- MMDeploy: Deploy All the Algorithms of OpenMMLab
- Does anyone know how a loss curve like this can happen? Details in comments

  For some reason, the loss rises sharply right at the start and then slowly comes back down. I am doing self-supervised pretraining of an image model with DenseCL using mmselfsup (https://github.com/open-mmlab/mmselfsup). The same shape appeared on both the COCO-2017 dataset and my custom dataset, and as you can see, it happens consistently across different runs. How could the loss increase so sharply, and is it indicative of a problem with the training? The loss peaks before the first epoch is finished. Unfortunately, the library does not support validation.
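Before concluding the spike is a training problem, one quick sanity check is to smooth the raw per-iteration loss with an exponential moving average and see whether the early peak survives smoothing or is mostly iteration-level noise. A minimal stdlib sketch (the function name and smoothing factor are illustrative, not part of mmselfsup):

```python
def ema_smooth(values, alpha=0.98):
    """Exponentially smooth a sequence of loss values.

    alpha close to 1.0 gives heavier smoothing; alpha=0 returns
    the input unchanged after the first element.
    """
    smoothed = []
    running = values[0]  # seed with the first observation
    for v in values:
        running = alpha * running + (1 - alpha) * v
        smoothed.append(running)
    return smoothed


# Example: a noisy loss with an early spike still shows the spike
# after smoothing if it is a real trend, not per-step noise.
raw = [2.0, 2.1, 4.5, 4.4, 4.0, 3.5, 3.0, 2.6, 2.3, 2.1]
smoothed = ema_smooth(raw, alpha=0.5)
```

Plotting `smoothed` next to `raw` (e.g. with matplotlib) makes it easier to judge whether the peak before the first epoch is a systematic rise or logging noise.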
- Defect Detection using RPI
- [D] State-of-the-Art for Self-Supervised (Pre-)Training of CNN architectures (e.g. ResNet)?
- Rebirth! OpenSelfSup is upgraded to MMSelfSup
mmdeploy
- [D] Object detection models that can be easily converted to CoreML
- Orange Pi 5 Plus Koboldcpp Demo (MPT, Falcon, Mini-Orca, Openllama)

  The RK3588 also has an NPU for accelerating neural networks. The bad news is that its API is not supported by any of the inference engines (as far as I know), but the NPU can directly run models that have been converted to the RKNN format. It is a long shot, but you can find details here.
- MMDeploy: Deploy All the Algorithms of OpenMMLab

  BibTeX:

      @misc{mmdeploy,
        title={OpenMMLab's Model Deployment Toolbox.},
        author={MMDeploy Contributors},
        howpublished={\url{https://github.com/open-mmlab/mmdeploy}},
        year={2021}
      }
- Removing the bounding box generated by OnnxRuntime segmentation

  I have a semantic segmentation model trained using the mmdetection repo. It was then converted to the ONNX format using the mmdeploy repo.
What are some alternatives?
Unsupervised-Semantic-Segmentation - Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals. [ICCV 2021]
FastDeploy - ⚡️An Easy-to-use and Fast Deep Learning Model Deployment Toolkit for ☁️Cloud 📱Mobile and 📹Edge. Including Image, Video, Text and Audio 20+ main stream scenarios and 150+ SOTA models with end-to-end optimization, multi-platform and multi-framework support.
anomalib - An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
mmflow - OpenMMLab optical flow toolbox and benchmark
calibrated-backprojection-network - PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
mmfewshot - OpenMMLab FewShot Learning Toolbox and Benchmark
mmagic - OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, for text-to-image generation, image/video restoration/enhancement, etc.
mmdetection - OpenMMLab Detection Toolbox and Benchmark
barlowtwins - Implementation of Barlow Twins paper
mmpretrain - OpenMMLab Pre-training Toolbox and Benchmark
Revisiting-Contrastive-SSL - Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
mmrotate - OpenMMLab Rotated Object Detection Toolbox and Benchmark