yolov5 vs deepsparse
| | yolov5 | deepsparse |
|---|---|---|
| Mentions | 129 | 21 |
| Stars | 46,738 | 2,866 |
| Growth | 2.9% | 2.7% |
| Activity | 8.9 | 9.6 |
| Latest commit | 6 days ago | 6 days ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
yolov5
-
Classifying dog and cat breeds the easy way with YOLOv5
Ref:
- https://www.youtube.com/watch?v=0GwnxFNfZhM
- https://github.com/ultralytics/yolov5
- https://dev.to/gfstealer666/kaaraich-yolo-alkrithuemainkaartrwcchcchabwatthu-object-detection-3lef
- https://www.kaggle.com/datasets/devdgohil/the-oxfordiiit-pet-dataset/data
- How would i go about having YOLO v5 return me a list from left to right of all detected objects in an image?
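A straightforward approach is to sort the detections by each box's left edge. A minimal sketch, assuming each detection is a row shaped like (xmin, ymin, xmax, ymax, confidence, class), which is the layout of `results.xyxy[0]` in YOLOv5's PyTorch Hub interface:

```python
# Order detections left-to-right by the left edge (xmin) of each bounding box.
# Row layout (xmin, ymin, xmax, ymax, confidence, class) is assumed to match
# results.xyxy[0] from YOLOv5's hub API.
def left_to_right(detections):
    return sorted(detections, key=lambda box: box[0])

# Example with three toy boxes whose left edges are at x = 300, 10, and 150:
boxes = [
    (300.0, 50.0, 360.0, 120.0, 0.9, 0),
    (10.0, 40.0, 70.0, 110.0, 0.8, 16),
    (150.0, 60.0, 210.0, 130.0, 0.7, 15),
]
ordered = left_to_right(boxes)
```

Sorting on xmin is usually enough for a single row of objects; scenes with multiple rows would need a secondary sort on the vertical coordinate.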
-
Building a Drowsiness Detection Web App from scratch - pt2
```shell
## Clone the YOLOv5 repository
!git clone https://github.com/ultralytics/yolov5.git
## Navigate to the model
%cd yolov5/
## Install requirements
!pip install -r requirements.txt
## Download the YOLOv5 model
!wget https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5s.pt
```
-
[Help: Project] Transfer Learning on YOLOv8
Specifically, what I did was take the coco128.yaml, add 6 new classes from Dataset A (which had already been converted to YOLO Darknet TXT) at indices 0-5, and adjust the indices of the other COCO classes accordingly. Then I proceeded to train and validate on Dataset A for 20 epochs.
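If the new classes occupy indices 0-5, every existing COCO label file needs its class index shifted up by 6. A hedged sketch of that remapping for YOLO Darknet TXT lines (the helper name and the offset constant are illustrative, not from the post):

```python
# Shift class indices in YOLO Darknet TXT label lines to make room for
# 6 new classes at indices 0-5. The offset value is an assumption chosen
# to match the scenario described above.
OFFSET = 6

def shift_label_line(line):
    # Each line is "<class> <x_center> <y_center> <width> <height>".
    cls, *coords = line.split()
    return " ".join([str(int(cls) + OFFSET), *coords])

# Old COCO class 0 ("person") becomes class 6 after the shift:
shifted = shift_label_line("0 0.5 0.5 0.2 0.3")
```

The same shift has to be applied consistently to the `names` list in the dataset YAML, or training will silently learn the wrong label mapping.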
-
Changing labels of default YOLOv5 model
I am using the default YOLOv5m6 model here with the sahi/yolov5 library for my object detection project. I want to change just some of the labels - for example, when YOLO detects a human, I want it to label the human as "threat", not "person". Is there any way I can do this by just changing some code, or do I need to retrain the model from scratch with the changed labels?
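For renaming a handful of classes, a post-processing remap over the detection results is usually enough; no retraining required. A minimal sketch, assuming detections arrive as dicts with a `name` field (the shape produced by `results.pandas().xyxy[0].to_dict('records')` in YOLOv5; the override table is a hypothetical example):

```python
# Rename selected class labels after inference instead of retraining.
# The override table is a hypothetical example matching the question above.
LABEL_OVERRIDES = {"person": "threat"}

def remap_labels(detections):
    # Each detection is assumed to be a dict with at least a "name" key;
    # labels without an override pass through unchanged.
    return [{**det, "name": LABEL_OVERRIDES.get(det["name"], det["name"])}
            for det in detections]

detections = [{"name": "person", "confidence": 0.91},
              {"name": "dog", "confidence": 0.88}]
relabeled = remap_labels(detections)
```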
-
First time working with computer vision, need help figuring out a problem in my model
You should add them without annotations. Go through this.
-
AI Camera?
You are correct and if you check the firmware, it's yet another famous 3rd party project without attribution, namely https://github.com/ultralytics/yolov5
-
First non-default print on K1 - success
On one side, having been a Linux user for 24 years now, it annoys me that they rip off code and claim it as theirs, thus violating licenses. On the other hand, thanks to k3d's exploit I'm able to tinker with the machine more, and if needed do (selective) updates by hand, than with a closed-source system. It's not just Klipper, either: alongside Klipper, Fluidd, and Moonraker, there's also ffmpeg and mjpegstreamer. It's going to be interesting, because they also use a project that isn't just GPL but AGPL (in short: "If your software provides a service online, you have to publish its source code and that of any library it borrows functions from.") - they use yolov5 (for the AI features).
- How does the background class work in object detection?
deepsparse
-
Fast Llama 2 on CPUs with Sparse Fine-Tuning and DeepSparse
Interesting company. Yannic Kilcher interviewed Nir Shavit last year and they went into some depth: https://www.youtube.com/watch?v=0PAiQ1jTN5k DeepSparse is on GitHub: https://github.com/neuralmagic/deepsparse
-
The future of quantization techniques in deep learning.
sparsity https://github.com/neuralmagic/deepsparse
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 1), what is the easiest way to speed up inference (assume only PyTorch and primarily GPU but also some CPU)? I have been using ONNX and Torchscript but there is a bit of a learning curve and sometimes it can be tricky to get the model to actually work. Is there anything else worth trying? I am enthused by things like TorchDynamo (although I have not tested it extensively) due to its apparent ease of use. I also saw the post yesterday about Kernl using (OpenAI) Triton kernels to speed up transformer models which also looks interesting. Are things like SageMaker Neo or NeuralMagic worth trying? My only reservation with some of these is they still seem to be pretty model/architecture specific. I am a little reluctant to put much time into these unless I know others have had some success first.
-
[D] Most efficient open source language model ?
You should look into deepsparse, they are working on delivering GPU level performance on consumer CPUs with some great results: https://github.com/neuralmagic/deepsparse. There is a great interview with the founder, Nir Shavit here: https://piped.kavin.rocks/watch?v=0PAiQ1jTN5k
-
[R] New sparsity research (oBERT) enabled 175X increase in CPU performance for MLPerf submission
Utilizing the oBERT research we published at Neural Magic and some further iteration, we’ve enabled an increase in NLP performance of 175X while retaining 99% accuracy on the question-answering task in MLPerf. A combination of distillation, layer dropping, quantization, and unstructured pruning with oBERT enabled these large performance gains through the DeepSparse Engine. All of our contributions and research are open-sourced or free to use. Read through the oBERT paper on arxiv, try out the research in SparseML, and dive into the writeup to learn more about how we achieved these impressive results and utilize them for your own use cases!
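The unstructured pruning step can be illustrated with a toy magnitude-pruning pass: zero out the smallest-magnitude weights until a target sparsity is reached. This is a deliberately simplified sketch; oBERT itself uses second-order information, not plain magnitudes:

```python
# Toy unstructured magnitude pruning: zero the fraction `sparsity` of
# weights with the smallest absolute value. Illustrative only - the oBERT
# method is approximate second-order, not magnitude-based.
def magnitude_prune(weights, sparsity):
    k = int(len(weights) * sparsity)  # number of weights to zero
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        if abs(w) <= threshold and zeroed < k:
            pruned.append(0.0)
            zeroed += 1
        else:
            pruned.append(w)
    return pruned

# Prune half the weights of a toy 4-element tensor:
sparse = magnitude_prune([0.5, -0.1, 0.3, -0.9], 0.5)
```

Runtimes like DeepSparse then exploit the resulting zeros by skipping the corresponding multiply-accumulates, which is where the CPU speedups come from.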
-
An open-source library for optimizing deep learning inference. (1) You select the target optimization, (2) nebullvm searches for the best optimization techniques for your model-hardware configuration, and then (3) serves an optimized model that runs much faster in inference
Open-source projects leveraged by nebullvm include OpenVINO, TensorRT, Intel Neural Compressor, SparseML and DeepSparse, Apache TVM, ONNX Runtime, TFlite and XLA. A huge thank you to the open-source community for developing and maintaining these amazing projects.
-
[R] BERT-Large: Prune Once for DistilBERT Inference Performance
BERT-Large (345 million parameters) is now faster than the much smaller DistilBERT (66 million parameters) all while retaining the accuracy of the much larger BERT-Large model! We made this possible with Intel Labs by applying cutting-edge sparsification and quantization research from their Prune Once For All paper and utilizing it in the DeepSparse engine. It makes BERT-Large 12x smaller while delivering 8x latency speedup on commodity CPUs. We open-sourced the research in SparseML; run through the overview here and give it a try!
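The quantization half of that recipe can be sketched as mapping float weights onto int8 with a per-tensor scale. A toy symmetric scheme for illustration (an assumption for clarity, not the Prune Once for All procedure, which also uses quantization-aware training and per-channel calibration):

```python
# Toy symmetric int8 quantization with a single per-tensor scale.
# Illustrative assumption only; production schemes calibrate scales
# per channel and fine-tune with quantization in the loop.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127 or 1.0  # guard against zero scale
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return quantized, scale

q, scale = quantize_int8([1.0, -0.5, 0.25])
# Dequantize to check the round trip: each value is recovered as q_i * scale.
restored = [qi * scale for qi in q]
```

Storing int8 instead of float32 alone gives the 4x size reduction; combined with 75%+ sparsity it accounts for the 12x smaller model cited above.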
-
[R] How well do sparse ImageNet models transfer? Prune once and deploy anywhere for inference performance speedups! (arxiv link in comments)
And benchmark/deploy with 8X better performance in DeepSparse!
- Sparseserver.ui – test the performance of Sparse Transformers
-
[P] SparseServer.UI : A UI to test performance of Sparse Transformers
Hi _Arsenie, this runs the deepsparse.server command for multiple models. And by the way, we recently updated the READMEs for the DeepSparse Engine: https://github.com/neuralmagic/deepsparse
What are some alternatives?
mmdetection - OpenMMLab Detection Toolbox and Benchmark
NudeNet - Neural Nets for Nudity Detection and Censoring
detectron2 - Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
darknet - YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
model-optimization - A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
Deep-SORT-YOLOv4 - People detection and optional tracking with Tensorflow backend.
sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
yolor - implementation of paper - You Only Learn One Representation: Unified Network for Multiple Tasks (https://arxiv.org/abs/2105.04206)
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
OpenCV - Open Source Computer Vision Library
PINTO_model_zoo - A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.