| | pytracking | vision |
|---|---|---|
| Mentions | 5 | 19 |
| Stars | 3,086 | 15,454 |
| Stars growth (monthly) | 1.8% | 2.0% |
| Activity | 5.1 | 9.4 |
| Latest commit | about 2 months ago | 3 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytracking
-
Need help with an idea I had to record my kids soccer games
A third way to do it is to use some form of algorithm that lets you select a video target to track. Something like this: https://github.com/visionml/pytracking. The algorithm can probably output the XY coordinates of the object it's tracking, which are sent to the same mechanism described above.
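A minimal sketch of the glue logic that idea describes, with hypothetical helper names, assuming the tracker reports an (x, y, w, h) bounding box per frame and the camera mount accepts a simple left/hold/right pan signal:

```python
def bbox_center(bbox):
    """Return the (x, y) center of an (x, y, w, h) bounding box."""
    x, y, w, h = bbox
    return (x + w / 2, y + h / 2)

def pan_command(center_x, frame_width, dead_zone=0.1):
    """Map the tracked object's horizontal position to a pan direction.

    Returns -1 (pan left), 0 (hold), or +1 (pan right), with a dead zone
    around the frame center so the camera doesn't jitter on small moves.
    """
    offset = (center_x - frame_width / 2) / frame_width  # -0.5 .. 0.5
    if offset < -dead_zone:
        return -1
    if offset > dead_zone:
        return 1
    return 0
```

For example, an object centered at x = 600 in a 640-pixel-wide frame yields +1 (pan right); one near the middle yields 0. The actual box format and update API depend on which pytracking tracker you use.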
-
ETH Zurich Team Introduce Exemplar Transformers: A New Efficient Transformer Layer For Real-Time Visual Object Tracking
Github: https://github.com/visionml/pytracking
Looking at this diagram, it looks kinda like a Kalman Filter schematic
- [R] ETH Zurich Proposes Exemplar Transformers: Robust Visual Tracking That's 8x Faster and CPU-Compatible
vision
-
Transitioning From PyTorch to Burn
Let's start by defining the ResNet module according to the Residual Network architecture, as replicated[1] by the torchvision implementation of the model we will import. Detailed architecture variants with a depth of 18, 34, 50, 101 and 152 layers can be found in the table below.
-
Validation loss goes up after third epoch
The goal is to do keypoint detection of fish (e.g. nose, tail, etc.) in a fish tank. By using a stereo camera for this, I'm also getting depth information, which lets me measure the fish length underwater. I'm only training on RGB images, though. I'm transfer-learning PyTorch's Keypoint R-CNN with a ResNet-50 backbone, because that's the only one available in https://github.com/pytorch/vision/blob/main/torchvision/models/detection/keypoint_rcnn.py.
-
Reading a DL paper: YOLO summary and discussion
Found relevant code at https://github.com/pytorch/vision + all code implementations here
-
Open discussion and useful links people trying to do Object Detection
* Why doesn't PyTorch have YOLO? https://github.com/pytorch/vision/issues/6341
-
My Neural Net is stuck, I've run out of ideas
Sorry to be annoying, but I thought it would be nice to give you some news as well. I was confused as to why there isn't YOLO in PyTorch; here is why: https://github.com/pytorch/vision/issues/6341
-
Anyone ever get a virus from custom models?
The problem is the industry: people are still using .ckpt/.pth files to share weights, and unfortunately, in their research work, they need to reproduce the work of others. Even PyTorch includes pretrained weights using pickles. https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py
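The underlying risk is that pickle deserialization can execute arbitrary code. A self-contained illustration, using a harmless `eval` payload as a stand-in for a real exploit:

```python
import pickle

class Payload:
    """A stand-in for a malicious checkpoint: unpickling runs code
    via __reduce__ before you ever see the 'weights'."""
    def __reduce__(self):
        # A real attack would return something like (os.system, ("...",)).
        # Here we run a harmless eval so the side effect is visible.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # this is what a .ckpt/.pth file can contain
result = pickle.loads(blob)      # "loading the weights" executes the payload
print(result)                    # 42 -- code ran during deserialization
```

Mitigations exist: newer PyTorch versions support `torch.load(..., weights_only=True)`, which restricts what can be deserialized, and formats like safetensors avoid pickle entirely.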
-
[Discussion] Stochastic Depth with BatchNorm ?
My question is more related to the variance of the batches. If one batch contains samples that skip a connection and samples that do not ('row' mode in the torchvision implementation), even if the values are adjusted to preserve the expected value, the variance will be much higher, because in practice we have two distributions (for x_n and x_n + f(x_n)/p), which will interfere with the update of the batch normalization. Also, at inference time, all forward passes are done as x_{n+1} = x_n + f(x_n), which has a different variance. The torchvision implementation also offers a 'batch' mode that somewhat reduces this issue (because the global variance computed this way will be the mean of both distributions' variances, instead of the variance of the joint distribution), but it does not seem to be the default mode (it does not even exist in the timm implementation).
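For intuition, the expected-value adjustment being discussed can be sketched in plain Python. This is an illustration of 'row' mode (one independent coin flip per sample), not the torchvision code itself:

```python
import random

def stochastic_depth_row(residuals, p, training=True):
    """'Row'-mode stochastic depth over a list of per-sample residual values.

    During training, each sample's residual branch f(x) is dropped
    independently with probability 1 - p; survivors are scaled by 1/p so
    E[output] matches the inference-time x + f(x).
    """
    if not training or p == 1.0:
        return list(residuals)
    return [r / p if random.random() < p else 0.0 for r in residuals]
```

The mean is preserved, but for a unit residual the per-sample variance is (1 - p)/p rather than 0, which is exactly the extra spread the post says will perturb the batch-norm statistics.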
-
Solution for "RuntimeError: Couldn't load custom C++ ops"
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with `torch.__version__` and your torchvision version with `torchvision.__version__` and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
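The fix is usually to align the major.minor versions of the two packages. A small sketch of such a check; `COMPAT` below is an illustrative subset of the matrix, and the linked README is the authoritative source:

```python
# Illustrative subset of the torch <-> torchvision compatibility matrix.
# Always consult https://github.com/pytorch/vision#installation for the
# authoritative, up-to-date table.
COMPAT = {"2.1": "0.16", "2.0": "0.15", "1.13": "0.14",
          "1.12": "0.13", "1.11": "0.12"}

def versions_compatible(torch_version, torchvision_version):
    """Compare major.minor prefixes, ignoring suffixes like '+cu102'."""
    t = ".".join(torch_version.split("+")[0].split(".")[:2])
    tv = ".".join(torchvision_version.split("+")[0].split(".")[:2])
    return COMPAT.get(t) == tv
```

In practice you would pass in `torch.__version__` and `torchvision.__version__`, and reinstall torchvision if the pair doesn't match.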
-
[D] My experience with running PyTorch on the M1 GPU
```
$ python vgg16-cifar10.py --device "cuda"
torch 1.11.0+cu102
device cuda
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data/cifar-10-python.tar.gz
170499072it [00:46, 3628424.66it/s]
Extracting data/cifar-10-python.tar.gz to data
Downloading: "https://github.com/pytorch/vision/archive/v0.11.0.zip" to /home/md/.cache/torch/hub/v0.11.0.zip
Epoch: 001/001 | Batch 0000/1406 | Loss: 2.6563
Epoch: 001/001 | Batch 0100/1406 | Loss: 2.4686
Epoch: 001/001 | Batch 0200/1406 | Loss: 2.1224
Epoch: 001/001 | Batch 0300/1406 | Loss: 2.1879
Epoch: 001/001 | Batch 0400/1406 | Loss: 2.1733
Epoch: 001/001 | Batch 0500/1406 | Loss: 2.2413
Epoch: 001/001 | Batch 0600/1406 | Loss: 2.0518
Epoch: 001/001 | Batch 0700/1406 | Loss: 2.1621
Epoch: 001/001 | Batch 0800/1406 | Loss: 1.9033
Epoch: 001/001 | Batch 0900/1406 | Loss: 1.8379
Epoch: 001/001 | Batch 1000/1406 | Loss: 1.9572
Epoch: 001/001 | Batch 1100/1406 | Loss: 1.8823
Epoch: 001/001 | Batch 1200/1406 | Loss: 1.7978
Epoch: 001/001 | Batch 1300/1406 | Loss: 2.0239
Epoch: 001/001 | Batch 1400/1406 | Loss: 1.8389
Time / epoch without evaluation: 6.75 min <------------------
Epoch: 001/001 | Train: 25.52% | Validation: 26.40% | Best Validation (Ep. 001): 26.40%
Time elapsed: 9.03 min
Total Training Time: 9.03 min
Test accuracy 26.54%
Total Time: 9.48 min
```
-
Pytorch libraries
It is here in the source repository https://github.com/pytorch/vision/blob/main/torchvision/datasets/utils.py
What are some alternatives?
pysot - SenseTime Research platform for single object tracking, implementing algorithms like SiamRPN and SiamMask.
yolov5 - YOLOv5 in PyTorch > ONNX > CoreML > TFLite
django-matomo - A simple app to add the Matomo JS tracking code to your template.
torch2trt - An easy to use PyTorch to TensorRT converter
Face Recognition - The world's simplest facial recognition api for Python and the command line
apple_m1_pro_python - A collection of ML scripts to test the M1 Pro MacBook Pro
EasyOCR - Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
nn - 60 Implementations/tutorials of deep learning papers with side-by-side notes; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), GANs (cyclegan, stylegan2, ...), reinforcement learning (ppo, dqn), capsnet, distillation, ...
GTR - Global Tracking Transformers, CVPR 2022
functorch - functorch is JAX-like composable function transforms for PyTorch.
pombo - Theft-recovery tracking opensource software.
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT