vision VS nn

Compare vision vs nn and see how they differ.

vision

Datasets, Transforms and Models specific to Computer Vision (by pytorch)

nn

🧑‍🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠 (by lab-ml)
                 vision                                      nn
Mentions         19                                          26
Stars            15,374                                      47,503
Growth           1.5%                                        6.6%
Activity         9.5                                         7.7
Latest commit    4 days ago                                  25 days ago
Language         Python                                      Jupyter Notebook
License          BSD 3-clause "New" or "Revised" License     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

vision

Posts with mentions or reviews of vision. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-14.
  • Transitioning From PyTorch to Burn
    5 projects | dev.to | 14 Feb 2024
    Let's start by defining the ResNet module according to the Residual Network architecture, as replicated[1] by the torchvision implementation of the model we will import. Detailed architecture variants with depths of 18, 34, 50, 101 and 152 layers can be found in the table below. (A hedged torchvision loading sketch appears after this list.)
  • Validation loss goes up after third epoch
    1 project | /r/deeplearning | 27 Jun 2023
    The goal is to do keypoint detection of fish (e.g. nose, tail, etc.) in a fish tank. By using a stereo camera for this, I'm also getting depth information, which lets me measure the fish length underwater. I'm only training on RGB images, though. I'm transfer-learning PyTorch's keypoint-rcnn-resnet50, because that's the only one available in https://github.com/pytorch/vision/blob/main/torchvision/models/detection/keypoint_rcnn.py. (A minimal fine-tuning sketch follows this list.)
  • Reading a DL paper: YOLO summary and discussion
    2 projects | /r/deeplearning | 26 Feb 2023
    Found relevant code at https://github.com/pytorch/vision + all code implementations here
  • Open discussion and useful links people trying to do Object Detection
    4 projects | /r/deeplearning | 18 Feb 2023
    Why doesn't PyTorch have YOLO? https://github.com/pytorch/vision/issues/6341
  • My Neural Net is stuck, I've run out of ideas
    2 projects | /r/deeplearning | 16 Feb 2023
    Sorry to be annoying, but I thought it would be nice to give you some news as well. I was confused as to why there isn't YOLO in PyTorch; here is why: https://github.com/pytorch/vision/issues/6341
  • Anyone ever get a virus from custom models?
    1 project | /r/StableDiffusion | 27 Jan 2023
    The problem is the industry; people are still using .ckpt/.pth files to share weights, and unfortunately, to reproduce the work of others in their research, they need to load those files. Even PyTorch includes pretrained weights distributed as pickles. https://github.com/pytorch/vision/blob/main/torchvision/models/inception.py (A defensive-loading sketch follows this list.)
  • [Discussion] Stochastic Depth with BatchNorm ?
    2 projects | /r/MachineLearning | 26 Dec 2022
    My question is more related to the variance of the batches. If one batch contains samples that skip a connection and samples that do not ('row' mode in the torchvision implementation), even if the values are adjusted to preserve the expected value, the variance will be much higher because we have in practice two distributions (for x_n and x_n + f(x_n)/p), which will interfere with the batch-normalization updates. Also, at inference time, all forward passes are done as x_{n+1} = x_n + f(x_n), which has a different variance. The torchvision implementation also offers a 'batch' mode that somewhat reduces this issue (because the global variance computed this way will be the mean of both distributions' variances, instead of the variance of the joint distribution), but it does not seem to be the default mode (it does not even exist in the timm implementation). (A short sketch contrasting the two modes follows this list.)
  • Solution for "RuntimeError: Couldn't load custom C++ ops"
    2 projects | /r/StableDiffusion | 7 Sep 2022
    RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install. (A version-check sketch follows this list.)
  • [D] My experience with running PyTorch on the M1 GPU
    4 projects | /r/MachineLearning | 19 May 2022
    $ python vgg16-cifar10.py --device "cuda"
    torch 1.11.0+cu102
    device cuda
    Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data/cifar-10-python.tar.gz
    170499072it [00:46, 3628424.66it/s]
    Extracting data/cifar-10-python.tar.gz to data
    Downloading: "https://github.com/pytorch/vision/archive/v0.11.0.zip" to /home/md/.cache/torch/hub/v0.11.0.zip
    Epoch: 001/001 | Batch 0000/1406 | Loss: 2.6563
    Epoch: 001/001 | Batch 0100/1406 | Loss: 2.4686
    Epoch: 001/001 | Batch 0200/1406 | Loss: 2.1224
    Epoch: 001/001 | Batch 0300/1406 | Loss: 2.1879
    Epoch: 001/001 | Batch 0400/1406 | Loss: 2.1733
    Epoch: 001/001 | Batch 0500/1406 | Loss: 2.2413
    Epoch: 001/001 | Batch 0600/1406 | Loss: 2.0518
    Epoch: 001/001 | Batch 0700/1406 | Loss: 2.1621
    Epoch: 001/001 | Batch 0800/1406 | Loss: 1.9033
    Epoch: 001/001 | Batch 0900/1406 | Loss: 1.8379
    Epoch: 001/001 | Batch 1000/1406 | Loss: 1.9572
    Epoch: 001/001 | Batch 1100/1406 | Loss: 1.8823
    Epoch: 001/001 | Batch 1200/1406 | Loss: 1.7978
    Epoch: 001/001 | Batch 1300/1406 | Loss: 2.0239
    Epoch: 001/001 | Batch 1400/1406 | Loss: 1.8389
    Time / epoch without evaluation: 6.75 min <------------------
    Epoch: 001/001 | Train: 25.52% | Validation: 26.40% | Best Validation (Ep. 001): 26.40%
    Time elapsed: 9.03 min
    Total Training Time: 9.03 min
    Test accuracy 26.54%
    Total Time: 9.48 min
  • Pytorch libraries
    1 project | /r/learnprogramming | 8 Feb 2022
    It is here in the source repository https://github.com/pytorch/vision/blob/main/torchvision/datasets/utils.py
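The posts above touch several torchvision APIs; a few hedged sketches follow, none of them taken from the quoted posts. First, the ResNet depths mentioned in the Burn transition post map directly onto torchvision constructors (assuming the torchvision >= 0.13 weights API):

    import torch
    import torchvision.models as models

    # Depths 18/34/50/101/152 correspond to resnet18 ... resnet152.
    resnet18 = models.resnet18(weights=None)   # pass weights="DEFAULT" for pretrained weights
    resnet50 = models.resnet50(weights=None)

    x = torch.randn(1, 3, 224, 224)            # dummy ImageNet-sized input
    print(resnet18(x).shape)                   # torch.Size([1, 1000])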
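For the fish keypoint-detection post, a minimal transfer-learning setup for keypointrcnn_resnet50_fpn might look like this; the keypoint count and class count are illustrative assumptions, not values from the post:

    from torchvision.models.detection import keypointrcnn_resnet50_fpn

    NUM_KEYPOINTS = 5                 # hypothetical: nose, tail, and three fin points
    model = keypointrcnn_resnet50_fpn(
        weights=None,                 # skip the COCO heads, which expect 17 human keypoints
        weights_backbone="DEFAULT",   # reuse the ImageNet-pretrained ResNet-50 backbone
        num_classes=2,                # background + fish
        num_keypoints=NUM_KEYPOINTS,
    )
    model.train()                     # training targets need "boxes", "labels" and "keypoints"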
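For the pickle concern, one mitigation (not something the quoted post proposes) is to load untrusted checkpoints with weights_only=True, available since PyTorch 1.13; the file name below is a placeholder:

    import torch

    # weights_only=True restricts unpickling to tensors and primitive containers,
    # which reduces (but does not eliminate) the code-execution risk of .pth/.ckpt files.
    state_dict = torch.load("model.pth", map_location="cpu", weights_only=True)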
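For the stochastic-depth discussion, torchvision exposes both modes through torchvision.ops.stochastic_depth; this sketch only contrasts their drop patterns, with made-up shapes and probabilities:

    import torch
    from torchvision.ops import stochastic_depth

    x = torch.randn(8, 16, 4, 4)   # stand-in for a residual branch output f(x_n)

    # "row": each sample is dropped independently and survivors are rescaled by 1/(1 - p),
    # so one batch mixes the two distributions discussed in the post.
    row = stochastic_depth(x, p=0.2, mode="row", training=True)

    # "batch": the whole branch is dropped (or kept) for every sample at once.
    batch = stochastic_depth(x, p=0.2, mode="batch", training=True)

    print((row.flatten(1).abs().sum(dim=1) == 0).tolist())   # per-sample drop pattern
    print(bool((batch == 0).all()))                          # whole-batch drop or not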
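And for the "Couldn't load custom C++ ops" error, the first step the message asks for is just printing the installed versions and comparing them against the compatibility matrix:

    import torch
    import torchvision

    # Compare against https://github.com/pytorch/vision#installation
    print("torch:", torch.__version__)
    print("torchvision:", torchvision.__version__)
    print("CUDA build:", torch.version.cuda)   # None on CPU-only builds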

nn

Posts with mentions or reviews of nn. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-09.

What are some alternatives?

When comparing vision and nn, you can also consider the following projects:

yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite

GFPGAN-for-Video-SR - A colab notebook for video super resolution using GFPGAN

torch2trt - An easy to use PyTorch to TensorRT converter

labml - 🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱

apple_m1_pro_python - A collection of ML scripts to test the M1 Pro MacBook Pro

functorch - JAX-like composable function transforms for PyTorch.

ZoeDepth - Metric depth estimation from a single image

TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

onnx-simplifier - Simplify your onnx model

Basic-UI-for-GPT-J-6B-with-low-vram - A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model loading requires 12 GB of free RAM.