PyTorch CUDA out of memory persists after lowering batch size and clearing GPU cache

This page summarizes the projects mentioned and recommended in the original post on /r/pytorch

  • koila

    Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.

  • A layer with 53760 neurons takes a lot of memory. Try adding more Conv2D layers or experimenting with the stride to shrink the feature maps before the fully connected layer. Also, try calling .detach() on data and labels after each training step so the autograd graph can be freed. Lastly, I would suggest taking a look at https://github.com/rentruewang/koila. I have not tried it yet, but it should be helpful.

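The memory-reduction advice above can be sketched in PyTorch. The shapes, layer sizes, and variable names below are illustrative assumptions, not taken from the original post; the point is that strided Conv2d layers shrink the flattened feature count before the Linear head, and that `.detach()` drops the autograd graph for values you keep around after a training step.

```python
import torch
import torch.nn as nn

# Hypothetical feature maps; 64 channels at 28x28 is an assumed shape.
x = torch.randn(4, 64, 28, 28)
flat_big = x.flatten(1)            # 64 * 28 * 28 = 50176 features per sample

# Two strided convs halve each spatial dimension (28 -> 14 -> 7),
# cutting the flattened size by a factor of 16 before the Linear layer.
shrink = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
)
flat_small = shrink(x).flatten(1)  # 64 * 7 * 7 = 3136 features per sample

head = nn.Linear(flat_small.shape[1], 10)
logits = head(flat_small)

# Detaching values you keep after a training step lets autograd free
# the computation graph (and the activations it retains) each iteration.
loss_value = logits.sum().detach()
```

A Linear layer on `flat_big` would need a 50176-wide weight matrix; on `flat_small` it is 16x smaller, which is often the difference between fitting in GPU memory and an OOM.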
NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives, so a higher number means a more popular project.

