[P] Farewell, CUDA OOM: Automatic Gradient Accumulation

This page summarizes the projects mentioned and recommended in the original post on /r/MachineLearning

  • composer

    Supercharge Your Model Training (by mosaicml)

  • This is why I'm excited to announce that we (MosaicML) just released an automatic way to avoid these errors: we added automatic gradient accumulation to Composer, our open-source library for faster, easier neural net training.
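The idea behind gradient accumulation (which Composer automates by retrying with smaller microbatches after an OOM) is that summing per-microbatch gradients, weighted by microbatch size, reproduces the full-batch gradient while holding far fewer activations in memory at once. A minimal sketch, using a plain 1-D linear model rather than Composer's actual API, so all names here are illustrative:

```python
# Sketch (not Composer's API): gradient accumulation over microbatches
# yields the same gradient as one pass over the full batch.

def grad_mse(w, xs, ys):
    # Gradient of mean squared error for the 1-D linear model y = w * x.
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def accumulated_grad(w, xs, ys, microbatch_size):
    # Process the batch in chunks, accumulating a size-weighted sum of
    # microbatch gradients instead of materializing the whole batch.
    n = len(xs)
    total = 0.0
    for i in range(0, n, microbatch_size):
        xb, yb = xs[i:i + microbatch_size], ys[i:i + microbatch_size]
        total += grad_mse(w, xb, yb) * len(xb)  # weight by chunk size
    return total / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5
full = grad_mse(w, xs, ys)
accum = accumulated_grad(w, xs, ys, microbatch_size=2)
assert abs(full - accum) < 1e-12  # identical up to float rounding
```

The automatic part is choosing `microbatch_size` for you: on a CUDA OOM the trainer can shrink the microbatch (increasing the number of accumulation steps) and retry, without changing the effective batch size or the optimization result.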

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives, so a higher number means a more popular project.

Related posts

  • Composer – A PyTorch Library for Efficient Neural Network Training

    1 project | news.ycombinator.com | 18 Aug 2023
  • Train neural networks up to 7x faster

    1 project | news.ycombinator.com | 30 Jun 2023
  • [D] Am I stupid for avoiding high level frameworks?

    1 project | /r/MachineLearning | 23 Nov 2022
  • I highly and genuinely recommend Fast.ai course to beginners

    2 projects | /r/learnmachinelearning | 21 Jun 2022
  • [D] Is anyone working on interesting ML libraries and looking for contributors?

    4 projects | /r/MachineLearning | 17 Jun 2022