nos VS pytorch-accelerated

Compare nos vs pytorch-accelerated and see what their differences are.

nos

Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas - Effortless optimization at its finest! (by nebuly-ai)

pytorch-accelerated

A lightweight library designed to accelerate the process of training PyTorch models by providing a minimal, but extensible training loop which is flexible enough to handle the majority of use cases, and capable of utilizing different hardware options with no code changes required. Docs: https://pytorch-accelerated.readthedocs.io/en/latest/ (by Chris-hughes10)
                nos                 pytorch-accelerated
Mentions        19                  1
Stars           570                 157
Growth          1.9%                -
Activity        5.6                 4.6
Latest commit   4 months ago        3 months ago
Language        Go                  Python
License         Apache License 2.0  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

nos

Posts with mentions or reviews of nos. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-01.
  • Plug and play modules to optimize the performances of your AI systems
    3 projects | news.ycombinator.com | 1 Mar 2023
    Some of the available modules include:

    Speedster: Automatically apply the best set of SOTA optimization techniques to achieve the maximum inference speed-up on your hardware. https://github.com/nebuly-ai/nebullvm/blob/main/apps/acceler...

    Nos: Automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas. https://github.com/nebuly-ai/nos

    ChatLLaMA: Build a faster and cheaper ChatGPT-like training process based on LLaMA architectures. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...

    OpenAlphaTensor: Increase the computational performance of an AI model with custom-generated matrix multiplication algorithms fine-tuned for your specific hardware. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...

    Forward-Forward: The Forward Forward algorithm is a method for training deep neural networks that replaces the backpropagation forward and backward passes with two forward passes. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...

  • Nos – Open-Source to Maximize GPU Utilization in Kubernetes
    3 projects | news.ycombinator.com | 9 Feb 2023
    Hi HN! I’m Michele Zanotti and today I’m releasing nos, an open-source module to efficiently run GPU workloads on Kubernetes!

    Nos is meant to increase GPU utilization and cut down infrastructure and operational costs by providing two main features:

    1. Dynamic GPU Partitioning: you can think of this as a cluster autoscaler for GPUs. Instead of scaling up the number of nodes and GPUs, it dynamically partitions them into smaller "GPU slices". This ensures that each workload uses only the GPU resources it actually needs, leaving spare GPU capacity that can be used for other workloads. To partition GPUs, nos leverages Nvidia's MPS and MIG [1,2], finally making them dynamic.

    2. Elastic Resource Quota management: it increases the number of Pods that can run on the cluster by allowing teams (namespaces) to borrow quotas of reserved resources from other teams as long as those teams are not using them.
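    As a sketch of what Dynamic GPU Partitioning looks like from a workload's perspective, here is a hypothetical Pod spec requesting a single MIG slice rather than a whole GPU (the image and the exact resource name are illustrative; `nvidia.com/mig-<profile>` follows the naming convention used for MIG devices):

    ```yaml
    # Hypothetical Pod requesting one MIG slice instead of an entire GPU.
    # With dynamic partitioning, the slice can be carved out on demand
    # rather than pre-provisioned by the cluster admin.
    apiVersion: v1
    kind: Pod
    metadata:
      name: notebook
    spec:
      containers:
        - name: jupyter
          image: jupyter/scipy-notebook
          resources:
            limits:
              nvidia.com/mig-1g.10gb: 1
    ```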

    https://github.com/nebuly-ai/nos

    Let me know your thoughts on the project in the comments. And don't forget to leave a star on GitHub if you like the project :)

    Nos addresses some key challenges of Kubernetes tied to the fact that Kubernetes was not designed to support GPU and AI / machine learning workloads. In Kubernetes, GPUs are managed with the Nvidia k8s Device Plugin [3], which has a few major downsides. First, it requires allocating an integer number of GPUs per workload, not allowing workloads to request only a fraction of a GPU. Second, when GPU sharing is enabled either with time-slicing or MIG, the device plugin advertises to Kubernetes a fixed set of GPU resources that does not dynamically adapt to the Pods' requests over time.
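    For comparison, with the stock device plugin the `nvidia.com/gpu` resource must be a whole number, so even a tiny job claims a full device (Pod name and image are illustrative):

    ```yaml
    # With the standard Nvidia device plugin, nvidia.com/gpu must be an
    # integer: this Pod occupies an entire GPU even if it only needs a
    # fraction of its memory and compute.
    apiVersion: v1
    kind: Pod
    metadata:
      name: small-job
    spec:
      containers:
        - name: worker
          image: nvcr.io/nvidia/pytorch:23.01-py3
          resources:
            limits:
              nvidia.com/gpu: 1
    ```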

    This often leads to both underutilized GPUs and pending Pods, and/or the cluster admin having to spend a lot of time looking for workarounds to make the best use of GPUs.

    For example, consider a company with a k8s cluster of 20 GPUs, where 3 of those GPUs have been reserved for the data science team using Resource Quota objects. In most cases, the workloads of data scientists (notebooks, scripts, etc.) require far less memory/compute than an entire GPU, yet Kubernetes forces each container to consume a whole GPU. Also, if the team occasionally needs to run a heavy workload, it may want to use as many resources as possible. However, the Resource Quota over its namespace would constrain the team to at most its 3 reserved GPUs, even if the company's cluster may be full of unused GPUs!

    Instead, with nos the data science team would use nos Dynamic GPU Partitioning to request GPU slices so that many workloads can share the same GPU. Also, Elastic Resource Quotas would allow the team to consume more than the 3 reserved GPUs, borrowing quotas from other teams that are not using them. To recap, the team would be able to launch more Pods and the company would likely need fewer nodes. All this with minimal effort required by the cluster admin, who only has to set up nos.
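    A sketch of what the elastic quota for that data science team might look like — the API version and field names here are illustrative, assuming a CRD with a guaranteed minimum (the 3 reserved GPUs) and a ceiling on how much can be borrowed from other teams' unused quotas:

    ```yaml
    # Illustrative ElasticQuota for the data science namespace:
    # "min" is the guaranteed reservation; "max" caps how far the team
    # may go beyond it by borrowing other teams' unused quota.
    apiVersion: nos.nebuly.com/v1alpha1   # version string is illustrative
    kind: ElasticQuota
    metadata:
      name: data-science
      namespace: data-science
    spec:
      min:
        nvidia.com/gpu: 3
      max:
        nvidia.com/gpu: 10
    ```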

    Let me know what you think of nos, feedback would be very helpful! :) And please leave a star on GitHub if you like this open-source project: https://github.com/nebuly-ai/nos

    Here are some other links that may be useful

    - Tutorial on how to use Dynamic GPU Partitioning with Nvidia MIG https://towardsdatascience.com/dynamic-mig-partitioning-in-k...

  • Introducing Nos - Opensource to Maximize GPU Utilization in Kubernetes (more in the comments)
    1 project | /r/kubernetes | 31 Jan 2023
    1 project | /r/programming | 30 Jan 2023
  • Opensource to maximize GPU utilization in Kubernetes
    1 project | /r/u_galaxy_dweller | 30 Jan 2023
    Let me know what you think of nos, feedback would be very helpful! :) And please leave a star on GitHub if you like this open-source project: https://github.com/nebuly-ai/nos
  • New Opensource to Maximize GPU Utilization in Kubernetes
    1 project | /r/opensource | 30 Jan 2023
  • Show HN: Nos – Open-Source to Maximize GPU Utilization in Kubernetes
    2 projects | news.ycombinator.com | 30 Jan 2023
  • An open-source to train faster deep learning models
    1 project | /r/programming | 28 Jun 2022

pytorch-accelerated

Posts with mentions or reviews of pytorch-accelerated. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-21.
  • I highly and genuinely recommend Fast.ai course to beginners
    2 projects | /r/learnmachinelearning | 21 Jun 2022
    I would love to know your thoughts on PyTorch Lightning vs. other, even more lightweight libraries, if you have the time. PL strikes me as being less idiosyncratic than FastAI, but I'm still not sure whether it would be better in engineering work to go even more lightweight (when I'm not just writing the code myself) -- something that offers up just optimizations and a trainer, a la MosaicML's [Composer](https://github.com/mosaicml/composer) or Chris Hughes's [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated).

What are some alternatives?

When comparing nos and pytorch-accelerated you can also consider the following projects:

gpu-operator - NVIDIA GPU Operator creates/configures/manages GPUs atop Kubernetes

composer - Supercharge Your Model Training

nebuly - The user analytics platform for LLMs

pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers

gosl - Linear algebra, eigenvalues, FFT, Bessel, elliptic, orthogonal polys, geometry, NURBS, numerical quadrature, 3D transfinite interpolation, random numbers, Mersenne twister, probability distributions, optimisation, differential equations.

PPO-PyTorch - Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch

k8s-device-plugin - NVIDIA device plugin for Kubernetes

avalanche - Avalanche: an End-to-End Library for Continual Learning based on PyTorch.

metagpu - K8s device plugin for GPU sharing

Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]

serve - Serve, optimize and scale PyTorch models in production

Machine-Learning-Collection - A resource for learning about Machine learning & Deep Learning