Python GPU

Open-source Python projects categorized as GPU

Top 23 Python GPU Projects

  • PyTorch

    Tensors and Dynamic neural networks in Python with strong GPU acceleration

  • Project mention: Tinygrad: Hacked 4090 driver to enable P2P | news.ycombinator.com | 2024-04-12

    FYI, this should work on most 40xx cards [1].

    [1] https://github.com/pytorch/pytorch/issues/119638#issuecommen...
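
    For readers new to the library, a minimal sketch of what the GPU acceleration looks like in practice (assumes a CUDA-capable GPU; it falls back to the CPU otherwise):

    ```python
    import torch

    # Use the GPU if one is visible, otherwise fall back to the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    x = torch.randn(1024, 1024, device=device)
    y = x @ x.T  # the matrix multiply runs on the selected device
    print(y.device, y.shape)
    ```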

  • DeepSpeed

    DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

  • Project mention: Can we discuss MLOps, Deployment, Optimizations, and Speed? | /r/LocalLLaMA | 2023-12-06

    DeepSpeed can handle parallelism concerns, and it can even offload data/model to RAM, or even to NVMe (!). I'm surprised I don't see this project used more.
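
    A hedged sketch of the RAM/NVMe offload the comment refers to, using a ZeRO stage-3 config passed to deepspeed.initialize; the model, batch size, and NVMe path are placeholder assumptions, and in practice the script is started with the deepspeed launcher:

    ```python
    import torch
    import deepspeed

    model = torch.nn.Linear(4096, 4096)  # stand-in for a real network

    # ZeRO-3: parameters offloaded to NVMe, optimizer state offloaded to CPU RAM.
    ds_config = {
        "train_micro_batch_size_per_gpu": 1,
        "zero_optimization": {
            "stage": 3,
            "offload_param": {"device": "nvme", "nvme_path": "/local_nvme"},  # placeholder path
            "offload_optimizer": {"device": "cpu"},
        },
    }

    engine, optimizer, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=ds_config
    )
    ```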

  • ivy

    The Unified AI Framework

  • Project mention: Keras 3.0 | news.ycombinator.com | 2023-11-28

    See also https://github.com/unifyai/ivy, which I have not tried but which seems along the lines of what you are describing: working with all the major frameworks.
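
    A hedged sketch of the framework-agnostic idea, assuming Ivy's set_backend API; the backend name and operations are chosen purely for illustration:

    ```python
    import ivy

    # The same code can target different frameworks by switching the backend.
    ivy.set_backend("numpy")  # or "torch", "tensorflow", "jax"

    x = ivy.array([1.0, 2.0, 3.0])
    print(ivy.mean(x))
    ```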

  • tvm

    Open deep learning compiler stack for CPU, GPU, and specialized accelerators

  • Project mention: Making AMD GPUs competitive for LLM inference | news.ycombinator.com | 2023-08-09

    Yes, this is coming! I and others at OctoML and in the TVM community are actively working on multi-GPU support in the compiler and runtime. Here are some of the merged and active PRs on the multi-GPU (multi-device) roadmap:

    Support in TVM’s graph IR (Relax) - https://github.com/apache/tvm/pull/15447

  • scalene

    Scalene: a high-performance, high-precision CPU, GPU, and memory profiler for Python with AI-powered optimization proposals

  • Project mention: Memray – A Memory Profiler for Python | news.ycombinator.com | 2024-02-10

    I collected a list of profilers (also memory profilers, also specifically for Python) here: https://github.com/albertz/wiki/blob/master/profiling.md

    Currently I actually need a Python memory profiler, because I want to figure out whether there is some memory leak in my application (PyTorch based training script), and where exactly (in this case, it's not a problem of GPU memory, but CPU memory).

    I tried Scalene (https://github.com/plasma-umass/scalene), which seems to be powerful, but somehow the output it gives me is not useful at all? It doesn't really give me a flamegraph, or a list of the top lines with memory allocations, but instead it gives me a listing of all source code lines, and prints some (very sparse) information on each line. So I need to search through that listing now by hand to find the spots? Maybe I just don't know how to use it properly.

    I tried Memray, but first ran into an issue (https://github.com/bloomberg/memray/issues/212); after using a workaround, it now works. I get a flamegraph out, but it doesn't really seem accurate: after a while, there don't seem to be any new memory allocations at all anymore, and I don't quite trust that this is correct.

    There is also Austin (https://github.com/P403n1x87/austin), which I also wanted to try (have not yet).

    Somehow this experience so far was very disappointing.

    (Side note: I debugged some very strange memory allocation behavior of Python before, where all local variables were kept around after an exception, even though I made sure there was no reference anymore to the exception object, to the traceback, etc., and I even called frame.clear() for all frames to really clear them. It turns out that frame.f_locals creates another copy of all the local variables, and the exception object and all the locals in the other frame stay alive until you access frame.f_locals again. At that point, it syncs f_locals with the real (fast) locals again, and then it can finally free everything. It was quite annoying to find the source of this problem and to find workarounds for it. https://github.com/python/cpython/issues/113939)
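
    One way to make Scalene's per-line listing more tractable is to profile only a suspect region; a hedged sketch using its programmatic start/stop switches (run under `python -m scalene --memory --off script.py`; the allocation loop is a stand-in for a real training step):

    ```python
    from scalene import scalene_profiler

    def suspected_leak():
        # Stand-in for the training step believed to be leaking CPU memory.
        return [bytearray(1024 * 1024) for _ in range(64)]

    # With --off, profiling is disabled until start() is called, so the report
    # only covers the bracketed region.
    scalene_profiler.start()
    kept = [suspected_leak() for _ in range(10)]
    scalene_profiler.stop()
    ```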

  • ImageAI

    A Python library built to empower developers to build applications and systems with self-contained computer vision capabilities

  • Project mention: Photo gallery frontend with encryption and search | /r/selfhosted | 2023-11-27

    Hi. I want to implement an image server similar to PhotoPrism, using ImageAI to tag images based on objects and context. However, I don't want to spend too much time working on the frontend; at first I was thinking about using Danbooru with Flexbooru or the web interface on my phone, but it doesn't have any encryption or password protection (since it is meant to be used as a public image board).
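
    A hedged sketch of the object tagging the poster describes, using ImageAI's ObjectDetection class; the checkpoint file and image paths are placeholder assumptions:

    ```python
    from imageai.Detection import ObjectDetection

    detector = ObjectDetection()
    detector.setModelTypeAsRetinaNet()
    detector.setModelPath("retinanet_coco.pth")  # placeholder: a downloaded RetinaNet checkpoint
    detector.loadModel()

    # Detect objects in one photo and collect the labels to use as tags.
    detections = detector.detectObjectsFromImage(
        input_image="photo.jpg", output_image_path="photo_annotated.jpg"
    )
    tags = sorted({d["name"] for d in detections})
    print(tags)
    ```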

  • cupy

    NumPy & SciPy for GPU

  • Project mention: CuPy: NumPy and SciPy for GPU | news.ycombinator.com | 2023-11-28
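
    A minimal sketch of the drop-in NumPy-style usage (assumes a CUDA GPU and a matching cupy wheel installed):

    ```python
    import cupy as cp

    x = cp.arange(6, dtype=cp.float32).reshape(2, 3)  # array lives in GPU memory
    y = cp.linalg.norm(x)                             # computed on the GPU
    print(float(y))                                   # copy the scalar back to the host
    ```
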
  • catboost

    A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.

  • Project mention: CatBoost: Open-source gradient boosting library | news.ycombinator.com | 2024-03-05
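
    The GPU support mentioned in the description is a constructor flag; a minimal sketch on synthetic data (drop task_type to train on the CPU instead):

    ```python
    import numpy as np
    from catboost import CatBoostClassifier

    X = np.random.rand(1000, 10)
    y = (X[:, 0] > 0.5).astype(int)

    # task_type="GPU" moves training to the GPU; devices selects which card(s) to use.
    model = CatBoostClassifier(iterations=200, task_type="GPU", devices="0", verbose=False)
    model.fit(X, y)
    print(model.predict(X[:5]))
    ```
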
  • AlphaPose

    Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System

  • server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)

  • Project mention: FLaNK Weekly 08 Jan 2024 | dev.to | 2024-01-08
  • chainer

    A flexible framework of neural networks for deep learning

  • Project mention: ChaiNNer – Node/Graph based image processing and AI upscaling GUI | news.ycombinator.com | 2023-07-19

    There is already an AI framework named Chainer: https://github.com/chainer/chainer

  • skypilot

    SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.

  • Project mention: Ask HN: Most efficient way to fine-tune an LLM in 2024? | news.ycombinator.com | 2024-04-04
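
    A hedged sketch of the "simple interface" using SkyPilot's Python API; the accelerator type and the setup/run commands are placeholder assumptions (the YAML interface is equivalent):

    ```python
    import sky

    # Describe the job: set up the environment once, then run the training command.
    task = sky.Task(
        setup="pip install -r requirements.txt",
        run="python train.py",
    )
    task.set_resources(sky.Resources(accelerators="A100:1"))

    # Launch on whichever cloud currently has the requested GPU available.
    sky.launch(task, cluster_name="dev")
    ```
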
  • tf-quant-finance

    High-performance TensorFlow library for quantitative finance.

  • Project mention: tf-quant-finance: NEW Derivatives and Hedging - star count:3911.0 | /r/algoprojects | 2023-06-10
  • nvitop

    An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.

  • Project mention: Nvtop: Linux Task Monitor for Nvidia, AMD and Intel GPUs | news.ycombinator.com | 2024-03-12

    That's why the authors recommend pipx for installing nvitop. I am not a sysadmin, but I prefer pipx over relying on the (often outdated) distro sources.

    https://github.com/XuehaiPan/nvitop?tab=readme-ov-file#insta...
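
    Beyond the interactive viewer, nvitop also exposes a Python API; a hedged sketch of querying devices programmatically (method names follow the nvitop README; treat them as assumptions if your version differs):

    ```python
    from nvitop import Device

    for device in Device.all():
        # Per-GPU utilization and memory, the same numbers the TUI shows.
        print(device.index, device.name(),
              f"util={device.gpu_utilization()}%",
              f"mem={device.memory_percent()}%")
    ```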

  • gpustat

    📊 A simple command-line utility for querying and monitoring GPU status

  • Project mention: Nvtop: Linux Task Monitor for Nvidia, AMD and Intel GPUs | news.ycombinator.com | 2024-03-12

    My favorite would be gpustat [1]. It shows the bare minimum of information needed to let me know whether training is running well or having problems.

    [1] https://github.com/wookayin/gpustat
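
    gpustat can also be used as a library; a hedged sketch of polling it from a training script (new_query is from its README; the attribute names are assumptions):

    ```python
    import gpustat

    # One query returns the same information the CLI prints, as Python objects.
    for gpu in gpustat.new_query():
        # Attribute names assumed from the gpustat documentation.
        print(gpu.index, gpu.name, f"{gpu.utilization}%",
              f"{gpu.memory_used}/{gpu.memory_total} MB")
    ```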

  • pytorch-forecasting

    Time series forecasting with PyTorch

  • Project mention: FLaNK Stack Weekly for 14 Aug 2023 | dev.to | 2023-08-14
  • jittor

    Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.

  • Project mention: VSL; Vlang's Scientific Library | /r/vlang | 2023-06-09

    Would it make sense to have backend support for OpenXLA, Apache TVM, Jittor, or something similar, to get GPU, TPU, and other accelerator support for free?

  • asitop

    Perf monitoring CLI tool for Apple Silicon

  • Project mention: Nvtop: Linux Task Monitor for Nvidia, AMD and Intel GPUs | news.ycombinator.com | 2024-03-12

    There’s also asitop https://github.com/tlkh/asitop

  • leptonai

    A Pythonic framework to simplify AI service building

  • Project mention: Show HN: Running LLMs in one line of Python without Docker | news.ycombinator.com | 2023-10-04

    Hello Hacker News! We're Yangqing, Xiang and JJ from lepton.ai. We are building a platform to run any AI model as easily as writing local code, and to get your favorite models running in minutes. It's like a container for AI, but without the hassle of actually building a Docker image.

    We built and contributed to some of the world's most popular AI software - PyTorch 1.0, ONNX, Caffe, etcd, Kubernetes, etc. We also managed hundreds of thousands of computers in our previous jobs. And we found that the AI software stack is usually unnecessarily complex - and we want to change that.

    Imagine you are a developer who sees a good model on GitHub or HuggingFace. To make it a production-ready service, the current solution usually requires you to build a Docker image. But think about it: it's a few Python files and a few Python dependencies. That sounds like a huge overhead, right?

    lepton.ai is really a Pythonic way to free you from such difficulties. You write a simple Python scaffold around your PyTorch / TensorFlow code, and Lepton launches it as a full-fledged service callable via Python, JavaScript, or any language that understands OpenAPI. We use containers under the hood, but you don't need to worry about all the infrastructure nuts and bolts.

    We have made the Python library open source at https://github.com/leptonai/leptonai/. With it, launching a common HuggingFace model is as simple as a one-liner. For example, if you have a GPU, Stable Diffusion XL is as simple as:

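    The one-liner itself is cut off in the quoted post; purely as a hedged stand-in, here is the Photon service scaffold following the leptonai README pattern, with the class and handler names chosen for illustration rather than taken from the original snippet:

    ```python
    from leptonai.photon import Photon

    class Echo(Photon):
        # A real deployment would wrap a HuggingFace / PyTorch pipeline here;
        # this trivial handler only shows the service scaffold.
        @Photon.handler
        def run(self, text: str) -> str:
            return text
    ```

    Saved to a file, a class like this is what the lep CLI packages and launches as an OpenAPI-described service.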

  • pygraphistry

    PyGraphistry is a Python library to quickly load, shape, embed, and explore big graphs with the GPU-accelerated Graphistry visual graph analyzer

  • Project mention: Graph Data Fits in Memory | news.ycombinator.com | 2024-04-15

    Extra fun: we find most enterprise/gov graph analytics work only requires 1-2 attributes to go along with the graph index, and those attributes often are already numeric (time, $, ...) or can be dictionary-encoded as discussed here (categorical, ID, ...)... so even 'tough' billion-scale graphs are fine on one GPU.

    Early, but that's been the basic thinking behind our new GFQL system: slice into the columns you want, and then do all the in-GPU traversals you want. In our V1, we keep things dataframe-native, including the in-GPU data representation, and we are already working on the first extensions to support switching to more graph-native indexing for steps as needed.

    Ex: https://github.com/graphistry/pygraphistry/blob/master/demos...
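
    A hedged sketch of the dataframe-native flow the comment describes, using PyGraphistry's GFQL chain API; the column names and the tiny edge table are placeholder assumptions, and the GPU path additionally requires cuDF:

    ```python
    import pandas as pd
    import graphistry
    from graphistry import n, e_forward

    edges = pd.DataFrame({
        "src": ["a", "b", "c"],
        "dst": ["b", "c", "a"],
        "amount": [10.0, 250.0, 40.0],  # the one or two numeric attributes kept alongside the index
    })

    g = graphistry.edges(edges, "src", "dst")

    # Two-hop forward traversal from every node, staying in dataframe land.
    hits = g.chain([n(), e_forward(hops=2), n()])
    print(hits._edges)
    ```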

  • jetson_stats

    📊 A simple package for monitoring and controlling your NVIDIA Jetson [Orin, Xavier, Nano, TX] series
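
    For scripting on-device, jetson_stats also ships a Python interface (jtop); a hedged sketch, assuming the jtop service is running on the Jetson:

    ```python
    from jtop import jtop

    # Read one snapshot of the board's stats (CPU/GPU load, temperatures, power).
    with jtop() as jetson:
        if jetson.ok():
            print(jetson.stats)
    ```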

  • PyCUDA

    CUDA integration for Python, plus shiny features
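
    A minimal sketch of the GPU-array side of the integration (assumes a working CUDA toolkit; custom kernels compiled via SourceModule follow the same pattern):

    ```python
    import numpy as np
    import pycuda.autoinit            # creates a CUDA context on import
    import pycuda.gpuarray as gpuarray

    a = gpuarray.to_gpu(np.random.randn(4, 4).astype(np.float32))
    doubled = (2 * a).get()           # elementwise multiply on the GPU, then copy back
    print(doubled)
    ```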

  • torchrec

    PyTorch domain library for recommendation systems

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020). The latest post mention was on 2024-04-15.

Index

What are some of the best open-source GPU projects in Python? This list will help you:

Rank  Project               Stars
   1  PyTorch              77,544
   2  DeepSpeed            32,447
   3  ivy                  14,016
   4  tvm                  11,130
   5  scalene              11,125
   6  ImageAI               8,383
   7  cupy                  7,753
   8  catboost              7,731
   9  AlphaPose             7,701
  10  server                7,277
  11  chainer               5,861
  12  skypilot              5,602
  13  tf-quant-finance      4,259
  14  nvitop                3,899
  15  gpustat               3,830
  16  pytorch-forecasting   3,578
  17  jittor                2,987
  18  asitop                2,797
  19  leptonai              2,419
  20  pygraphistry          2,044
  21  jetson_stats          2,018
  22  PyCUDA                1,740
  23  torchrec              1,719