Tensor

Top 23 Tensor Open-Source Projects

  • Pytorch

    Tensors and Dynamic neural networks in Python with strong GPU acceleration

  • Project mention: My Favorite DevTools to Build AI/ML Applications! | dev.to | 2024-04-23

    TensorFlow, developed by Google, and PyTorch, developed by Facebook, are two of the most popular frameworks for building and training complex machine learning models. TensorFlow is known for its flexibility and robust scalability, making it suitable for both research prototypes and production deployments. PyTorch is praised for its ease of use, simplicity, and dynamic computational graph that allows for more intuitive coding of complex AI models. Both frameworks support a wide range of AI models, from simple linear regression to complex deep neural networks.
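To make "dynamic computational graph" concrete, here is a minimal, hypothetical sketch in plain Python (no PyTorch dependency) of the define-by-run idea: the graph of operations is recorded as the code executes, so ordinary Python control flow can shape it freely.

```python
class Value:
    """A scalar node in a graph that is built as operations run (define-by-run)."""
    def __init__(self, data, parents=()):
        self.data = data
        self.parents = parents      # edges recorded at execution time
        self.grad_fn = None         # how to push gradients back to parents
        self.grad = 0.0

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        out.grad_fn = lambda g: (other.data * g, self.data * g)
        return out

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        out.grad_fn = lambda g: (g, g)
        return out

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn:
            for parent, pg in zip(self.parents, self.grad_fn(g)):
                parent.backward(pg)

x = Value(3.0)
y = x * x + x          # the graph is created by ordinary Python execution
y.backward()
print(x.grad)          # dy/dx = 2x + 1 = 7.0
```

PyTorch's autograd does essentially this at tensor granularity, which is why loops and branches over tensors "just work".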

  • tvm

    Open deep learning compiler stack for cpu, gpu and specialized accelerators

  • Project mention: Making AMD GPUs competitive for LLM inference | news.ycombinator.com | 2023-08-09

    Yes, this is coming! Myself and others at OctoML and in the TVM community are actively working on multi-gpu support in the compiler and runtime. Here are some of the merged and active PRs on the multi-GPU (multi-device) roadmap:

    Support in TVM’s graph IR (Relax) - https://github.com/apache/tvm/pull/15447

  • einops

    Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)

  • Project mention: Einops: Flexible and powerful tensor operations for readable and reliable code | news.ycombinator.com | 2023-12-12
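The pattern-string notation is einops' main contribution. As an illustration (using raw NumPy rather than einops itself), the pattern `'b h w c -> b (h w) c'` is just a reshape, and `'b h w c -> b c h w'` a transpose, but the string documents the axes explicitly:

```python
import numpy as np

# A batch of 2 RGB "images", 4x4 pixels each: shape (b, h, w, c).
x = np.arange(2 * 4 * 4 * 3).reshape(2, 4, 4, 3)

# einops would write the spatial flatten as:
#   rearrange(x, 'b h w c -> b (h w) c')
# The equivalent raw NumPy, with the intent hidden in positional numbers:
flat = x.reshape(2, 4 * 4, 3)

# And 'b h w c -> b c h w' (channels-first) is a transpose:
chw = x.transpose(0, 3, 1, 2)

print(flat.shape, chw.shape)   # (2, 16, 3) (2, 3, 4, 4)
```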
  • cupy

    NumPy & SciPy for GPU

  • Project mention: CuPy: NumPy and SciPy for GPU | news.ycombinator.com | 2023-11-28
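Because CuPy mirrors the NumPy API, a common pattern is to bind the array module once and write backend-agnostic code; this sketch falls back to NumPy when CuPy or a GPU is unavailable:

```python
# Pick the array module once; the rest of the code is identical either way.
try:
    import cupy as xp
except ImportError:
    import numpy as xp

a = xp.arange(6.0).reshape(2, 3)
result = xp.sum(a * 2.0)    # same call whether xp is cupy or numpy

# float(...) works for both NumPy scalars and CuPy device scalars.
print(float(result))        # 30.0
```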
  • burn

    Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.

  • Project mention: Transitioning From PyTorch to Burn | dev.to | 2024-02-14

    [package]
    name = "resnet_burn"
    version = "0.1.0"
    edition = "2021"

    [dependencies]
    burn = { git = "https://github.com/tracel-ai/burn.git", rev = "75cb5b6d5633c1c6092cf5046419da75e7f74b11", features = ["ndarray"] }
    burn-import = { git = "https://github.com/tracel-ai/burn.git", rev = "75cb5b6d5633c1c6092cf5046419da75e7f74b11" }
    image = { version = "0.24.7", features = ["png", "jpeg"] }

  • MegEngine

    MegEngine is a fast, scalable, easy-to-use deep learning framework with support for automatic differentiation

  • mars

    Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions.

  • nx

    Multi-dimensional arrays (tensors) and numerical definitions for Elixir (by elixir-nx)

  • Project mention: Unpacking Elixir: Concurrency | news.ycombinator.com | 2023-08-25

    Does nx not work for you? https://github.com/elixir-nx/nx/tree/main/nx#readme

  • DataFrame

    C++ DataFrame for statistical, Financial, and ML analysis -- in modern C++ using native types and contiguous memory storage

  • Project mention: New multithreaded version of C++ DataFrame was released | news.ycombinator.com | 2024-02-13
  • awesome-tensor-compilers

    A list of awesome compiler projects and papers for tensor computation and deep learning.

  • Project mention: MatX: Faster Chips for LLMs | news.ycombinator.com | 2023-08-05

    > So long as Pytorch only practically works with Nvidia GPUs, everything else is little more than a rounding error.

    This is changing.

    https://github.com/merrymercy/awesome-tensor-compilers

    There are more and better projects that can compile an existing PyTorch codebase into a more optimized format for a range of devices. Triton (which is part of PyTorch), TVM, and the MLIR-based efforts (like Torch-MLIR or IREE) are big ones, but there are smaller fish like GGML and Tinygrad, or more narrowly focused projects like Meta's AITemplate (which works on AMD datacenter GPUs).

    Hardware is in a strange place now... It feels like everyone but Cerebras and AMD/Intel was squeezed out, but with all the money pouring in, I think this is temporary.

  • dfdx

    Deep learning in Rust, with shape checked tensors and neural networks

  • Project mention: Shape Typing in Python | news.ycombinator.com | 2024-04-13
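dfdx checks tensor shapes at compile time via Rust's type system; the Python analogue from the linked thread is shape typing. A minimal, hypothetical runtime sketch of the same contract (names like `expect_shape` are made up for illustration):

```python
import numpy as np

def expect_shape(**shapes):
    """Hypothetical decorator: check named array arguments against declared
    shapes, where None in a position means 'any size' for that axis."""
    def deco(fn):
        def wrapper(**kwargs):
            for name, want in shapes.items():
                got = kwargs[name].shape
                ok = len(got) == len(want) and all(
                    w is None or w == g for w, g in zip(want, got))
                if not ok:
                    raise TypeError(f"{name}: expected shape {want}, got {got}")
            return fn(**kwargs)
        return wrapper
    return deco

@expect_shape(w=(3, 2), x=(None, 3))
def forward(w, x):
    return x @ w   # (batch, 3) @ (3, 2) -> (batch, 2)

out = forward(w=np.ones((3, 2)), x=np.ones((5, 3)))
print(out.shape)   # (5, 2)
```

Static checkers (and dfdx's type-level shapes) catch the mismatch before the program runs at all, which is the appeal.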
  • hyperlearn

    2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.

  • Project mention: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning | news.ycombinator.com | 2023-12-01

    Good point - the main issue is we encountered this exact issue with our old package Hyperlearn (https://github.com/danielhanchen/hyperlearn).

    I OSSed all the code to the community - I'm actually an extremely open person and I love contributing to the OSS community.

    The issue was the package got gobbled up by other startups and big tech companies with no credit - I didn't want any cash from it, but it stung and hurt really bad hearing other startups and companies claim it was them who made it faster, whilst it was actually my work. It hurt really bad - as an OSS person, I don't want money, but just some recognition for the work.

    I also used to accept and help everyone with writing their startup's software, but I never got paid or even thanked - sadly I didn't expect the world to be such a hostile place.

    So after a sad awakening, I decided with my brother instead of OSSing everything, we would first OSS something which is still very good - 5X faster training is already very reasonable.

    I'm all open to other suggestions on how we should approach this though! There are no evil intentions - in fact I insisted we OSS EVERYTHING even the 30x faster algos, but after a level headed discussion with my brother - we still have to pay life expenses no?

    If you have other ways we can go about this - I'm all ears!! We're literally making stuff up as we go along!

  • Arraymancer

    A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends

  • Project mention: Arraymancer – Deep Learning Nim Library | news.ycombinator.com | 2024-03-28

    It is a small DSL written using macros at https://github.com/mratsim/Arraymancer/blob/master/src/array....

    Nim has pretty great meta-programming capabilities, and Arraymancer employs some cool features like emitting CUDA kernels on the fly using standard templates, depending on the backend!

  • taco

    The Tensor Algebra Compiler (taco) computes sparse tensor expressions on CPUs and GPUs

  • Project mention: The Distributed Tensor Algebra Compiler (2022) | news.ycombinator.com | 2023-06-15

    I agree! Much of this work was done as part of the overarching TACO project (https://github.com/tensor-compiler/taco), in an attempt to distribute sparse tensor computations (https://rohany.github.io/publications/sc2022-spdistal.pdf). MLIR recently (~mid 2022) began implementing the ideas from TACO into a "sparse tensor" dialect, so perhaps some of these ideas could make it into there. I'm working with MLIR these days, and if I could re-do the project now I would probably utilize and target the MLIR linalg infrastructure!
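The essence of what a sparse tensor compiler generates is a loop nest that touches only stored non-zeros. A hand-written illustration (not taco output) for the matrix-vector product y(i) = A(i,j) * x(j) over a COO-format matrix:

```python
def spmv_coo(rows, cols, vals, x, n_rows):
    """y(i) = A(i,j) * x(j), iterating only over stored non-zeros (COO format)."""
    y = [0.0] * n_rows
    for i, j, v in zip(rows, cols, vals):
        y[i] += v * x[j]
    return y

# A = [[2, 0, 0],
#      [0, 0, 3],
#      [0, 4, 0]]  stored as (row, col, value) coordinates
rows, cols, vals = [0, 1, 2], [0, 2, 1], [2.0, 3.0, 4.0]
y = spmv_coo(rows, cols, vals, x=[1.0, 1.0, 1.0], n_rows=3)
print(y)   # [2.0, 3.0, 4.0]
```

taco's contribution is deriving such kernels automatically from the tensor-algebra expression and the chosen storage formats, for arbitrary expressions and dimensions.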

  • egison

    The Egison Programming Language

  • Project mention: The Egison Programming Language | /r/patient_hackernews | 2023-04-29
  • dlpack

    common in-memory tensor structure

  • Project mention: Beginner projects/resources to learn about ML Compilers | /r/Compilers | 2023-04-27

    For tensor layout, this is the standard for all libraries: https://github.com/dmlc/dlpack/blob/main/include/dlpack/dlpack.h
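DLPack's point is zero-copy tensor exchange between libraries. NumPy (1.22+) implements the protocol, so a round trip can be demonstrated with NumPy alone (assuming a reasonably recent NumPy):

```python
import numpy as np

# DLPack lets frameworks hand each other tensors without copying.
x = np.arange(12, dtype=np.float32).reshape(3, 4)
y = np.from_dlpack(x)          # consumes x.__dlpack__() under the hood

print(np.shares_memory(x, y))  # True: same buffer, zero copy
```

The same `from_dlpack` entry point exists in PyTorch, CuPy, JAX, and TensorFlow, which is what makes the struct in dlpack.h a de facto interchange standard.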

  • opt_einsum

    ⚡️Optimizing einsum functions in NumPy, Tensorflow, Dask, and more with contraction order optimization.
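Contraction order is the whole game here. For a chain like (1000×10)·(10×1000)·(1000×10), multiplying left-to-right materializes a 1000×1000 intermediate (~40M flops), while contracting the right pair first keeps intermediates tiny (~400K flops). opt_einsum, and NumPy's `einsum` with `optimize=True`, find such orderings automatically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 10))
B = rng.standard_normal((10, 1000))
C = rng.standard_normal((1000, 10))

# Same expression, same result; optimize=True reorders the contraction so the
# huge (1000, 1000) intermediate of (A @ B) is never formed.
naive = np.einsum('ij,jk,kl->il', A, B, C, optimize=False)
opt   = np.einsum('ij,jk,kl->il', A, B, C, optimize=True)

print(np.allclose(naive, opt))   # True: identical result, ~100x cheaper
```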

  • executorch

    On-device AI across mobile, embedded and edge for PyTorch

  • Project mention: ExecuTorch: Enabling On-Device Inference for embedded devices | news.ycombinator.com | 2023-10-17

    Yes, ExecuTorch is currently targeted at edge devices. The runtime is written in C++ with a 50KB binary size (without kernels) and should run on most platforms. You are right that we have not integrated an Nvidia backend yet. Have you tried torch.compile() in PyTorch 2.0? It does the Nvidia optimization for you without TorchScript. If you have a specific binary-size or edge-specific request, feel free to file issues at https://github.com/pytorch/executorch/issues

  • norse

    Deep learning with spiking neural networks (SNNs) in PyTorch.

  • Project mention: Neuromorphic learning, working memory, and metaplasticity in nanowire networks | news.ycombinator.com | 2023-04-24

    This gives you a ludicrous advantage over current neural net accelerators - specifically 3-5 orders of magnitude in energy and time, as demonstrated in the BrainScaleS system https://www.humanbrainproject.eu/en/science-development/focu...

    Unfortunately, that doesn't solve the problem of learning. Just because you can build efficient neuromorphic systems doesn't mean that we know how to train them. Briefly put, the problem is that a physical system has physical constraints. You can't just read the global state in NWN and use gradient descent as we would in deep learning. Rather, we have to somehow use local signals to approximate local behaviour that's helpful on a global scale. That's why they use Hebbian learning in the paper (what fires together, wires together), but it's tricky to get right and I haven't personally seen examples that scale to systems/problems of "interesting" sizes. This is basically the frontier of the field: we need local, but generalizable, learning rules that are stable across time and compose freely into higher-order systems.
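The Hebbian rule mentioned above is purely local: a weight changes only as a function of its own pre- and post-synaptic activity, with no global gradient. A toy sketch (illustrative only, not how norse or the paper implement it):

```python
def hebbian_step(w, pre, post, lr=0.1):
    """One Hebbian update: dw[i][j] = lr * post[i] * pre[j].
    Uses only locally available activity -- no global error signal."""
    return [[wij + lr * post[i] * pre[j] for j, wij in enumerate(row)]
            for i, row in enumerate(w)]

w = [[0.0, 0.0], [0.0, 0.0]]
pre, post = [1.0, 0.0], [0.0, 1.0]   # pre-neuron 0 fires with post-neuron 1
w = hebbian_step(w, pre, post)
print(w)   # [[0.0, 0.0], [0.1, 0.0]] -- only the co-active pair strengthens
```

The difficulty the comment describes is exactly that rules this local tend to be unstable or fail to compose once the network gets to "interesting" sizes.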

    Regarding educational material, I'm afraid I haven't seen great entries for learning about SNNs in full generality. I co-author a simulator (https://github.com/norse/norse/) based on PyTorch with a few notebook tutorials (https://github.com/norse/notebooks) that may be helpful.

    I'm actually working on some open resources/course material for neuromorphic computing. So if you have any wishes/ideas, please do reach out. Like, what would a newcomer be looking for specifically?

  • Tullio.jl

  • DiffSharp

    DiffSharp: Differentiable Functional Programming

  • Grassmann.jl

    ⟨Grassmann-Clifford-Hodge⟩ multilinear differential geometric algebra

  • TensorOperations.jl

    Julia package for tensor contractions and related operations

NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).

Index

What are some of the best open-source Tensor projects? This list will help you:

Project Stars
1 Pytorch 77,783
2 tvm 11,156
3 einops 7,897
4 cupy 7,753
5 burn 7,020
6 MegEngine 4,713
7 mars 2,675
8 nx 2,460
9 DataFrame 2,258
10 awesome-tensor-compilers 2,167
11 dfdx 1,600
12 hyperlearn 1,510
13 Arraymancer 1,304
14 taco 1,203
15 egison 900
16 dlpack 849
17 opt_einsum 803
18 executorch 710
19 norse 611
20 Tullio.jl 581
21 DiffSharp 573
22 Grassmann.jl 449
23 TensorOperations.jl 414
