| | PyTorch-Guide | chainer |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 23 | 5,867 |
| Growth | - | 0.1% |
| Activity | 1.8 | 0.0 |
| Last commit | over 2 years ago | 9 months ago |
| Language | Python | Python |
| License | - | MIT License |
- Stars: the number of stars that a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
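The exact formula behind the activity number is not published; as a rough illustration of the idea that "recent commits have higher weight than older ones", here is a toy recency-weighted score with an assumed exponential decay and an assumed 90-day half-life (both parameters are hypothetical, not the site's actual method):

```python
import math
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=90):
    """Toy activity metric: each commit contributes 2^(-age/half_life),
    so a commit made today counts 1.0 and older commits decay toward 0.
    The decay shape and half-life are assumptions for illustration only."""
    score = 0.0
    for d in commit_dates:
        age_days = (today - d).days
        score += math.exp(-age_days * math.log(2) / half_life_days)
    return round(score, 1)

today = date(2024, 1, 1)
recent = [today - timedelta(days=k) for k in (1, 3, 7)]    # active project
stale = [today - timedelta(days=k) for k in (700, 720, 740)]  # dormant project

print(activity_score(recent, today))  # close to 3.0: three nearly full-weight commits
print(activity_score(stale, today))   # close to 0.0: commits older than ~2 years
```

This matches the qualitative pattern in the table above, where a dormant project can hold many stars yet score near 0.0 on activity.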
PyTorch-Guide
- Useful Tools and Programs for Deep Learning with PyTorch
- Cool PyTorch Guide/Wiki
  PyTorch Guide/Wiki: https://github.com/mikeroyal/PyTorch-Guide
chainer
- ChaiNNer – Node/Graph based image processing and AI upscaling GUI
  There is already an AI framework named Chainer: https://github.com/chainer/chainer
- Protip: the upscaler matters a lot
  Sorry, maybe someone could chime in and help, but I use chainer to upscale: https://github.com/chainer/chainer
What are some alternatives?
halutmatmul - Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator
chaiNNer - A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.
NeuralCDE - Code for "Neural Controlled Differential Equations for Irregular Time Series" (Neurips 2020 Spotlight)
leptonai - A Pythonic framework to simplify AI service building
cog - Containers for machine learning
tmu - Implements the Tsetlin Machine, Coalesced Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, and Weighted Tsetlin Machine, with support for continuous features, drop clause, Type III Feedback, focused negative sampling, multi-task classifier, autoencoder, literal budget, and one-vs-one multi-class classifier. TMU is written in Python with wrappers for C and CUDA-based clause evaluation and updating.
bittensor - Internet-scale Neural Networks
XNOR-popcount-GEMM-PyTorch-CPU-CUDA - A PyTorch implementation of real XNOR-popcount (1-bit op) GEMM as a Linear PyTorch extension, supporting both CPU and CUDA
TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
SmallPebble - Minimal deep learning library written from scratch in Python, using NumPy/CuPy.
warp-drive - Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning Framework on a GPU (JMLR 2022)
pytortto - Deep learning from scratch; uses NumPy/CuPy, trains on GPU, and follows the PyTorch API