thinc
jittor
| | thinc | jittor |
|---|---|---|
| Mentions | 4 | 4 |
| Stars | 2,789 | 2,991 |
| Growth | 0.5% | - |
| Latest commit | 3 days ago | 15 days ago |
| Activity | 7.6 | 7.6 |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
thinc
-
JAX – NumPy on the CPU, GPU, and TPU, with great automatic differentiation
Agree, though I wouldn’t call PyTorch a drop-in for NumPy either. CuPy is the drop-in. Excepting some corner cases, you can use the same code for both. Thinc’s ops work with both NumPy and CuPy:
https://github.com/explosion/thinc/blob/master/thinc/backend...
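The "same code for both" point can be sketched without Thinc at all: because CuPy mirrors NumPy's API, backend-agnostic code just takes the array module as a parameter. This is an illustrative sketch of that pattern (the `softmax` helper here is hypothetical, not Thinc's actual ops API), assuming CuPy may or may not be installed:

```python
import numpy as np

try:
    import cupy as cp  # optional GPU backend; code falls back to NumPy-only
except ImportError:
    cp = None

def softmax(xp, x):
    # xp is whichever array module backs the data (numpy or cupy);
    # the body is identical for CPU and GPU arrays.
    e = xp.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x = np.array([[1.0, 2.0, 3.0]])
probs = softmax(np, x)  # runs on CPU
# if cp is not None:
#     probs_gpu = softmax(cp, cp.asarray(x))  # identical code on GPU
```

Apart from a few corner cases (random number generation, some in-place semantics), this is exactly the drop-in behavior described above.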
-
Tinygrad: A simple and powerful neural network framework
I love those tiny DNN frameworks; some examples that I studied in the past (I still use PyTorch for work-related projects):
thinc, by the creators of spaCy: https://github.com/explosion/thinc
-
good examples of functional-like python code that one can study?
thinc - defining neural nets in a functional way. jax - a new deep learning framework that puts the emphasis on functions rather than tensors. I've tested it for a couple of applications and it's really cool: you can write stuff the way you'd write math expressions in papers, using numpy. That speeds up development significantly and makes the code much more readable.
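The "layers as functions, models as composition" style that comment describes can be sketched in plain NumPy. The `chain` helper below is a hypothetical stand-in written for this sketch (Thinc does export a combinator of the same name, but this is not its implementation):

```python
import numpy as np

def chain(*layers):
    """Compose layers left to right: chain(f, g)(x) == g(f(x))."""
    def forward(x):
        for layer in layers:
            x = layer(x)
        return x
    return forward

def linear(W, b):
    # A layer is just a closure over its parameters.
    return lambda x: x @ W + b

relu = lambda x: np.maximum(x, 0)

rng = np.random.default_rng(0)
model = chain(
    linear(rng.normal(size=(4, 8)), np.zeros(8)),
    relu,
    linear(rng.normal(size=(8, 2)), np.zeros(2)),
)

y = model(rng.normal(size=(3, 4)))  # batch of 3 inputs -> shape (3, 2)
```

The appeal is that the model definition reads like the math: a pipeline of pure functions rather than a class hierarchy of stateful modules.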
- thinc - A refreshing functional take on deep learning, compatible with your favorite libraries
jittor
-
VSL; Vlang's Scientific Library
Would it make sense to add backend support for OpenXLA, Apache TVM, Jittor, or similar, to get GPU, TPU, and other accelerator support for free?
- Jittor: High-performance deep learning framework based on JIT and meta-operators
-
Tinygrad: A simple and powerful neural network framework
Very similar idea to Jittor; convolution can definitely be broken down: https://github.com/Jittor/jittor/blob/master/python/jittor/n...
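The "convolution can be broken down" idea is that a convolution decomposes into simpler primitives. A common illustration is the im2col decomposition: gather the input patches, then the convolution becomes a single matrix product. This is a sketch in that spirit, not Jittor's actual reindex/reduce meta-operators:

```python
import numpy as np

def conv2d_im2col(x, w):
    """Valid-padding 2D convolution of a single-channel image x with kernel w,
    expressed as gather (im2col) + matmul instead of a fused conv op."""
    H, W = x.shape
    k = w.shape[0]
    out_h, out_w = H - k + 1, W - k + 1
    # Step 1: gather every k x k patch into a row (the "im2col" step).
    cols = np.empty((out_h * out_w, k * k))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + k, j:j + k].ravel()
    # Step 2: the convolution is now just a matrix-vector product.
    return (cols @ w.ravel()).reshape(out_h, out_w)

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.ones((2, 2))
out = conv2d_im2col(x, w)  # each output cell is the sum of a 2x2 window
```

Decomposing fused ops like this is what lets a JIT compiler fuse, reorder, and specialize them per device instead of shipping one hand-written kernel per op.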
-
How do I deal with ML models taking soooo long to train, when I have to optimize results?
I've found Jittor quite useful: https://github.com/Jittor/jittor
What are some alternatives?
quantulum3 - Library for unit extraction - fork of quantulum for python3
Res2Net-PretrainedModels - (ImageNet pretrained models) The official PyTorch implementation of the TPAMI paper "Res2Net: A New Multi-scale Backbone Architecture"
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
shumai - Fast Differentiable Tensor Library in JavaScript and TypeScript with Bun + Flashlight
horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
vsl - V library to develop Artificial Intelligence and High-Performance Scientific Computations
extending-jax - Extending JAX with custom C++ and CUDA code
tvm - Open deep learning compiler stack for CPU, GPU, and specialized accelerators
dm-haiku - JAX-based neural network library
StylizedNeRF - [CVPR 2022] Code for StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D mutual learning
AIF360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
nnabla - Neural Network Libraries