norse
MegEngine
| | norse | MegEngine |
|---|---|---|
| Mentions | 6 | 5 |
| Stars | 611 | 4,719 |
| Growth | 3.9% | 0.8% |
| Activity | 6.5 | 8.9 |
| Latest commit | 29 days ago | 3 days ago |
| Language | Python | C++ |
| License | GNU Lesser General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
norse
- Neuromorphic learning, working memory, and metaplasticity in nanowire networks
This gives you a ludicrous advantage over current neural net accelerators. Specifically, 3-5 orders of magnitude in energy and time, as demonstrated by the BrainScaleS system https://www.humanbrainproject.eu/en/science-development/focu...
Unfortunately, that doesn't solve the problem of learning. Just because you can build efficient neuromorphic systems doesn't mean that we know how to train them. Briefly put, the problem is that a physical system has physical constraints. You can't just read the global state in NWN and use gradient descent as we would in deep learning. Rather, we have to somehow use local signals to approximate local behaviour that's helpful on a global scale. That's why they use Hebbian learning in the paper (what fires together, wires together), but it's tricky to get right and I haven't personally seen examples that scale to systems/problems of "interesting" sizes. This is basically the frontier of the field: we need local, but generalizable, learning rules that are stable across time and compose freely into higher-order systems.
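To make the "local signals" point concrete, here is a minimal sketch of a Hebbian update: each weight changes using only the activity of the two neurons it connects, with no global gradient signal. All names, shapes, and the learning rate are illustrative, not taken from the paper.

```python
# Minimal Hebbian learning sketch ("what fires together, wires together").
# The update for weight w[i, j] depends only on post[i] and pre[j] --
# a purely local rule, in contrast to gradient descent's global error signal.
import numpy as np

def hebbian_step(w, pre, post, lr=0.01):
    """Update weights w (post x pre) from pre-/post-synaptic activity."""
    return w + lr * np.outer(post, pre)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 4))   # 4 inputs -> 3 outputs
pre = np.array([1.0, 0.0, 1.0, 0.0])     # which input neurons fired
post = np.array([0.0, 1.0, 1.0])         # which output neurons fired
w = hebbian_step(w, pre, post)
# Weights grow only where pre- and post-synaptic neurons both fired.
```

The catch the comment describes is visible even here: nothing in this rule ties the local update to a global objective, which is exactly why stable, composable local learning rules remain an open problem.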
Regarding educational material, I'm afraid I haven't seen great entries for learning about SNNs in full generality. I co-author a simulator (https://github.com/norse/norse/) based on PyTorch with a few notebook tutorials (https://github.com/norse/notebooks) that may be helpful.
I'm actually working on some open resources/course material for neuromorphic computing. So if you have any wishes/ideas, please do reach out. Like, what would a newcomer be looking for specifically?
- [D] The Complete Guide to Spiking Neural Networks
Surrogate gradients and BPTT: this is what is implemented in Norse https://github.com/Norse/Norse. It is also possible to compute exact gradients using the EventProp algorithm.
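A rough sketch of the surrogate-gradient idea (not Norse's actual implementation): the spike nonlinearity is a Heaviside step whose true derivative is zero almost everywhere, so the backward pass substitutes a smooth stand-in. The particular surrogate shape and the `beta` parameter below are common choices, not anything prescribed by the source.

```python
# Surrogate-gradient sketch for a spiking nonlinearity.
# Forward: hard threshold (spike / no spike).
# Backward: a smooth function peaked at the threshold replaces the
# true derivative, so BPTT through the unrolled network can proceed.
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: emit a spike wherever membrane potential crosses threshold."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward stand-in: 1 / (1 + beta*|v - threshold|)^2, maximal at threshold."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2
```

In a PyTorch-based simulator this forward/backward pair would typically be wrapped in a custom `torch.autograd.Function`, so that autograd uses the surrogate in place of the step function's true (zero) derivative.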
- [P] Norse - Deep learning with spiking neural networks (SNNs) in PyTorch
- Show HN: Deep learning with spiking neural networks (SNNs) in PyTorch
- Don't Mess with Backprop: Doubts about Biologically Plausible Deep Learning
That repo is slightly outdated, development now continues at https://github.com/norse/norse.
MegEngine
- How to speedup 31*31 conv 10 times
The Real Performance in MegEngine
- [P] Train Model 3x as large with Dynamic Tensor Rematerialization
In deep learning you can trade space for compute by recomputing activations during the backpropagation phase, a technique known as gradient checkpointing. Classical gradient checkpointing algorithms are great, but they don't work with eager execution. Dynamic Tensor Rematerialization (DTR) is a gradient checkpointing algorithm that works with eager execution, and it is implemented in MegEngine, a deep learning framework. Read this blog post to learn more!
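A toy illustration of the space-for-compute trade-off that checkpointing exploits (this is not DTR itself, and the layer functions are made up): instead of caching every layer's activation for the backward pass, cache only every k-th one and recompute the segments in between on demand.

```python
# Gradient-checkpointing toy: keep only every `every`-th activation,
# then recompute any dropped activation from the nearest checkpoint.

def forward_with_checkpoints(x, layers, every=2):
    """Run layers in order, storing activations only at checkpoints."""
    checkpoints = {0: x}
    for i, layer in enumerate(layers, start=1):
        x = layer(x)
        if i % every == 0:
            checkpoints[i] = x
    return x, checkpoints

def recompute(layers, checkpoints, i):
    """Recover the activation after layer i from the nearest earlier checkpoint."""
    j = max(k for k in checkpoints if k <= i)
    x = checkpoints[j]
    for layer in layers[j:i]:
        x = layer(x)
    return x

layers = [lambda v: v * 2] * 4          # four toy "layers", each doubling
out, ckpts = forward_with_checkpoints(1, layers, every=2)
# out == 16; only activations 0, 2, and 4 were stored. Layer 3's output
# (value 8) is recomputed from checkpoint 2 when the backward pass needs it.
```

In PyTorch the same idea is available as `torch.utils.checkpoint.checkpoint`; DTR goes further by deciding dynamically, at runtime, which tensors to evict and rematerialize.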
- Training 3x larger model on the same GPU cards
What are some alternatives?
snntorch - Deep and online learning with spiking neural networks in Python
DALI - A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
Spiking-Neural-Network - Pure python implementation of SNN
executorch - On-device AI across mobile, embedded and edge for PyTorch
spikingjelly - SpikingJelly is an open-source deep learning framework for Spiking Neural Network (SNN) based on PyTorch.
hyperlearn - 2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
bindsnet - Simulation of spiking neural networks (SNNs) using PyTorch.
taco - The Tensor Algebra Compiler (taco) computes sparse tensor expressions on CPUs and GPUs
ocaml-torch - OCaml bindings for PyTorch
mtensor - a c++/cuda template library for tensor lazy evaluation