notebooks vs nn

| | notebooks | nn |
|---|---|---|
| Mentions | 2 | 26 |
| Stars | 24 | 48,709 |
| Growth | - | 5.1% |
| Activity | 0.0 | 7.7 |
| Last commit | over 1 year ago | about 2 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
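The page doesn't spell out how the activity number is computed beyond this description. Purely as an illustration of "recent commits have higher weight than older ones", a hypothetical recency-weighted score might look like the sketch below; the half-life and weighting function are assumptions, not the site's actual formula:

```python
import math
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Hypothetical recency-weighted score: each commit contributes a
    weight that halves every `half_life_days`, so recent commits
    dominate the total."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:  # timezone-aware datetimes of each commit
        age_days = (now - d).total_seconds() / 86400.0
        score += math.exp(-math.log(2.0) * age_days / half_life_days)
    return score
```

Ranking these raw scores across all tracked projects and reporting the percentile would then yield a relative number like the 9.0 ("top 10%") example above.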
notebooks
- Neuromorphic learning, working memory, and metaplasticity in nanowire networks
This gives you a ludicrous advantage over current neural net accelerators: specifically, 3-5 orders of magnitude in energy and time, as demonstrated in the BrainScaleS system https://www.humanbrainproject.eu/en/science-development/focu...
Unfortunately, that doesn't solve the problem of learning. Just because you can build efficient neuromorphic systems doesn't mean that we know how to train them. Briefly put, the problem is that a physical system has physical constraints: you can't just read out the global state of a nanowire network (NWN) and run gradient descent on it as we would in deep learning. Rather, we have to somehow use local signals to drive local updates that are helpful on a global scale. That's why they use Hebbian learning in the paper (what fires together, wires together), but it's tricky to get right, and I haven't personally seen examples that scale to systems/problems of "interesting" sizes. This is basically the frontier of the field: we need local, but generalizable, learning rules that are stable across time and compose freely into higher-order systems.
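To make the Hebbian idea concrete, here is a minimal NumPy sketch of a local update rule. The learning rate, row normalization, and toy spike vectors are illustrative assumptions, not the rule used in the paper:

```python
import numpy as np

def hebbian_step(w, pre, post, lr=1e-2):
    """One local Hebbian update: strengthen w[i, j] whenever presynaptic
    unit j and postsynaptic unit i are active together. Only locally
    available signals are used -- no global state is read out."""
    w = w + lr * np.outer(post, pre)       # "fires together, wires together"
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    return w / np.maximum(norms, 1e-8)     # normalize rows so weights stay bounded

# Toy usage with binary spike vectors.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 8))     # 8 presynaptic -> 4 postsynaptic units
pre = (rng.random(8) < 0.3).astype(float)  # presynaptic spikes
post = (rng.random(4) < 0.3).astype(float) # postsynaptic spikes
w = hebbian_step(w, pre, post)
```

The point of the normalization step is stability: a pure Hebbian rule only ever strengthens weights, so some bounding mechanism is needed for the dynamics to remain stable over time.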
Regarding educational material, I'm afraid I haven't seen great entries for learning about SNNs in full generality. I co-author a simulator (https://github.com/norse/norse/) based on PyTorch with a few notebook tutorials (https://github.com/norse/notebooks) that may be helpful.
I'm actually working on some open resources/course material for neuromorphic computing. So if you have any wishes/ideas, please do reach out. Like, what would a newcomer be looking for specifically?
- Event-Based Backpropagation for Exact Gradients in Spiking Neural Networks
We've written some documentation around our neuron equations in Python that explains this: https://norse.github.io/norse/auto_api/norse.torch.functiona...
See also our tutorial on neuron parameter optimization to understand how it's useful for machine learning: https://github.com/norse/notebooks#level-intermediate
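As a rough sketch of what neuron parameter optimization means in practice (this is a generic PyTorch toy, not Norse's actual API), one can make a neuron's time constant a learnable parameter and recover it by gradient descent:

```python
import torch

def leaky_integrate(x, tau_inv, dt=1e-3):
    """Euler-integrate the leaky integrator dv/dt = tau_inv * (-v + x)
    over the time dimension of x (shape: time x batch)."""
    v = torch.zeros_like(x[0])
    vs = []
    for x_t in x:
        v = v + dt * tau_inv * (-v + x_t)
        vs.append(v)
    return torch.stack(vs)

torch.manual_seed(0)
x = torch.rand(100, 1)                            # input current over 100 steps
target = leaky_integrate(x, torch.tensor(200.0))  # "data" from a known time constant

tau_inv = torch.tensor(50.0, requires_grad=True)  # initial guess
opt = torch.optim.Adam([tau_inv], lr=1.0)
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(leaky_integrate(x, tau_inv), target)
    loss.backward()
    opt.step()
print(f"recovered tau_inv ~ {tau_inv.item():.1f}")  # should move toward 200
```

The same idea carries over to spiking neurons, where surrogate gradients stand in for the non-differentiable spike.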
Disclaimer: I'm a co-author of the library Norse
Regarding the target audience, it's actually not entirely clear to me. This lies at the intersection of computational neuroscience and deep learning, which isn't a huge set of people. Meaning, your questions are valid and we (as researchers) have a lot of communication to do to explain why this is interesting and important.
nn
- Can't remember name of website that has explanations side-by-side with code
Hey, are you talking about https://nn.labml.ai/ ?
- [D] Recent ML papers to implement from scratch
- [P] GPT-NeoX inference with LLM.int8() on 24GB GPU - Implementation & LM Eval Harness Results
- [P] Fine-tuned the GPT-Neox Model to Generate Quotes
GitHub: https://github.com/labmlai/annotated_deep_learning_paper_implementations/tree/master/labml_nn/neox
- Best resources to learn recent transformer papers and stay updated [D]
Regarding implementations, this helps me: https://nn.labml.ai/
- Introductory papers to implement
- How to convert research papers to code?
- [D] How to convert papers to code?
Dunno if this is directly helpful, but this website has implementations with the math side by side: https://nn.labml.ai/
- [D] Looking for open source projects to contribute
- Resource for papers explanation
What are some alternatives?
DeepLearningExamples - State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
GFPGAN-for-Video-SR - A colab notebook for video super resolution using GFPGAN
fastai - The fastai deep learning library
labml - 🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
NYU-DLSP20 - NYU Deep Learning Spring 2020
functorch - JAX-like composable function transforms for PyTorch.
Made-With-ML - Learn how to design, develop, deploy and iterate on production-grade ML applications.
ZoeDepth - Metric depth estimation from a single image
onnx-simplifier - Simplify your onnx model
Basic-UI-for-GPT-J-6B-with-low-vram - A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2,000-token context, 3.5 GB for a 1,000-token context). Model loading takes 12 GB of free RAM.
Behavior-Sequence-Transformer-Pytorch - A PyTorch implementation of the BST model from Alibaba: https://arxiv.org/pdf/1905.06874.pdf
DFL-Colab - DeepFaceLab fork which provides IPython Notebook to use DFL with Google Colab