smarty_pants vs neural-network-from-scratch
| | smarty_pants | neural-network-from-scratch |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 3 | 114 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | about 2 years ago | about 2 years ago |
| Language | Rust | Rust |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
smarty_pants
I wrote a Neural Network library.
The idea is that you can drop this crate into your project to easily train a neural network. The library supports creating, training, parsing, and running networks, and it may gain more functionality in the future. As it stands, it's quite small and pretty fast, with 5 NeuralNetworks taking nanoseconds to train 1000 generations in the example program. I've tried to make sure that it is "complete": I've documented nearly every function, method, and struct, written an example project, and tried to make it relatively easy to use.
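The crate's exact API isn't shown in this post, so as a purely hypothetical illustration of the kind of training loop such a library runs internally, here is a minimal gradient-descent step on a single linear neuron in plain Rust (no external crates; none of these names come from smarty_pants itself):

```rust
// Hypothetical sketch -- NOT smarty_pants's real API.
// A single linear neuron y = w * x is trained by gradient descent
// to fit the target mapping y = 2x from one training sample.

fn main() {
    let (x, target) = (1.0_f64, 2.0_f64); // one training sample
    let mut w = 0.0_f64;                  // initial weight
    let lr = 0.1;                         // learning rate

    for _ in 0..200 {
        let prediction = w * x;
        // d/dw of the squared error (prediction - target)^2
        let gradient = 2.0 * (prediction - target) * x;
        w -= lr * gradient;
    }

    println!("learned weight: {w:.4}"); // converges toward 2.0
    assert!((w - 2.0).abs() < 1e-4);
}
```

A real library generalizes this same loop to many weights and layers; the "generations" mentioned above are repeated passes of exactly this kind of update.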
neural-network-from-scratch
- Examine individual neurons of a small neural network in the browser
Language models can explain neurons in language models
I built a toy neural network that runs in the browser[1] to model 2D functions, with the goal of doing something similar to this research (in a much more limited manner, of course). Since the input space is so much smaller than that of language models or similar, it's possible to examine each neuron's output over all possible inputs, and in a continuous manner.
In some cases, you can clearly see neurons that specialize to different areas of the function being modeled, like this one: https://i.ameo.link/b0p.png
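Because the input space is just 2D, a neuron's activation really can be evaluated exhaustively over a grid. A rough sketch of that kind of scan (the weights, bias, and activation here are invented for illustration, not taken from the actual project):

```rust
// Scan a single hidden neuron's activation over a 2D input grid.
// Weights and bias are arbitrary; a trained network would supply its own.

fn neuron(x: f64, y: f64) -> f64 {
    let (w1, w2, b) = (1.5, -0.7, 0.2);
    (w1 * x + w2 * y + b).tanh() // tanh activation
}

fn main() {
    let steps = 50;
    let mut max_act = f64::NEG_INFINITY;
    let mut argmax = (0.0, 0.0);
    for i in 0..=steps {
        for j in 0..=steps {
            // map grid indices to the input range [-1, 1] x [-1, 1]
            let x = -1.0 + 2.0 * i as f64 / steps as f64;
            let y = -1.0 + 2.0 * j as f64 / steps as f64;
            let a = neuron(x, y);
            if a > max_act {
                max_act = a;
                argmax = (x, y);
            }
        }
    }
    println!("max activation {max_act:.3} at {argmax:?}");
}
```

Plotting `a` over the whole grid, rather than just tracking the maximum, gives exactly the kind of per-neuron heatmap shown in the screenshot above.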
This OpenAI research seems to be feeding lots of varied input text into the models they're examining and keeping track of the activations of different neurons along the way. Another method I remember seeing used in the past involves using an optimizer to generate inputs that maximally activate particular neurons in vision models[2].
I'm sure that's much more difficult or even impossible for transformers which operate on sequences of tokens/embeddings rather than single static input vectors, but maybe there's a way to generate input embeddings and then use some method to convert them back into tokens.
[1] https://nn.ameo.dev/
[2] https://www.tensorflow.org/tutorials/generative/deepdream
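The optimizer-based approach in [2] can be sketched in miniature: gradient-ascend the *input* to maximize one neuron's activation. Here the "neuron" is a contrived scalar bump function peaked at x = 3.0 and the gradient is taken numerically; in a real vision model the gradient would come from backprop through the network:

```rust
// Toy activation maximization: find the input that most excites a "neuron".
// The neuron is a made-up bump function with its peak at x = 3.0.

fn activation(x: f64) -> f64 {
    (-(x - 3.0).powi(2)).exp()
}

fn main() {
    let mut x = 0.5_f64; // starting input
    let (lr, h) = (0.5, 1e-5);
    for _ in 0..500 {
        // central-difference numeric gradient of the activation
        let grad = (activation(x + h) - activation(x - h)) / (2.0 * h);
        x += lr * grad; // gradient ASCENT: move toward higher activation
    }
    println!("maximally-activating input: {x:.3}"); // near 3.0
    assert!((x - 3.0).abs() < 1e-2);
}
```

The transformer difficulty mentioned above is that the "input" being optimized would be a sequence of embeddings, which has no direct inverse back to discrete tokens.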
Browser-based neural network sandbox built with Rust + WebAssembly
Full source code is on GitHub: https://github.com/ameobea/neural-network-from-scratch
What are some alternatives?
crates.io - The Rust package registry
shorelark - Simulation of life & evolution
wasm-pdf - Generate PDF files with JavaScript and WASM (WebAssembly)
bhtsne - Parallel Barnes-Hut t-SNE implementation written in Rust.
wasm-learning - Building Rust functions for Node.js to take advantage of Rust's performance, WebAssembly's security and portability, and JavaScript's ease-of-use. Demo code and recipes.
Seed - A Rust framework for creating web apps
minesweeper - Minesweeper game developed with Rust, WebAssembly (Wasm), and Canvas
bitque - A simplified Jira clone built with seed.rs and actix
tsify - A library for generating TypeScript definitions from Rust code.
mastermind - A mastermind solver for finding optimal worst-case guesses with SIMD and multithreading support