corgi vs DiffSharp

| | corgi | DiffSharp |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 23 | 589 |
| Growth | - | 0.5% |
| Activity | 0.0 | 4.6 |
| Last commit | almost 3 years ago | 8 months ago |
| Language | Rust | F# |
| License | MIT License | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have a higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
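The site does not publish the exact formula behind the Activity number. Purely as an illustration of the idea that recent commits count for more than older ones, a recency-weighted score could look like the sketch below; the `activity_score` function, the 30-day half-life, and the example commit ages are all assumptions, not the site's actual method.

```rust
// Hypothetical illustration only: a recency-weighted commit score in which
// newer commits contribute more than older ones. The real Activity formula
// is not published; the 30-day half-life here is an assumption.
fn activity_score(commit_ages_in_days: &[f64]) -> f64 {
    let half_life = 30.0;
    commit_ages_in_days
        .iter()
        .map(|&age| 0.5_f64.powf(age / half_life)) // weight halves every 30 days
        .sum()
}

fn main() {
    // A project with recent commits scores much higher than a stale one.
    let recent = [1.0, 3.0, 10.0]; // commit ages in days
    let stale = [200.0, 400.0, 800.0];
    println!("recent: {:.2}", activity_score(&recent)); // ~2.70
    println!("stale:  {:.2}", activity_score(&stale)); // ~0.01
}
```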
corgi
- Corgi: Rust neural network/dynamic automatic differentiation library I have been working on
  Fully-connected neural network: https://github.com/patricksongzy/corgi/blob/main/src/dense.rs
- [P] Corgi: Rust neural network/dynamic automatic differentiation library I have been working on
  Github: https://github.com/patricksongzy/corgi
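Corgi's own code is at the links above. As a rough idea of what a "dynamic" (tape-based, define-by-run) reverse-mode autodiff looks like in Rust, here is a minimal, self-contained sketch. It is not corgi's API; the `Tape`/`Var` types and the single-unit dense example are invented for illustration only.

```rust
// Minimal tape-based reverse-mode autodiff sketch (not corgi's actual API).
// The graph is built as operations execute ("dynamic" autodiff), then the
// backward pass walks the tape in reverse applying the chain rule.
use std::cell::RefCell;

// Each tape entry remembers which inputs produced it and the local
// partial derivatives with respect to those inputs.
struct Node {
    parents: [usize; 2],
    partials: [f64; 2],
}

struct Tape {
    nodes: RefCell<Vec<Node>>,
}

#[derive(Clone, Copy)]
struct Var<'t> {
    tape: &'t Tape,
    index: usize,
    value: f64,
}

impl Tape {
    fn new() -> Self {
        Tape { nodes: RefCell::new(Vec::new()) }
    }

    // Record a new tape entry and return its index.
    fn push(&self, p0: usize, p1: usize, d0: f64, d1: f64) -> usize {
        let mut nodes = self.nodes.borrow_mut();
        nodes.push(Node { parents: [p0, p1], partials: [d0, d1] });
        nodes.len() - 1
    }

    // A leaf variable (input or parameter); its parents contribute nothing.
    fn var(&self, value: f64) -> Var<'_> {
        let index = self.push(0, 0, 0.0, 0.0);
        Var { tape: self, index, value }
    }
}

impl<'t> Var<'t> {
    fn add(self, other: Var<'t>) -> Var<'t> {
        let index = self.tape.push(self.index, other.index, 1.0, 1.0);
        Var { tape: self.tape, index, value: self.value + other.value }
    }

    fn mul(self, other: Var<'t>) -> Var<'t> {
        // d(a*b)/da = b, d(a*b)/db = a
        let index = self.tape.push(self.index, other.index, other.value, self.value);
        Var { tape: self.tape, index, value: self.value * other.value }
    }

    fn relu(self) -> Var<'t> {
        let d = if self.value > 0.0 { 1.0 } else { 0.0 };
        let index = self.tape.push(self.index, self.index, d, 0.0);
        Var { tape: self.tape, index, value: self.value.max(0.0) }
    }

    // Reverse pass: walk the tape backwards, accumulating adjoints.
    fn backward(self) -> Vec<f64> {
        let nodes = self.tape.nodes.borrow();
        let mut grads = vec![0.0; nodes.len()];
        grads[self.index] = 1.0;
        for i in (0..nodes.len()).rev() {
            let g = grads[i];
            grads[nodes[i].parents[0]] += nodes[i].partials[0] * g;
            grads[nodes[i].parents[1]] += nodes[i].partials[1] * g;
        }
        grads
    }
}

fn main() {
    // One "dense" unit: y = relu(w0*x0 + w1*x1 + b)
    let tape = Tape::new();
    let (x0, x1) = (tape.var(0.5), tape.var(-1.5));
    let (w0, w1, b) = (tape.var(2.0), tape.var(0.3), tape.var(0.1));
    let y = w0.mul(x0).add(w1.mul(x1)).add(b).relu();
    let grads = y.backward();
    println!("y = {}", y.value); // 0.65
    println!("dy/dw0 = {}, dy/dw1 = {}, dy/db = {}",
             grads[w0.index], grads[w1.index], grads[b.index]); // 0.5, -1.5, 1
}
```

The tape records each operation's parents and local partial derivatives as it runs, which is what makes the graph dynamic: the backward pass simply walks the recorded entries in reverse and accumulates gradients via the chain rule.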
DiffSharp
- Why is F# code so robust and reliable?
  The more expressive nature of F# means you don't end up with that many files. C# is, luckily, finally moving in that direction too; there was never much reason for the "one file per class" policy anyway, but it was still widely adopted historically.
  Here's an example of a worst-case scenario (GUI frameworks have a notoriously huge amount of code): https://github.com/fsprojects/Avalonia.FuncUI/blob/master/sr...
  But realistically, an average project would look closer to this instead: https://github.com/DiffSharp/DiffSharp/blob/dev/src/DiffShar...
  Once you have enough files, it might be a good idea to factor separate concerns out into different projects.
- Automatic differentiation in a lot more than 38 lines of F#
- Neural Networks Fsharp
  Yes. You can use TensorFlow.NET or DiffSharp.
What are some alternatives?
autograd-rs - An autograd implementation in Rust
TensorFlow.NET - .NET Standard bindings for Google's TensorFlow for developing, training and deploying Machine Learning models in C# and F#.
Java-Machine-Learning - Deep learning library for Java, with fully connected, convolutional, and recurrent layers. Also features many gradient descent optimization algorithms.
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
burn - Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
ocaml-torch - OCaml bindings for PyTorch
L2 - l2 is a fast, Pytorch-style Tensor+Autograd library written in Rust
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
camlgrad - Toy autograd engine in OCaml with Apple Accelerate backend
hyperlearn - 2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.
Soevnn - A neural net with a terminal-based testing program.