VDCNN
Implementation of Very Deep Convolutional Neural Network for Text Classification (by cjiang2)
tchlux
By tchlux
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
VDCNN
Posts with mentions or reviews of VDCNN.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-03-12.
- [D] Is it possible for us to make fixed-size multilayer perceptrons (MLP's) provably converge?
Code for https://arxiv.org/abs/1502.01852 found: https://github.com/zonetrooper32/VDCNN
tchlux
Posts with mentions or reviews of tchlux.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-03-12.
- [D] Is it possible for us to make fixed-size multilayer perceptrons (MLP's) provably converge?
But what happens when you increase the number of layers? I have now verified with JAX that the gradient calculation is correct. Previously I had the sum-squared-error gradient instead of the mean, so I changed it to the mean (see commit history). I still don't see convergence for most configurations with more than one layer.
Get the code for these experiments here.
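The verification described above can be sketched as follows: compare a finite-difference estimate of the mean-squared-error gradient against JAX's autodiff for a small MLP. This is a minimal illustration of the check, not the author's actual (Fortran) code; the network shape, initialization, and tolerance here are all assumptions.

```python
# Minimal sketch (not the author's code): checking that JAX autodiff
# agrees with a central finite-difference estimate of the MSE gradient
# for a one-hidden-layer MLP. All sizes and names are illustrative.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # double precision for a clean check

def mlp(params, x):
    # One hidden ReLU layer, linear output.
    w1, b1, w2, b2 = params
    h = jnp.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def mse_loss(params, x, y):
    # Mean (not sum) of squared errors, matching the fix described in the post.
    return jnp.mean((mlp(params, x) - y) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, kx, ky = jax.random.split(key, 4)
params = (
    jax.random.normal(k1, (3, 4)) * 0.1,
    jnp.zeros(4),
    jax.random.normal(k2, (4, 1)) * 0.1,
    jnp.zeros(1),
)
x = jax.random.normal(kx, (8, 3))
y = jax.random.normal(ky, (8, 1))

# Autodiff gradient of the loss with respect to all parameters.
auto_grads = jax.grad(mse_loss)(params, x, y)

# Central finite difference on a single entry of the first weight matrix.
eps = 1e-5
w1 = params[0]
plus = (w1.at[0, 0].add(eps), *params[1:])
minus = (w1.at[0, 0].add(-eps), *params[1:])
fd = (mse_loss(plus, x, y) - mse_loss(minus, x, y)) / (2 * eps)

diff = abs(float(auto_grads[0][0, 0]) - float(fd))
print(diff)  # should be tiny if the gradient is correct
```

In practice one would loop this check over several randomly chosen parameter entries; a single-entry check is shown here only to keep the sketch short.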
- [D] MLP's are actually nonlinear ➞ linear preconditioners (with visuals!)
A great exercise! No better way to test your knowledge either. I haven't implemented an SVM, because quadratic programming is still difficult for me to comprehend. Would love to one day though! Not sure if you saw, but all of the work in this post was done with my own Fortran implementation of a neural network (I'm using it for my personal research).