| | Metrics | neptune-contrib |
|---|---|---|
| Posts | 2 | - |
| Stars | 1,617 | 27 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | over 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Metrics
- Model evaluation - MAP@K

  Starting with Python, we're going to code the functions from scratch using the values determined from the linear regression model. First we'll write a function to calculate Average Precision at K. It takes three values: the actual value from the test set, the predicted value from the model, and the value of K. This code can be found in the GitHub repository for the ml_metrics Python library.
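As a rough sketch of what such a function looks like (following the approach the ml_metrics library takes, though the exact code lives in that repository), Average Precision at K can be computed by walking the top-k predictions and averaging the precision at each rank where a relevant item appears:

```python
def apk(actual, predicted, k=10):
    """Average Precision at K.

    actual: list of relevant items (the test-set values).
    predicted: list of predicted items, ordered by confidence.
    k: number of top predictions to consider.
    """
    # Only the top-k predictions are scored.
    if len(predicted) > k:
        predicted = predicted[:k]

    score = 0.0
    num_hits = 0.0
    for i, p in enumerate(predicted):
        # Count a hit only the first time an item appears.
        if p in actual and p not in predicted[:i]:
            num_hits += 1.0
            score += num_hits / (i + 1.0)  # precision at this rank

    if not actual:
        return 0.0
    return score / min(len(actual), k)
```

For example, `apk([1, 2, 3], [1, 4, 2], k=3)` scores hits at ranks 1 and 3, giving (1/1 + 2/3) / 3.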
- How to Judge Your Recommendation System Model?

  These metrics are straightforward to implement and can also be obtained from here. Happy Learning!
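The per-user metric extends naturally to a whole test set by averaging AP@K across users, which gives Mean Average Precision at K (MAP@K). A minimal standalone sketch (repeating the AP@K helper so the example runs on its own):

```python
def apk(actual, predicted, k=10):
    """Average Precision at K for one user's recommendation list."""
    if len(predicted) > k:
        predicted = predicted[:k]
    score = 0.0
    num_hits = 0.0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            num_hits += 1.0
            score += num_hits / (i + 1.0)
    if not actual:
        return 0.0
    return score / min(len(actual), k)


def mapk(actual, predicted, k=10):
    """Mean Average Precision at K: the mean of apk over all users.

    actual: list of lists of relevant items, one list per user.
    predicted: list of lists of recommended items, one list per user.
    """
    return sum(apk(a, p, k) for a, p in zip(actual, predicted)) / len(actual)
```

For two users, `mapk([[1], [2]], [[1, 3], [4, 2]], k=2)` averages AP@K values of 1.0 and 0.5, giving 0.75.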
neptune-contrib
We haven't tracked posts mentioning neptune-contrib yet.
Tracking mentions began in Dec 2020.
What are some alternatives?
seqeval - A Python framework for sequence labeling evaluation (named-entity recognition, POS tagging, etc.)
scikit-learn - scikit-learn: machine learning in Python
tensorflow - An Open Source Machine Learning Framework for Everyone
Keras - Deep Learning for humans
xgboost - Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
bodywork - ML pipeline orchestration and model deployments on Kubernetes.
gym - A toolkit for developing and comparing reinforcement learning algorithms.
TFLearn - Deep learning library featuring a higher-level API for TensorFlow.