mlops-v2 vs recommenders

| | mlops-v2 | recommenders |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 451 | 17,980 |
| Growth | 3.5% | 1.0% |
| Activity | 3.7 | 9.5 |
| Latest commit | 16 days ago | 11 days ago |
| Language | Shell | Python |
| License | MIT License | MIT License |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
mlops-v2
- MLOps: Machine learning model management - Azure Machine Learning | Microsoft Learn
- Need help with Execute Python Script module in Azure ML Designer
- Create a Managed ML Inference Endpoint and deployment using Terraform
I would create a second step or stage for deploying the endpoint. There is no benefit in trying to force Terraform to do something it wasn't designed for. You can run AZ CLI commands from Terraform, but I don't have experience doing so. Check out this MLOps accelerator: https://github.com/Azure/mlops-v2. You can see how they use multiple pipelines for setting up the infrastructure and then training and deploying the model.
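A second, CLI-driven deployment stage like the one described above might look roughly like the following. This is a sketch only: the endpoint name, deployment name, model reference, and instance type are placeholders, not values prescribed by mlops-v2.

```yaml
# endpoint.yml -- managed online endpoint definition (names are placeholders)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-endpoint
auth_mode: key
---
# deployment.yml -- a deployment under that endpoint
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-model:1        # a previously registered model; version is illustrative
instance_type: Standard_DS3_v2
instance_count: 1
```

A pipeline step would then apply these with something like `az ml online-endpoint create -f endpoint.yml` followed by `az ml online-deployment create -f deployment.yml --all-traffic`, after the Terraform stage has provisioned the workspace.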
recommenders
- My kernel dies when I fit my LightFm model from Microsoft Recommenders
- There is a framework for everything.
- This Week in Python
recommenders – Best Practices on Recommendation Systems
- Input to SVD, SAR, NMF
I would like to benchmark the Microsoft models SVD, SAR and NMF (available here: https://github.com/microsoft/recommenders), but with this input data I get precision and recall close to zero. Any ideas how I can improve this? For SVD and NMF (surprise library) the model wants a rating input that is normally distributed, which is not the case for my binary data, where all transactions have a rating of 1.
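One common workaround for the problem described in this question: when every observed rating is 1, a rating predictor such as Surprise's SVD or NMF has no variance to learn from, so unobserved (user, item) pairs can be sampled as 0-rated negatives before fitting. A minimal sketch, using only the standard library; the function name and parameters are illustrative, not part of the recommenders API:

```python
import random

def add_sampled_negatives(interactions, n_per_user=5, seed=42):
    """Turn binary (user, item) pairs (all implicitly rated 1) into
    (user, item, rating) triples, adding up to `n_per_user` sampled
    unobserved items per user with rating 0 so the target has variance."""
    rng = random.Random(seed)
    items = sorted({i for _, i in interactions})
    seen = {}
    for u, i in interactions:
        seen.setdefault(u, set()).add(i)
    # keep the observed positives as rating 1
    triples = [(u, i, 1) for u, i in interactions]
    # for each user, sample items they have not interacted with as rating 0
    for u, pos in seen.items():
        candidates = [i for i in items if i not in pos]
        for i in rng.sample(candidates, min(n_per_user, len(candidates))):
            triples.append((u, i, 0))
    return triples
```

The resulting triples can be fed to a rating-based model as a 0/1 scale. Note that SAR is designed for implicit interaction data, so this preprocessing mainly matters for the Surprise-based SVD and NMF baselines.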
- Opinion on choice of model - Recommender System
Then I tried to find some more advanced models, and I found this really good list, and in there I found the Microsoft one. So that's where we are now, with a bunch of different models and not much documentation or tutorials out there.
What are some alternatives?
Time-Series-Library - A Library for Advanced Deep Time Series Models.
metarank - A low code Machine Learning personalized ranking service for articles, listings, search results, recommendations that boosts user engagement. A friendly Learn-to-Rank engine
Data-Engineering-Roadmap - Roadmap for Data Engineering
azure-devops-python-api - Azure DevOps Python API
MachineLearningNotebooks - Python notebooks with ML and deep learning examples with Azure Machine Learning Python SDK | Microsoft
python-minecraft-clone - Source code for each episode of my Minecraft clone in Python YouTube tutorial series.
azure - Azure-related repository
TensorRec - A TensorFlow recommendation algorithm and framework in Python.
clearml - ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
SynapseML - Simple and Distributed Machine Learning
Google-rank-tracker - SEO: Python script + shell script and cronjob to check ranks on a daily basis