ai-economist
MLflow
| | ai-economist | MLflow |
|---|---|---|
| Mentions | 5 | 56 |
| Stars | 1,060 | 17,284 |
| Stars growth | - | 2.4% |
| Activity | 0.0 | 9.9 |
| Latest commit | 8 months ago | about 11 hours ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ai-economist
-
Agent-based modeling in applied economics?
The area of Reinforcement Learning, in particular, has demonstrated impressive breakthroughs recently. There have been attempts to apply it to economic policy planning and finance:
- "The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning", Zheng et al 2021 {Salesforce}
- How to assemble The AI Economist program in Python?
- The AI economist compared a free-market model, a model with higher taxation on the rich, and its own development model to find out which best promotes high productivity and social equality. The result: the free market is the worst model, and the AI outperformed the model proposed by Saez.
-
Improving Equality and Productivity with AI-Driven Tax Policies
They're also on GitHub -> https://github.com/salesforce/ai-economist
MLflow
-
Observations on MLOps–A Fragmented Mosaic of Mismatched Expectations
How can this be? The current state of practice in AI/ML work requires adaptivity, which is uncommon in classical computational fields. There are myriad tools that capture the work across the many instances of the AI/ML lifecycle. The idea that any one tool could sufficiently capture the dynamic work is unrealistic. Take, for example, an experiment tracking tool like W&B or MLFlow; some form of experiment tracking is necessary in typical model training lifecycles. Such a tool requires some notion of a dataset. However, a tool focusing on experiment tracking is orthogonal to the needs of analyzing model performance at the data sample level, which is critical to understanding the failure modes of models. The way one does this depends on the type of data and the AI/ML task at hand. In other words, MLOps is inherently an intricate mosaic, as the capabilities and best practices of AI/ML work evolve.
-
My Favorite DevTools to Build AI/ML Applications!
MLflow is an open-source platform for managing the end-to-end machine learning lifecycle. It includes features for experiment tracking, model versioning, and deployment, enabling developers to track and compare experiments, package models into reproducible runs, and manage model deployment across multiple environments.
-
Exploring Open-Source Alternatives to Landing AI for Robust MLOps
Platforms such as MLflow monitor the development stages of machine learning models. In parallel, Data Version Control (DVC) brings version control system-like functions to the realm of data sets and models.
-
cascade alternatives - clearml and MLflow
3 projects | 1 Nov 2023
-
ELI5: Difference between OpenLLM, LangChain, MLFlow
MLFlow - http://mlflow.org
- Explain to me how websites like DALL-E, ChatGPT, and thispersondoesntexist process user data so quickly
- [D] What licensed software do you use for machine learning experimentation tracking?
-
Exploring MLOps Tools and Frameworks: Enhancing Machine Learning Operations
MLflow:
-
Options for configuration of python libraries - Stack Overflow
In search of a tool that needs comparable configuration, I looked into mlflow and found this: https://github.com/mlflow/mlflow/blob/master/mlflow/environment_variables.py There they define a class _EnvironmentVariable and create many instances of it, one for each variable they need. The get method of this class is essentially a decorated os.getenv. Maybe that is something I can take as orientation.
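The pattern the comment describes can be reimplemented with nothing but the standard library. This is a hypothetical sketch of the idea, not MLflow's actual code; the class, variable name, and default are invented for illustration.

```python
# Sketch of the "environment variable as object" pattern: each configuration
# knob is an instance carrying its name, type, and default, and get() is a
# thin wrapper around os.getenv that applies both.
import os


class EnvironmentVariable:
    """An environment variable with an associated type and default value."""

    def __init__(self, name, type_=str, default=None):
        self.name = name
        self.type = type_
        self.default = default

    def get(self):
        """Read the variable, falling back to the default, casting to type."""
        raw = os.getenv(self.name)
        if raw is None:
            return self.default
        return self.type(raw)


# One object per configuration knob the library needs:
HTTP_TIMEOUT = EnvironmentVariable("MYLIB_HTTP_TIMEOUT", int, 120)

os.environ["MYLIB_HTTP_TIMEOUT"] = "30"
print(HTTP_TIMEOUT.get())  # → 30
```

The appeal of the pattern is that every variable's name, type, and default live in one declaration, instead of being scattered across ad-hoc `os.getenv` calls.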
-
[D] Is there a tool to keep track of my ML experiments?
I have been using DVC and MLflow since back when DVC had only data tracking and MLflow only model tracking. I can say both are awesome now; the only factor I would mention is that, IMO, MLflow is a bit harder to learn, while DVC is practically just Git.
What are some alternatives?
Mava - 🦁 A research-friendly codebase for fast experimentation of multi-agent reinforcement learning in JAX
clearml - ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution
pymarl2 - Fine-tuned MARL algorithms on SMAC (100% win rates on most scenarios)
Sacred - Sacred is a tool to help you configure, organize, log and reproduce experiments developed at IDSIA.
robo-gym - An open source toolkit for Distributed Deep Reinforcement Learning on real and simulated robots.
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
0xDeCA10B - Sharing Updatable Models (SUM) on Blockchain
guildai - Experiment tracking, ML developer tools
maro - Multi-Agent Resource Optimization (MARO) platform is an instance of Reinforcement Learning as a Service (RaaS) for real-world resource optimization problems.
dvc - 🦉 ML Experiments and Data Management with Git
tf2multiagentrl - Clean implementation of Multi-Agent Reinforcement Learning methods (MADDPG, MATD3, MASAC, MAD4PG) in TensorFlow 2.x
tensorflow - An Open Source Machine Learning Framework for Everyone