| | guildai | zenml |
|---|---|---|
| Mentions | 16 | 33 |
| Stars | 858 | 3,682 |
| Growth | 0.3% | 2.4% |
| Activity | 8.8 | 9.8 |
| Latest commit | 9 months ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
guildai
-
guildai VS cascade - a user-suggested alternative
2 projects | 5 Dec 2023
-
[D] Who here is convinced that they have a really good setup that keeps track of their ML experiments?
Experiment tracking in DVC is implemented using git to store snapshots of a project and related artifacts. You might take a look at Guild AI's support for DVC, which is tightly integrated with DVC stages. You can run any of the stages defined for a project and get a properly isolated run: each run is a copy of the project, which ensures the run isn't corrupted if you modify files while it's running, and that concurrent runs are properly supported. Once you have runs in Guild, you can use any number of tools to study, compare, and export them.
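The isolation scheme described above (copying the project into a fresh run directory) can be sketched in plain Python. This is only a rough illustration of the idea, not Guild AI's actual implementation; `create_isolated_run` is a hypothetical helper name:

```python
import shutil
import uuid
from pathlib import Path

def create_isolated_run(project_dir: str, runs_root: str) -> Path:
    """Copy the project into a fresh run directory so later edits to
    the source tree cannot corrupt an in-flight or finished run."""
    run_dir = Path(runs_root) / uuid.uuid4().hex
    # copytree creates run_dir and duplicates the whole project into it
    shutil.copytree(project_dir, run_dir)
    return run_dir
```

Because every run works on its own copy, editing the original project mid-run (or launching several runs concurrently) leaves already-started runs untouched.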
-
[D] Deploying SOTA models into my own projects
I built an experiment tracking tool (Guild AI) that focuses on code/model reuse and so this question is dear to my heart :) Best of luck!
-
[P] I reviewed 50+ open-source MLOps tools. Here’s the result
I'm not aware of experiment tracking in Jupyter notebooks themselves. Guild AI is able to run notebooks as experiments however.
-
[D] What MLOps platform do you use, and how helpful are they?
Disclosure - I'm the author of Guild AI so take this for the biased opinion that it is.
-
[N] Experiment tracking with DVC and Guild AI
I'm the author of Guild AI (an open-source experiment tracking tool). For some time now, Guild users have asked for DVC support. It is now available as a pre-release.
-
[D] Why doesn’t your team use an experiment tracking tool?
Guild AI now has support for running DVC stages as experiments. DVC uses git under the covers to manage project state for each experiment, along with the experiment results. Guild doesn't touch your git repo; instead it copies your project source to a new run directory. This ensures you have a correct record of your experiment without churning your project state.
-
Data Science toolset summary from 2021
Guild.ai - https://guild.ai/
- [D] How do you ensure reproducibility?
-
[D] I'm new and scrappy. What tips do you have for better logging and documentation when training or hyperparameter tuning?
Use Guild and PyTorch Lightning. Make it easy for new contributors to get your data by using DVC as a data access tool.
zenml
- FLaNK AI - 01 April 2024
- What are some open-source ML pipeline managers that are easy to use?
-
[P] I reviewed 50+ open-source MLOps tools. Here’s the result
Currently, you can see the integrations we support here, and they include a lot of the tools in your list. I also agree with your categorization (it is almost exactly the categorization we use in our docs). Perhaps one thing missing is feature stores, but that is a minor point in the bigger picture.
-
[P] ZenML: Build vendor-agnostic, production-ready MLOps pipelines
GitHub: https://github.com/zenml-io/zenml
- Show HN: ZenML – Portable, production-ready MLOps pipelines
-
[D] Feedback on a worked Continuous Deployment Example (CI/CD/CT)
Hey everyone! At ZenML, we released an integration today that allows users to train and deploy models from pipelines in a simple way. I wanted to ask the community here whether the example we showcased makes sense in a real-world setting.
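A train-then-deploy pipeline of the kind described can be sketched in plain Python. In ZenML the functions would be decorated steps composed into a pipeline, but everything below (`train`, `evaluate`, `deploy`, `continuous_deployment_pipeline`, and the toy "model") is a hypothetical stand-in, not the ZenML API:

```python
def train(data):
    """Fit a trivial 'model': the mean of the training data."""
    return sum(data) / len(data)

def evaluate(model, data, tolerance=1.0):
    """Accept the model if its worst-case error is within tolerance
    of the data's magnitude (a stand-in for a real validation gate)."""
    error = max(abs(model - x) for x in data)
    return error <= tolerance * max(abs(x) for x in data)

def deploy(model, registry):
    """'Deploy' by registering the model under a new version number."""
    version = len(registry) + 1
    registry[version] = model
    return version

def continuous_deployment_pipeline(data, registry):
    """Train, gate on evaluation, and deploy only if the gate passes."""
    model = train(data)
    if evaluate(model, data):
        return deploy(model, registry)
    return None
```

The evaluation gate between training and deployment is the part that makes this "continuous deployment" rather than just a training script: a model that fails validation never reaches the registry.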
-
How we made our integration tests delightful by optimizing our GitHub Actions workflow
As of early March 2022 this is the new CI pipeline that we use here at ZenML and the feedback from my colleagues -- fellow engineers -- has been very positive overall. I am sure there will be tweaks, changes and refactorings in the future, but for now, this feels Zen.
-
Ask HN: Who is hiring? (March 2022)
ZenML is hiring for a Design Engineer.
ZenML is an extensible, open-source MLOps framework for creating production-ready machine learning pipelines. Built for data scientists, it has a simple, flexible syntax, is cloud- and tool-agnostic, and offers interfaces and abstractions tailored to ML workflows.
We’re looking for a Design Engineer with a multi-disciplinary skill set who can take over the look and feel of the ZenML experience. ZenML is a tool designed for developers, and we want to delight them from the moment they land on our web page to after they start using it on their machines. We would like a consistent design experience across our many touchpoints, including the [landing page](https://zenml.io), the [docs](https://docs.zenml.io), the [blog](https://blog.zenml.io), the [podcast](https://podcast.zenml.io), our social media, and the product itself, which is a [Python package](https://github.com/zenml-io/zenml).
A lot of this job is about communicating complex ideas in a beautiful way. You could be a developer or a non-coding designer, full time or part-time, employee or freelance. We are not so picky about the exact nature of this role. If you feel like you are a visually creative designer, and are willing to get stuck in the details of technical topics like MLOps, we can’t wait to work with you!
Apply here: https://zenml.notion.site/Design-Engineer-m-f-1d1a219f18a341...
-
How to improve your experimentation workflows with MLflow Tracking and ZenML
The best place to see MLflow Tracking and ZenML used together in a simple use case is our example showcasing the integration. It builds on the quickstart example, but shows how you can add MLflow to handle the tracking. To enable MLflow to track artifacts inside a particular step, all you need to do is decorate the step with @enable_mlflow and specify what you want logged within the step. This is how it is employed in a model training step that uses the autolog feature I mentioned above.
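The decorator pattern behind this kind of step-level tracking can be sketched with the standard library. The `Tracker` class and `enable_tracking` decorator below are hypothetical stand-ins for illustration only, not the MLflow or ZenML API:

```python
import functools

class Tracker:
    """Hypothetical stand-in for an experiment tracker like MLflow."""
    def __init__(self):
        self.runs = []
        self._active = None

    def start_run(self, name):
        self._active = {"name": name, "params": {}, "metrics": {}}

    def log_param(self, key, value):
        self._active["params"][key] = value

    def log_metric(self, key, value):
        self._active["metrics"][key] = value

    def end_run(self):
        self.runs.append(self._active)
        self._active = None

tracker = Tracker()

def enable_tracking(func):
    """Wrap a step so everything it logs lands in one tracked run,
    mirroring the shape of a decorator like @enable_mlflow."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        tracker.start_run(func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            tracker.end_run()  # close the run even if the step raises
    return wrapper

@enable_tracking
def train_step(lr):
    tracker.log_param("lr", lr)
    tracker.log_metric("loss", 0.1)
    return "model"
```

The decorator opens a run before the step body executes and closes it afterward, so anything logged inside the function is attributed to that step's run without the step having to manage run lifecycles itself.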
- ZenML helps data scientists work across the full stack
What are some alternatives?
MLflow - Open source platform for the machine learning lifecycle
aim - Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
dvc - 🦉 ML Experiments and Data Management with Git
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
labml - 🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
Poetry - Python packaging and dependency management made easy
wandb - 🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.