hyperparameter
Hyperparameter: make AI applications configurable. Built for Python hackers. (by reiase)
pytorch-lightning
Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning] (by PyTorchLightning)
| | hyperparameter | pytorch-lightning |
|---|---|---|
| Mentions | 7 | 19 |
| Stars | 23 | 19,188 |
| Growth | - | - |
| Activity | 6.9 | 9.9 |
| Last commit | about 1 month ago | almost 2 years ago |
| Language | Rust | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hyperparameter
Posts with mentions or reviews of hyperparameter.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-04-10.
- Hyper-parameter Optimization with Optuna and hyperparameter
the full tutorial: https://github.com/reiase/hyperparameter/tree/master/examples/optuna
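In outline, the tutorial's pattern is to let Optuna sample values and inject them through hyperparameter's scopes. A hedged sketch, where the `auto_param`/`param_scope` names follow the project's README and the objective function is a stand-in:

```python
import optuna
from hyperparameter import auto_param, param_scope  # names taken from the README

@auto_param
def train(lr=0.01, momentum=0.9):
    # stand-in objective; a real run would train a model and return a validation metric
    return (lr - 0.005) ** 2 + (momentum - 0.9) ** 2

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    # inject the sampled value by scoped override instead of threading it
    # through every function signature
    with param_scope(**{"train.lr": lr}):
        return train()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```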
- Pythonic configuration framework?
When I was working on my own configuration framework (HyperParameter, previous post), I suddenly realized that what I want is not another configuration framework with some fancy API. All I want is to change my ML experiments without modifying the code, and to get rid of the configuration-handling code. The right way to handle configuration is not writing configurable code and wasting time on different frameworks. The best solution is a tool that makes your code configurable.
- hyperparameter, a lightweight configuration framework
github: https://github.com/reiase/hyperparameter
- HyperParameter for ML Models and Systems
HyperParameter is a configuration and parameter management library for Python. It provides the following features:
- What is the best practice for injecting configuration into a python application
You can take a look at https://github.com/reiase/hyperparameter, a scoped, thread-safe config object that is lightweight. There is no need to modify much code:
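The idea can be sketched in plain Python with `contextvars`; this is a conceptual illustration of a scoped, thread-safe config, not the library's actual API:

```python
from contextlib import contextmanager
from contextvars import ContextVar

_scope: ContextVar[dict] = ContextVar("scope", default={})

@contextmanager
def scope(overrides: dict):
    # push a new layer of overrides for the duration of the block;
    # ContextVar keeps each thread/async task's view isolated
    token = _scope.set({**_scope.get(), **overrides})
    try:
        yield
    finally:
        _scope.reset(token)

def get(key, default=None):
    return _scope.get().get(key, default)

def connect():
    # existing code needs only a lookup call; no signature changes
    timeout = get("server.timeout", 30)
    return timeout

with scope({"server.timeout": 5}):
    assert connect() == 5   # override visible inside the scope
assert connect() == 30      # default restored outside
```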
- [P] Modify Hyperparameters Easily
I'm developing a hyperparameter-tuning toolbox for my machine learning projects. It maps keyword arguments to hyper-parameters, for example:
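A minimal sketch of that keyword-to-hyper-parameter mapping; the `auto_param` and `param_scope` names are taken from the project's README and should be treated as assumptions here:

```python
from hyperparameter import auto_param, param_scope  # names taken from the README

@auto_param
def dnn(data, layers=3, activation="relu"):
    # keyword arguments double as tunable hyper-parameters
    return f"{layers} x {activation}"

print(dnn("x"))                         # defaults -> "3 x relu"
with param_scope(**{"dnn.layers": 5}):
    print(dnn("x"))                     # scoped override -> "5 x relu"
```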
- A hyper-parameter toolbox for data scientists and machine-learning engineers
I'm developing [a toolbox for managing hyper-parameters](https://github.com/reiase/hyperparameter) in my data science and machine learning projects. It provides an object-style API for nested dicts (which are very common in config files):
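A conceptual sketch of such object-style access over a nested dict (illustrative only, not necessarily the library's exact API):

```python
# Attribute chains instead of nested bracket lookups.
class Cfg:
    def __init__(self, data: dict):
        self._data = data

    def __getattr__(self, name):
        # wrap nested dicts so access can keep chaining
        value = self._data[name]
        return Cfg(value) if isinstance(value, dict) else value

cfg = Cfg({"model": {"lr": 0.01, "optimizer": {"name": "adam"}}})
print(cfg.model.lr)              # 0.01, instead of cfg["model"]["lr"]
print(cfg.model.optimizer.name)  # "adam"
```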
pytorch-lightning
Posts with mentions or reviews of pytorch-lightning.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-03-16.
- Problem with pytorch lightning and optuna with multiple callbacks

```python
def on_validation_end(self, trainer: Trainer, pl_module: LightningModule) -> None:
    # Trainer calls `on_validation_end` for sanity check. Therefore, it is necessary to avoid
    # calling `trial.report` multiple times at epoch 0. For more details, see
    # https://github.com/PyTorchLightning/pytorch-lightning/issues/1391.
    if trainer.sanity_checking:
        return
```
- Please comment on my planned research project structure
Under the hood, the ModelWrapper object will create an ML model based on the config (so far, an XGBoost model and a PyTorch Lightning model). Each of those will have a wrapper that conducts training and evaluation (since, from my understanding of Lightning, Trainers are required to live outside the class). For lack of a better name, I call these wrappers Fitters. For uniformity, I thought about adding a common interface, IFitter, which is inherited by all model wrappers as sketched below.
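A hedged sketch of that interface; `IFitter` comes from the post, while the method names and the abstract-base-class choice are assumptions:

```python
from abc import ABC, abstractmethod

class IFitter(ABC):
    """Common interface so the XGBoost and Lightning wrappers are interchangeable."""

    @abstractmethod
    def fit(self, train_data, val_data) -> None:
        ...

    @abstractmethod
    def evaluate(self, test_data) -> dict:
        ...

class LightningFitter(IFitter):
    """Owns the pl.Trainer, which (as the post notes) lives outside the model class."""

    def __init__(self, model):
        self.model = model

    def fit(self, train_data, val_data) -> None:
        ...  # build a Trainer here and call trainer.fit(self.model, ...)

    def evaluate(self, test_data) -> dict:
        ...  # call trainer.test / trainer.validate and collect metrics
        return {}
```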
- Watch out for the (PyTorch) Lightning
Join their Slack to ask the community questions and check out the GitHub here.
- [P] Composer: a new PyTorch library to train models ~2-4x faster with better algorithms
PyTorch Lightning benchmarks against PyTorch on every PR (benchmarks to make sure that it is not slower).
- [D] What Repetitive Tasks Related to Machine Learning do You Hate Doing?
There is already a ton of momentum around automating ML workflows. I would suggest you contribute to a preexisting project like, for instance, PyTorch Lightning or fast.ai.
- PyTorch Lightning
- [D] Are you using PyTorch or TensorFlow going into 2022?
Is the problem the sheer number of options, or the fact that they are all together in one place? Would it be better if they were organized into the different trainer entrypoints (fit, validate, ...)? If that is the case, there was an RFC proposing this which you might find interesting, feel free to drop by and comment on the issue: https://github.com/PyTorchLightning/pytorch-lightning/issues/10444
- [D] Colab TPU low performance
I wanted to make a quick performance comparison between the GPU (Tesla K80) and TPU (v2-8) available in Google Colab with PyTorch. To do so quickly, I used an MNIST example from pytorch-lightning that trains a simple CNN.
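For context, the comparison boils down to running one LightningModule against both accelerators. A rough sketch using the Lightning API of that era (the model and flags are illustrative, not the exact example code):

```python
import torch
from torch import nn
from torch.nn import functional as F
import pytorch_lightning as pl

class SimpleCNN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # 1x28x28 MNIST input -> conv -> pool -> linear classifier
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 14 * 14, 10),
        )

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# The same module can target either backend for the timing comparison:
# trainer = pl.Trainer(gpus=1, max_epochs=1)       # Colab Tesla K80
# trainer = pl.Trainer(tpu_cores=8, max_epochs=1)  # Colab TPU v2-8
# trainer.fit(SimpleCNN(), train_dataloader)       # train_dataloader: an MNIST DataLoader
```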
- [D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?
We've noticed GPU 0 on our 3-GPU system is sometimes idle (which would explain the performance differences). However, it's unclear to us why that may be. Similar to this issue.
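A common first mitigation for augmentation/data-loading bottlenecks is to push that work into DataLoader workers; a sketch with placeholder values, not a diagnosis of this particular setup:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-in dataset; in practice this is the Dataset that runs the augmentations
dataset = TensorDataset(
    torch.randn(1024, 3, 224, 224),
    torch.zeros(1024, dtype=torch.long),
)

loader = DataLoader(
    dataset,
    batch_size=64,
    num_workers=8,            # parallelize loading/augmentation across CPU cores
    pin_memory=True,          # faster host-to-GPU copies
    persistent_workers=True,  # keep workers alive between epochs (PyTorch >= 1.7)
)
```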
- [P] An introduction to PyKale (https://github.com/pykale/pykale), a PyTorch library that provides a unified pipeline-based API for knowledge-aware multimodal learning and transfer learning on graphs, images, texts, and videos to accelerate interdisciplinary research. Welcome feedback/contribution!
If you want a good example for reference, take a look at PyTorch Lightning's readme (https://github.com/PyTorchLightning/pytorch-lightning). It answers the three questions of "what is this", "why should I care", and "how do I use it" almost instantly.