hyperparameter vs pytorch-lightning
| | hyperparameter | pytorch-lightning |
|---|---|---|
| Mentions | 7 | 8 |
| Stars | 23 | 26,883 |
| Growth | - | 2.0% |
| Activity | 6.9 | 9.9 |
| Last commit | about 1 month ago | 2 days ago |
| Language | Rust | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hyperparameter
- Hyper-parameter Optimization with Optuna and hyperparameter
The full tutorial: https://github.com/reiase/hyperparameter/tree/master/examples/optuna
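Below is a minimal sketch of what the combination looks like, assuming the `auto_param`/`param_scope` API shown in the project README; the training function and search ranges are hypothetical stand-ins for the linked tutorial:

```python
import optuna
from hyperparameter import auto_param, param_scope

@auto_param
def train(lr=0.01, momentum=0.9):
    # Stand-in for a real training run; returns a loss to minimize.
    return (lr - 0.05) ** 2 + (momentum - 0.8) ** 2

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    momentum = trial.suggest_float("momentum", 0.5, 0.99)
    # Override train()'s defaults for this trial only, without editing it.
    with param_scope(**{"train.lr": lr, "train.momentum": momentum}):
        return train()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```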
- Pythonic configuration framework?
When I was working on my own configuration framework (HyperParameter, previous post), I suddenly realized that what I wanted was not another configuration framework with a fancy API. All I wanted was to change my ML experiments without modifying the code, and to get rid of the configuration-handling code. The right approach is not writing configurable code and wasting time on different frameworks; the best solution is a tool that makes your code configurable.
- hyperparameter, a lightweight configuration framework
GitHub: https://github.com/reiase/hyperparameter
- HyperParameter for ML Models and Systems
HyperParameter is a configuration and parameter-management library for Python. It provides the following features:
- What is the best practice for injecting configuration into a Python application
You can take a look at https://github.com/reiase/hyperparameter, which provides a scoped, thread-safe config object and is quite lightweight. There is no need to modify much code:
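For illustration, a minimal sketch of the scoped-override pattern, assuming the `auto_param`/`param_scope` API from the README; the `connect` function and its parameters are hypothetical:

```python
from hyperparameter import auto_param, param_scope

@auto_param  # exposes the keyword arguments as "connect.*" parameters
def connect(host="localhost", port=5432, timeout=30):
    print(f"connecting to {host}:{port} (timeout={timeout}s)")

connect()  # uses the defaults: localhost:5432

# Override the configuration for everything inside the scope,
# without touching connect() itself:
with param_scope(**{"connect.host": "db.example.com", "connect.port": 6432}):
    connect()  # connects to db.example.com:6432
```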
- [P] Modify Hyperparameters Easily
I'm developing a hyperparameter-tuning toolbox for my machine learning projects. It maps keyword arguments to hyper-parameters, for example:
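A minimal sketch of that mapping, again assuming the `auto_param`/`param_scope` API from the README; scopes nest, with the innermost override winning (names are illustrative):

```python
from hyperparameter import auto_param, param_scope

@auto_param
def train(lr=0.01, batch_size=32):
    print(f"lr={lr}, batch_size={batch_size}")

train()  # lr=0.01, batch_size=32

with param_scope(**{"train.lr": 0.1}):
    train()  # lr=0.1, batch_size=32
    with param_scope(**{"train.batch_size": 64}):
        train()  # nested scope: lr=0.1, batch_size=64
```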
- A hyper-parameter toolbox for data scientists and machine-learning engineers
I'm developing [a toolbox for managing hyper-parameters](https://github.com/reiase/hyperparameter) in my data science and machine learning projects. It provides an object-style API for nested dicts (which are very common in config files):
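To illustrate the idea, here is a generic sketch of attribute-style access over a nested dict, not the library's exact implementation:

```python
# A generic illustration: attribute access walks the nested dict,
# so config.model.lr reads config["model"]["lr"].
class DictView:
    def __init__(self, data):
        self._data = data

    def __getattr__(self, key):
        value = self._data[key]
        return DictView(value) if isinstance(value, dict) else value

config = DictView({"model": {"lr": 0.01, "layers": [64, 64]}, "seed": 42})
print(config.model.lr)      # 0.01
print(config.model.layers)  # [64, 64]
print(config.seed)          # 42
```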
pytorch-lightning
- Lightning AI Studios – A persistent GPU cloud environment
- How do I get started with artificial intelligence?
https://see.stanford.edu/Course/CS229 https://lightning.ai/ https://www.youtube.com/watch?v=00s9ireCnCw&t=57s https://towardsdatascience.com/
- Best practice for saving logits/activation values of a model in PyTorch Lightning
I've been wondering what the recommended method is for saving logits/activations using PyTorch Lightning. I've looked at Callbacks, Loggers, and ModelHooks, but none of the use cases seem to cover this kind of activity (even if I were to create my own custom variants of each utility). The ModelCheckpoint callback makes me feel like a custom Callback would be the way to go, but I'm not quite sure. This closed GitHub issue does address my issue to some extent.
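One possible approach, sketched below under the custom-Callback assumption: collect whatever `validation_step` returns for each batch and write it to disk at the end of the epoch (the callback name and output path are hypothetical):

```python
import torch
from pytorch_lightning.callbacks import Callback

class SaveLogitsCallback(Callback):
    def __init__(self, out_path="logits.pt"):
        self.out_path = out_path
        self.logits = []

    def on_validation_batch_end(self, trainer, pl_module, outputs,
                                batch, batch_idx, dataloader_idx=0):
        # `outputs` is whatever the LightningModule's validation_step
        # returned for this batch; here we assume it is the logits tensor.
        self.logits.append(outputs.detach().cpu())

    def on_validation_epoch_end(self, trainer, pl_module):
        torch.save(torch.cat(self.logits), self.out_path)
        self.logits.clear()

# Usage: pass it to the Trainer alongside any other callbacks, e.g.
# trainer = pytorch_lightning.Trainer(callbacks=[SaveLogitsCallback()])
```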
- New to ML, which is easier to learn - TensorFlow or PyTorch?
- PyTorch Lightning – DL framework to train, deploy, and ship AI fast
- We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning, and Hugging Face. Thanks for open-sourcing!
- An elegant and strong PyTorch Trainer
For lightweight use, pytorch-lightning is too heavy, and its source code is very difficult for beginners to read, at least for me.
- [D] Mixed Precision Training: Difference between BF16 and FP16
For the A100 GPU, theoretical performance is the same for FP16 and BF16, and both use the same number of bits, so memory usage should be the same. However, since BF16 support is quite new in PyTorch, performance still seems to depend on the underlying operators used (PyTorch Lightning debugging in progress here).
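For reference, a minimal sketch of switching between the two modes; the `precision` flag values below are for Lightning 2.x and may differ in older releases:

```python
import torch
import pytorch_lightning as pl

fp16_trainer = pl.Trainer(precision="16-mixed")    # FP16 autocast + loss scaling
bf16_trainer = pl.Trainer(precision="bf16-mixed")  # BF16 autocast, usually no loss scaling

# Plain-PyTorch equivalent of the BF16 path: same 16-bit width as FP16,
# but more exponent bits, so it tolerates larger value ranges.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = torch.randn(8, 16) @ torch.randn(16, 8)
```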
What are some alternatives?
towhee - Towhee is a framework that is dedicated to making neural data processing pipelines simple and fast.
lnd - Lightning Network Daemon ⚡️
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
Eclair - A scala implementation of the Lightning Network.
Dependency Injector - Dependency injection framework for Python
mmdetection - OpenMMLab Detection Toolbox and Benchmark
streamlit - Streamlit — A faster way to build and share data apps.
composer - Supercharge Your Model Training
keras - Deep Learning for humans [Moved to: https://github.com/keras-team/keras]
umbrel - A beautiful home server OS for self-hosting with an app store. Buy a pre-built Umbrel Home with umbrelOS, or install on a Raspberry Pi 4, Pi 5, any Ubuntu/Debian system, or a VPS.
lance - Modern columnar data format for ML and LLMs implemented in Rust. Convert from parquet in 2 lines of code for 100x faster random access, vector index, and data versioning. Compatible with Pandas, DuckDB, Polars, Pyarrow, with more integrations coming.
Keras - Deep Learning for humans