pytorch-lightning vs Keras
| | pytorch-lightning | Keras |
|---|---|---|
| Mentions | 17 | 48 |
| Stars | 18,442 | 55,214 |
| Growth | 3.3% | 0.7% |
| Activity | 9.9 | 9.9 |
| Latest commit | 4 days ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytorch-lightning
- Watch out for the (PyTorch) Lightning
Join their Slack to ask the community questions and check out the project on GitHub.
- [P] Composer: a new PyTorch library to train models ~2-4x faster with better algorithms
PyTorch Lightning benchmarks against plain PyTorch on every PR (benchmarks to make sure that it is not slower).
- [D] What Repetitive Tasks Related to Machine Learning do You Hate Doing?
There is already a ton of momentum around automating ML workflows. I would suggest you contribute to a preexisting project such as PyTorch Lightning or fast.ai.
- PyTorch Lightning
- [D] Are you using PyTorch or TensorFlow going into 2022?
Is the problem the sheer number of options, or the fact that they are all together in one place? Would it be better if they were organized into the different trainer entrypoints (fit, validate, ...)? If that is the case, there was an RFC proposing this which you might find interesting, feel free to drop by and comment on the issue: https://github.com/PyTorchLightning/pytorch-lightning/issues/10444
- [D] Colab TPU low performance
I wanted to make a quick performance comparison between the GPU (Tesla K80) and TPU (v2-8) available in Google Colab with PyTorch. To do so quickly, I used an MNIST example from pytorch-lightning that trains a simple CNN.
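For context, a minimal sketch of the kind of pytorch-lightning MNIST setup described there (not the exact example from the post; the module, batch size, and Trainer flags are illustrative and assume a recent pytorch-lightning release):

```python
# Illustrative sketch: a simple CNN trained on MNIST with pytorch-lightning.
# Switching the accelerator (e.g. GPU vs. TPU runtime in Colab) is the only
# change needed to compare hardware, which is the point of such a comparison.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import pytorch_lightning as pl

class LitCNN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(), nn.MaxPool2d(2),  # 28x28 -> 13x13
            nn.Flatten(), nn.Linear(32 * 13 * 13, 10),
        )

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

train_loader = DataLoader(
    datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=64,
)
trainer = pl.Trainer(max_epochs=1, accelerator="auto", devices="auto")
trainer.fit(LitCNN(), train_loader)
```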
- [D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?
We've noticed GPU 0 on our 3-GPU system is sometimes idle (which would explain performance differences). However, it's unclear to us why that may be. Similar to this issue.
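A common mitigation (a sketch under assumptions, not the posters' actual setup) is to push loading and augmentation into parallel DataLoader workers so the GPUs are not starved while batches are prepared on the CPU:

```python
# Illustrative sketch: parallelize CPU-side batch preparation in PyTorch.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(512, 3, 64, 64), torch.randint(0, 10, (512,)))
loader = DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,
    num_workers=8,      # CPU worker processes preparing batches in parallel
    pin_memory=True,    # faster host-to-GPU transfers
    prefetch_factor=2,  # batches each worker keeps ready ahead of time
)
```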
- [P] An introduction to PyKale https://github.com/pykale/pykale, a PyTorch library that provides a unified pipeline-based API for knowledge-aware multimodal learning and transfer learning on graphs, images, texts, and videos to accelerate interdisciplinary research. Welcome feedback/contribution!
If you want a good example for reference, take a look at PyTorch Lightning's readme (https://github.com/PyTorchLightning/pytorch-lightning). It answers the 3 questions of "what is this", "why should I care", and "how do I use it" almost instantly.
- Pytorch Template
- [D] Advanced Takeaways from fast.ai book
Lower-precision training can help, and in PyTorch Lightning it is just a simple flag you can set (sketched below).
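A minimal sketch of that flag (assuming a GPU is available, since 16-bit mixed precision generally requires one, and a recent pytorch-lightning release; the tiny model and random data are illustrative):

```python
# Illustrative sketch: 16-bit (mixed) precision in pytorch-lightning is a
# single Trainer argument; everything else stays the same.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(16, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())

data = DataLoader(TensorDataset(torch.randn(256, 16), torch.randn(256, 1)), batch_size=32)
trainer = pl.Trainer(max_epochs=1, accelerator="gpu", devices=1, precision=16)
trainer.fit(TinyModel(), data)
```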
Keras
- Should you shuffle the input for a word2vec negative sampling model before or after assigning negative context pairs for each target word?
I may have a few trust issues with the shuffle argument of keras' model.fit(), after experiencing this bug regarding shuffle='batch' first hand.
- Reciclaje 3.0
- Has anyone ever experienced this? Details in the comments.
It seems like other people have had this issue, as another user mentioned, when using dropout or BN (https://github.com/keras-team/keras/issues/6977); the model does use dropout, so that may be it.
- How to define max_queue_size, workers and use_multiprocessing in keras fit_generator()?
Detailed explanation of model.fit_generator() parameters: queue size, workers and use_multiprocessing
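A minimal sketch of where those three arguments go (the toy generator and model are illustrative, not from the post; on newer Keras versions fit_generator() is deprecated and the same arguments are accepted by model.fit()):

```python
# Illustrative sketch: max_queue_size, workers and use_multiprocessing in
# fit_generator() control how batches are produced and buffered.
import numpy as np
from tensorflow import keras

class RandomBatches(keras.utils.Sequence):
    """Toy generator yielding random (x, y) batches."""
    def __len__(self):
        return 100

    def __getitem__(self, idx):
        return np.random.rand(32, 8), np.random.randint(0, 2, size=(32, 1))

model = keras.Sequential([keras.layers.Dense(1, activation="sigmoid", input_shape=(8,))])
model.compile(optimizer="adam", loss="binary_crossentropy")

model.fit_generator(
    RandomBatches(),
    epochs=1,
    max_queue_size=10,         # batches prefetched into the internal queue
    workers=4,                 # parallel workers filling that queue
    use_multiprocessing=True,  # process-based instead of thread-based workers
)
```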
- Negative dimension size caused by subtracting 3 from 1 for 'Conv2D'
I'm using Keras with TensorFlow as the backend; here is my code:
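The original code isn't reproduced in the excerpt, so here is a minimal illustrative sketch of the pattern that typically triggers this error: stacking 'valid'-padded convolutions and pooling until the feature map shrinks to 1x1, after which a 3x3 kernel no longer fits.

```python
# Illustrative sketch (not the poster's code): the feature map shrinks to 1x1,
# so the last 3x3 Conv2D cannot be applied. Depending on the version, Keras
# reports this as a negative dimension size or an output dimension <= 0.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(8, 8, 1)),  # -> 6x6
    layers.MaxPooling2D((2, 2)),                                          # -> 3x3
    layers.Conv2D(64, (3, 3), activation="relu"),                         # -> 1x1
    layers.Conv2D(64, (3, 3), activation="relu"),                         # error here
])
```

Typical fixes are padding="same" on the convolutions or a larger input size.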
- Python or javascript?
- How to get reproducible results in keras
Install Keras (http://keras.io/)
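The usual seeding steps look roughly like this (a sketch assuming a TensorFlow 2.x backend; full determinism may also require single-threaded execution and deterministic ops):

```python
# Illustrative sketch: seed every source of randomness before building the model.
import os
import random

import numpy as np
import tensorflow as tf

os.environ["PYTHONHASHSEED"] = "0"
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
```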
- 20+ Free Tools & Resources for Machine Learning
Keras: an API for neural networks that makes quick research easy.
- Installing Python3 in Linux
According to IBM, Artificial Intelligence (AI) is technology that instructs computers to mimic the human mind in decision-making and problem-solving. Machine Learning (ML) is a subset of AI that consists of procedures that leverage mathematical data models and algorithms to make predictions. Python implements ML and AI with generally fewer lines of code and pre-built libraries, and as a scientific language it also has strong support for these technologies. Some of the libraries used in AI and ML include TensorFlow, Scikit-Learn, NumPy, Keras, and Theano.
- How to build your own chatbot NLP engine
At the core of the Xatkit NLU engine we have a Keras/Tensorflow model.
What are some alternatives?
MLP Classifier - A handwritten multilayer perceptron classifier using numpy.
scikit-learn - scikit-learn: machine learning in Python
mmdetection - OpenMMLab Detection Toolbox and Benchmark
detectron2 - Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
tensorflow - An Open Source Machine Learning Framework for Everyone
xgboost - Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
pytorch-grad-cam - Many Class Activation Map methods implemented in Pytorch for CNNs and Vision Transformers. Examples for classification, object detection, segmentation, embedding networks and more. Including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM
TFLearn - Deep learning library featuring a higher-level API for TensorFlow.
Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
sparktorch - Train and run Pytorch models on Apache Spark.
Sacred - Sacred is a tool to help you configure, organize, log and reproduce experiments developed at IDSIA.