autokeras vs tf-keras-vis

| | autokeras | tf-keras-vis |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 9,066 | 306 |
| Growth | 0.1% | - |
| Activity | 5.3 | 6.9 |
| Latest commit | about 1 month ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts mentioning autokeras
- Machine Learning Algorithms Cheat Sheet
- Ask HN: Which piece of tech is underutilized?
  "I think the interfaces aren't high level enough for the average programmer to adopt it. It needs what https://autokeras.com is for neural nets."
- Technical documentation that just works
- SVM training taking forever on my local machine. Will using AWS Sagemaker be faster for training SVM (Linear) models?
- [D] [P] How do you use tools like AutoML?
  "AutoKeras time_series_forecaster.py"
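The high-level workflow these mentions point to is AutoKeras's task API: pick a task class, let it search over candidate architectures, then use the best model like any Keras model. A minimal sketch, assuming the standard autokeras package and MNIST from tf.keras; the time-series forecaster mentioned above follows the same fit/predict pattern:

```python
import autokeras as ak
from tensorflow.keras.datasets import mnist

# Small image-classification dataset for illustration.
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# AutoKeras searches over candidate architectures; max_trials caps the search.
clf = ak.ImageClassifier(max_trials=1, overwrite=True)
clf.fit(x_train, y_train, epochs=1)

# Evaluate the best model found and export it as a plain tf.keras model.
print(clf.evaluate(x_test, y_test))
best_model = clf.export_model()
```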
Posts mentioning tf-keras-vis
- Help implementing an attention module in a DCGAN (question in comments)
  "Hello! I'm trying to implement the Deep Convolutional GAN of this paper: Weather GAN: Multi-Domain Weather Translation Using Generative Adversarial Networks by Xuelong Li, Kai Kou, Bin Zhao (arXiv) (architecture in image). The training data the authors used consists of images and corresponding segmentation masks with labels 0-5 for each pixel (0 being no weather-related pixel). I have crudely made the segmentation module G(seg), initial generator module G(init) and the discriminator D, but I don't understand how to do the attention module G(att). In the paper they mentioned they used pretrained weights of VGG19, but very little else is said about G(att). I found the https://github.com/keisen/tf-keras-vis library, which might help me, as I guess I would want G(att) to extract something like the saliency or activation maps multiplied with the image. However, I don't know what kind of layers I should use, or what practicalities to consider apart from the input and output. Should I transfer-learn the network with this data, and if so, with the segmentation labels (i.e. whether any label 1-5 is present in the attention pixel returned)? Or can I use the pretrained ImageNet weights? Also, does anyone know if the layer colors in the architecture image mean anything, or if they are just selected randomly for visualization? I'm especially concerned why G(att)'s last encoder layer (3rd layer) is colored differently from the first two. I was first thinking that maybe it means that G(att) is a module inside G(seg), but apparently not. The three middle blocks are apparently residual blocks."
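The saliency-map extraction the post is reaching for maps onto tf-keras-vis roughly as follows. A minimal sketch, assuming a recent tf-keras-vis release (0.8+) and the ImageNet-pretrained VGG19 from tf.keras; the random placeholder batch and class index 0 are stand-ins for real images and target labels:

```python
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from tf_keras_vis.saliency import Saliency
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Pretrained VGG19, which the paper reportedly uses for G(att).
model = VGG19(weights="imagenet", include_top=True)

# Placeholder batch: one 224x224 RGB image with values in 0-255.
images = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0
X = preprocess_input(images.copy())

# Score the class whose saliency we want; index 0 is a placeholder.
score = CategoricalScore([0])

# Swap the final softmax for a linear activation before taking gradients.
saliency = Saliency(model, model_modifier=ReplaceToLinear(), clone=True)
saliency_map = saliency(score, X)  # shape (batch, 224, 224)

# One crude "attention" signal: weight each input pixel by its saliency.
attended = images * np.asarray(saliency_map)[..., np.newaxis]
```

Gradcam (from tf_keras_vis.gradcam) follows the same score/model_modifier pattern if class-activation maps turn out to be a better fit than raw gradients.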
What are some alternatives?
autogluon - Fast and Accurate ML in 3 Lines of Code
pytorch-grad-cam - Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
mljar-supervised - Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation
chitra - A multi-functional library for full-stack Deep Learning. Simplifies Model Building, API development, and Model Deployment.
adanet - Fast and flexible AutoML with learning guarantees.
livelossplot - Live training loss plot in Jupyter Notebook for Keras, PyTorch and others
automlbenchmark - OpenML AutoML Benchmarking Framework
explainable-cnn - 📦 PyTorch based visualization package for generating layer-wise explanations for CNNs.
AutoViz - Automatically Visualize any dataset, any size with a single line of code. Created by Ram Seshadri. Collaborators Welcome. Permission Granted upon Request.
horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
NAS-Projects - Automated deep learning algorithms implemented in PyTorch. [Moved to: https://github.com/D-X-Y/AutoDL-Projects]
easy_explain - A library that helps to explain AI models in a really quick & easy way