MachineLearning-QandAI-book vs Fast-Transformer

| | MachineLearning-QandAI-book | Fast-Transformer |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 243 | 146 |
| Growth | - | - |
| Activity | 6.4 | 0.0 |
| Last commit | about 1 month ago | over 2 years ago |
| Primary language | Jupyter Notebook | Jupyter Notebook |
| License | BSD 3-Clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
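The exact formula behind the activity number is not published; the description above only says that recent commits are weighted more heavily than older ones. As an illustrative sketch (not the site's actual metric), one simple way to implement such a recency-weighted score is exponential decay, where each commit's contribution halves every `half_life_days` days; the function name and parameters here are assumptions for the example:

```python
from datetime import datetime, timedelta

def activity_score(commit_dates, now=None, half_life_days=30.0):
    """Toy recency-weighted activity score.

    Each commit contributes a weight that decays exponentially with
    its age, so recent commits dominate the total.
    """
    now = now or datetime.now()
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        # A commit made right now contributes 1.0; one made
        # `half_life_days` ago contributes 0.5, and so on.
        score += 0.5 ** (age_days / half_life_days)
    return score

now = datetime(2024, 1, 1)
recent = [now - timedelta(days=i) for i in (1, 2, 3)]
old = [now - timedelta(days=400 + i) for i in (1, 2, 3)]

# Three recent commits outweigh three year-old commits.
assert activity_score(recent, now=now) > activity_score(old, now=now)
```

This also explains why a repository that has not been touched in years (like Fast-Transformer above, with its last commit over two years ago) can show an activity of 0.0 despite a nontrivial commit history: the decayed weights of old commits become negligible.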
MachineLearning-QandAI-book
- Machine Learning and AI Beyond the Basics Book
- Machine Learning Q and AI (Book by Sebastian Raschka)
A new book explaining more advanced concepts in machine learning, deep learning, and AI.
A little note, though: it's not a coding book; it's more focused on concepts. I do have supplementary materials with code examples for some chapters where it makes sense: https://github.com/rasbt/MachineLearning-QandAI-book
Fast-Transformer
What are some alternatives?
Deep-Learning-With-TensorFlow - All the resources and hands-on exercises for you to get started with Deep Learning in TensorFlow
reformer-pytorch - Reformer, the efficient Transformer, in Pytorch
Perceiver - Implementation of Perceiver, General Perception with Iterative Attention in TensorFlow
Conformer - An implementation of Conformer: Convolution-augmented Transformer for Speech Recognition, a Transformer Variant in TensorFlow/Keras
machine-learning-experiments - 🤖 Interactive Machine Learning experiments: 🏋️models training + 🎨models demo
TimeSformer-pytorch - Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
Transformer-in-Transformer - An Implementation of Transformer in Transformer in TensorFlow for image classification, attention inside local patches
embedding-encoder - Scikit-Learn compatible transformer that turns categorical variables into dense entity embeddings.
vision_models_playground - Playground for testing and implementing various Vision Models
language-planner - Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"
handwritten-digits-recognizer-webapp - This is my first experience with machine learning
x-transformers - A simple but complete full-attention transformer with a set of promising experimental features from various papers