xfer
Transfer Learning library for Deep Neural Networks. (by amzn)
d2l-en
Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge. (by d2l-ai)
| | xfer | d2l-en |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 250 | 21,704 |
| Growth | 0.0% | 1.3% |
| Activity | 0.0 | 8.5 |
| Last commit | 10 months ago | 11 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
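The site does not publish its exact activity formula, but the idea of weighting recent commits more heavily can be sketched as a recency-weighted commit count with exponential decay (the half-life and decay shape here are assumptions, not the site's actual parameters):

```python
import math

def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes a weight
    that halves every `half_life_days`. The decay shape and half-life
    are assumed for illustration, not the site's published formula."""
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * age) for age in commit_ages_days)

# A project with recent commits scores higher than one with the same
# number of commits made long ago.
recent = activity_score([1, 3, 7])      # commits in the last week
stale = activity_score([200, 250, 300]) # commits from many months ago
```

Under any such scheme, the same commit count yields a higher score when the commits are recent, which matches the description above.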
xfer
Posts with mentions or reviews of xfer.
We have used some of these posts to build our list of alternatives
and similar projects.
- [R] Fast Adaptation with Linearized Neural Networks
Abstract: The inductive biases of trained neural networks are difficult to understand and, consequently, to adapt to new settings. We study the inductive biases of linearizations of neural networks, which we show to be surprisingly good summaries of the full network functions. Inspired by this finding, we propose a technique for embedding these inductive biases into Gaussian processes through a kernel designed from the Jacobian of the network. In this setting, domain adaptation takes the form of interpretable posterior inference, with accompanying uncertainty estimation. This inference is analytic and free of local optima issues found in standard techniques such as fine-tuning neural network weights to a new task. We develop significant computational speed-ups based on matrix multiplies, including a novel implementation for scalable Fisher vector products. Our experiments on both image classification and regression demonstrate the promise and convenience of this framework for transfer learning, compared to neural network fine-tuning. Code is available at this https URL.
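The kernel the abstract describes, built from the Jacobian of the network with respect to its parameters, can be sketched for a tiny hand-differentiated network (this is an illustrative stand-in, not the paper's implementation; the paper works with full deep networks and scalable Fisher vector products):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network f(x) = v . tanh(W x), differentiated by
# hand so the sketch needs only NumPy. W and v play the role of trained
# weights; here they are random placeholders.
W = rng.normal(size=(16, 4))
v = rng.normal(size=16)

def jacobian(x):
    """Gradient of f(x) with respect to all parameters (W flattened, then v)."""
    h = np.tanh(W @ x)
    dW = np.outer(v * (1 - h**2), x)  # df/dW_ij = v_i (1 - h_i^2) x_j
    dv = h                            # df/dv_i  = h_i
    return np.concatenate([dW.ravel(), dv])

def jacobian_kernel(X):
    """k(x, x') = J(x) . J(x'): the kernel induced by linearizing the
    network around its current parameters."""
    J = np.stack([jacobian(x) for x in X])
    return J @ J.T

X = rng.normal(size=(5, 4))
K = jacobian_kernel(X)
# K is symmetric positive semi-definite, so it is a valid GP kernel;
# domain adaptation then becomes posterior inference under this GP.
```

Because the kernel is a Gram matrix of Jacobians, the resulting GP regression is analytic, which is what lets the paper replace fine-tuning with closed-form posterior inference.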
d2l-en
Posts with mentions or reviews of d2l-en.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-04-10.
- Which book to choose for deep learning: Ian Goodfellow or François Chollet?
- d2l-en: Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 400 universities from 60 countries including Stanford, MIT, Harvard, and Cambridge.
- How to pre-train BERT on different objective tasks using HuggingFace
There may be a library for pre-training BERT models in Hugging Face, but I suggest that you train a BERT model in native PyTorch to understand the details. Mu Li's course is recommended.
- The Transformer in Machine Translation
GitHub's article on Dive into Deep Learning
- D2l-En
- I created a way to learn machine learning through Jupyter
There are actually some online books and courses built on Jupyter Notebook ([Dive into Deep Learning](https://github.com/d2l-ai/d2l-en), for example). However, yours is more detailed and could really help beginners.