xfer
Transfer Learning library for Deep Neural Networks. (by amzn)
PaddlePaddle
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (core framework of "PaddlePaddle" ("Flying Paddle"): high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning) (by PaddlePaddle)
| | xfer | PaddlePaddle |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 250 | 21,625 |
| Growth | 0.0% | 0.5% |
| Activity | 0.0 | 10.0 |
| Last commit | 10 months ago | about 4 hours ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xfer
Posts with mentions or reviews of xfer. We have used some of these posts to build our list of alternatives and similar projects.
- [R] Fast Adaptation with Linearized Neural Networks
Abstract: The inductive biases of trained neural networks are difficult to understand and, consequently, to adapt to new settings. We study the inductive biases of linearizations of neural networks, which we show to be surprisingly good summaries of the full network functions. Inspired by this finding, we propose a technique for embedding these inductive biases into Gaussian processes through a kernel designed from the Jacobian of the network. In this setting, domain adaptation takes the form of interpretable posterior inference, with accompanying uncertainty estimation. This inference is analytic and free of local optima issues found in standard techniques such as fine-tuning neural network weights to a new task. We develop significant computational speed-ups based on matrix multiplies, including a novel implementation for scalable Fisher vector products. Our experiments on both image classification and regression demonstrate the promise and convenience of this framework for transfer learning, compared to neural network fine-tuning. Code is available at this https URL.
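The kernel the abstract describes can be illustrated with a small sketch. This is a hypothetical illustration, not the paper's released code: it linearizes a tiny scalar-output network around its trained weights and forms the Jacobian ("empirical NTK") Gram matrix k(x, x') = J(x) J(x')ᵀ that the Gaussian-process posterior would be built from. The model, shapes, and finite-difference Jacobian are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(params, x):
    """Tiny one-hidden-layer MLP with a scalar output per input point."""
    W1, b1, w2 = params
    h = np.tanh(x @ W1 + b1)
    return h @ w2

def flatten(params):
    return np.concatenate([p.ravel() for p in params])

def unflatten(vec, params):
    out, i = [], 0
    for p in params:
        out.append(vec[i:i + p.size].reshape(p.shape))
        i += p.size
    return out

def jacobian(params, x, eps=1e-5):
    """Finite-difference Jacobian of model outputs w.r.t. all parameters."""
    theta = flatten(params)
    rows = []
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = eps
        fp = model(unflatten(theta + d, params), x)
        fm = model(unflatten(theta - d, params), x)
        rows.append((fp - fm) / (2 * eps))
    return np.stack(rows, axis=1)  # shape (n_points, n_params)

# Random weights stand in for a converged ("trained") network.
params = [rng.normal(size=(2, 8)), np.zeros(8), rng.normal(size=8)]
X = rng.normal(size=(5, 2))

J = jacobian(params, X)
K = J @ J.T  # Jacobian-kernel Gram matrix, shape (5, 5)
```

With `K` in hand, adaptation to a new task reduces to standard GP regression (analytic posterior mean and variance), which is the "interpretable posterior inference, with accompanying uncertainty estimation" the abstract refers to; the paper's matrix-multiply and Fisher-vector-product speed-ups replace the naive loop above at scale.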
PaddlePaddle
Posts with mentions or reviews of PaddlePaddle.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-05-16.
- List of AI-Models
- Ask HN: Are there any notable Chinese FLOSS projects?
PaddlePaddle?
https://github.com/PaddlePaddle/Paddle
Also, Baidu have quite a few OSS projects out there in general.
https://github.com/baidu
- Volcano vs Yunikorn vs Knative
Volcano is a batch scheduler built on top of kube-batch, targeting spark-operator, plain old MPI, PaddlePaddle, and Cromwell HPC workloads.
- Baidu AI Researchers Introduce SE-MoE That Proposes Elastic MoE Training With 2D Prefetch And Fusion Communication Over Hierarchical Storage
Check out the paper and GitHub.
- I have issue with only __habs for half datatype? Please help!
- Alternatives to Google Colab?