robust-models-transfer
Official repository for our NeurIPS 2020 *oral* "Do Adversarially Robust ImageNet Models Transfer Better?" (by microsoft)
modelscan
Protection against Model Serialization Attacks (by protectai)
| | robust-models-transfer | modelscan |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 239 | 213 |
| Growth | - | 19.2% |
| Activity | 10.0 | 8.6 |
| Last commit | 11 months ago | 17 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
robust-models-transfer
Posts with mentions or reviews of robust-models-transfer.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-09-18.
- 38TB of data accidentally exposed by Microsoft AI researchers
Looks like it was up for 2 years with that old link[1]. Fixed two months ago.
[1] https://github.com/microsoft/robust-models-transfer/blame/a9...
modelscan
Posts with mentions or reviews of modelscan.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-09-18.
- Malicious AI models on Hugging Face backdoor users' machines
Full disclosure: I am head of product at Protect AI. To make this easier for everyone, we have an open-source tool (friendly licensing) called ModelScan: https://github.com/protectai/modelscan/tree/main. I wouldn't be shocked if they are using this under the hood, but all the best if they are!
- 38TB of data accidentally exposed by Microsoft AI researchers
Disclosure: I work for the company that released this (https://github.com/protectai/modelscan), and we do have a tool that supports scanning many models for this kind of problem.
That said, you should be using something like safetensors.
- ModelScan – open-source scanning for unsafe code in ML models
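The mentions above suggest two mitigations for model-serialization attacks: scanning existing model files and moving to a non-pickle format. As a minimal sketch of the first, here is how invoking the ModelScan CLI from Python might look; the `pip install modelscan` / `modelscan -p <path>` usage follows the project README, while the subprocess wrapper and the file name are illustrative assumptions, not from this page.

```python
# Hedged sketch: run the ModelScan CLI against a pickle-based checkpoint.
# "suspect_model.pkl" is a made-up example path.
import subprocess

report = subprocess.run(
    ["modelscan", "-p", "suspect_model.pkl"],  # -p points at a model file or directory
    capture_output=True,
    text=True,
)
print(report.stdout)  # human-readable report of anything flagged during the scan
```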
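For the second suggestion ("use something like safetensors"), the point is that safetensors stores raw tensor buffers plus a small header, so loading never unpickles arbitrary Python objects. A minimal sketch using the `safetensors` PyTorch helpers; the tensors and file name are illustrative.

```python
# Hedged sketch: save and load a checkpoint with safetensors instead of pickle.
import torch
from safetensors.torch import save_file, load_file

state = {"weight": torch.randn(4, 4), "bias": torch.zeros(4)}

save_file(state, "model.safetensors")       # write tensors without pickle
restored = load_file("model.safetensors")   # read tensor data back; no code is executed
print(restored["weight"].shape)
```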
What are some alternatives?
When comparing robust-models-transfer and modelscan you can also consider the following projects:
onnx - Open standard for machine learning interoperability