KitanaQA
auto-attack
| | KitanaQA | auto-attack |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 57 | 607 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest Commit | 9 months ago | 3 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
KitanaQA
- Ask HN: Who is hiring? (February 2021)
Searchable.ai | Full Stack Engineer | Full Time | Remote U.S.
We help our users find their stuff wherever it's stored as we build the future of enterprise search.
We are hiring a full-stack engineer with Rails experience to join our growing engineering team. Our stack includes: Rails, Electron, Webpack, PostgreSQL, Elasticsearch & Kubernetes.
This position will also have the opportunity to help integrate our research team's SOTA work into the product, enabling users to ask questions across their files (see: https://github.com/searchableai/kitanaqa).
Full description here: https://www.searchable.ai/full-stack-engineer/ and drop us a line at careers at searchable dot ai if you're interested!
auto-attack
- DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses
I'm less familiar with poisoning, but at least for test-time robustness, the current benchmark for image classifiers is AutoAttack [0,1]. It's an ensemble of adaptive, parameter-free gradient-based and black-box attacks. Submitted academic work is typically considered incomplete without an evaluation on AA (and sometimes DeepFool [2]). It is good to see that both are included in ART.
[0] https://arxiv.org/abs/2003.01690
[1] https://github.com/fra31/auto-attack
[2] https://arxiv.org/abs/1511.04599
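
To make that concrete, here is a minimal sketch of running both attacks through ART with PyTorch; the tiny linear model, random batch, and epsilon are placeholders, and keyword names may differ slightly between ART releases:

```python
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import AutoAttack, DeepFool

# Placeholder model: any torch.nn.Module that outputs logits will do.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

# Wrap the model so ART's attacks can query it.
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder batch standing in for a real test set (e.g. CIFAR-10 scaled to [0, 1]).
x_test = np.random.rand(8, 3, 32, 32).astype(np.float32)

# Both attacks discussed above ship as evasion attacks in ART.
x_adv_aa = AutoAttack(estimator=classifier, norm=np.inf, eps=8 / 255).generate(x=x_test)
x_adv_df = DeepFool(classifier=classifier).generate(x=x_test)
```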
- [D] Testing a model's robustness to adversarial attacks
A better method is to use AutoAttack from Croce et al. (https://github.com/fra31/auto-attack), which is much more robust to gradient masking. It's actually a combination of four attacks (three white-box and one black-box) with good default hyperparameters. It's not perfect, but it gives a more accurate robustness estimate.
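
For reference, a minimal sketch of that standard evaluation with the reference implementation, following the usage shown in the fra31/auto-attack README; the model, data, and epsilon here are placeholders for a real robustly trained classifier and test set:

```python
import torch
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

# Placeholder classifier; AutoAttack expects logits out and inputs in [0, 1].
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 10),
).eval()

# Placeholder batch standing in for a real test set such as CIFAR-10.
x_test = torch.rand(16, 3, 32, 32)
y_test = torch.randint(0, 10, (16,))

# version='standard' runs the full ensemble: APGD-CE, APGD-T, FAB-T, and Square.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')

# Reports per-attack robust accuracy and returns the adversarial examples.
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=16)
```

The 8/255 budget is the common Linf setting for CIFAR-10; in practice you would swap in your own trained model and full test set.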
What are some alternatives?
TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP (https://textattack.readthedocs.io/en/master/)
adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
FinBERT-QA - Financial Domain Question Answering with pre-trained BERT Language Model
akvo-flow - A data collection and monitoring tool that works anywhere.
DeepRobust - A pytorch adversarial library for attack and defense methods on images and graphs
ozone - Scalable, redundant, and distributed object store for Apache Hadoop
alpha-beta-CROWN - alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)
eClaire - Trello card printer
OpenAttack - An Open-Source Package for Textual Adversarial Attack.
inltk - Natural Language Toolkit for Indic Languages, which aims to provide out-of-the-box support for various NLP tasks that an application developer might need
bertviz - BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)