DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • auto-attack

    Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"

  • I'm less familiar with poisoning, but at least for test-time robustness, the current benchmark for image classifiers is AutoAttack [0,1]. It's an ensemble of adaptive, parameter-free gradient-based and black-box attacks. Submitted academic work is typically considered incomplete without an evaluation on AA (and sometimes DeepFool [2]). It is good to see that both are included in ART.

    [0] https://arxiv.org/abs/2003.01690

    [1] https://github.com/fra31/auto-attack

    [2] https://arxiv.org/abs/1511.04599
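The comment above points to the standalone auto-attack package; below is a minimal, hypothetical sketch of running it against a PyTorch image classifier. The tiny CNN and the random, CIFAR-10-shaped tensors are placeholders (not from the original post), and the constructor arguments mirror the usage shown in the repository's README, so they should be checked against the installed version.

```python
# Hypothetical sketch: evaluating a PyTorch classifier with the standalone
# auto-attack package (https://github.com/fra31/auto-attack).
# The model and data below are placeholders for illustration only.
import torch
import torch.nn as nn
from autoattack import AutoAttack

# Placeholder classifier that returns logits; AutoAttack expects inputs in [0, 1].
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
).eval()

x_test = torch.rand(64, 3, 32, 32)    # images scaled to [0, 1]
y_test = torch.randint(0, 10, (64,))  # integer class labels

# 'standard' runs the parameter-free ensemble of white-box (APGD, FAB)
# and black-box (Square) attacks under an L-infinity budget of 8/255.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard', device='cpu')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=64)

# Robust accuracy = clean-label accuracy measured on the adversarial examples.
with torch.no_grad():
    robust_acc = (model(x_adv).argmax(dim=1) == y_test).float().mean().item()
print(f"robust accuracy under AutoAttack: {robust_acc:.3f}")
```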

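Since the comment notes that both AutoAttack and DeepFool ship with ART (the Adversarial Robustness Toolbox), here is a similarly hedged sketch of the same evaluation routed through ART's evasion-attack API. The class and parameter names follow ART's documented interface, but versions differ, so treat this as an outline rather than a definitive recipe; the model and data are again placeholders.

```python
# Hypothetical sketch: robustness evaluation via ART (adversarial-robustness-toolbox),
# using its AutoAttack and DeepFool implementations. Placeholders throughout.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import AutoAttack, DeepFool

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
).eval()

# ART wraps the framework-specific model behind a common estimator interface.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x_test = np.random.rand(64, 3, 32, 32).astype(np.float32)
y_test = np.random.randint(0, 10, size=64)

# AutoAttack ensemble under an L-infinity budget of 8/255.
aa = AutoAttack(estimator=classifier, norm=np.inf, eps=8 / 255, batch_size=64)
x_adv_aa = aa.generate(x=x_test, y=y_test)

# DeepFool: a minimal-perturbation attack, sometimes reported alongside AutoAttack.
df = DeepFool(classifier, max_iter=50, batch_size=64)
x_adv_df = df.generate(x=x_test)

for name, x_adv in [("AutoAttack", x_adv_aa), ("DeepFool", x_adv_df)]:
    preds = classifier.predict(x_adv).argmax(axis=1)
    print(f"robust accuracy under {name}: {(preds == y_test).mean():.3f}")
```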
NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; a higher number indicates a more popular project.


Related posts

  • [D] Testing a model's robustness to adversarial attacks

    2 projects | /r/MachineLearning | 30 Jan 2021
  • Show HN: Times faster LLM evaluation with Bayesian optimization

    6 projects | news.ycombinator.com | 13 Feb 2024
  • Looking for contributors to an AI security project

    1 project | /r/opensource | 7 Dec 2023
  • [P] Plexiglass: a toolbox for testing against adversarial attacks in DNNs and LLMs.

    1 project | /r/MachineLearning | 7 Dec 2023
  • Safety in Deep Reinforcement Learning

    1 project | /r/programming | 6 Dec 2023