Python adversarial-attacks

Open-source Python projects categorized as adversarial-attacks

Top 18 Python adversarial-attack Projects

  • adversarial-robustness-toolbox

    Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

  • TextAttack

    TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/

  • Project mention: Preprocessing methods besides stop words, regular expressions, lemmatization and stemming for an NLP classification problem | /r/MLQuestions | 2023-06-09

    Could have a look at what's available in the augmentor here https://github.com/QData/TextAttack. I'm not experienced with NLP so I may be way off here

  • InfluxDB

    Power Real-Time Data Analytics at Scale. Get real-time insights from all types of time series data with InfluxDB. Ingest, query, and analyze billions of data points in real-time with unbounded cardinality.

    InfluxDB logo
  • foolbox

    A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

  • Project mention: More snake-oil | /r/DefendingAIArt | 2023-06-26

    Go ahead, play with any adversarial attacks from https://github.com/bethgelab/foolbox you will not find an attack that is both robust to perturbations and almost visually imperceptible

  • promptbench

    A unified evaluation framework for large language models

  • Project mention: Show HN: Times faster LLM evaluation with Bayesian optimization | news.ycombinator.com | 2024-02-13

    Fair question.

    Evaluate refers to the phase after training to check if the training is good.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed for evaluation. Evaluation can be slow (might even be slower than training if you're finetuning on a small domain specific subset)!

    So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation, however, all of them are quite slow, because LLM are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed up by parallelizing on multiple computers, but none of them takes advantage of the fact that many evaluation queries might be similar and all try to evaluate on all given queries. And that's where this project might come in handy.

  • DeepRobust

    A pytorch adversarial library for attack and defense methods on images and graphs

  • llm-guard

    The Security Toolkit for LLM Interactions

  • Project mention: llm-guard: The Security Toolkit for LLM Interactions | /r/blueteamsec | 2023-09-19
  • OpenAttack

    An Open-Source Package for Textual Adversarial Attack.

  • WorkOS

    The modern identity platform for B2B SaaS. The APIs are flexible and easy-to-use, supporting authentication, user identity, and complex enterprise features like SSO and SCIM provisioning.

    WorkOS logo
  • auto-attack

    Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"

  • natural-adv-examples

    A Harder ImageNet Test Set (CVPR 2021)

  • TextFooler

    A Model for Natural Language Attack on Text Classification and Inference

  • aegis

    Self-hardening firewall for large language models (by automorphic-ai)

  • Project mention: Show HN: Firewall for LLMs–Guard Against Prompt Injection, PII Leakage, Toxicity | news.ycombinator.com | 2023-06-28

    Hey HN,

    We're building Aegis, a firewall for LLMs: a guard against adversarial attacks, prompt injections, toxic language, PII leakage, etc.

    One of the primary concerns entwined with building LLM applications is the chance of attackers subverting the model’s original instructions via untrusted user input, which unlike in SQL injection attacks, can’t be easily sanitized. (See https://greshake.github.io/ for the mildest such instance.) Because the consequences are dire, we feel it’s better to err on the side of caution, with something mutli-pass like Aegis, which consists of a lexical similarity check, a semantic similarity check, and a final pass through an ML model.

    We'd love for you to check it out—see if you can prompt inject it!, and give any suggestions/thoughts on how we could improve it: https://github.com/automorphic-ai/aegis.

    If you want to play around with it without creating an account, try the playground: https://automorphic.ai/playground.

    If you're interested in or need help using Aegis, have ideas, or want to contribute, join our [Discord](https://discord.com/invite/E8y4NcNeBe), or feel free to reach out at [email protected]. Excited to hear your feedback!

    Repository: https://github.com/automorphic-ai/aegis

  • Anti-DreamBooth

    Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV'23)

  • plexiglass

    A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).

  • Project mention: Looking for contributors to an AI security project | /r/opensource | 2023-12-07
  • GreatX

    A graph reliability toolbox based on PyTorch and PyTorch Geometric (PyG).

  • Project mention: RAG Using Structured Data: Overview and Important Questions | news.ycombinator.com | 2024-01-10

    Ok, using ChatGPT and Bard (the irony lol) I learned a bit more about GNNs:

    GNNs are probabilistic and can be trained to learn representations in graph-structured data and handling complex relationships, while classical graph algorithms are specialized for specific graph analysis tasks and operate based on predefined rules/steps.

    * Why is PyG it called "Geometric" and not "Topologic" ?

    Properties like connectivity, neighborhoods, and even geodesic distances can all be considered topological features of a graph. These features remain unchanged under continuous deformations like stretching or bending, which is the defining characteristic of topological equivalence. In this sense, "PyTorch Topologic" might be a more accurate reflection of the library's focus on analyzing the intrinsic structure and connections within graphs.

    However, the term "geometric" still has some merit in the context of PyG. While most GNN operations rely on topological principles, some do incorporate notions of Euclidean geometry, such as:

    - Node embeddings: Many GNNs learn low-dimensional vectors for each node, which can be interpreted as points in a vector space, allowing geometric operations like distances and angles to be applied.

    - Spectral GNNs: These models leverage the eigenvalues and eigenvectors of the graph Laplacian, which encodes information about the geometric structure and distances between nodes.

    - Manifold learning: Certain types of graphs can be seen as low-dimensional representations of high-dimensional manifolds. Applying GNNs in this context involves learning geometric properties on the manifold itself.

    Therefore, although topology plays a primary role in understanding and analyzing graphs, geometry can still be relevant in certain contexts and GNN operations.

    * Real world applications:

    - HuggingFace has a few models [0] around things like computational chemistry [1] or weather forecasting.

    - PyGod [2] can be used for Outlier Detection (Anomaly Detection).

    - Apparently ULTRA [3] can "infer" (in the knowledge graph sense), that Michael Jackson released some disco music :-p (see the paper).

    - RGCN [4] can be used for knowledge graph link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes).

    - GreatX [5] tackles removing inherent noise, "Distribution Shift" and "Adversarial Attacks" (ex: noise purposely introduced to hide a node presence) from networks. Apparently this is a thing and the field is called "Graph Reliability" or "Reliable Deep Graph Learning". The author even has a bunch of "awesome" style lists of links! [6]

    - Finally this repo has a nice explanation of how/why to run machine learning algorithms "outside of the DB":

    "Pytorch Geometric (PyG) has a whole arsenal of neural network layers and techniques to approach machine learning on graphs (aka graph representation learning, graph machine learning, deep graph learning) and has been used in this repo [7] to learn link patterns, also known as link or edge predictions."

    --

    0: https://huggingface.co/models?pipeline_tag=graph-ml&sort=tre...

    1: https://github.com/Microsoft/Graphormer

    2: https://github.com/pygod-team/pygod

    3: https://github.com/DeepGraphLearning/ULTRA

    4: https://huggingface.co/riship-nv/RGCN

    5: https://github.com/EdisonLeeeee/GreatX

    6: https://edisonleeeee.github.io/projects.html

    7: https://github.com/Orbifold/pyg-link-prediction

  • KitanaQA

    KitanaQA: Adversarial training and data augmentation for neural question-answering models (by searchableai)

  • vibraniumdome

    LLM Security Platform.

  • Project mention: Show HN: The first open source LLM Applications Firewall | news.ycombinator.com | 2024-02-03
  • LBGAT

    Learnable Boundary Guided Adversarial Training (ICCV2021)

  • dnnf

    Deep Neural Network Falsification

  • SaaSHub

    SaaSHub - Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives

    SaaSHub logo
NOTE: The open source projects on this list are ordered by number of github stars. The number of mentions indicates repo mentiontions in the last 12 Months or since we started tracking (Dec 2020).

Python adversarial-attacks related posts

Index

What are some of the best open-source adversarial-attack projects in Python? This list will help you:

Project Stars
1 adversarial-robustness-toolbox 4,447
2 TextAttack 2,754
3 foolbox 2,655
4 promptbench 1,969
5 DeepRobust 940
6 llm-guard 821
7 OpenAttack 652
8 auto-attack 607
9 natural-adv-examples 570
10 TextFooler 465
11 aegis 241
12 Anti-DreamBooth 180
13 plexiglass 98
14 GreatX 81
15 KitanaQA 57
16 vibraniumdome 41
17 LBGAT 33
18 dnnf 7

Sponsored
SaaSHub - Software Alternatives and Reviews
SaaSHub helps you find the best software and product alternatives
www.saashub.com