SPY Lab (ETH Zurich)
@ethz-spylab

Secure and Private AI research at ETH Zürich

The Secure and Private AI (SPY) Lab conducts research on the security, privacy and trustworthiness of machine learning systems. We often approach these problems from an adversarial perspective, by designing attacks that probe the worst-case performance of a system to ultimately understand and improve its safety.

💡 Learn more about our work and read our publications on our website.

🖥️ Check the code for our projects in this repository.

Popular repositories

  1. agentdojo

    A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.

    Python · 133 stars · 18 forks

  2. rlhf_trojan_competition

    Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024.

    Python · 111 stars · 9 forks

  3. rlhf-poisoning

    Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback"

    Python · 52 stars · 9 forks

  4. diffusion_denoised_smoothing

    Certified robustness "for free" using off-the-shelf diffusion models and classifiers

    Python · 40 stars · 5 forks

  5. robust-style-mimicry

    Python · 38 stars · 1 fork

  6. superhuman-ai-consistency

    Python · 29 stars · 2 forks

Repositories

Showing 10 of 24 repositories
  • agentdojo

    A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.

    Python · 133 stars · MIT license · 18 forks · 2 open issues · 2 pull requests · Updated Apr 23, 2025
  • jailbreak-tax

    Python · 10 stars · Updated Apr 16, 2025
  • llm_lab

    Python · 0 stars · Updated Apr 15, 2025
  • Blind-MIA

    Official code for "Blind Baselines Beat Membership Inference Attacks for Foundation Models"

    Python · 0 stars · 1 open issue · Updated Mar 29, 2025
  • vmi-retreat-workshop-2024

    Repository for the VMI Summer Retreat Workshop on Hacking AI Agents

    Python · 1 star · MIT license · Updated Jan 18, 2025
  • non-adversarial-reproduction

    Official code for "Measuring Non-Adversarial Reproduction of Training Data in Large Language Models" (https://arxiv.org/abs/2411.10242)

    Jupyter Notebook · 7 stars · Updated Nov 18, 2024
  • .github

    Updated Jul 5, 2024