Abstract

The adoption of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems introduces novel security risks, exposing sensitive data to leakage through new attack vectors such as prompt injection and jailbreaks.

In this workshop, we will first introduce a general LLM/RAG architecture and pipeline. We will then discuss a systematic approach to identifying and mitigating security risks.
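
For orientation, the sketch below outlines the retrieve-then-generate loop at the core of a typical RAG pipeline, in plain Python. It is a minimal illustration only: the embed() and generate() functions are hypothetical stand-ins for a real embedding model and LLM provider, not the workshop's reference implementation.

    import math

    def embed(text: str) -> list[float]:
        # Toy embedding: normalized character-frequency vector.
        # A real pipeline would call an embedding model here.
        vec = [0.0] * 26
        for ch in text.lower():
            if ch.isalpha():
                vec[ord(ch) - ord("a")] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def cosine(a: list[float], b: list[float]) -> float:
        # Vectors from embed() are unit-length, so the dot product
        # equals cosine similarity.
        return sum(x * y for x, y in zip(a, b))

    def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
        # Rank documents by embedding similarity to the query.
        q = embed(query)
        ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
        return ranked[:k]

    def generate(prompt: str) -> str:
        # Placeholder for an LLM call; a real system would query a model.
        return f"[LLM answer conditioned on a prompt of {len(prompt)} chars]"

    def rag_answer(query: str, documents: list[str]) -> str:
        # Retrieve context, then generate an answer grounded in it.
        context = "\n".join(retrieve(query, documents))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        return generate(prompt)

The security-relevant point of this structure is that untrusted content enters the prompt twice: through the user's query and through the retrieved documents, which is exactly where the attack vectors discussed in the workshop arise.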

In the hands-on lab, you will conduct jailbreak attacks on a RAG system and implement effective mitigation strategies to protect your data.
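
As a small taste of the lab, the sketch below shows one common mitigation pattern: screening both the user query and the retrieved context before they reach the model. It builds on the rag_answer() and retrieve() functions from the sketch above; the phrase list is an illustrative assumption, not a production-grade defense.

    # Illustrative guardrail: block inputs containing common jailbreak
    # phrases. Real systems layer several such defenses (input filtering,
    # output checks, least-privilege retrieval, etc.).

    JAILBREAK_PATTERNS = [
        "ignore previous instructions",
        "ignore all prior instructions",
        "reveal your system prompt",
        "disregard the rules",
    ]

    def is_suspicious(text: str) -> bool:
        lowered = text.lower()
        return any(pattern in lowered for pattern in JAILBREAK_PATTERNS)

    def guarded_answer(query: str, documents: list[str]) -> str:
        # Screen the user query and the retrieved context, since a
        # poisoned document can carry an indirect prompt injection.
        if is_suspicious(query):
            return "Request blocked: possible prompt injection detected."
        if any(is_suspicious(doc) for doc in retrieve(query, documents)):
            return "Request blocked: retrieved context may be tampered with."
        return rag_answer(query, documents)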

Objectives

– Gain a clear understanding of standard RAG architecture and workflows
– Identify vulnerabilities in RAG systems and learn how to mitigate the associated risks
– Obtain hands-on experience by performing attacks and applying mitigation strategies

Target Group

ML engineers, IT architects, IT security consultants, and professionals with a general interest in IT security, as well as anyone with a technical background who is interested in LLM/RAG systems

Prerequisites

A laptop, basic IT knowledge, and some experience with LLMs (e.g., using ChatGPT). Prior experience with RAG or IT security is not required, but participants with relevant backgrounds can explore the hands-on lab in greater depth.

Organizers