Skip to main content

Abstract

As Large Language Models (LLMs) and Generative AI (GenAI) technologies become integral to our digital landscape, they bring unprecedented capabilities alongside significant risks. The LLM Red Teaming Workshop offers an immersive experience to equip you with the knowledge and skills needed to anticipate and counteract potential threats in this evolving domain.

By adopting Red Teaming, a concept rooted in cybersecurity, this workshop delves into the vulnerabilities of LLMs and the implications of adversarial attacks. You will gain insights into various risk scenarios and engage in hands-on exercises, simulating attack strategies and defense mechanisms to safeguard LLM systems effectively.

Objectives

– Understand LLM Red Teaming concepts
– Gain practical knowledge to perform Red Teaming attacks
– Understand common mitigation and defense strategies

Target Group

Individuals from all backgrounds, including those new to LLMs and Red Teaming

Prerequisites

Laptop, prior knowledge of Python programming may enhance the experience but is not a prerequisite for participation

Organizers