This intensive two-day course explores the security risks and challenges introduced by Large Language Models (LLMs) as they become embedded in modern digital systems. Through AI labs and real-world threat simulations, participants will develop the practical expertise to detect, exploit, and remediate vulnerabilities in AI-powered environments. The course uses a defence-by-offence methodology, helping learners build secure, reliable, and efficient LLM applications.
Interested in attending? Have a suggestion about running this event near you?
Register your interest now
Description
Prompt engineering
- Fundamentals of writing secure, context-aware prompts
- Few-shot prompting and use of delimiters
- Prompt clarity and techniques to reduce injection risk
Prompt injection
- Overview of prompt injection vectors (direct and indirect)
- Practical exploitation scenarios and impacts
- Detection, mitigation, and secure design strategies
- Lab activities:
- The Math Professor (direct injection)
- RAG-based data poisoning via indirect injection
ReACT LLM agent prompt injection
- Introduction to the Reasoning-Action-Observation (RAO) model
- Vulnerabilities in frameworks such as LangChain
- Agent behaviour manipulation and plugin exploitation
- Lab activities:
- The Bank scenario using GPT-based agents
Insecure output handling
- AI output misuse leading to privilege escalation or code execution
- Front-end exploitation via summarisation and rendering
- Lab activities:
- Injection via document summarisation
- Network analysis and arbitrary code execution
- Internal data leaks through stock bot interactions
Training data poisoning
- Poisoning training or fine-tuning datasets to alter LLM behaviour
- Attack simulation and defence strategies
- Lab activities:
- Adversarial poisoning
- Injection of incorrect factual data
Supply chain vulnerabilities
- Security gaps in third-party plugin, model, or framework usage
- Dependency risk, plugin sandboxing, and deployment hygiene
Sensitive information disclosure
- How LLMs can inadvertently leak personal or proprietary data
- Overfitting, filtering failures, and context misinterpretation
- Lab activities:
- Incomplete filtering and memory retention
- Overfitting and hallucinated disclosure
- Misclassification scenarios
Insecure plugin design
- Misconfigured plugins leading to execution or access control flaws
- Securing LangChain plugins and sanitising file operations
Prompt engineering
- Fundamentals of writing secure, context-aware prompts
- Few-shot prompting and use of delimiters
- Prompt clarity and techniques to reduce injection risk
- Prompt injection
- Overview of prompt injection vectors (direct and indirect)
- Practical exploitation scenarios and impacts
- Detection, mitigation, and secure design strategies
- Lab activities:
- The Math Professor (direct injection)
- RAG-based data poisoning via indirect injection
ReACT LLM agent prompt injection
- Introduction to the Reasoning-Action-Observation (RAO) model
- Vulnerabilities in frameworks such as LangChain
- Agent behaviour manipulation and plugin exploitation
- Lab activities:
- The Bank scenario using GPT-based agents
- Insecure output handling
- AI output misuse leading to privilege escalation or code execution
- Front-end exploitation via summarisation and rendering
- Lab activities:
- Injection via document summarisation
- Network analysis and arbitrary code execution
- Internal data leaks through stock bot interactions
Training data poisoning
- Poisoning training or fine-tuning datasets to alter LLM behaviour
- Attack simulation and defence strategies
- Lab activities:
- Adversarial poisoning
- Injection of incorrect factual data
Supply chain vulnerabilities
- Security gaps in third-party plugin, model, or framework usage
- Dependency risk, plugin sandboxing, and deployment hygiene
Sensitive information disclosure
- How LLMs can inadvertently leak personal or proprietary data
- Overfitting, filtering failures, and context misinterpretation
- Lab activities:
- Incomplete filtering and memory retention
- Overfitting and hallucinated disclosure
- Misclassification scenarios
Insecure plugin design
- Misconfigured plugins leading to execution or access control flaws
- Securing LangChain plugins and sanitising file operations
- Lab activities:
- Exploiting the LangChain run method
- File system access manipulation
Excessive agency in LLM systems
- Over-privileged agents and unintended capability exposure
- Agent hallucination, plugin misuse, and permission escalation
- Lab activities:
- Medical records manipulation
- File system agent abuse and command execution
Overreliance in LLMs
- Cognitive, technical, and organisational risks of AI overdependence
- Legal liabilities, compliance gaps, and mitigation frameworksExploiting the LangChain run method
- File system access manipulation
- Excessive agency in LLM systems
- Over-privileged agents and unintended capability exposure
- Agent hallucination, plugin misuse, and permission escalation
- Lab activities:
- Medical records manipulation
- File system agent abuse and command execution
Overreliance in LLMs
- Cognitive, technical, and organisational risks of AI overdependence
- Legal liabilities, compliance gaps, and mitigation frameworks
Audience
This course is ideal for:
- Security professionals securing LLM or AI-based applications
- Developers and engineers integrating LLMs into enterprise systems
- System architects, DevSecOps teams, and product managers
- Prompt engineers and AI researchers interested in system hardening
Prerequisites
Participants should have:
- A basic understanding of AI and LLM concepts
- Familiarity with basic scripting or programming (e.g., Python)
- A foundational knowledge of cybersecurity threats and control