Intermediate

Mastering LLM Integration Security: Offensive & Defensive Tactics

NotSoSecure

This intensive two-day course explores the security risks and challenges introduced by Large Language Models (LLMs) as they become embedded in modern digital systems. Through AI labs and real-world threat simulations, participants will develop the practical expertise to detect, exploit, and remediate vulnerabilities in AI-powered environments. The course uses a defence-by-offence methodology, helping learners build secure, reliable, and efficient LLM applications.

Who Should Attend

  • Security professionals securing LLM or AI-based applications
  • Developers and engineers integrating LLMs into enterprise systems
  • System architects, DevSecOps teams, and product managers
  • Prompt engineers and AI researchers interested in system hardening

Prerequisites

  • A basic understanding of AI and LLM concepts
  • Familiarity with basic scripting or programming (e.g., Python)
  • A foundational knowledge of cybersecurity threats and controls

What You Will Learn

  • Fundamentals of writing secure, context-aware prompts
  • Detecting and mitigating prompt injection attacks (direct and indirect)
  • Understanding ReACT LLM agent vulnerabilities and exploitation
  • Securing output handling to prevent privilege escalation
  • Identifying and defending against training data poisoning
  • Managing supply chain vulnerabilities in LLM systems
  • Preventing sensitive information disclosure and data leaks
  • Securing plugin design and preventing execution flaws
  • Controlling excessive agency in LLM systems
  • Understanding risks of overreliance on AI systems

Course Outline

Labs & Practical Exercises

This course includes extensive hands-on AI labs and real-world threat simulations covering prompt injection attacks, ReACT agent exploitation, insecure output handling, training data poisoning, sensitive information disclosure, insecure plugin design, and excessive agency scenarios. Participants will use professional security tools to exploit and remediate vulnerabilities in LLM-powered environments.

Certification & Assessment

Certificate of Completion. This course uses a defence-by-offence methodology based on real-world engagements and offensive research, ensuring participants can apply their learning immediately to secure LLM applications.

Cookie Consent

We use cookies to enhance your browsing experience, analyse site traffic, and personalise content. By clicking "Accept All", you consent to our use of cookies. You can manage your preferences or learn more in our Privacy Policy.