
The evolving world of artificial intelligence (AI) brings both opportunities and risks. To protect assets, organizations must understand how to secure their AI systems. This in-depth course delves into the AI security landscape, addressing vulnerabilities like prompt injection, denial of service attacks, model theft, and more. Learn how attackers exploit these weaknesses and gain hands-on experience with proven defense strategies and security APIs.
Interested in attending? Have a suggestion about running this event near you?
Register your interest now
Description
- Introduction to AI Security
- Types of AI Systems and Their Vulnerabilities
- Understanding and Countering AI-specific Attacks
- Ethical and Reliable AI
- Prompt Injection
- Model Jailbreaks and Extraction Techniques
- Visual Prompt Injection
- Denial of Service Attacks
- Secure LLM Integration
- Training Data Manipulation
- Human-AI Interaction
- Secure AI Infrastructure
Participants attending this course will
- Gain a comprehensive understanding of AI technologies and the unique security risks they pose
- Learn to identify and mitigate common AI vulnerabilities
- Gain practical skills in securely integrating LLMs into applications
- Understand the principles of responsible, reliable, and explainable AI
- Familiarize themselves with security best practices for AI systems
- Stay updated with the evolving threat landscape in AI security
- Engage in hands-on exercises that simulate real-world scenarios
Day 1
- Introduction to AI security
- Using AI for malicious intents
- The AI Security landscape
- Attacks on AI systems
Day 2
- Attacks on AI systems
- Visual Prompt Injection
- Denial of Service
- Model theft
Day 3
- LLM integration
- Training data manipulation
- Human-AI interaction
- Secure AI infrastructure
Audience
Developers
Prerequisites
AI Fundamentals, Security Fundamentals, Software development