About The Training
Berlin 2025 | Trainings
- AI Security: Terminating The Terminator
- AdversaryOps: Engineering Red Team Tradecraft
- Application Security Tool Stack - How to Discover Vulnerabilities in Software
- Building Secure Firmware: Best Practices and Labs
- Cloud Red Team Tactics for Attacking and Defending Azure
- Cyber Threat Intelligence Bootcamp: Hands-on Labs & Real-World Scenarios
- Hacking Android Applications
- Hacking Modern Web & Desktop apps: Master the Future of Attack Vectors
- Slaying the RE Dragon: Mastering Reverse Engineering
< Training Title />
AI Security: Terminating The Terminator
< Training Schedule />
Start Date: Feb 25, 2026
End Date: Feb 27, 2026
< Training Objectives />
In an era where AI is reshaping industries and daily life, the security of AI systems has never been more critical. This comprehensive training program delves into the fascinating world of AI, providing a robust understanding of how these advanced technologies operate and how their inherent vulnerabilities can be identified and mitigated.
Identifying and mitigating vulnerabilities in AI applications is a critical aspect of ensuring their security and reliability. AI systems can be susceptible to a range of attack vectors, including adversarial attacks where malicious inputs are designed to deceive the AI, and data poisoning, where the training data is manipulated to produce inaccurate models. Other vulnerabilities include model inversion, which allows attackers to infer sensitive information from the AI model, and algorithmic biases that can lead to unfair or unethical outcomes. The new age of GenAI brings its own vulnerabilities on the table such as Prompt Injection and jailbreak attacks that target AI systems to perform unintended actions.
By examining real-world case studies and engaging in hands-on exercises, participants will learn how these vulnerabilities manifest and the impact they can have on AI systems. They will also explore best practices and defense mechanisms to safeguard AI applications. This comprehensive approach ensures that participants are equipped not only to identify potential threats but also to implement effective strategies to protect AI systems in an ever-evolving digital landscape.
< Training Level />
Basic;Intermediate
< Training Outlines />
Part 1: Understanding how AI works
This section is aimed at understanding how AI applications are built and deployed. Make sense of underlying algorithms and their use cases. Hands-on model-building exercises will strengthen our intuition behind the algorithms and prepare us to be introduced to AI Security vulnerabilities in Part 2.
- Foundations of AI:
- Basic terminologies, techniques, and introduction to frameworks to get us started with learning them
- Neural Networks: Understanding how deep learning works
- Neural network for classification: spam filter, WAF, and much more
- Convolutional Neural Networks (CNN): to understand how the following works
- Object detection
- Image classification
- Face recognition
- GenAI application: aah! This is what the hype is about
- The LLM architecture: Transformers and Attention
- RAG: Systems that perform Q&A on user documents
- Agentic AI and MCPs
- Knowing how everything is deployed in various applications: end-to-end pipeline, MLOps, etc
Part 2:
This section explores potential loopholes in AI applications. Lab exercises will help us deeply understand AI security vulnerabilities. Thus helping us to plan and implement effective mitigation strategies for AI-specific vulnerabilities.
- Exploring vulnerabilities in AI applications and *real-world examples of the same
- Adversarial Learning attack
- Fooling Image classifiers and object detection systems
- Generating Adversarial patches to target the physical domain
- Fooling the face recognition systems
- Model Stealing Attack
- Extracting models
- Model Skewing and Data poisoning: Hacking our way through feedback loops to control the information that trains the AI models. Prompt injection and Jailbreak attacks: Knowing how an attacker can own the LLMs and get them to do (predict) whatever you want
- Attacking the RAG and Agentic AI applications
- Knowing what could go wrong with systems like your company's internal chatbot
- Insecure serialization: Understanding how malicious actors can inject executable code in publicly distributed ML models.
- Issues with MLOps frameworks
- Breaking Agentic AI applications
- Understanding what could go wrong with MCPs
- Exploring existing security frameworks for AI: MITRE and OWASP
< WHAT TO BRING? />
- A laptop that can access public IP over HTTP eg: http://<public-ip>:5000
- A Gmail account to run python notebooks on Google Colab
- An open mind made up for some intense mathemagic
< Training PREREQUISITE />
- Basic knowledge of Python and Machine Learning is good to have but not required
- Understanding of basic Linux commands to navigate the lab environment
< WHO SHOULD ATTEND? />
- Cyber Security professionals responsible for the security of AI systems
- Developers looking forward to designing, implementing, and maintaining secure AI Applications
- Students with a Computer Science background and a taste for AI and infosec
- AI enthusiasts and professionals
< WHAT TO EXPECT? />
- Understanding the fundamentals of AI development
- Hands-on practice in specially crafted labs for ML and Infosec enthusiasts
- Intuitive understanding of AI algorithms
- Actionable insights on AI vulnerabilities and how to mitigate them
- Lab material and references for post-training practice
< WHAT ATTENDEES WILL GET? />
- Course slides and notes
- Precooked lab exercises to practice AI development and security
- Post-training reference material
< WHAT NOT TO EXPECT? />
- Being an AI Security expert in 3 days
- Heavy mathematical background of AI algorithms concepts
< About the Trainer />
With 7+ years of experience in AI and Cyber Security Nikhil has orchestrated methodologies to pen-test AI applications against AI-specific vulnerabilities and loves to explore new ways to hack them. Parallelly Nikhil’s research is focused on security implications in Deep Learning applications such as Adversarial Learning, Model stealing attacks, Data poisoning, etc. Nikhil is an active member of local Data Science and Security groups and has delivered multiple talks and workshops. He has spoken at HITB Amsterdam, PhDays Russia, and IEEE conferences. And trainer at the Nullcon and Troopers. Being an Applied Mathematics enthusiast, recent advances in Machine Learning and its applications in security, behavioral science, and telecom are of major interest to Nikhil.