AI is growing exponentially these days. Businesses, academia, and tech enthusiasts are hyped about trying deep learning on their problems. Professionals, researchers, and students are driven to taste the potential of this new technology. Like every other technology, AI comes with awesome applications topped with some serious implications.
So, join this three-day expedition, specially designed for security professionals, to understand, build, and hack AI applications. The course is divided into two parts: ML4SEC and SEC4ML. ML4SEC focuses on the nitty-gritty of building ML applications; SEC4ML then teaches you to hack them.
Training level: Basic; Intermediate
ML4SEC
Assuming no prior knowledge of mathematics or ML, we will build the intuition behind the algorithms. Attendees will gain hands-on experience building ML-powered defensive and offensive security tools, along with an in-depth understanding of the entire ML pipeline: pre-processing data, building ML models, training and evaluating them, and using trained models for prediction. Well-known machine learning libraries such as TensorFlow, Keras, PyTorch, and scikit-learn will be used. By the end, you will have end-to-end, ready-to-apply ML knowledge for security professionals.
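The pipeline described above can be sketched in a few lines with scikit-learn. This is a minimal illustration on a toy dataset, not the course material itself; the dataset and model choice here are assumptions made for the example.

```python
# Minimal sketch of the ML pipeline: pre-process -> train -> evaluate -> predict.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Load a toy binary-classification dataset and split it (illustrative choice).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Pre-processing and model chained into one pipeline.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)              # train
accuracy = model.score(X_test, y_test)   # evaluate
preds = model.predict(X_test[:5])        # use the trained model for prediction
print(f"test accuracy: {accuracy:.2f}")
```

The same pre-process/train/evaluate/predict shape carries over when the estimator is swapped for a Keras or PyTorch model.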
SEC4ML
This part addresses vulnerabilities (such as adversarial learning, model stealing, data poisoning, model inference, etc.) in state-of-the-art machine learning methodologies. We will not spare the shiny new toy in the market known as *GPT*. The lab material consists of vulnerable AI applications that can be exploited, providing a thorough understanding of the discussed vulnerabilities and their mitigations.
ML4SEC:
In this session, we will build our understanding of basic yet state-of-the-art machine learning algorithms, discuss the mathemagic behind why these models work the way they do, and build and evaluate some smart machine learning applications. By the end, we will have an idea of how to solve a real-world problem with machine learning.
Introduction to Machine Learning
SEC4ML
In this session, we will look deeper at flaws in how ML/DL algorithms are implemented, with hands-on examples that explain and attack such vulnerable implementations, followed by a discussion of possible mitigations.
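To give a flavor of one such attack, here is a minimal sketch of an adversarial (evasion) perturbation in the style of FGSM, applied to a hand-built logistic-regression "model". The weights and the input sample are illustrative assumptions, not taken from the lab material.

```python
# FGSM-style adversarial example against a toy logistic-regression model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained" model: weights and bias define the decision boundary
# (illustrative values, not from the course labs).
w = np.array([2.0, -1.5])
b = 0.1

x = np.array([0.4, 0.1])      # benign input, classified as class 1
p = sigmoid(w @ x + b)        # model confidence for class 1

# FGSM idea: step the input along the sign of the gradient of the loss
# with respect to the input, to push it across the decision boundary.
y_true = 1
grad_x = w * (p - y_true)     # d(cross-entropy)/dx for logistic regression
eps = 0.5
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv + b)

print(p, p_adv)               # confidence collapses after the perturbation
```

The same idea scales to deep networks, where the gradient is obtained by backpropagating the loss to the input pixels.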
Nikhil Joshi is an AI security researcher currently working on implementations of ML in offensive and defensive security products. He has orchestrated methodologies to pen-test machine learning applications against ML-specific vulnerabilities and loves exploring new ways to hack ML-powered applications. In parallel, Nikhil's research focuses on security implications of deep learning applications, such as adversarial learning, model-stealing attacks, and data poisoning.
Nikhil is an active member of local data science and security groups and has delivered multiple talks and workshops. He has spoken at HITB Amsterdam, PHDays Russia, and IEEE conferences, and has delivered trainings at Nullcon and Troopers. As an applied mathematics enthusiast, Nikhil's major interests are recent advances in machine learning and its applications in security, behavioral science, and telecom.