Machine learning and deep learning are growing at a breakneck pace. Businesses, academia, and tech enthusiasts alike are eager to apply deep learning to their problems, and many students, professionals, and researchers are drawn to this new technology. Like every other technology, ML brings awesome applications along with some serious security implications.
So join this three-day expedition, designed specifically for security professionals, to understand, build, and hack machine learning applications. The course is divided into two parts: ML4SEC and SEC4ML. ML4SEC covers the nitty-gritty of building ML applications; in SEC4ML, you then learn to hack them.
Training level: Basic to Intermediate
Assuming no prior knowledge of mathematics or ML, we will build intuition for how the algorithms work. Attendees get hands-on experience building ML-powered defensive and offensive security tools, along with an in-depth understanding of the entire ML pipeline: pre-processing data, building ML models, training and evaluating them, and using trained models for prediction. Well-known machine learning libraries such as TensorFlow, Keras, PyTorch, and sklearn will be used. By the end, you will leave with end-to-end, ready-to-apply ML Gyan for security professionals.
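As a taste of the pipeline described above, here is a minimal sketch in sklearn (one of the libraries the course uses). The dataset and model choice are illustrative only, not the course's actual lab material:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Load data and hold out a test set
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 2. Pre-processing + model chained into one pipeline
clf = Pipeline([
    ("scale", StandardScaler()),                   # pre-process the data
    ("model", LogisticRegression(max_iter=1000)),  # build the ML model
])

# 3. Train
clf.fit(X_train, y_train)

# 4. Evaluate on held-out data
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")

# 5. Use the trained model for prediction on new samples
pred = clf.predict(X_test[:1])
```

The same load → pre-process → train → evaluate → predict shape carries over to Keras or PyTorch models; only the model-building step changes.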
This part addresses vulnerabilities in state-of-the-art machine learning methodologies, such as adversarial learning, model stealing, data poisoning, and model inference. Lab material consists of vulnerable machine learning applications that attendees exploit to gain a thorough understanding of each vulnerability. Possible mitigations will also be discussed.
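To illustrate the flavor of one vulnerability class listed above, here is a minimal sketch of an evasion (adversarial-example) attack against a linear sklearn model. The dataset, model, and perturbation size are illustrative assumptions, not the course's lab content; the idea is the FGSM-style trick of stepping along the sign of the loss gradient, which for logistic regression is proportional to the weight vector:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Binary task: distinguish handwritten digits 0 and 1
X, y = load_digits(n_class=2, return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a sample the model classifies correctly
x = X[0]
clean_pred = clf.predict([x])[0]

# For a linear model, the input gradient is the weight vector w, so an
# FGSM-style perturbation steps along sign(w) toward the wrong class.
w = clf.coef_[0]
margin = clf.decision_function([x])[0]
direction = np.sign(w) if margin < 0 else -np.sign(w)

# Choose epsilon just large enough to push x across the decision boundary
eps = abs(margin) / np.abs(w).sum() * 1.1
x_adv = x + eps * direction

adv_pred = clf.predict([x_adv])[0]
print("clean prediction:", clean_pred)
print("adversarial prediction:", adv_pred)
```

The per-pixel change `eps` is small relative to the 0–16 pixel range, yet the prediction flips; deep models are attacked the same way, with the gradient computed by backpropagation instead of read off the weights.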
In this session, we will build up our understanding of basic yet state-of-the-art machine learning algorithms, discuss the mathemagic behind why these models work the way they do, and build and evaluate some smart machine learning applications. By the end, you will know how to approach a real-world problem with machine learning.
In this session, we will take a deeper look at flaws in how ML/DL algorithms are implemented, work through hands-on examples that explain and attack such vulnerable implementations, and discuss possible mitigations.
Nikhil Joshi is an AI security researcher, currently working on implementations of ML in offensive and defensive security products. He has developed methodologies for pen-testing machine learning applications against ML-specific vulnerabilities and loves exploring new ways to hack ML-powered applications. In parallel, his research focuses on security implications of deep learning applications, such as adversarial learning, model stealing attacks, and data poisoning.
Nikhil is an active member of local data science and security groups and has delivered multiple talks and workshops. He has spoken at HITB Amsterdam, PHDays Russia, and IEEE conferences, and has delivered trainings at Nullcon and Troopers. As an applied mathematics enthusiast, he is particularly interested in recent advances in machine learning and its applications in security, behavioral science, and telecom.