< Talk Title />
< Talk Category />
< Talk Abstract />
As AI systems increasingly drive critical decisions, from healthcare diagnostics to fraud detection, understanding why a model makes a particular prediction has become as important as the prediction itself. While Large Language Models (LLMs) dominate today’s AI discourse, the majority of real-world systems still rely on traditional classification, regression, and neural network models, each of which remains largely opaque to developers, auditors, and security teams.
This session demystifies Explainable AI (XAI) through the lens of practical, hands-on techniques such as SHAP, Counterfactual Explanations, and Layer-wise Relevance Propagation (LRP). Attendees will learn how to visualize and interpret predictions from image classifiers and regression models and uncover which features drive outcomes, a crucial step toward ensuring fairness, robustness, and accountability in AI pipelines.
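To make this concrete, here is a minimal sketch of the SHAP step, assuming the shap and scikit-learn packages; the diabetes dataset, random-forest model, and plotting choices are illustrative placeholders rather than the session's actual materials:

```python
# Minimal SHAP sketch: attribute a regression model's predictions to features.
# Assumes `pip install shap scikit-learn`; dataset and model are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple regressor on a small public dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by how strongly they push individual predictions up or down.
shap.summary_plot(shap_values, X)
```

The summary plot orders features by the magnitude of their contributions across samples, directly answering the "which features drive outcomes" question the session focuses on.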
Finally, the session bridges these foundational explainability methods with emerging paradigms in LLM-based systems, where chain-of-thought reasoning and contextual transparency become essential for responsible and secure deployment. As organizations increasingly embed LLMs within multi-agent architectures and connect them through Model Context Protocol (MCP) servers, explainability becomes central to auditing decisions, detecting bias, and maintaining governance. This discussion highlights how explainability principles extend from classical models to modern agentic ecosystems, ensuring AI remains transparent, trustworthy, and aligned with the principles of Responsible AI.
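As a hypothetical illustration of that auditing point, one might wrap each LLM call so the prompt and the model's stated rationale are persisted for later review; the client library (the OpenAI Python SDK), model name, system prompt, and log path below are assumptions for the sketch, not part of the talk:

```python
# Hypothetical audit-logging wrapper for LLM calls (all names are placeholders).
import json
import time
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def audited_completion(prompt: str, log_path: str = "llm_audit.jsonl") -> str:
    """Ask the model to answer and justify itself, then persist an audit record."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer the question, then list the evidence you relied on."},
            {"role": "user", "content": prompt},
        ],
    )
    answer = resp.choices[0].message.content
    # Append one JSON line per decision so it can be reviewed for bias or drift.
    with open(log_path, "a") as f:
        f.write(json.dumps({"ts": time.time(), "prompt": prompt,
                            "answer": answer}) + "\n")
    return answer
```

Persisting prompts and rationales in this way gives auditors a per-decision trace for LLM-based systems, analogous to what SHAP values provide for classical models.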
< Speaker Bio />
Alex Neelankavil Devassy is a seasoned Cyber Security Consultant with more than five years of experience in penetration testing, security consultancy, and cybersecurity training. With a strong background in security assessments, Alex specializes in penetration testing across commercial off-the-shelf web applications, networks, mobile apps, SAP, and thick-client applications. Focused on emerging technologies, he develops methodologies, tools, presentations, and learning materials for security assessments of blockchain and AI systems, and automates penetration-testing activities using Azure Serverless modules, PowerShell, Node.js, Docker, and other cutting-edge technologies. Alex's achievements include co-authoring the chapter "Safeguarding Blockchains from Adversarial Tactics" in the book "Blockchain for Industry 4.0: Emergence, Challenges, and Opportunities." He has also shared his knowledge as a speaker at security conferences including the Seasides Infosec Conference 2023, the 15th edition of c0c0n (2022), and the OWASP Tunisia and Kerala chapters.