About the Speaker
< Talk Title />
< Talk Category />
< Talk Abstract />
Pentesters today deal with an overwhelming volume of HTTP traffic, yet most AI-assisted tools sit outside the real workflow. VISTA is an open-source Burp Suite extension built to fix that by bringing context-aware AI reasoning directly into Proxy and Repeater. With a simple right-click — “Send to VISTA” — the tool extracts the request, strips sensitive headers when enabled, and applies a structured template engine to generate targeted guidance: potential attack paths, payload ideas, and analysis tailored to that specific request.
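To make the workflow concrete, here is a minimal sketch (not VISTA's actual code) of what "Send to VISTA" conceptually does to a captured request before anything reaches the LLM: redact sensitive headers when that option is enabled, then drop the normalized request into a structured prompt template. The header list and template text are illustrative assumptions.

```python
# Minimal sketch, assuming a simple redaction list and template format.
# None of these names are taken from VISTA itself.

SENSITIVE_HEADERS = {"authorization", "cookie", "x-api-key"}  # assumed redaction list

PROMPT_TEMPLATE = (
    "You are assisting a penetration tester.\n"
    "Analyze the following HTTP request and suggest attack paths and payloads:\n\n"
    "{request}\n"
)

def redact_headers(raw_request: str, enabled: bool = True) -> str:
    """Replace values of sensitive headers with a placeholder when redaction is on."""
    if not enabled:
        return raw_request
    lines = []
    for line in raw_request.splitlines():
        name, sep, _ = line.partition(":")
        if sep and name.strip().lower() in SENSITIVE_HEADERS:
            lines.append(f"{name}: [REDACTED]")
        else:
            lines.append(line)
    return "\n".join(lines)

def build_prompt(raw_request: str, redact: bool = True) -> str:
    """Normalize the request and place it into a structured analysis template."""
    return PROMPT_TEMPLATE.format(request=redact_headers(raw_request, redact))

if __name__ == "__main__":
    demo = (
        "GET /api/v1/orders?id=1337 HTTP/1.1\n"
        "Host: shop.example.com\n"
        "Cookie: session=abc123\n"
        "Authorization: Bearer eyJ...\n"
    )
    print(build_prompt(demo))
```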
This talk walks through how VISTA works under the hood: its request-scoped chat memory model, the template selection logic, how traffic is normalized before being sent to an LLM, and the safeguards added to prevent accidental data leakage. We’ll demonstrate how customizable templates allow teams to encode their methodology, enforce consistency, and even create vulnerability-specific workflows. Real-world testing results will show where LLMs genuinely enhance coverage—and where they still fail without human judgment.
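As a rough illustration of two ideas mentioned above, the sketch below shows chat history keyed per request ("request-scoped memory") and a toy rule-based template selector for vulnerability-specific workflows. All class names, template keys, and heuristics here are assumptions for illustration, not VISTA's internals.

```python
# Illustrative sketch only: per-request chat memory plus a naive template selector.
from dataclasses import dataclass, field

@dataclass
class RequestChat:
    """Conversation memory tied to a single captured request."""
    request_id: str
    raw_request: str
    messages: list = field(default_factory=list)  # [{"role": ..., "content": ...}]

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

TEMPLATES = {  # team-editable, vulnerability-specific templates (assumed format)
    "sqli": "Focus on SQL injection: enumerate injectable parameters in:\n{request}",
    "idor": "Focus on access control / IDOR: identify object references in:\n{request}",
    "generic": "General analysis of the attack surface for:\n{request}",
}

def select_template(raw_request: str) -> str:
    """Toy heuristic: pick a template based on simple request features."""
    lowered = raw_request.lower()
    if "id=" in lowered or "/users/" in lowered:
        return "idor"
    if "?" in raw_request.splitlines()[0]:
        return "sqli"
    return "generic"

chat = RequestChat("req-001", "GET /api/v1/orders?id=1337 HTTP/1.1\nHost: shop.example.com")
key = select_template(chat.raw_request)
chat.add("user", TEMPLATES[key].format(request=chat.raw_request))
# Follow-up questions append to chat.messages, so context never bleeds
# between different requests.
```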
Rather than claiming AI “finds” vulnerabilities, this session focuses on the concrete technical engineering behind safely integrating AI into offensive tooling. Attendees will leave with an understanding of the architecture, the design choices, and practical lessons learned from building VISTA to augment (not replace) human pentesters.
GitHub: https://github.com/Adw0rm-sec/VISTA
< Speaker Bio />
He is a security researcher, an AppSec tinkerer, and the kind of person who opens Burp Suite “just to check one thing” and resurfaces three hours later with 47 tabs, two existential questions, and a new tool idea. He built VISTA, an AI-powered Burp extension designed to help pentesters figure out what to try next before their brain melts from JSON overload.
When he’s not reverse-engineering APIs or arguing with LLM prompts until they behave, he enjoys automating anything that takes more than five seconds—even if building the automation takes five days. His research focuses on using AI responsibly in offensive security tools, keeping humans firmly in the driver's seat, and making sure nothing accidentally hacks the wrong thing (including himself).
Despite spending most of his time staring at HTTP requests, he promises he is fun at parties—especially if the party has Wi-Fi.