Comprehensive security analysis of large language models — identifying attack surfaces, designing automated red-teaming pipelines, and evaluating resilience against sophisticated multi-turn manipulation attacks and prompt injection at scale.
Security analysis of multi-agent systems and agentic protocols — applying the MAESTRO framework to model threats in agent-to-agent communication, task execution integrity, authentication, and the integration of A2A and Model Context Protocol deployments.
Independent safety evaluations of frontier LLMs across multiple risk dimensions — CBRNE misuse potential, cybersecurity capabilities, and behavioral harmlessness. Developing reproducible protocols for assessing open-weight and proprietary models.
Development of the Temporal Context Awareness (TCA) framework as a defense mechanism against time-sensitive adversarial attacks — addressing vulnerabilities where models are exploited through manipulation of their temporal reasoning and context window.
The central challenge of our time is not building capable AI — it is securing capable AI against the adversaries who will probe every seam.
Research sits at the boundary between offensive security and AI safety — finding the vulnerabilities in large language models and multi-agent systems before they are exploited in the wild, and building the frameworks needed to evaluate and defend against them at scale.
Currently a Lead AI Security Research Engineer at Google focusing on LLM security, and an Astra Research & Redwood Research Fellow (AI Safety) at Constellation Research Center. Graduate work in Applied Data Science at the University of Chicago.
Previously: Sun Microsystems, Oracle.
Open to research collaborations, advisory roles, speaking engagements, and discussions about LLM security and AI safety. Particularly interested in connecting with engineers working on frontier systems and agentic AI deployments.
View LinkedIn Profile