When Trust Meets AI
Deepfakes, AI Governance, and the Rise of the GRC Engineer with Mike Britton
February 24, 2026
Trust is the currency of modern security, and AI is about to stress test it. In this episode of When Trust Meets AI, host and CEO of Drata, Adam Markowitz, sits down with Mike Britton, Chief Information Officer at Abnormal AI, to unpack what trust really means when deepfakes can blur reality, SaaS vendors ship surprise AI features overnight, and governance has to move at the speed of product. Mike also shares how Abnormal AI is becoming AI-native internally without touching customer data, why they built AI transformation pods, and how lightweight governance can still enforce real controls.

What You’ll Learn:

Mike Britton is the Chief Information Officer at Abnormal AI, where he leads enterprise IT, cybersecurity, and the company's AI-native transformation initiatives. With nearly five years at Abnormal and a 30-year career in cybersecurity, including previous roles at Fortune 500 companies and financial services, Mike brings deep expertise in building trust through responsible AI adoption, third-party risk management, and modernizing GRC from a compliance burden into a competitive advantage.


Episode resources:

Highlights:

Quotes:

  1. “Trust is one of these attributes where anytime you've broken trust, it's always so much harder to regain trust. It can be lost in a second, and it takes years to regain. Even small things that damage trust, the level of effort that it takes to regain that is monumental versus how easy it is to lose it.”
  2. “Right now, we want AI tools to be assistants and facilitators, but if that's only where we go, then we've missed the mark of really the age of AI and the true potential of it. We're looking at where we can identify routine mundane tasks and expand a role's potential through context, automation, and agentic things to help them see more and pull in more context faster.”
  3. “Four and a half, five years ago, customers weren't really asking AI questions because they probably didn't understand it. Now, we have an AI addendum and an AI council that we have to go through. The market has swung too far in one direction, but I look at SOC 2 Type II and ISO certification as a minimum playing field, not necessarily a seal of approval.”
  4. “I want every single GRC person to be a GRC engineer. The beauty of Claude Code and ChatGPT is it democratized a skill set that wasn't there or was exclusively reserved to developers. You don't have to be a Python developer. You just have to have an idea, know what bad is and what good is, and AI can help you solve it.”


When Trust Meets AI is handcrafted by our friends over at: fame.so