Deepfakes, AI Governance, and the Rise of the GRC Engineer with Mike Britton
February 24, 2026
Trust is the currency of modern security, and AI is about to stress test it. In this episode of When Trust Meets AI, host and CEO of Drata, Adam Markowitz, sits down with Mike Britton, Chief Information Officer at Abnormal AI, to unpack what trust really means when deepfakes can blur reality, SaaS vendors ship surprise AI features overnight, and governance has to move at the speed of product. Mike also shares how Abnormal AI is becoming AI-native internally without touching customer data, why they built AI transformation pods, and how lightweight governance can still enforce real controls.
What You’ll Learn:
- Why trust collapses faster than it builds
- How to govern AI tools without killing innovation
- The shift in third-party risk evaluation for AI vendors
- Why you should embed AI pods inside business functions
- How to democratize GRC engineering without hiring software engineers
Mike Britton is the Chief Information Officer at Abnormal AI, where he leads enterprise IT, cybersecurity, and the company's AI-native transformation initiatives. With nearly five years at Abnormal and a 30-year career in cybersecurity, including previous roles at Fortune 500 companies and financial services, Mike brings deep expertise in building trust through responsible AI adoption, third-party risk management, and modernizing GRC from a compliance burden into a competitive advantage.
Highlights:
- 02:54 Defining Trust and Why It’s Expensive to Regain
- 04:37 How AI Pushes Trust Into a New Frontier
- 07:35 AI-Native Operations
- 09:35 AI Transformation Without Touching Customer Data
- 12:57 Governance That Doesn’t Block
- 16:36 The Emerging Third-Party Risk
- 19:22 Why Trust Centers Don’t Replace Human Trust
- 27:27 Hiring for the AI Era
- 29:58 The Rise of the GRC Engineer
- 32:28 SecOps vs. GRC Divide
- 35:04 What CEOs Should Ask Their CIO/CISO
- 35:49 Books That Shaped Mike’s Approach
Quotes:
- “Trust is one of these attributes where anytime you've broken trust, it's always so much harder to regain trust. It can be lost in a second, and it takes years to regain. Even small things that damage trust, the level of effort that it takes to regain that is monumental versus how easy it is to lose it.”
- “Right now, we want AI tools to be assistance and facilitators, but if that's only where we go, then we've missed the mark of really the age of AI and the true potential of it. We're looking at where we can identify routine mundane tasks and expand a role's potential through context, automation, and agentic things to help them see more and pull in more context faster.”
- “Four and a half, five years ago, customers weren't really asking AI questions because they probably didn't understand it. Now, we have an AI addendum and an AI council that we have to go through. The market has swung too far in one direction, but I look at SOC 2 Type II and ISO certification as a minimum playing field, not necessarily a seal of approval.”
- “I want every single GRC person to be a GRC engineer. The beauty of Claude Code and ChatGPT is it democratized a skill set that wasn't there or was exclusively reserved to developers. You don't have to be a Python developer. You just have to have an idea, know what bad is and what good is, and AI can help you solve it.”
When Trust Meets AI is handcrafted by our friends over at:
fame.so