The Bridgecast with Scott Kinka
The CIO’s Guide to AI Governance and Risk
March 10, 2026
In this episode of The Bridgecast, host Scott Kinka welcomes Paul Lekas, Executive Vice President of Global Public Policy and Government Affairs at the Software and Information Industry Association (SIIA), to explore the intersection of tech policy, national security, and operational strategy. With a career spanning three presidential administrations and senior roles at the Department of Defense, Paul offers a rare, high-level view of the legislative forces currently shaping the AI landscape.

The conversation dives into the growing tension between federal and state governments, where over 1,500 AI-related bills were introduced last year alone. Paul explains why the SIIA is advocating for federal oversight of AI models to prevent a "confusing field" of 50 different sets of rules that could hinder economic growth and innovation. For mid-market leaders, he provides a pragmatic framework for AI governance: map your data, categorize your systems by risk, and ensure your engineers are in constant dialogue with your legal team.


What you will learn:

- Why the 1,500-plus AI-related bills introduced in state legislatures last year create a compliance challenge, and why the SIIA wants frontier model oversight handled at the federal level
- A practical AI governance framework: map your data, map your AI, and categorize your systems by risk
- Which "high-risk" categories under currently enacted laws (healthcare, employment eligibility, and access to financial services) deserve audit and explainability focus first
- Where a mid-market business without a dedicated policy or legal team should start
Paul Lekas
is the Executive Vice President of Global Public Policy and Government Affairs at the SIIA, representing nearly 400 companies across the technology and information sectors. A graduate of Harvard Law, Paul has a distinguished background in public service, having co-authored recommendations for the National Security Commission on Artificial Intelligence and served as senior counsel at the Department of Defense. His work now focuses on closing the knowledge gap between fast-moving innovation and the government enforcers and lawmakers who must keep pace with it.

To find out how Bridgepointe Technologies helps businesses make IT decisions faster with world-class engineering support and ongoing guidance, head to https://bridgepointetechnologies.com/

Episode Highlights:

Paul’s advice for CIOs is the clearest playbook you’ll hear: any company using AI needs a governance mechanism where engineers, lawyers, and business stakeholders are in active conversation with each other—not in silos. That means mapping your data, mapping your AI, and categorizing your systems by risk level. Under currently enacted laws, “high-risk” AI consistently shows up in the same categories: healthcare, employment eligibility, and access to financial services. If your AI systems touch any of those areas, focus your audit practices and explainability requirements there first, because that’s where legal liability is most likely to materialize. The companies building this infrastructure now aren’t just complying—they’re creating the organizational muscle that will let them innovate faster and more confidently when the regulatory environment hardens further.

SIIA’s position is direct: frontier AI model oversight—the major models like ChatGPT, Gemini, and Claude—should be the exclusive domain of the federal government, not fragmented across 50 separate state frameworks. Paul’s reasoning is practical rather than ideological. Evaluating national security risks, unintended model behavior, and public safety events requires specialized expertise and institutional resources that no individual state government can reasonably be expected to replicate. The alternative—fifty separate state-level investigations into frontier model risk—would be “incredibly complicated and resource-intensive” and would almost certainly not produce better outcomes. This doesn’t mean no regulation. It means regulation in the right place, by the right institutions, with the right expertise. Paul is careful to distinguish this from deployer-level regulation, where state and sector-specific oversight can play a meaningful role.

For a mid-market business without a dedicated policy or legal team, Paul’s starting point is deceptively straightforward: know what data you have and know what AI you’re using. From that inventory, work with legal to conduct impact assessments, update terms and conditions, and identify near-term compliance obligations. If you operate across borders, think carefully about data localization requirements and consider building modular approaches to your services that can flex as local rules diverge. Paul frames it simply: if you’re already handling PII, medical records, or financial data with care, you’re most of the way there. “You’re already taking care of certain data; keep doing that, and then layer the AI piece on top.” The companies that will be caught off guard are the ones that haven’t done this inventory before regulators arrive asking for it.

Episode Resources:


The Bridgecast is handcrafted by our friends over at: fame.so