The Bridgecast with Scott Kinka
Beyond the Hype: Building Ethical AI for Business
April 8, 2026
In this episode of The Bridgecast, host Scott Kinka sits down with Dr. Eva-Marie Muller-Stuler, founder and chief AI officer at the Hummingbird Group, to discuss the transition from the "Wild West" of early data science to the modern era of responsible AI. With over 25 years of experience leading AI initiatives at IBM and EY and advising global bodies like the UN and UNESCO, Dr. Eva-Marie reveals why the biggest mistake enterprises make is treating AI as a technology project rather than a business strategy. This conversation explores the critical importance of data quality, the reality of algorithmic bias, and why keeping humans in the loop is essential for long-term success. To find out how Bridgepointe Technologies helps businesses make IT decisions faster with world-class engineering support and ongoing guidance, head to https://bridgepointetechnologies.com/
In this episode of The Bridgecast, host Scott Kinka welcomes Dr. Eva-Marie Muller-Stuler, a rock star in the world of data science and AI ethics. As the former leader of IBM’s first AI center of excellence and a top advisor to the European Parliament, Dr. Eva-Marie brings a wealth of experience in turning emerging technologies into measurable business outcomes. She challenges the current corporate obsession with "agentic AI" and "autonomous workers" by highlighting the foundational work required in data governance and strategy.

Dr. Eva-Marie shares a seminal moment from her career when a major telecom company handed over its entire raw customer dataset, delivered by an intern with flash drives, without an MOU, illustrating just how far the industry has come and how many ethical pitfalls remain. The discussion dives deep into "data archaeology," explaining why data for AI demands a far more rigorous standard than data for traditional analytics.

About the Guest:

Dr. Eva-Marie Muller-Stuler is the Founder and Chief AI Officer at The Hummingbird Group, a firm dedicated to responsible, high-performance AI deployment. With over 25 years of experience helping global organizations turn emerging technologies into measurable business outcomes, she has led major AI initiatives at KPMG, Ernst & Young, and IBM—where she built their first AI center of excellence. She has also advised the United Nations, UNESCO, and the European Parliament on AI governance and risk mitigation. Recognized as a Top 10 Most Influential Woman in Technology, World’s Best Data Scientist, and Top Brilliant Women in AI Ethics, Dr. Eva-Marie is a leading voice on the intersection of technical rigor, ethical AI, and real-world business outcomes. She is based in Dubai and is currently writing a book on how to correctly build AI systems.

Episode Highlights:

Dr. Eva’s pivotal career moment came in 2013 at KPMG, when her team was building “always-on machines for decision support.” With almost no data governance norms in place, they contacted one of the world’s largest telecom companies and asked for all their customer data—who called whom, from where, which bills were paid. The company’s response was essentially: help yourself. They sent an intern with flash drives to pick it up. No MOU. No payment. No legal framework.

Sitting with the data, Eva and her team had a sobering realization: they didn’t need AI models at all. They could simply filter behavioral patterns to identify individuals’ religion, health status, and financial situation—with no model required. “We had so much knowledge about individual people,” she recalls. That moment became the catalyst that drove her into AI governance work with the UK government, EU, United Nations, and NGOs worldwide. Even today—with GDPR and the EU AI Act in place—most large language models are not compliant, and enforcement remains dangerously weak. The wild west isn’t over. It’s just better dressed.


Episode Resources:


The Bridgecast is handcrafted by our friends over at: fame.so