Shielded: The Last Line of Cyber Defense
Why AI is accelerating both attackers and defenders: From MWC Barcelona
March 12, 2026
Artificial intelligence is reshaping cybersecurity on both sides of the battlefield. In this episode of Shielded: The Last Line of Cyber Defense, host Jo Lintzen speaks with two experts at Mobile World Conference in Barcelona. Amidst the buzz of the industry being in one place, they discuss the security landscape in the AI era. The guests are working at different layers; Geri Revay, Principal Security Researcher at Fortinet’s FortiGuard Labs, explains how cybercrime has evolved into a structured ecosystem where attackers specialize in different roles and services. Later in the episode, Haon Park, Co-Founder and CTO of AIM Intelligence, focuses on the emerging risks around AI systems themselves. As organizations rapidly deploy AI models, agents, and autonomous technologies, these systems introduce an entirely new category of attack surface. Together, the conversation highlights a critical shift in the security landscape. Attackers are moving faster through automation and specialization, while defenders must adapt to new forms of risk created by the technologies they are deploying.
Cybersecurity threats have evolved significantly from the early days of individual hackers experimenting independently. In their conversation at Mobile World Conference 2026, Geri Revay explains how cybercrime has matured into a structured and profitable ecosystem that resembles a business supply chain. Instead of one attacker performing every step of an intrusion, the work is now divided across specialized groups. Some actors focus on gaining initial access to corporate networks and then sell that access to others. Other groups build ransomware tools, while separate teams manage ransom negotiations or distribute stolen data.


This division of labor dramatically lowers the barrier to entry for cybercriminals. Attackers no longer need deep technical expertise to carry out an operation. Many tools and services can now be purchased directly from underground marketplaces. As a result, cybercrime has become more opportunistic, more scalable, and more accessible than it was even a few years ago.


However, defenders also have access to AI driven capabilities. Security teams already collect enormous amounts of telemetry through logs, network monitoring, and endpoint detection tools. AI systems can analyze this data to detect anomalies, identify emerging threats, and automate parts of the defensive workflow. Over time, this access to large datasets may give defenders a strategic advantage.


The conversation also explores how cybersecurity challenges differ between traditional IT environments and operational technology environments. Industrial systems often prioritize operational availability and safety above all else. Many devices run for decades and cannot easily be patched or modified. This creates a different security model where monitoring, segmentation, and deception technologies play a more important role than frequent system updates.


Haon’s work focuses on automated AI red teaming. Instead of relying only on human testers, AI driven attacker agents can simulate thousands of potential attacks against an AI model or service. This allows organizations to identify vulnerabilities earlier and test whether guardrails and policies are functioning correctly.


One of the most significant emerging risks involves physical AI systems. Autonomous vehicles, drones, and robotics rely on multimodal inputs such as images, audio, and sensor data to interpret their environment. If attackers manipulate these inputs, they may influence how the system behaves. As AI systems move from digital environments into the physical world, the consequences of security failures could extend beyond data breaches and into real world harm.


Across both conversations, a consistent theme emerges. The cybersecurity landscape is expanding in both scale and complexity. Attackers are accelerating their operations through automation and specialization, while defenders must also learn how to secure the new technologies they are building. Organizations that fail to address AI related risks early may discover vulnerabilities that traditional security frameworks were never designed to handle.


What You’ll Learn:


Your Roadmap to Understanding the Next Phase of Cybersecurity


[07:12] Step 1: Cybercrime Has Become a Supply Chain


Cybercrime has evolved from isolated attackers into a structured ecosystem. Initial access brokers focus on gaining entry into networks and selling that access. Ransomware developers create tools and services. Other groups handle negotiations and payment collection. Individuals no longer need to build tools or conduct complex research themselves. They can purchase the components they need and focus only on one stage of the attack chain. As a result, cybercrime has become more scalable and more opportunistic.


Key Question: If cybercrime now operates like a supply chain, are organizations preparing for attacks that can be launched faster and at greater scale?


[08:03] Step 2: AI Is Accelerating the Speed of Attacks


Artificial intelligence allows attackers to automate tasks that previously required time and expertise. The result is not necessarily more sophisticated attacks, but faster ones. AI enables threat actors to iterate quickly and scale their operations. This speed advantage allows attackers to experiment and adapt before defenders have time to respond.


Key Question: If attackers can move faster with AI, how quickly can your security teams detect and respond?


[10:34] Step 3: Data Gives Defenders a Long Term Advantage


While AI gives attackers speed, defenders may hold the long term advantage because of data. Security operations centers collect vast volumes of telemetry from networks, endpoints, and infrastructure. This data provides the foundation for AI driven detection and analysis. When AI systems analyze behavioral patterns across these datasets, they can identify anomalies and emerging threats earlier than manual processes. Over time, this combination of large scale telemetry and AI driven analysis may strengthen defensive capabilities.


Key Question: Are organizations using the data they collect to strengthen detection, or simply storing it without extracting insight?


[15:31] Step 4: Operational Technology Requires a Different Security Approach


Industrial and operational technology environments operate under different priorities than traditional IT systems. Many devices run for decades and cannot be patched frequently. Because of this, security teams must rely on monitoring, segmentation, and deception techniques rather than constant updates. Security practices that work in IT environments often require significant adaptation in OT systems.


Key Question: Are security strategies designed specifically for operational technology environments, or are IT security practices being applied without adjustment?


[39:55] Step 5: AI Systems Introduce a New Category of Risk


As enterprises deploy AI systems across their operations, these systems introduce new attack surfaces. AI models may have access to internal company data, business processes, and automated workflows. If attackers manipulate inputs or exploit vulnerabilities, they may influence how these systems behave. AI systems can affect business decisions, automate internal processes, and interact with users. Without proper guardrails and testing, vulnerabilities in these systems may lead to operational or reputational damage.


Key Question: How are organizations validating the security of AI systems before deploying them at scale?


[57:09] Step 6: Physical AI May Be the Next Major Security Incident


The next phase of AI deployment will involve physical systems such as autonomous vehicles, drones, and robotics. These systems rely on multimodal inputs such as visual data, audio signals, and sensor information to interpret their environment. If attackers manipulate these inputs, they may influence how the system behaves. Unlike traditional cybersecurity incidents, failures in physical AI systems could result in real world harm. As AI becomes embedded in physical infrastructure, cybersecurity risks may extend beyond digital environments.


Key Question: Are organizations preparing for security risks that affect both digital systems and the physical world?


Episode Resources:



Shielded: The Last Line of Cyber Defense is handcrafted by our friends over at: fame.so