Frank Schwaak, Field CTO EMEA at Rubrik
Frank Schwaak, Field CTO EMEA at Rubrik, explains how CISOs can stay in control – with five concrete recommendations, ranging from “secure by design” to recovery planning and AI forensic capabilities. His conclusion: those who embrace innovation while remaining prepared for failure will emerge stronger from the agent era.

They book travel, analyze contract clauses, monitor networks, and escalate support tickets – all without human oversight: AI agents have firmly established themselves in German companies. According to a 2025 study by Rubrik Zero Labs, which surveyed 1,625 IT security decision-makers, 84 percent of German organizations have already fully or partially integrated AI agents into their identity infrastructure. Another 14 percent plan to do so in the near future. Penetration is nearly complete – and for many organizations, the point of no return has long been passed.

But autonomy comes at a price. The more AI agents intervene directly in business-critical processes, the greater the potential for damage when things go wrong. And they will go wrong – not maybe, but inevitably. Research firm Gartner predicts that by the end of 2027, more than 40 percent of all agentic AI projects will be canceled because the risks outpace the original promise, or costs spiral out of control. For CISOs, this means: those who don’t act proactively now will lose control – or may have already lost it.

The good news: course correction is possible. The key is not to slow down AI agents, but to embed them correctly from the start – with clear boundaries, continuous monitoring, and a security architecture that absorbs errors before they escalate. Here are five recommendations on how to achieve this:

1. Build in Security and Recovery Capabilities from Day One

Anyone who treats AI agents as an afterthought for security has already lost. The right approach follows the principle of ‘Secure by Design’: cybersecurity is not a patch applied at the end, but an integral part of the architecture from the very first line of code. Every agent action should be explainable – and ideally reversible. Resilience is not a feature you retrofit; it must be factored in from day one.

In practice, this means development teams and security professionals must collaborate from the outset. Threat modeling belongs in the early design phase, not the final sprint before go-live. Only then can attack surfaces be minimized and undesired behaviors structurally excluded.

2. Prepare for Failures – Recovery as a Requirement, Not an Option

AI agents will make mistakes. It is not a question of if, but when. And when an agent goes off the rails – executing unintended or harmful actions – organizations must be able to understand what happened in the shortest possible time and restore the system to its prior state. Traditional log files are not sufficient for this. What is required are genuine data recovery capabilities: the ability to roll back the system state to a point before the error occurred and restore business continuity.

Organizations that do not yet have this capability should build it as a priority – ideally before the first serious incident occurs.
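The rollback capability described above can be sketched in a few lines. This is a deliberately minimal illustration, not a real Rubrik API: the `RecoverableState` class and its methods are hypothetical names chosen for the example. The idea is simply that a snapshot is captured before each agent action, so the system can be restored to the state it was in before the error occurred.

```python
import copy
import time


class RecoverableState:
    """Minimal sketch of point-in-time recovery for agent-managed state.
    All names here are illustrative, not a real product API."""

    def __init__(self, state: dict):
        self.state = state
        self._snapshots: list[tuple[float, dict]] = []

    def snapshot(self) -> None:
        # Capture an immutable, timestamped copy of the state
        # before each agent action is allowed to run.
        self._snapshots.append((time.time(), copy.deepcopy(self.state)))

    def rollback(self, steps: int = 1) -> None:
        # Restore the state as it was `steps` snapshots ago,
        # discarding everything the faulty agent changed since.
        for _ in range(steps):
            _, self.state = self._snapshots.pop()


db = RecoverableState({"open_orders": 100})
db.snapshot()
db.state["open_orders"] = -1  # a faulty agent action corrupts the state
db.rollback()                 # restore the pre-error state
```

A production system would of course persist snapshots durably and tie them to immutable backups; the point is that rollback is a first-class operation, not an afterthought.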

3. Design for Multi-Failure Tolerance

AI agents rarely operate in isolation. They work in concert, exchange data, and delegate tasks to one another. This makes them powerful – and simultaneously vulnerable to cascading failures. A single miscalibrated agent can quickly trigger downstream errors in other agents, which in turn affect critical systems. Therefore: systems must be designed so that a single failure does not immediately lead to widespread operational disruption.

The solution lies in consistent isolation of agent actions: wherever possible, agents should operate in sandboxed environments with defined interfaces and clear escalation paths. This prevents a localized error from escalating into an enterprise-wide incident.
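The isolation pattern above can be sketched as a wrapper that contains each agent's failure and routes it to a defined escalation path, so later steps in a multi-agent chain still run. The function names (`run_isolated`, `pipeline`) and the agents themselves are hypothetical examples, not a specific framework.

```python
import logging


def run_isolated(agent_fn, payload, escalate):
    """Run one agent action in isolation: a failure is logged and
    escalated instead of propagating to downstream agents."""
    try:
        return agent_fn(payload)
    except Exception as exc:
        name = getattr(agent_fn, "__name__", "unknown-agent")
        logging.error("agent %s failed: %s", name, exc)
        escalate(exc)   # defined escalation path (human or supervisor)
        return None     # sentinel: this step produced no result


def pipeline(agents, payload, escalate):
    """Chain agents so one miscalibrated agent does not break the rest:
    a failed step is skipped and escalated, subsequent steps still run."""
    for agent_fn in agents:
        result = run_isolated(agent_fn, payload, escalate)
        if result is not None:
            payload = result
    return payload
```

In a real deployment the isolation boundary would be a sandboxed process or container rather than a `try`/`except`, but the principle is the same: a localized error is contained and escalated, never silently cascaded.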

4. Treat Agent Permissions as Privileged Access

Many organizations make the mistake of generously provisioning AI agents with access rights – on the assumption that more access means more utility. The opposite is true. AI agents do not automatically stop when something goes wrong – they continue executing actions at superhuman speed until someone intervenes. The consequences can be severe: from technical disruptions and compliance violations to the accidental deletion of entire production databases – a scenario that has already been documented in practice.

The Zero Trust principle therefore applies to agents as well: minimal access, maximum control. Each agent should only have the permissions strictly necessary for its specific task – nothing more. Strict access controls are especially critical when agents interact with customer data or business-critical processes.
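A least-privilege check of this kind can be enforced at the call site, for example with a decorator that refuses any action for which the agent holds no explicit grant. The registry, agent IDs, and permission names below are hypothetical illustrations of the pattern, not a real authorization system.

```python
from functools import wraps

# Illustrative permission registry: each agent holds only the grants
# its specific task strictly requires -- nothing more.
AGENT_GRANTS = {
    "support-ticket-agent": {"read:tickets", "update:tickets"},
    "travel-booking-agent": {"read:calendar", "create:booking"},
}


def requires(permission: str):
    """Decorator enforcing least privilege: the call is refused
    unless the acting agent explicitly holds this grant."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(agent_id: str, *args, **kwargs):
            if permission not in AGENT_GRANTS.get(agent_id, set()):
                raise PermissionError(f"{agent_id} lacks {permission!r}")
            return fn(agent_id, *args, **kwargs)
        return wrapper
    return decorator


@requires("update:tickets")
def close_ticket(agent_id: str, ticket_id: int) -> str:
    return f"ticket {ticket_id} closed by {agent_id}"
```

The travel-booking agent in this sketch cannot close tickets even if it tries: the deny-by-default check runs before the action, which is exactly the Zero Trust behavior the text calls for.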

5. Ensure Full Transparency and Auditability

According to Rubrik, fewer than ten percent of organizations have AI forensics capabilities – the ability to comprehensively track and analyze the behavior of AI agents. That is an alarming gap. Organizations that cannot see what an agent is doing cannot intervene when it does something wrong.

What is needed is complete transparency: from the initial prompt to the final output, every intermediate step documented and auditable. This requires robust data transparency infrastructures that go far beyond traditional log files. Only those who can trace every step are also able to intervene precisely, reverse erroneous actions, and learn from incidents.
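As a minimal sketch of such an audit trail: an append-only record of every agent step from initial prompt to final output, queryable per agent and exportable for forensics. The class and field names are illustrative assumptions; a production system would persist this durably and tamper-proof, well beyond plain log files.

```python
import json
import time


class AuditTrail:
    """Illustrative append-only audit trail for agent actions:
    every step from prompt to output is recorded and auditable."""

    def __init__(self):
        self._events: list[dict] = []

    def record(self, agent_id: str, step: str, detail: str) -> None:
        self._events.append({
            "ts": time.time(),
            "agent": agent_id,
            "step": step,      # e.g. "prompt", "tool_call", "output"
            "detail": detail,
        })

    def trace(self, agent_id: str) -> list[dict]:
        # Ordered reconstruction of everything one agent did.
        return [e for e in self._events if e["agent"] == agent_id]

    def export(self) -> str:
        # Machine-readable export for forensic analysis.
        return json.dumps(self._events, indent=2)
```

With a trail like this, an erroneous action can be located precisely, reversed, and analyzed afterwards, which is the forensic capability the Rubrik figure says most organizations still lack.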

Integrating AI agents into business processes is no longer a question of whether – it is already reality. The decisive question is how. CISOs who act proactively now, embed security structurally, and build recovery as a core competency will be able to harness the benefits of agentic AI – without losing control. Everyone else is playing for time. And time is running out.

By Jakob Jung

Dr. Jakob Jung is Editor-in-Chief of Security Storage and Channel Germany. He has been working in IT journalism for more than 20 years. His career includes Computer Reseller News, Heise Resale, Informationweek, Techtarget (storage and data center) and ChannelBiz. He also freelances for numerous IT publications, including Computerwoche, Channelpartner, IT-Business, Storage-Insider and ZDnet. His main topics are channel, storage, security, data center, ERP and CRM. Contact via email: jakob.jung@security-storage-und-channel-germany.de
