How leaders can protect AI workloads by making security collaborative (Q&A)
Archana Ramamoorthy
Senior Director, Product Management, Google Cloud Security
Just as leaders are getting a handle on the security risks of generative AI, the ground is shifting again. The emergence of AI agents — which can execute tasks on your company's behalf — is demanding a new security paradigm: agent security.
Cyber threat actors have been using AI to amplify existing attack paths, and are probably also looking for new pathways. The bottom line is that we need to be prepared, and the whole organization should invest in keeping systems secure.
To share insights into how business leaders can balance existing security challenges with a new security paradigm, we sat down with Archana Ramamoorthy, senior director, Product Management.
What follows is an edited transcript of the conversation.
As organizations embrace AI so rapidly, what are some of the primary security concerns and challenges you're seeing?
Archana Ramamoorthy: The rulebook is being written as we go. Regulations are changing rapidly, which puts security teams in a really tough position. They need to learn how their products and their roadmap can comply with the rules that might not even exist yet.
Another concern is the non-deterministic nature of AI. This means AI doesn't exactly give the same answer every single time. If your AI isn’t secured, it could lead to brand embarrassment, in addition to security risk.
Plus, it’s difficult to keep up with all the innovation happening around us. It’s not just the major players. It’s also the open-source tools that developers are making, and new AI features that are sneaking into software almost every day. This might give security teams and their organizations a shadow AI problem–if you don't know what's there, you actually can't protect it. It’s why at Google, we’re working so hard to invest in ways to give enterprises built-in security that help you anticipate potential issues, especially for agents.
How do multi-agent systems make securing AI more complicated?
Archana Ramamoorthy: In a multi-agentic system, one agent could pass on certain information to another agent, and then you have a cascading event, triggered by other agents in the process. So if one agent in this process decides to collude with another agent, or one agent decides to do something malicious, tracing it back to the particular agent becomes really hard.
What kinds of solutions would you recommend to help secure AI?
Archana Ramamoorthy: I hear often, and I completely understand why, that organizations are worried about risks that can come with rapid adoption. That’s why we have AI Protection — it's designed to do exactly what it says. Protect your AI.
A key part of this is implementing strong guardrails. For instance, Google Cloud offers Model Armor, providing a crucial set of safety and security filters. It acts as a gatekeeper for both the prompts going into the AI and the responses coming out.
But building the defenses is only half the battle; you have to battle-test them. This is where AI red teaming becomes absolutely integral. Think of it as hiring ethical hackers to try and break your AI before the bad guys do. It provides concrete proof of whether your security investments are paying off.
For this, the real-world experience of security experts like Mandiant is invaluable. Because they’re on the front lines, they can simulate the latest attack tactics being used against AI in the wild, helping you understand and patch your weaknesses.
How does this approach change for agentic AI?
Archana Ramamoorthy: To me, agentic AI is a subsection of AI security in general. I would start by discovering all the agents within the ecosystem for a given customer, ensuring that there are very clear measures to secure those agents.
Then, clearly identify any runtime risks that we perceive with these agents. For example, at Google we aim to have a holistic view of the attacks that might be opened up to each of these agents, and ensure that we're working constantly with organizations and customers to make sure that these agent scopes are very clearly defined. You can read more about our approach here.
A collaborative approach across the entire organization is key
The key to stronger security is to foster a collaborative culture focused on three essential actions: identifying all AI systems across the business to eliminate blind spots, implementing purpose-built security guardrails, and proactively testing those defenses against emerging threats.
To learn more about how security is getting tougher in the AI era, download our 2025 AI Trends report.