Google Cloud Next ’26  ·  Las Vegas  ·  April 22–25, 2026

Who governs
your AI?

Agents are already inside your enterprise — executing workflows, processing decisions, and operating at machine speed. The question isn't whether to deploy them. It's whether you've built the governance to lead them.

Based on
260 announcements
Primary focus
Agent Governance
Published
April 28, 2026
Author
Alex S. Moy
The moment we're in

AI has moved from advisor to actor.

For years, AI was advisory — surfacing insights, suggesting next steps. Now it's operational. Agents take actions, process decisions, and run multi-step workflows autonomously. That shift changes everything about how enterprise leaders need to govern, align, and scale this technology. AI should be a tool, not a takeover.

Context 01

The shift from advisor to actor

An AI agent doesn't wait to be asked. It reasons through a problem, selects the right tools, and executes multi-step workflows — booking meetings, processing orders, writing code — without a human directing every move.

Context 02

The governance gap is already open

The average enterprise now runs 12 AI agents. 89% of business teams are already using them. Most have no unified framework to govern them. That gap — between adoption speed and governance readiness — is where risk accumulates.

Context 03

Governance is how we build trust at scale

Governance isn't a compliance exercise. It's the operating system that lets you deploy AI broadly, hold it accountable, and build the coalition of leaders and teams who trust it enough to actually use it. Alignment is the currency of enterprise success.

“The era of the pilot is over. The era of the agent is here.”

Thomas Kurian, CEO Google Cloud — April 22, 2026  ¹
89%
of business teams already using AI agents
Google AI Agent Trends Report, 2026
12
average AI agents per enterprise organization
Google AI Agent Trends Report, 2026
40%
of enterprise apps will embed AI agents by end of 2026
Gartner, March 2026
260
product announcements at Google Cloud Next ’26
Google Cloud Blog, April 2026
The core framework

The five layers of agent governance

Alignment is the currency of enterprise success. Google's strategy for the agentic enterprise is built on five decisions every leader must make before AI can deliver at scale. Each layer builds on the one before it — and each one is a leadership decision, not just a technical configuration.

🪪

Who is the agent?

Identity & authentication

Every agent operating inside your enterprise needs a verified, auditable identity. Before any action is taken, before any system is touched, the agent must be known — not assumed. Identity is what separates a governed fleet from an unaccountable one.

Google's Agent Identity system assigns each agent a unique cryptographic ID — a tamper-proof digital fingerprint tied to defined authorization policies. Every action traces back to that identity, creating an audit trail that compliance and leadership teams can actually trust.

This matters most as organizations scale. Agents can spawn sub-agents. Those sub-agents can create more. Without identity at every layer, accountability disappears faster than the value you were trying to create.

The governance principle

You cannot hold AI accountable for its actions unless you know which agent took them. Identity is the non-negotiable foundation. Every other layer depends on it.

🧠

What does the agent know?

Memory & context

An agent without context is an agent without judgment. The ability to recall past interactions, understand organizational meaning, and connect current tasks to prior history is what separates a capable agent from a genuinely valuable one.

But memory without governance is a liability. An agent with unrestricted recall across teams and data estates can expose information it was never meant to access. The answer isn't less memory — it's better-scoped memory.

Google's Agent Memory Bank provides controlled, long-term memory governed by Memory Profiles that keep context accurate and appropriately bounded. The Knowledge Catalog gives agents a semantically rich map of your enterprise's data — so they understand what your organization's language actually means, not just what the words say.

The coalition dimension

When agents understand your organizational context, they become part of the team — not just a tool running in the background. That shared understanding is what builds the human-AI coalition that drives real adoption and real ROI.

🔑

What is the agent allowed to do?

Permissions & tool access

Access without accountability is exposure. The principle of least privilege — granting agents only the access they need for the specific task at hand — is one of the oldest and most reliable rules in enterprise security. It becomes more important, not less, when the actor in question can execute thousands of operations per minute.

Permission governance isn't about slowing AI down. It's about giving your organization the confidence to scale it broadly. Leaders deploy AI aggressively when they trust its boundaries. They stall when they don't.

Google's Agent Gateway serves as the enterprise's unified control point — a governed passthrough for every agent request before it reaches an external tool, API, or data system. It enforces consistent policies across your entire agent fleet, regardless of how many agents are running or where they're deployed.

The leadership principle

Broad AI deployment requires narrow AI permissions. When your organization knows exactly what its agents can and cannot do, you can build the cross-functional coalition that drives adoption at enterprise scale.

🛡️

Is the agent being attacked?

Security & threat detection

The attack surface for enterprise AI is fundamentally different from traditional software. Adversaries don't need to breach a firewall — they can compromise an agent by manipulating the data it reads. A prompt injection attack embeds malicious instructions inside content the agent processes, redirecting its behavior without ever touching the underlying system.

When agents have the authority to take real actions — sending communications, modifying records, executing transactions — a compromised agent isn't a security incident. It's a business operations failure. The security layer must match the scope of the authority being granted.

Google's Model Armor, integrated with the Wiz security platform, provides inline protection across the full agent lifecycle — monitoring every input, sanitizing every output, and flagging behavioral anomalies before they propagate into enterprise systems.

The $32 billion commitment

Google's acquisition of Wiz is the largest cybersecurity acquisition in history. That investment signals a clear conviction: in the agentic era, security and governance are not separate disciplines. They are the same one.

📊

How well is the agent performing?

Oversight, evaluation & optimization

Governance without measurement is policy without accountability. The organizations that scale AI successfully are not just the ones that deploy it — they're the ones that know, with precision, how well it's performing and why. Oversight is what transforms AI from a cost center into a compounding strategic asset.

Most enterprises today have no systematic framework for evaluating agent performance. They deploy, they hope, and they react when something breaks. That's not transformation — it's speculation. The organizations building durable AI advantage are the ones closing the feedback loop.

Google's Agent Evaluation system scores agents against live production traffic using multi-turn assessment that evaluates complete reasoning chains — not just isolated responses. The Agent Optimizer translates those evaluations into specific, actionable improvements that compounds performance over time.

The accountability imperative

I architect Centers of Excellence that turn potential into performance. Oversight is the layer that makes that possible — because you cannot improve what you cannot measure, and you cannot scale what you cannot defend.

Proof points

Governance in production — what the outcomes say

Theory becomes strategy when it delivers results. These organizations moved past the pilot stage, built governance into their deployments, and produced outcomes that are now redefining performance benchmarks across their industries.

Deutsche Telekom
95%
reduction in network event response time
Built MINDR — a multi-agent system that autonomously detects and resolves network failures before customers experience them. This is not AI augmenting a team. It's AI running a mission-critical operation with governance embedded at every layer. ²
Highmark Health
$27.9M
in documented value delivered in a single year
Deployed Sidekick, an AI assistant that automates research and provides intelligent information access across their workforce. The financial figure is audited value — not projected savings or modeled estimates. That distinction matters. ²
Danfoss
~42 hrs
→ near real-time order processing
Automated 80% of transactional decisions in email-based order processing. A workflow that once required nearly two days now executes in minutes — with governance controls that escalate anything outside defined parameters to human review. ³
Citadel Securities
faster AI workloads at 30% lower cost
Built a TPU-powered research environment on Google Cloud. Analysis that once required days executes in minutes. In financial markets, that compression isn't an efficiency gain — it's a structural competitive advantage. ²
GE Appliances
800+
AI agents deployed across live operations
Running a governed fleet of specialized agents across manufacturing, logistics, and supply chain. Each agent carries a defined role, verified identity, scoped permissions, and performance oversight. All five layers, in production. ²
Tata Steel
300+
agents deployed in nine months
Scaled from zero to 300 specialized agents in under a year using a low-code deployment platform. The speed was possible because governance was built in from the start — not retrofitted after the fact. ²
Strategic landscape

Five platforms competing to own the agent layer

Understanding the competitive dynamics isn't optional for enterprise leaders making platform decisions today. Each player brings a structurally different position — and each one carries a structural vulnerability. Here's the honest map.

CompanyStructural advantageStructural riskMarket position
Google CloudFull Stack
Controls the full stack from silicon to application — hardware, models, runtime, and distribution under one architecture. The most defensible long-term position in the field.Third in overall cloud market share. Enterprise sales execution must match engineering ambition to convert the platform advantage into market leadership.50% year-on-year growth in Q4 2025 — fastest of the three major cloud providers. ⁴
MicrosoftDistribution
Embedded inside virtually every Fortune 500 organization through Office 365. Copilot has the shortest path to the enterprise desktop of any AI product in the market.Deep reliance on OpenAI's models — a dependency that became a strategic liability when OpenAI's Azure exclusivity ended in April 2026. The distribution moat is real. The model moat is narrowing.Copilot deployed across the large enterprise market. Unmatched last-mile distribution advantage globally.
AWS / AmazonInfrastructure
The largest cloud infrastructure base in the world. The addition of OpenAI's models to Amazon Bedrock on April 28, 2026 is among the most consequential cloud-model partnerships since the launch of ChatGPT. ⁵AI model strategy has shifted rapidly from internal development to external partnership. The question is whether infrastructure scale alone is a sufficient moat when your model providers serve your competitors equally.AWS represented 18% of Amazon's total revenue and more than half of its operating income in 2025. ⁵
OpenAIBrand & Models
The most recognized AI brand in the world. Enterprise revenue now at 40% of total. Three million weekly active Codex users. Consumer brand gravity that none of the hyperscalers have matched. ⁴Trust concerns from the 2023 governance instability remain an active consideration in enterprise procurement. The transition to a fully for-profit structure continues to raise questions among risk-focused buyers in regulated industries.Enterprise LLM API spend: approximately 27% in late 2025, down from roughly 50% in 2023. ⁶
AnthropicSafety & Trust
The most credible safety positioning in the industry. Model Context Protocol has reached 10,000 servers and 97 million monthly SDK downloads — establishing MCP as the emerging interoperability standard for agent-to-tool connectivity. ⁴Infrastructure scale is a real constraint relative to cloud hyperscalers. Building on top of AWS and Google Cloud infrastructure rather than owning it creates a structural dependency the company continues to navigate.Enterprise LLM API spend: approximately 40% in late 2025 — the largest share of any provider in the market. ⁶
How we got here

The agentic era was years in the making

This didn't happen quickly and it didn't happen by accident. These are the inflection points that built the strategic landscape enterprise leaders are navigating today.

2017
Google publishes "Attention is All You Need"
The research paper that introduced the Transformer architecture — the engineering foundation underlying every major AI system in production today, including GPT, Gemini, and Claude. Every subsequent AI development traces a direct line back to this contribution.
November 2022
ChatGPT launches — enterprise AI becomes a board-level conversation
OpenAI releases ChatGPT, which reaches one million users in five days. For the first time, enterprise leaders experience the capability directly rather than through analyst briefings. The question shifts from "should we explore AI?" to "why haven't we deployed it yet?"
March 2023
GPT-4 integrates tools — the first true agentic capability
AI connects to external tools for the first time at scale — web search, code execution, data retrieval. The shift from language model to agent begins. With it comes the first serious question enterprises haven't yet built answers for: what are we willing to let this do on its own?
November 2024
Anthropic releases the Model Context Protocol (MCP)
A universal open standard for agent-to-tool connectivity. Later donated to the Linux Foundation's Agentic AI Foundation. Now supporting 10,000 servers and 97 million monthly SDK downloads — emerging as the interoperability layer that makes multi-vendor agent architectures possible at enterprise scale.
Early 2026
Enterprise AI moves from pilot to production
For the first time, major enterprises — Deutsche Telekom, GE Appliances, Tata Steel, Highmark Health, and hundreds of others — deploy AI agents into live production environments. The experimental phase ends. Governance stops being a future consideration and becomes an immediate operational requirement.
April 22–25, 2026
Google Cloud Next ’26 — 260 announcements define the governed agentic enterprise
Google presents a unified governance architecture across all five layers: Identity, Memory, Permission, Security, and Oversight — built on the Gemini Enterprise Agent Platform, 8th-generation TPUs, Wiz's security infrastructure, and a $750 million partner ecosystem. The blueprint for the governed agentic enterprise becomes public. ¹
April 28, 2026
AWS + OpenAI — the exclusivity era ends, the platform race accelerates
AWS adds OpenAI's models to Amazon Bedrock the same day OpenAI ends its exclusivity agreement with Microsoft. For enterprise leaders, the signal is clear: the agent platform market is now genuinely competitive. Platform loyalty is a strategic choice, not a default. ⁵

AI should be a tool, not a takeover.

The organizations building durable AI advantage aren't the ones running the most agents. They're the ones with the clearest governance, the most aligned teams, and the frameworks to hold both accountable. When leaders are equipped and organizations are united, AI delivers more than growth — it creates legacy.

Review the 5 layers
Sources and references

Where this comes from

Every claim in this report is sourced from primary announcements, official company publications, and independent analyst research. All figures are cited as reported. Click any source to read the original material.