Winning the AI Race Starts with the Right Security Platform

Dec 09, 2025
8 minutes

Every CIO and CISO we speak with describes the same paradox: AI is now central to their transformation agenda, yet the fastest way to derail that agenda is to lose control of AI. As generative AI, agentic systems and embedded AI features spread across the enterprise, leaders are no longer asking if they need AI security; they’re asking what kind of AI security strategy will actually scale.

Gartner® has published two recent reports that validate this reality and outline the strategic direction enterprises must take to secure their AI:

  • GMs: Win the AI Security Battle With an AI Security Platform (October 6, 2025)
  • AI Vendor Race: Palo Alto Networks Is the Company to Beat in AI Security Platforms (December 8, 2025)

Why AI Security Is a Platform Game

Point products can plug individual gaps, but they can’t keep up with the speed, complexity and interconnected nature of AI adoption. More importantly, they struggle to deliver the trust, consistency and scale that AI transformation requires.

In many organizations, AI adoption is already outpacing traditional security tools. Security teams are under pressure on three fronts:

  • Risk – Shadow AI, unmanaged agents and custom LLMs create new pathways for data loss, intellectual property exposure and model misuse.
  • Cost – Each new AI use case brings yet another tool, driving up license, integration and operations costs.
  • Complexity – Fragmented controls across network, data, identity and application stacks create blind spots exactly where AI is moving fastest.

From a CIO’s or CISO’s perspective, this isn’t just a technical concern; it’s the fault line beneath the entire AI agenda. CIOs are under pressure to deliver productivity gains, cost efficiencies and new AI-powered capabilities faster than ever before.

CISOs, on the other hand, see a parallel reality: custom-built AI applications that may be insecure by default, agents that can act unpredictably, and a constant risk that company secrets or customer data could leak into third-party GenAI tools.

If AI moves forward without security, the enterprise is exposed. If AI slows down because security can’t keep up, the business misses its transformation goals. This is why AI security isn’t a feature; it’s the determining factor in whether AI becomes a competitive advantage or a strategic setback.

Gartner’s recommended path forward is “an integrated modular AI security platform (AISP) with a common UI, data model, content inspection engine and consistent policy enforcement.”

Gartner further recommends prioritizing investments in two phases.

Phase 1

Start with AI usage control to secure the consumption of third-party AI services.

Phase 2

Expand into AI application protection to securely develop and run AI applications.

Phase 1: Securing Generative AI Usage Is the “Right Now” Challenge

Before enterprises can secure how AI is developed, they must first understand how it is already being used across the organization. The earliest risks often emerge not from the AI-enabled apps built in-house, but from the external generative AI tools and copilots employees adopt, often without the IT team’s knowledge.

That’s why the report identifies AI usage control as phase one, and why we recommend IT leaders start with these immediate questions to assess their organization’s AI usage:

  • Where is AI actually being used in my organization?
  • Which tools, copilots and agents are in play, and on what data?
  • How do I enable productivity without losing control?

Phase 2: Securing AI Development Early in the AI Lifecycle

Once public generative AI use is understood, the harder challenge emerges: securing the AI apps and tools your organization builds for itself. As models, agents and pipelines move into production, the questions shift from visibility to integrity, safety and scale.

Key questions that organizations must answer in phase two include:

  • What AI applications, models and agents are my teams building, and where do they live?
  • How do I manage the integrity, safety and compliance of AI apps before they reach production?
  • How do I protect models and AI applications from prompt injection, misuse or agentic threats?
  • How do I scale AI innovation without creating security bottlenecks for developers?

Palo Alto Networks Delivers the AI Security Platform

Although organizations can phase the work of securing AI usage and securing AI development, they are not two separate problems. The same organization that needs visibility into employees using public GenAI apps also needs to protect the AI applications and agents it has built as they move into production. A platform approach is what allows shared policies, shared guardrails and shared context across both sides of the AI usage and development equation.

That is exactly the philosophy behind our Secure AI by Design approach:

  • Secure how GenAI is used with Prisma® Browser™ and Prisma SASE to discover AI tools in use, govern access and prevent sensitive data from flowing into public models, all while keeping users productive with GenAI and enterprise copilots.
  • Secure how AI is built with Prisma AIRS™ capabilities, including model and agent security, AI security posture management, runtime protection, automated testing with AI Red Teaming, and coverage for agentic protocols such as MCP, to protect custom AI applications, agents and pipelines. (A simplified sketch of this kind of inline inspection follows below.)
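To make the idea of inline inspection concrete, here is a minimal, hedged sketch of the kind of pre-flight check a security layer might apply before a prompt leaves the enterprise for a third-party GenAI service. It is illustrative only; the function names and patterns are hypothetical and do not represent Prisma AIRS, Prisma Browser or any Palo Alto Networks API.

    # Illustrative only: a generic pre-flight check on an outbound GenAI prompt.
    # All names and patterns here are hypothetical, not a vendor API.
    import re
    from dataclasses import dataclass, field

    # Rough patterns for sensitive data; a real platform would use a full
    # content inspection engine, not a handful of regular expressions.
    SENSITIVE_PATTERNS = {
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    }

    # Crude prompt-injection heuristics (phrases that try to override instructions).
    INJECTION_PATTERNS = [
        re.compile(r"ignore (?:all|any) previous instructions", re.IGNORECASE),
        re.compile(r"reveal (?:your )?system prompt", re.IGNORECASE),
    ]

    @dataclass
    class Verdict:
        allowed: bool
        reasons: list = field(default_factory=list)

    def inspect_prompt(prompt: str) -> Verdict:
        """Return a block/allow verdict for a prompt bound for an external model."""
        reasons = []
        for name, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(prompt):
                reasons.append(f"sensitive data detected: {name}")
        for pattern in INJECTION_PATTERNS:
            if pattern.search(prompt):
                reasons.append("possible prompt injection phrasing")
        return Verdict(allowed=not reasons, reasons=reasons)

    if __name__ == "__main__":
        verdict = inspect_prompt("Summarize this key: AKIAABCDEFGHIJKLMNOP")
        print(verdict)  # blocked: sensitive data detected: aws_access_key

A production platform would rely on a centrally managed policy, a full content inspection engine and inline enforcement rather than a handful of regular expressions, but the control point is the same: inspect the interaction before it reaches the model.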

Gartner identifies Palo Alto Networks as “the company to beat” in its report published December 8, 2025: “AI Vendor Race: Palo Alto Networks Is the Company to Beat in AI Security Platforms.”

We believe we are the AI Security Platform to beat because:

  • The Palo Alto Networks product portfolio across network, edge, cloud and data provides a strong foundation for AI usage visibility and control.
  • The acquisition of Protect AI brought industry-leading AI security talent and products into the portfolio, resulting in the recently announced Prisma AIRS 2.0, which delivers comprehensive, end-to-end AI security, seamlessly connecting deep AI agent and model inspection in development with real-time agent defense at production runtime. The platform, continuously validated by autonomous AI red teaming, secures all interactions between AI models, agents, data and users, giving enterprises the confidence to discover, assess and protect their entire AI ecosystem and accelerate secure innovation.
  • Complementing the platform, Unit 42®’s deep expertise and Huntr’s bug bounty program provide security research and thought leadership that directly improve product effectiveness and threat intelligence. These programs help us continuously uncover new attack patterns, misconfigurations and supply chain risks unique to AI systems, and feed those insights directly back into the product roadmap.
  • Our large installed base and distribution channels create a flywheel for AI security platform adoption and learning from our customers and partners.

We also believe that underneath the technical requirements is a deeper truth: CIOs and CISOs want to move fast on AI, but they only feel safe doing so with a partner who has the scale, signal and staying power. This is where our breadth, research depth and ecosystem matter.

Leading Responsibly Means Listening, Innovating and Evolving

Being early is an advantage, but staying ahead requires humility and continuous learning. Leading means seeing what comes next, and Gartner’s insights accelerate our own roadmap as we continue to evolve.

  • Simplifying the Experience: We are integrating capabilities across Prisma AIRS, Prisma SASE and Prisma Browser to make AI security easier to adopt, operate and scale, with Strata™ Cloud Manager as the single entry point.
  • Going Deeper into the AI Engineering Pipeline: We recognize that securing AI must start early, in the development environment and ML pipeline, not just at runtime. Our integrations with AI development tools and code repositories will continue to expand.
  • Keeping Pace with a Fast-Moving Market: We are investing in open standards, partnerships and research, so our customers don’t have to chase every point solution that appears. Palo Alto Networks also contributes to OWASP standards and threat analysis work to help establish an industry standard for AI security.
  • Working Alongside Native AI Controls: Cloud providers and AI platforms are adding their own security features. We aim to complement, not replace, those controls, providing unified visibility, advanced protection and consistent policies across a fragmented AI landscape.

For us, being “the company to beat” is not a finish line. It’s a responsibility to listen carefully to customers, adapt as AI evolves, and keep delivering practical, integrated outcomes rather than isolated features.

If you are a GM, CIO, CISO or AI leader trying to make sense of an increasingly crowded AI security landscape, we believe “GMs: Win the AI Security Battle With an AI Security Platform” is essential reading.

In the end, the real race isn’t about features; it’s about who helps enterprises accelerate transformation safely, reduce risk and compete better with AI they can trust.

 

Disclaimer: Gartner does not endorse any company, vendor, product or service depicted in its publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner publications consist of the opinions of Gartner’s business and technology insights organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this publication, including any warranties of merchantability or fitness for a particular purpose.

Gartner, AI Vendor Race: Palo Alto Networks is the Company to Beat in AI Security Platforms, By Mark Wah, Neil MacDonald, Marissa Schmidt, Dennis Xu, Evan Zeng, 8 December 2025. 

Gartner, GMs: Win the AI Security Battle With an AI Security Platform, By Neil MacDonald, Tarun Rohilla, 6 October 2025.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

