In a recent episode of the Threat Vector podcast, host David Moulton chatted with Noelle Russell, founder and chief AI officer at the AI Leadership Institute. She shared a powerful metaphor for artificial intelligence implementation that every security professional should heed: the "baby tiger."
As Russell explains, organizations often begin their AI journeys with an adorable, novel model that excites everyone. Teams gather around this cute innovation, marveling at its capabilities without asking the critical questions: How large will it grow? What will it eat? Where will it live? What happens when you don't want it anymore?
The reality is stark – baby tigers become adult tigers, and adult tigers can be dangerous. As AI systems scale from prototypes to production environments, the risks scale with them. Without proper guardrails, governance and security measures, what begins as an exciting innovation can quickly become an organizational liability. When your organization adopts AI, are you planning for the cute baby tiger or preparing for the full-grown predator it will inevitably become?
The Three-Headed Risk Monster: Accuracy, Fairness and Security
According to Russell, AI risks typically fall into three distinct buckets that security leaders must address.
- Accuracy: Too often, organizations accept "pretty good" answers from AI systems rather than demanding precision. As these models scale and their ground truth begins to shift through machine learning, monitoring for model drift becomes essential, yet many companies fail to implement proper accuracy checks outside of research environments.
- Fairness: Your AI system might not only provide inaccurate information but could inadvertently harm the people it's meant to help. For instance, financial services that AI trained on biased data could disenfranchise certain demographic groups or zip codes, perpetuating existing inequalities.
- Defense and Security: Every AI implementation increases your attack surface. Without proper security controls, this expansion can exponentially increase threat exposure across your organization.
What makes addressing these risks particularly challenging is that they often require different levels and domain expertise. As Russell notes, "The people that care about accuracy... they're the guys that plan their vacations on an Excel spreadsheet." Meanwhile, inclusion specialists typically concern themselves with fairness, while security remains the domain of cybersecurity professionals. The crucial task is bringing these diverse perspectives together from the outset of any AI initiative.
Building AI Security Into Your DNA, Not Bolting It On
When asked about the biggest blind spots in AI deployments, Russell highlighted a familiar struggle for security professionals: alignment and inclusion. Too often, AI initiatives are driven either by technologists solving interesting problems or by business leaders seeking productivity and profit – neither of whom naturally invite security or legal teams to the conversation early enough.
I do a lot of executive education where I just tell the executives it starts with legal, security, DevSecOps. Those people need to be their first number one.
The critical insight here mirrors a longstanding security maxim – security must be part of the DNA of any AI system, not something bolted on at the end. Russell uses vivid imagery to reinforce this point: "It has to be more like water in a wave" rather than "a raisin in a bun or a chocolate chip in a muffin." Security considerations must permeate every aspect of AI development and deployment.
The Case for AI Governance Integrated with Cybersecurity
For organizations looking to scale AI responsibly, integrating AI governance with existing cybersecurity programs isn't just advisable, it's essential. Russell advocates a practical approach – leverage what's already working in your organization.
Data governance is ultimately AI governance. They are the same thing. It is an evolution of the same process. Organizations should expand the scope of existing data governance teams to include AI systems.
This integration requires resources, which Russell creatively secures by preallocating benefits from AI projects: "For every net new dollar – 25% – I've been able to sell that to executives to preallocate" toward cybersecurity. By framing this as investing future profits rather than current budget, she's found executives more receptive.
Cultivating Curiosity, the Missing Ingredient in AI Security Culture
Beyond technical controls and governance frameworks, Russell identifies a crucial cultural element for AI security – curiosity. Security professionals must foster an environment where people continually question AI systems with the right skepticism:
- Where is this data coming from?
- How is it governed?
- How did the system reach this conclusion?
- Is this information trustworthy?
- Who else should be involved in reviewing this?
This culture of curiosity extends to red teaming practices, which take on new dimensions in the AI context. Russell describes running "break your bot challenges" where employees across the organization, "from the boardroom to the whiteboard to the keyboard," build and then attempt to break AI systems.
What makes AI red teaming unique is that it's not just about adversarial attacks, but also benign interactions that could accidentally produce harmful outcomes. The solution is what Russell calls a "symphony of talent" – diverse perspectives testing the system from different angles. This approach helps patch not just against security vulnerabilities but also against the biases and blindness that developers inherently bring to their work.
Preparing for the Regulatory Wave Without Reinventing the Wheel
With the EU AI Act and US executive orders creating a rapidly evolving regulatory landscape, security leaders must prepare their organizations for compliance. Russell offers pragmatic advice: "Don't start from scratch."
She points to several valuable resources:
- U.S. State Department Enterprise AI Strategy FY2024-FY2025: Empowering Diplomacy through Responsible AI
- State and local government AI guidance principles (like those from Maricopa County, Arizona)
- OpenAI's preparedness framework and Anthropic’s Responsible Scaling Policy
These resources represent significant investments in AI governance – "$40 million was invested to create these documents for the federal government" – that organizations can leverage as starting points for their own compliance efforts. At minimum, they establish a floor that no organization should fall below.
AI Auditing AI
As AI becomes increasingly central to business operations, Russell sees AI audits becoming as standardized as financial audits, particularly in regulated industries, like finance and healthcare. Interestingly, she envisions AI systems themselves playing a role in these audits.
"When you build a model, it's completely different," she explains, dispelling concerns about AI auditing itself. "It's not the student grading their own homework," but more like having "another faculty member, Nobel Prize winner actually grading a student."
From Baby Tigers to Secure, Mature Systems
The journey from AI enthusiasm to responsible execution requires security professionals to play a central role from day one. By addressing the triple threat of accuracy, fairness and security concerns, embedding security into the DNA of AI systems, and fostering a culture of curiosity and diverse perspectives, organizations can successfully scale AI while managing risks.
As Russell succinctly puts it, become "a doer, not a talker." Start building models, not just using them to understand the security implications. The key is bringing security expertise to the AI development table early and ensuring these powerful tools serve your organization's goals without becoming the tiger that bites back.