
AI security in banking: why vigilance must catch up with innovation


Katherine Yeung, Chief Risk & Compliance Officer at 10x Banking, joined a roundtable hosted by The Banker to explore a vital question: how can banks match AI innovation with uncompromising vigilance? Here, she shares her reflections.

If 95% of international banks are already using or planning to use AI within a year – and they are, according to the Bank of England – then how are they going to match that innovation with vigilance? 

Last week, I joined a roundtable of tech leaders from across financial services to discuss that very question. 

The discussion, hosted by Chris Newlands, Deputy Editor of The Banker, in partnership with Salesforce, coalesced around a critical truth: as banks accelerate into AI-powered transformation, the security landscape is fundamentally shifting, and old paradigms are no longer enough. 

Here are my insights from our spirited and enlightening debate on AI in financial services. 

Innovation vs. risk: AI’s double-edged sword 

AI is amplifying the complexity of risk management for banks. “Vibe coding” – where software developers use AI to generate code without deep code review – is rapidly removing friction from delivery cycles. However, my peers who took part in the discussion flagged how these advances can outpace the governance frameworks originally designed for more linear software deployment. In some cases, automated code generation cuts visibility for compliance teams, introducing fresh concerns as the velocity of innovation climbs. 

The challenge is compounded by third-party platforms with AI features enabled by default, sometimes without granular user awareness or consent. For highly regulated environments, this creates new exposure pathways for sensitive data, requiring fast and robust recalibration of security operations. 
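
To make the exposure concrete, here is a minimal sketch – illustrative only, and written in Python – of how a security team might flag third-party tools that ship with AI features switched on by default. The inventory structure and field names are assumptions for the example, not any vendor’s real configuration API.

```python
# Hypothetical sketch: flag third-party tools whose AI features are enabled
# by default, so they can be queued for security review before further use.
# The tool inventory and its fields are illustrative assumptions, not a real
# vendor feed or API.
from dataclasses import dataclass


@dataclass
class ThirdPartyTool:
    name: str
    ai_enabled_by_default: bool
    handles_sensitive_data: bool
    consent_recorded: bool


def review_queue(inventory: list[ThirdPartyTool]) -> list[str]:
    """Return the tools that need a security review before continued use."""
    return [
        tool.name
        for tool in inventory
        if tool.ai_enabled_by_default
        and (tool.handles_sensitive_data or not tool.consent_recorded)
    ]


inventory = [
    ThirdPartyTool("doc-assistant", True, True, False),
    ThirdPartyTool("ticketing", False, True, True),
]
print(review_queue(inventory))  # -> ['doc-assistant']
```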

Risk taxonomy: clarity and control for every AI use case

A standout takeaway was the way leading banks are developing granular risk taxonomies and risk assessment methodologies for AI, classifying use cases by potential impact. For example, deploying generative AI to create or modify core banking systems is categorised as high risk, demanding intensive oversight and controls.  

In contrast, using AI for simple data search or summarisation carries lower risk. Some institutions even automate this risk assessment, establishing clear process guardrails and ensuring regulatory and security teams remain plugged in at every stage. This approach brings speed without sacrificing control – a crucial principle as AI gets embedded deeper into financial workflows. 
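
Purely as an illustration – and not a description of any participant’s framework – a simplified version of that automated triage might look like the Python sketch below. The use-case categories, risk tiers and required controls are assumptions made for the example.

```python
# Illustrative AI use-case risk triage. The categories, tiers and controls
# below are assumptions for the example, not a prescribed taxonomy.
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"


# Hypothetical mapping from use-case category to risk tier.
USE_CASE_TIERS = {
    "generate_core_banking_code": RiskTier.HIGH,  # creating/modifying core systems
    "data_search_or_summary": RiskTier.LOW,       # simple search or summarisation
}

REQUIRED_CONTROLS = {
    RiskTier.HIGH: ["human review", "security sign-off", "regulatory assessment"],
    RiskTier.LOW: ["usage logging"],
}


def triage(use_case: str) -> tuple[RiskTier, list[str]]:
    """Classify a proposed AI use case and return the guardrails it must pass."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # unknown cases default to high
    return tier, REQUIRED_CONTROLS[tier]


print(triage("data_search_or_summary"))
print(triage("generate_core_banking_code"))
```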

New approaches in training and accountability 

An encouraging theme emerged on the response side: firms are investing heavily in AI & data academies, aiming to build broad-based understanding that bridges business and technical teams. It’s not just about technical skills; regulatory alignment (such as the EU AI Act) and operational nuance are now essential ingredients. 

Alongside training, new attestation mechanisms are being put in place. These processes compel direct accountability, making stakeholders explicitly confirm and accept AI-driven changes before they are promoted into live environments. Reinforcing the “human in the loop” principle is critical in safeguarding trust and transparency.  
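
As a hedged sketch of the idea only, the snippet below shows an attestation gate that blocks promotion of an AI-driven change until a named stakeholder has explicitly signed off; the data structures and sign-off threshold are assumptions for illustration, not any bank’s actual tooling.

```python
# Illustrative attestation gate: an AI-driven change cannot be promoted to a
# live environment until at least one named stakeholder has attested to it.
# The Change structure and threshold are assumptions for this sketch.
from dataclasses import dataclass, field


@dataclass
class Change:
    change_id: str
    ai_generated: bool
    attestations: list[str] = field(default_factory=list)  # stakeholders who signed off


def can_promote(change: Change, required_attesters: int = 1) -> bool:
    """Keep a human in the loop: AI-driven changes need explicit sign-off."""
    if not change.ai_generated:
        return True
    return len(change.attestations) >= required_attesters


change = Change(change_id="CR-1042", ai_generated=True)
assert not can_promote(change)           # blocked until someone attests
change.attestations.append("risk.owner")
assert can_promote(change)               # promoted only after explicit sign-off
```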

And more broadly, participants emphasised the need to foster a culture of responsible AI use, supported by broader skillsets across business and technology teams.  

Architecting for secure AI: agility with rigour 

Speed is essential in today’s financial sector, but so is clarity about roles and risk ownership. Many participants highlighted how secure-by-default architectures, combined with accelerated governance, are becoming key success factors in safe AI deployment. 

There’s tension here: traditional risk approval cycles can struggle to keep pace with dynamic AI adoption. The answer lies in adaptive frameworks, balancing controls with the agility needed to innovate without compromise.

Secure, confident AI adoption: lessons from 10x Banking 

Notably, there’s still debate about measuring AI’s return on investment. While efficiency gains are easier to quantify, the indirect benefits – strategic agility, futureproofing, resilience – are equally important. 

At 10x, these industry challenges are familiar, and they shape our approach to enabling AI securely, at scale. Our DNA is rooted in delivering compliant, real-time data access as the baseline for resilient financial operations. We believe that: 

  • Data integrity and provenance are non-negotiable. Our platform ensures core banking data remains transparent, traceable and instantly accessible, all prerequisites for responsible AI. 
  • We’ve embedded role-based controls, detailed audit trails, and granular access management in every layer of our stack. This means clients can adopt new AI-powered workflows with the assurance that risk and compliance teams are always in the loop. 
  • We champion adaptive governance. By enabling clients to automate risk classification for AI use cases, 10x Banking empowers banks to innovate at speed, without losing sight of regulatory or security guardrails. 

Ultimately, real-time data and flexible control are what set safe AI apart from the rest, especially in high-stakes environments. 

Closing thoughts 

If there’s one constant in financial services, it’s change – yet the breakneck pace of AI adoption makes this a whole new ballgame. The takeaway from the roundtable is clear: there’s no room for complacency. Forward-thinking banks are getting comfortable with discomfort, turning risk visibility and rapid response into superpowers rather than stumbling blocks. 

At 10x Banking, our mission is to enable clients to harness the benefits of AI securely and confidently, whatever challenges the future brings. 

 

Talk to 10x about how we help you transform core banking