Antony Jenkins
Tuesday, October 10, 2017

Myth busting the future of AI in financial services

Artificial intelligence is one of the most hyped technologies of recent years. There is hardly an area of our lives that its proponents haven’t said it will revolutionise.

 

Some of that hype is justified. In an industry like financial services that is driven almost entirely by data, machine learning could allow banks to analyse huge amounts of information, spot patterns people may not have noticed, and make much more fine-grained predictions than humans ever could.

 

The areas of potential are massive. On the customer side, intelligently assessing risk profiles, spending habits and income patterns could mean getting much more personalised services. As I’ve already outlined, that could lead to truly innovative products like dynamic mortgages with personalised, flexible repayment plans that can be approved in minutes.

 

And on the institutional side, the proliferation of intelligent monitoring and screening software can help stress-test investments and portfolios, significantly speed up and reduce the cost burden of compliance, and monitor trades and high-risk activities for potential illegalities.

 

Yet AI is also deeply misunderstood. I see three myths we need to debunk if we’re to unlock the potential of artificial intelligence to create the safer, cheaper and more transparent financial services we all deserve:

 

Myth one: AI is about creating artificial people

 

There are actually two assumptions here: one claiming that AI is about to replace us, and another that AI is nothing but hype.

 

Both are wrong. On the one hand, AI is about solving specific problems using data and fancy maths, and isn’t even close to abilities like self-awareness, common sense and creativity. We can’t build computers that learn the way we do, for all kinds of reasons – cost, complexity, and the lack of millions of years of evolutionary pressures, to name just a few.

 

Yet on the other hand, AI systems are here. Machines now beat humans at incredibly complex games, predict the stock market, manage investments, diagnose disease, identify fraudsters, direct resources and solve engineering problems in ever more fields.

 

These myths arise because too many of us think AI means trying to create artificial people. Yet framing the debate like this leaves us either panicked that ‘true’ AI is around the corner, or disappointed that a single algorithm can’t ‘behave like a human’.

 

Myth two: AI is a black box

 

This is the myth that AI algorithms are closed systems whose workings we can’t see. According to this view, because an AI teaches itself the rules it uses to solve a problem, we can never be sure what those rules are – or how it makes its decisions.

 

This is important because machines can be biased, and we need to catch and correct those biases before they creep into decision-making. Deploying AI models that discriminate on sex or ethnicity, for example, would lead to an outcry from customers and regulators.

 

Yet solutions are available. For example, you can systematically vary the inputs to an AI system and analyse how its outputs change. Produce a “fancy average” of the resulting shifts in its predictions, and you can work out which factors drive the machine’s decisions.
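
To make the idea concrete, here is a minimal sketch of that kind of perturbation analysis in Python – permuting one input at a time and averaging how far the model’s predictions move. The loan-style feature names, the toy data and the scikit-learn logistic regression are illustrative assumptions, not a description of any real system.

    # Minimal sketch of perturbation-based explanation: shuffle one input at a
    # time, re-score the model, and average the change in its predictions.
    # Feature names, data and model are hypothetical stand-ins.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy data: three applicant features and an approve/decline label.
    feature_names = ["income", "existing_debt", "years_at_address"]
    X = rng.normal(size=(1000, 3))
    y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    baseline = model.predict_proba(X)[:, 1]

    # Permute each feature in turn and measure the average shift in the
    # predicted approval probability -- the "fancy average" in the text.
    for i, name in enumerate(feature_names):
        X_perturbed = X.copy()
        X_perturbed[:, i] = rng.permutation(X_perturbed[:, i])
        shifted = model.predict_proba(X_perturbed)[:, 1]
        print(f"{name}: average prediction shift = {np.abs(shifted - baseline).mean():.3f}")

Inputs whose permutation barely moves the predictions are ones the model largely ignores; large shifts flag the factors that dominate its decisions.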

 

Black boxes, then, aren’t necessarily a technological failure. These days, they’re a failure of oversight – the result of not paying enough attention to regulatory and ethical obligations.

 

Myth three: banks will lead the AI revolution

 

If you’re an incumbent, theoretically you’re in a very strong position. You have significant resources, a large customer base and brand recognition. You have a wealth of information about your customers’ spending habits, income and attitudes to risk, which many FinTechs, who are technology-rich but often data-poor, lack.

 

But, despite what people think, incumbent banks can’t do much with that data. In reality, your typical global bank has thousands of databases that have been built independently of each other, sometimes over decades, merged awkwardly together in a patchwork of systems over years of acquisitions and staggered technology spend. Many of them are far more technology-poor than they let on.

 

Most significantly, banks aren’t culturally attuned to innovation. Incumbents are practically hard-wired not to make mistakes and to change things only incrementally and reactively, but this new world of technology is all about learning from mistakes, constantly iterating on ideas and making improvements.

 

Here startups have the advantage – they thrive on agility and speed in ways that incumbents do not, and can be much more experimental about their technologies and the business models supporting them.

 

So how can banks and FinTechs collaborate?

 

To create that agility and speed, you need a clear sense of purpose, streamlined organisations that encourage people to take responsibility for their own projects, and people who thrive in a flexible, changing environment.

 

Yet it’s not an easy match. Think of it as an elephant and a mouse dancing – beautiful in theory, yet challenging to realise in practice. If it’s to work, we need to see it from the wider ecosystem perspective. If I’m a big bank, who do I need to partner with? If I’m a FinTech, who could most use my technology, and how can they help me scale? I believe tackling these questions collaboratively is the likeliest route to AI that transforms financial services for customers, providers and wider society.
