Using AI in wealth management: Best practices

Oxford Risk’s Greg Davies on the opportunities and pitfalls of AI

Some form of AI has been in use within wealth management for more than a decade. The recent acceleration of AI capabilities has greatly expanded the opportunities for application… as well as misapplication.

The use of AI within wealth management over the past decade has largely been a picture of shiny new AI hammers looking for nails. That is, ‘solutions’ looking for applications, rather than careful analysis of real, pressing problems concluding that a particular type of AI is best placed to solve them.

The pace of both its advancement and adoption means that making the most of AI in the coming years will require clear thinking about what AI is and isn’t well placed to help with, and a solid strategy for keeping that distinction in mind when it matters.

What are some of the general rules of best practice?

1. First, define the problem: is AI really better suited to solving the problem than, say, better UX or a deterministic algorithmic decision tree? Or is it simply shinier?

2. Use AI to diagnose, not prescribe: AI should guide you towards the right decision, not make it for you. It is far better suited to number-crunching analysis than to handling the ambiguity inherent in real-world application.

3. Use AI to work with humans, not replace them: AI is best thought of as solving different categories of problem to enhance humans, not as a low-cost substitute for expensive humans, regardless of the task at hand.

Three ways AI can be useful in wealth management

Three categories of tasks technology is particularly well-suited to are:

1. Machine-learning analytics – For example: looking for costly behavioural patterns; isolating relevant variables in arriving at suitable investment recommendations; testing behavioural interventions; and testing for relevant personality traits to better match investors to interventions and products (see the sketch after this list).

2. Live model optimisation – AI can analyse simultaneous behavioural interventions and optimise model parameters in real time to boost effectiveness.

3. ‘Decision prosthetics’ for diagnosis and prescription – The best use of machines is not to make decisions, but to act as ‘decision prosthetics’ that support, guide, and improve human decision-making through complex situations in pursuit of suitable solutions.
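
To make the ‘isolating relevant variables’ example in the first category concrete, here is a minimal sketch using permutation importance from scikit-learn. The client features, the synthetic data, and the ‘recommendation accepted’ outcome are all illustrative assumptions, not a description of any real system.

```python
# Minimal sketch: isolating which client variables matter most when
# predicting an outcome (here, a hypothetical "recommendation accepted"
# flag). All feature names and data are illustrative, not real.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Hypothetical client features (synthetic stand-ins for real data).
X = np.column_stack([
    rng.normal(50, 12, n),      # age
    rng.lognormal(11, 0.6, n),  # investable assets
    rng.integers(1, 8, n),      # risk tolerance score (1-7)
    rng.normal(0, 1, n),        # composure (behavioural trait)
])
feature_names = ["age", "assets", "risk_tolerance", "composure"]

# Synthetic target: acceptance driven mostly by risk tolerance and composure.
logits = 0.8 * (X[:, 2] - 4) + 0.6 * X[:, 3] + rng.normal(0, 0.5, n)
y = (logits > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```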

Adopting a ‘centaur model’ of AI-human hybrid decision making

Being clear on what machines are good at and what humans are good at is key to an effective division of labour.

If a job involves data processing, pattern recognition, consistency, and low error rates, it’s the place for a robot.

If it involves empathy, unstructured problems, creativity, coping with dynamic environments and multiple objectives, and generating insights from association across completely different problems, it’s the place for a human.

Crucially, because these skill sets are different there is value to combining the two – a ‘centaur’ model that applies AI to the right parts of the right problems.

In wealth management, technology is best used as ‘decision prosthetics’: an artificial add-on that helps humans with what they’re bad at, while recognising that ultimately humans are required to make the decisions and navigate the deep uncertainty of both a changing environment and their own unstable preferences.

Only when systems are designed to enable both parts of the centaur to work together will AI change financial decision making for the better.

One important example, especially in the context of Oxford Risk’s suite of decision-support tools, is how AI unlocks enormous potential for applying behavioural finance to deliver personalised behavioural interventions at scale.

One of the major problems with tailoring an individual’s investing experience based on insights from behavioural science is that insights from academic studies apply to populations, not to individuals.

Something may ‘work’ on average, but how do you know what will work for a given individual in a given set of circumstances at a given time?

With the right feedback loops, an AI-enhanced system could dynamically adjust a message’s content and tone through real-time analysis of a client’s financial personality and situation, drawing on what’s been shown to work better or worse for similar investors in similar circumstances.
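
As a minimal sketch of what such a feedback loop could look like, the following uses Thompson sampling over a Beta-Bernoulli model to learn which message tone engages a given client segment. The segments, tones, and engagement signal are hypothetical assumptions, not Oxford Risk’s actual method.

```python
# Minimal sketch of a feedback loop that learns which message tone works
# best for which kind of investor. Segments, tones, and the binary
# "engaged" signal are illustrative assumptions.
import random
from collections import defaultdict

TONES = ["reassuring", "factual", "urgent"]

# One Beta-Bernoulli posterior per (client segment, tone) pair,
# starting from a uniform prior of [1, 1].
posteriors = defaultdict(lambda: [1, 1])  # key -> [alpha, beta]

def choose_tone(segment: str) -> str:
    """Thompson sampling: draw from each tone's posterior, pick the best."""
    draws = {t: random.betavariate(*posteriors[(segment, t)]) for t in TONES}
    return max(draws, key=draws.get)

def record_outcome(segment: str, tone: str, engaged: bool) -> None:
    """Update the posterior for the tone that was actually shown."""
    alpha_beta = posteriors[(segment, tone)]
    alpha_beta[0 if engaged else 1] += 1

# Simulated loop: suppose anxious, low-composure clients respond best to
# reassurance - a hidden truth the system must discover for itself.
TRUE_RATE = {"reassuring": 0.6, "factual": 0.4, "urgent": 0.2}
for _ in range(5_000):
    tone = choose_tone("anxious_low_composure")
    record_outcome("anxious_low_composure", tone,
                   random.random() < TRUE_RATE[tone])

for t in TONES:
    a, b = posteriors[("anxious_low_composure", t)]
    print(f"{t:11s} shown {a + b - 2:5d} times, est. rate {a / (a + b):.2f}")
```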

When shouldn’t wealth firms use AI?

Unfortunately, the excitement around the capability of an AI tool can cloud thinking around its applicability.

The prime examples of this are in assessing suitability in general, and risk tolerance in particular.

Determining the best investment solution for each investor based on their individual circumstances involves a lot of moving – and some constantly changing – parts.

You need to account for an investor’s risk tolerance, their balance sheet (including likely future changes to it, and their emotional attachment to items on it), current and future income and expenditure, relevant behavioural traits, knowledge and experience, and so on. Humans struggle to comprehend how these details interact, let alone integrate them all flawlessly. You might think, therefore, that AI should take over.

However, suitability merely requires mapping a (relatively small) set of information defining each client’s circumstances and preferences to a (relatively small) set of different investment solutions. You need a well-designed decision tree, not AI. An AI system that picked solutions probabilistically, and which could evolve over time to provide unpredictably different solutions for the same client characteristics, would be a compliance nightmare, and potentially dangerous for the client.
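
A minimal sketch of that deterministic mapping makes the point: the same client inputs always produce the same, auditable answer. The thresholds and portfolio names here are hypothetical.

```python
# Minimal sketch of a deterministic suitability mapping: explicit,
# auditable rules that always return the same solution for the same
# inputs. Thresholds and portfolio names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClientProfile:
    risk_tolerance: int      # psychometric score, 1 (low) to 7 (high)
    horizon_years: int       # investment horizon
    emergency_reserve: bool  # does the client hold adequate liquid reserves?

def suitable_solution(c: ClientProfile) -> str:
    """Deterministic decision tree: every branch is explicit and reviewable."""
    if not c.emergency_reserve or c.horizon_years < 3:
        return "capital_preservation"
    if c.risk_tolerance <= 2:
        return "cautious_portfolio"
    if c.risk_tolerance <= 5:
        return "balanced_portfolio" if c.horizon_years < 10 else "growth_portfolio"
    return "growth_portfolio" if c.horizon_years < 10 else "adventurous_portfolio"

# The same profile always maps to the same solution - no probabilistic drift.
profile = ClientProfile(risk_tolerance=4, horizon_years=12, emergency_reserve=True)
assert suitable_solution(profile) == suitable_solution(profile)
print(suitable_solution(profile))  # -> growth_portfolio
```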

Where AI can help in suitability is in monitoring and reviews. While the framework to determine suitability shouldn’t change, AI can be used to assist with dynamic suitability – constantly updating suitability in response to changing client circumstances and preferences. If the client’s balance sheet, circumstances, and goals and preferences are continually in flux, then so should be the suitable solution.
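
A minimal sketch of what that monitoring could look like, with hypothetical fields and thresholds: the suitability framework itself stays fixed, and is simply re-run whenever a client’s recorded circumstances move enough to matter.

```python
# Minimal sketch of dynamic suitability monitoring. The fields and the
# change threshold are illustrative assumptions.
def needs_review(previous: dict, current: dict,
                 balance_change_threshold: float = 0.20) -> bool:
    """Flag a suitability review when circumstances change materially."""
    balance_moved = (
        abs(current["net_assets"] - previous["net_assets"])
        > balance_change_threshold * previous["net_assets"]
    )
    goals_changed = current["goals"] != previous["goals"]
    horizon_changed = current["horizon_years"] != previous["horizon_years"]
    return balance_moved or goals_changed or horizon_changed

last_review = {"net_assets": 500_000, "goals": ("retirement",), "horizon_years": 15}
today = {"net_assets": 380_000, "goals": ("retirement",), "horizon_years": 15}

if needs_review(last_review, today):
    print("Re-run the (unchanged) suitability framework for this client.")
```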

It’s a similar story with risk tolerance. Risk tolerance, correctly understood and measured, is a simple, stable psychometric trait, and best assessed with simple psychometric questions.

These could be presented in slick, tech-enabled formats – but this is UX, not AI.
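
To illustrate how simple the underlying measurement is, here is a minimal sketch of scoring hypothetical psychometric items on a Likert scale. The items and scoring are invented for illustration; real instruments are statistically validated.

```python
# Minimal sketch of psychometric risk-tolerance scoring: averaged Likert
# responses, with some items reverse-scored. Items and scale are
# hypothetical stand-ins for a validated questionnaire.
LIKERT_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

# (item text, reverse_scored) - agreement with a reverse-scored item
# indicates LOWER risk tolerance.
ITEMS = [
    ("I am comfortable with large swings in the value of my investments.", False),
    ("I would rather accept lower returns than risk losing money.", True),
    ("Taking financial risks is something I enjoy.", False),
]

def risk_tolerance_score(responses: list[int]) -> float:
    """Average the (reverse-corrected) responses onto a 1-5 scale."""
    assert len(responses) == len(ITEMS)
    corrected = [
        (LIKERT_MAX + 1 - r) if reverse else r
        for r, (_, reverse) in zip(responses, ITEMS)
    ]
    return sum(corrected) / len(corrected)

print(risk_tolerance_score([4, 2, 5]))  # -> 4.33 (fairly high tolerance)
```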

Attempts to assess risk tolerance with real-time processing of investors’ behaviour, social media footprint, or video monitoring of their facial expressions (all real examples!) are misguided.

Such ‘revealed preferences’ are extremely unstable and reflect all sorts of short-term behavioural biases and influences of context that should not be used as a foundation for the long-term risk profile of an investor’s portfolio.

Greg B Davies, PhD, is head of behavioural finance at Oxford Risk