The belief among firms that AI will be a “silver bullet” for identifying vulnerability is “very dangerous”, warned Andrew Gething, managing director at Morgan Ash.
Speaking at the launch of the CII’s vulnerability guide, Gething emphasised the need for firms to think about API rather than AI.
An application programming interface (API) is a set of rules and protocols that allows different software applications to communicate with each other.
Gething did believe technology would have a place in helping firms support vulnerable customers, but said the basics needed to be mastered first.
“In a bank, for example, you can take transaction data and you can look for patterns in terms of people looking for gambling sites and these sorts of things.
“Making leaps from that to saying this person’s got cancer is just really scary. So with regard to vulnerability, we need to get the basics right to start with and actually just take [AI] out of the question,” he said.
Use cases
Vanessa Riboloni, head of research and insight at the CII, described recent roundtables discussing AI and vulnerability as “fragmented”.
Some of the use cases Riboloni had heard of involved AI being used in a “proactive” way, drawing on transactional data to identify early warning signs of vulnerability among customers.

Others used it in a more “reactive” way, identifying vulnerable customers through voice analytics.
In both scenarios, even where AI was being used, it was being applied “with a lot of caution”, Riboloni said.
“There are AI solutions popping up all over the place and people are just jumping on the bandwagon without actually questioning whether the issue at hand requires an AI solution.
“We deliberately did not include AI within the guidance because it is still not mature and it is evolving quickly,” she added.
Eddie Grant, director at the PFS, felt it was important for firms to have the right people in the room when developing AI solutions.
He said: “What often happens with development is someone creates an AI solution, then a problem is identified, then everyone has to find a fix for the problem.”
The FCA was also in attendance at the CII event, with Jonathan Pearson, head of consumer policy and outcomes, adding that if firms were introducing AI as part of their customer processes, they needed to think about how it fits in with identifying vulnerability.
He said whether AI would be good at helping firms to identify vulnerabilities depended on its design and implementation.
“It is not a simple yes or no. It could be [good] but it all depends on how it’s used,” Pearson explained.
alina.khan@ft.com
