With his breadth of experience, Minesh Tanna, solicitor-advocate and AI lead at Simmons & Simmons, discusses the most exciting developments to date, the difficulties in establishing liability for AI models, and AI’s impact on the jobs market.
In what ways can AI help the insurance market?
AI combines increasingly powerful computer processing with the ability to review and analyse large datasets. This combination allows machine learning (the most common type of AI) to assist the insurance market in various ways, such as pricing risk more accurately and resolving claims more quickly and accurately.
AI can also be used to prevent fraud (which costs the insurance industry billions of pounds each year) and improve customer relations – through chatbots for example.
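To make the pricing point concrete, here is a minimal, purely illustrative sketch of how risk factors might feed into a motor premium. The factor names and loadings below are invented assumptions for illustration only; in practice a machine learning model would learn such loadings from historical claims data rather than have them hand-coded:

```python
# Hypothetical sketch of risk-based pricing. All factors and loadings are
# invented for illustration; a real insurer's model would be learned from data.

def expected_claim_cost(driver):
    """Crude frequency x severity estimate from hand-picked risk factors."""
    base_frequency = 0.08             # assumed baseline claims per policy-year
    base_severity = 2500.0            # assumed average cost per claim (GBP)

    frequency = base_frequency
    if driver["age"] < 25:
        frequency *= 1.8              # younger drivers claim more often
    if driver["annual_miles"] > 12000:
        frequency *= 1.3              # more miles means more exposure to risk
    if driver["prior_claims"] > 0:
        frequency *= 1.5 ** driver["prior_claims"]  # claims history loading

    return frequency * base_severity  # expected cost per policy-year

def premium(driver, expense_loading=1.35):
    """Premium = expected claim cost plus an expenses/profit margin."""
    return round(expected_claim_cost(driver) * expense_loading, 2)

careful = {"age": 45, "annual_miles": 8000, "prior_claims": 0}
risky = {"age": 22, "annual_miles": 15000, "prior_claims": 1}
print(premium(careful), premium(risky))
```

The richer the data an insurer holds (telematics, IoT devices and so on), the finer-grained these loadings can become, which is what makes more accurate pricing possible.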
What AI developments do you see having the biggest impact on the industry?
The interaction between AI and the IoT (Internet of Things), which will give insurers access to more of our personal data and so allow for more accurate risk profiling and pricing.
The cross-sharing of data about individuals on an international scale.
Visual recognition AI, which could transform sectors like the automotive insurance industry.
Have AI developments caused any problems for the industry?
I think insurers will struggle to price AI risks (the decision-making of a complex neural network can be inherently unpredictable).
I also think the increasing focus by regulators on the ethical use of AI, particularly its transparency, will force insurers to explain exactly how they are using AI, which may not be straightforward or desirable. There are also data security risks in holding large volumes of personal data.
When it comes to self-driving cars and robotics, where does liability sit when faults/accidents occur?
This is a difficult question, which lawyers are grappling with, for two reasons:
1. Where complex AI models are becoming increasingly unpredictable and almost autonomous, can we attribute liability to any humans at all?
2. Where there are often various entities involved in the deployment of an AI system (the developer, the entity that trained the system, and the user), who is at fault when the AI system goes wrong and causes harm? The answer will depend on the particular case, but this will be a challenge for insurers, particularly in subrogated claims.
Where can you see AI developing further in the future?
We’re still in the so-called ‘narrow’ AI phase, where AI can do one task very well. The next phase is ‘general’ AI, where AI behaves more like humans. We’re getting closer to that stage (complex neural networks are loosely modelled on the human brain), but we’re probably still a few decades away from true ‘general’ AI.
In the meantime, we can expect to see increasingly complex and accurate deep learning AI systems in the insurance industry – for example, greater use of visual recognition to assess damage and more personalised insurance policies.
Do you see AI developments impacting the jobs market? If yes, what roles in particular?
Yes, AI will inevitably impact jobs, but I don’t think it’ll be the doomsday scenario that some predict. AI will also create new jobs, and governments will hopefully commit to re-training employees for new roles. Some people even suggest that there may be a net increase in jobs, but I think that’s optimistic.
I think AI will hit manufacturing roles due to the rise in robotic process automation, and also clerical jobs. It will also have an impact, but perhaps less so, in sales and customer relations jobs, and professional services roles.
What are the most interesting AI technologies you have seen in the industry?
I’ll defer to my colleague Alex Gabriel to answer this one. Alex is an associate at Simmons & Simmons who acts for insurers in a variety of disputes and has a particular interest in the impact of technology, including AI, on insurers and their clients:
“I have seen some very interesting uses of AI in services that can make insurance cheaper based on the user’s everyday behaviour. We are already seeing that through telematics in car insurance, instantly available insurance products for ad hoc uses such as flying a drone or delivering a package, and visual recognition AI that assesses damage to a vehicle from photographs.
“In general, we are also seeing insurers combine AI with other technologies (such as blockchain) to improve claims management.”
When it comes to reviewing insurance claims, can AI ever be truly fair and accurate in making decisions?
Can a human ever be truly fair and accurate? We tend to expect a higher standard from AI, but I’m not sure why that is. If the AI system is quicker and cheaper than a human (and just as accurate), isn’t that good enough?
Ultimately, I think AI systems can and will be more accurate and consistent in their decision-making. Their ‘fairness’ will be determined by the humans who develop the system, provide the data and train the system.
An important challenge is ensuring that these humans do not ‘infect’ the AI system with unconscious bias or intrinsic unfairness, an issue which may be difficult to identify and remedy.