Inji Charkani, Innovation Officer at AXA Partners, shared some thoughts on how machine learning and AI are affecting the insurance industry.
Many insurers are investing in AI beyond machine learning, which is one of its subfields. Opportunities range from an enhanced customer experience (reduced cycle times, personalised advisors through chatbots, fast-track claims management) to productivity gains, pricing sophistication, churn-risk anticipation and more accurate fraud-detection patterns.
Insurers can either build internal capabilities, partner with start-ups in these fields, or do both to accelerate time to market.
AI is a great enabler. Nevertheless, the right balance between human contact and AI is key. Some customers may decide to interact with a virtual personal assistant, while others may prefer a video call with an insurance representative, or even to speak to an agent, and that's fine.
However, there are some threats and challenges:
- Data: Garbage in, Garbage out
AI relies on the quality of the data, and insurance companies do have abundant data. Yet the skills and assets must be in place to collect, clean, consolidate, store and analyse that data to ensure AI truly helps humans make educated decisions. Understanding the strengths and limitations of each cognitive algorithm is necessary to derive reliable outputs.
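As a minimal sketch of the "garbage in, garbage out" point, here is the kind of data-quality filter an insurer might run before feeding records to a model. The field names (`policy_id`, `claim_amount`) are illustrative assumptions, not any real insurer's schema:

```python
# Minimal sketch: clean and consolidate raw claim records before modelling.
# Field names are hypothetical, chosen only to illustrate the idea.

def clean_claims(records):
    """Drop records with missing or invalid fields, then de-duplicate."""
    seen = set()
    cleaned = []
    for rec in records:
        policy_id = rec.get("policy_id")
        amount = rec.get("claim_amount")
        if not policy_id or not isinstance(amount, (int, float)) or amount < 0:
            continue  # garbage in -> filtered out, never reaches the model
        key = (policy_id, amount)
        if key in seen:
            continue  # consolidate duplicates arising from merged data sources
        seen.add(key)
        cleaned.append(rec)
    return cleaned

raw = [
    {"policy_id": "P-001", "claim_amount": 1200.0},
    {"policy_id": "P-001", "claim_amount": 1200.0},  # duplicate entry
    {"policy_id": None, "claim_amount": 500.0},      # missing identifier
    {"policy_id": "P-002", "claim_amount": -50},     # invalid amount
]
print(len(clean_claims(raw)))  # -> 1: only the valid, unique record survives
```

Real pipelines are, of course, far more elaborate, but the principle is the same: reliable outputs start with validated inputs.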
Making sure employees are well trained at the various levels of the organisation is another element to consider.
Talking about AI's impact, it is more than likely that over time some jobs will disappear while new ones appear. Managing this transition is a challenge. For example, AI platforms in the legal field are affecting much more than low-value tasks: an AI engine can take a new contract, learn from it, compare it to a database of similar contracts and produce a diagnosis that helps humans take decisions. This is powerful, but job-impactful at the same time.
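To make the legal example concrete, one common technique behind such engines is measuring textual similarity between a new contract clause and a database of known ones. A minimal sketch using bag-of-words cosine similarity follows; the clauses are invented, and production systems use far richer language models:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two short documents."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical clause database; real systems hold thousands of precedents.
database = {
    "liability clause": "the insurer shall not be liable for indirect damages",
    "payment clause": "premiums are payable monthly in advance",
}
new_clause = "the insurer shall not be liable for consequential damages"

# Rank known clauses by similarity to the new one, as a diagnosis aid.
ranked = sorted(database,
                key=lambda k: cosine_similarity(database[k], new_clause),
                reverse=True)
print(ranked[0])  # -> liability clause (the closest known precedent)
```

The engine does not decide anything by itself; it surfaces the closest precedents so a human can take the decision, which is exactly the division of labour described above.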
- Malicious Use
With the malicious use of AI comes the emergence of new risks: security risks such as forged images or videos and spear phishing today, and cyber-attacks on autonomous cars or drones tomorrow. And many more we have not yet anticipated.
Without being over-dramatic, the threats are real, and hence regulation is necessary. But over-regulating AI can slow down innovation, and countries where AI is less controlled could build a competitive advantage. How, then, should AI be regulated? Since it seems unrealistic to regulate science or mathematical models themselves, regulation should instead address the purpose and consequences of AI use.
- Transportation: driverless vehicles
Driverless cars are a good example. Take an accident caused by an autonomous car. Liability normally falls on the owner of the car at fault, but what about the sequence of parties involved? Will we end up with a series of recovery actions against the car manufacturer, the equipment provider, the AI software supplier? What about the data demonstrating the cause of the accident? Who will own it, and who will share it to show how the circumstances occurred? How long will it take to establish?
All these questions are complex. The balance will probably be found through consortiums assembling different points of view from the private and public sectors.
Will AI/Machine learning complement or replace humans?
This debate has been going on for a while. AI is not a discontinuity in the way humankind has been progressing. Like any innovation, it is the way we use it that makes it valuable or malicious. We should not fear it, but rather control it thoughtfully. What matters most in a world of services is that added human touch, warmth and empathy on top of frictionless execution.
Combining both intelligences (human and artificial) is ultimately a global matter that touches on deep societal transformations. We are moving towards an augmented human reality. Making sure it does not become an elite society is our collective responsibility.