Barry Duffy | CalypsoAI’s Director of Commercial Product
Recently, the insurtech Lemonade has been in the news for its use of AI in the claims process, specifically its use of facial recognition to assist fraud detection. News outlets and the wider public took notice when Lemonade posted an ill-worded series of tweets. Lemonade deleted the tweets and issued a robust clarification, and there the story ends. Before we consign this to history, though, let's look at the lessons the insurance industry can take from it.
“We have never, and will never, let AI auto-reject claims.”
While 84% of insurance executives believe AI will make a major difference to insurance processing in the coming years (particularly in claims and underwriting), human-in-the-loop systems will remain critical to robust adoption of AI. Claims, in particular, will remain a carrier activity, with half of today's claims activities still requiring human assistance over the next decade.
Customers are alert to the dangers of AI. At the root of this incident was a lack of clarity about what Lemonade's AI system was doing in this context. The tweets implied that facial recognition was being used to sort people into likely fraudsters and those less likely to commit fraud. Once the right people at Lemonade were in the loop, it was clear this was not the case, and a simple, acceptable explanation was put forward. Clearer language about the use of AI would have prevented the incident altogether.
To be clear and confident about what your AI is doing, you need to build explainability, fairness, and robustness into it. When you know how your AI works, that fairness is built in, and that it has been tested against real-world conditions, you can make confident declarations about what it does and does not do. Across industries, we see very high rates of project failure and poor ROI on AI initiatives, and we believe this is partly due to low-quality metrics and governance. Perfectly good and not-so-good AI initiatives sit on shelves in many organisations because it's hard to tell them apart.
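To make "fairness built in" concrete, here is a minimal, purely illustrative sketch of one common audit metric: the demographic parity gap, the largest difference in approval rates between any two groups in a model's decisions. All names and data below are hypothetical, not drawn from any particular carrier's system.

```python
def demographic_parity_gap(decisions, groups):
    """Return the largest difference in approval rate between any two groups.

    decisions: list of 0/1 model outcomes (1 = claim approved)
    groups: list of group labels, aligned index-by-index with decisions
    """
    counts = {}
    for outcome, group in zip(decisions, groups):
        total, approved = counts.get(group, (0, 0))
        counts[group] = (total + 1, approved + outcome)
    # Approval rate per group
    rates = {g: approved / total for g, (total, approved) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: model outcomes and a protected attribute.
decisions = [1, 1, 0, 1, 0, 1, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)  # 0.75 vs 0.50 approval -> 0.25
```

A gap near zero suggests similar treatment across groups; a large gap is a signal to investigate before deployment. Real governance programmes would track several such metrics alongside robustness and explainability tests.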
Notwithstanding the minor difficulties here, it's clear that Lemonade and other leading insurers are deploying innovative applications of image, video, and other AI to reduce risk and enhance the claims journey. Lemonade does employ facial recognition technology to detect fraud; however, it is not attempting to classify individuals as fraudsters based on a single image. Rather, it uses the technology to detect cases where an individual is filing multiple fraudulent claims on the platform. This isn't simply automating an existing process but introducing a net-new capability to the organisation.
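The general technique behind this kind of duplicate-claimant detection is to compare face embeddings across claims and flag near-identical pairs. The sketch below is illustrative only (it is not Lemonade's implementation, and the claim IDs and vectors are invented); a real system would obtain embeddings from a trained face-recognition model and route flags to a human reviewer.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def flag_duplicate_claimants(embeddings, threshold=0.95):
    """Return pairs of claim IDs whose face embeddings are near-identical,
    which may indicate one person filing multiple claims."""
    flagged = []
    ids = list(embeddings)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if cosine_similarity(embeddings[a], embeddings[b]) >= threshold:
                flagged.append((a, b))
    return flagged

# Hypothetical embeddings; real ones come from a face-recognition model.
claims = {
    "claim-001": [0.90, 0.10, 0.20],
    "claim-002": [0.89, 0.11, 0.21],  # nearly identical to claim-001
    "claim-003": [0.10, 0.90, 0.30],
}
flag_duplicate_claimants(claims)  # -> [("claim-001", "claim-002")]
```

Crucially, in a human-in-the-loop design a flagged pair is a prompt for investigation, not an automatic rejection, which is exactly the distinction Lemonade's clarification drew.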
From the outside, Lemonade appears to have well-formed ideas about what it will and will not use AI for, and it was able to resolve this potential controversy before it became a more serious issue for the company. It's imperative that all insurers reach the same position. That shouldn't mean slowing down AI investment, but rather ensuring that investment is well managed and has the correct controls in place.
At CalypsoAI, we’ve built a platform for model validation that features an industry-leading AI/ML test harness, human-in-the-loop workflow, and easy-to-understand reporting. Our customers have better insights and higher confidence in what their AI is doing as part of well-coordinated AI strategies.
Barry has designed and delivered software solutions for some of the world’s largest insurers. For over 15 years, his work has spanned early engagement, pre-sales, design, delivery, and go-live support. Before joining CalypsoAI in 2021, he was the Global Product Manager for the FINEOS Claims system.