AI provides a new look at old problems
Leveraging the power of data science to radically rethink your business
Quick summary:
Having the same discussions about disappointing results? Try using AI!
Rely on help from data scientists to frame old problems in terms AI can solve
Start with a naïve approach - have your experts provide “just enough” guidance
Once you build rudimentary models, socialize them and see where the discussion goes
As you iterate through building better AI models, you also build a better business
Rethinking how weather affects insurance claims
In my 25 years working in the insurance industry, I have been in a lot of meetings and involved in many conversations discussing business results. As you might expect, most of these discussions occur when the results are below plan for the year. This can take the form of negative trends (concerning), underperforming segments (bad), or outright missing objectives that affect year-end bonuses (worse). Insurance is unusual as an industry in that carriers do not know the cost of goods sold until after prices have already been set. Why is this? Because the number of claims and the amount of losses paid are not known at the time rates are set. An entire discipline known as actuarial science has been established to address this fundamental issue with statistics. Based on the law of large numbers, carriers gain confidence in making predictions about claims and losses. A firm may not know precisely which homes in a particular neighborhood will have a plumbing leak, for instance, but it knows the overall probability with a fair degree of accuracy based on factors such as the age of the home, location, number of bathrooms, etc.
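To make that law-of-large-numbers point concrete, here is a minimal sketch with made-up data and illustrative column names (home_age_band, num_bathrooms, had_claim are my own inventions, not a real carrier's schema) showing how observed claim frequency can be computed by segment:

```python
# Minimal sketch (synthetic data): observed claim frequency by segment.
import pandas as pd

policies = pd.DataFrame({
    "home_age_band": ["0-20", "0-20", "21-50", "21-50", "50+", "50+"],
    "num_bathrooms": [2, 3, 2, 3, 2, 3],
    "had_claim":     [0, 1, 0, 1, 1, 1],   # 1 = at least one plumbing-leak claim
})

# With a large enough book of business, these observed rates converge on the
# underlying claim probabilities (the law of large numbers at work).
frequency = (
    policies.groupby(["home_age_band", "num_bathrooms"])["had_claim"]
            .mean()
            .rename("claim_frequency")
)
print(frequency)
```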
Insurance losses in the property space can be roughly classified into two broad categories: weather-related and non-weather-related. In my working experience with a large property insurer, roughly 50% of losses were non-weather and 50% were weather-related. Non-weather claims typically follow predictable patterns: you rarely, if ever, see a sudden spike in water leaks. Because they are so steady, non-weather claims are essentially never the driving force behind worse-than-expected profits. That meant the weather-related half of claims drove almost 100% of the variance in losses, for good or bad. Of course, most firms don’t think about their good fortune too much when the weather is more favorable; instead, they might credit higher-than-expected profits to good strategy or execution. On the other hand, when poor weather drove profits that were worse than expected, I heard a lot of excuses about “getting unlucky” and “we can’t do anything about bad weather”. But is this actually true?
One of the most satisfying projects I ever worked on was an R&D effort to better understand the connection between severe weather and claims for homeowners insurance. Superficially, the connection is obvious: bad weather hits a given area, homes are damaged, claims are filed, and losses are paid. Surprisingly, with some digging you’ll find there is more nuance to this relationship: the same weather event in a particular region, even down to the neighborhood level, does not cause every policyholder to file a claim. This becomes evident when you overlay GIS data tracking severe-weather metrics such as rainfall totals, wind speeds, and observed hail sizes on property locations and characteristics. So what is the “pathology of claims” - the exact causal link between severe weather and claims?
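For readers curious what that overlay looks like in practice, here is an illustrative sketch, not the actual analysis, of a spatial join between property locations and a severe-weather footprint. The data is synthetic and the column names (policy_id, max_hail_size_in) are hypothetical:

```python
# Illustrative GIS overlay: which properties sit inside a hail swath?
import geopandas as gpd
from shapely.geometry import Point, Polygon

properties = gpd.GeoDataFrame(
    {"policy_id": ["H-001", "H-002", "H-003"]},
    geometry=[Point(0.2, 0.2), Point(0.8, 0.8), Point(2.5, 2.5)],
    crs="EPSG:4326",
)
hail_swath = gpd.GeoDataFrame(
    {"max_hail_size_in": [1.75]},
    geometry=[Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])],
    crs="EPSG:4326",
)

# Attach the hail observation (if any) to each property location.
exposed = gpd.sjoin(properties, hail_swath, how="left", predicate="within")

# Several properties can sit inside the same swath, yet only some policyholders
# file a claim; that gap is exactly the "pathology of claims" question.
print(exposed[["policy_id", "max_hail_size_in"]])
```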
Start with little guidance, then iterate & discuss
This was the fundamental question I sought to answer along with my team of underwriters, actuaries, claims adjusters, and data scientists. To examine this issue, we partnered with StormGeo, a weather forecasting and intelligence firm that had recently built up its data science team. StormGeo had developed new deep learning techniques that produce weather forecasts strictly from large sets of ground-truth observational data rather than from traditional atmospheric models. When the data science team compared the results from these AI models to the forecasts put out by their trained meteorologists, they found that the deep learning models added predictive power; in some cases, the deep learning models were superior to atmospheric models that had been refined over decades. StormGeo took these initial findings and developed a new product called DeepStorm to help its large clients in the shipping and energy sectors better manage weather-related risk. DeepStorm required a major investment: according to the firm, the data used was six times larger than the traditional data sets, and training the deep learning models meant investing in petaflop-scale supercomputers capable of a thousand trillion (10^15) calculations per second.
The project team I led focused on combining our insurance data sets on property owners, home characteristics, and reported claims with StormGeo’s detailed weather observations and forecast data to see if we could accurately predict which homes would file a claim and how large the loss payout would be for each property - before a claim was ever filed. By building AI models based on two objective functions (the binary presence of a claim - yes or no - and the continuous claim payout amount, which is bounded below at zero) and feeding in hundreds of data fields covering weather, the property, and policyholder demographics, we were looking to gain new insights into the pathology of claims.
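A minimal sketch of that two-objective setup follows. The synthetic data, feature names, and gradient-boosting model choices are my own illustrative assumptions, not the models actually built on the project; the idea is simply to pair a classifier for claim occurrence with a regressor for payout and combine them into an expected loss per property:

```python
# Sketch: one model for the binary presence of a claim, one for the payout amount.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.uniform(0, 4, n),     # observed hail size (in)
    rng.uniform(0, 100, n),   # peak wind gust (mph)
    rng.uniform(0, 80, n),    # home age (years)
])
claim_prob = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.03 * X[:, 1] - 4)))
had_claim = rng.binomial(1, claim_prob)
paid_amount = had_claim * rng.gamma(2.0, 4_000, n)   # zero when no claim

# Objective 1: will this property file a claim?
claim_model = GradientBoostingClassifier().fit(X, had_claim)

# Objective 2: how large is the payout, trained on rows with a claim
# so the target stays strictly positive.
severity_model = GradientBoostingRegressor().fit(
    X[had_claim == 1], paid_amount[had_claim == 1]
)

# Expected loss per property = P(claim) * E[payout | claim].
expected_loss = claim_model.predict_proba(X)[:, 1] * severity_model.predict(X)
print(expected_loss[:5].round(0))
```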
One key to the success of the effort was working collaboratively to identify as many data points as possible to train the AI model, then letting it generate predictions without artificially constraining it by imposing our human “expert” opinions. This naïve approach worked well because StormGeo’s data scientists were not insurance experts: they could build models and judge their accuracy from summary statistics without domain judgment unduly influencing the output. Another key was the iterative approach of generating models, reviewing the results as a team, and discussing refinements to make. In particular, there were a number of decisions about how to structure the data to achieve the best results. A small example was what time increment to use - we settled on daily rather than hourly or weekly - and then how to connect claims with weather, since there is usually a lag between when severe weather occurs and when a claim is reported and filed in the system (sketched below). Finally, having a team with diverse expertise was critical: a core team managing the daily tasks and an extended team of subject matter experts who reviewed results periodically and provided input on refining the models.
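As a concrete illustration of those structuring choices, here is a small sketch using synthetic data and hypothetical column names. It aggregates hourly weather to a daily grain and then links each claim to the most recent severe-weather day within an assumed 14-day reporting lag; the actual project’s join rules were a team decision, not necessarily this one:

```python
# Sketch: daily weather grain plus a lagged join from claims back to weather.
import pandas as pd

weather_hourly = pd.DataFrame({
    "zip": ["30301"] * 4,
    "timestamp": pd.to_datetime(["2023-05-01 13:00", "2023-05-01 18:00",
                                 "2023-05-02 02:00", "2023-05-09 16:00"]),
    "wind_gust_mph": [62, 48, 71, 35],
})
claims = pd.DataFrame({
    "zip": ["30301", "30301"],
    "report_date": pd.to_datetime(["2023-05-04", "2023-06-20"]),
    "paid_amount": [8_200, 1_500],
})

# Daily grain: keep the worst gust observed per ZIP per day.
weather_daily = (
    weather_hourly.assign(weather_date=weather_hourly["timestamp"].dt.normalize())
                  .groupby(["zip", "weather_date"], as_index=False)["wind_gust_mph"].max()
)

# Lag handling: the May 4 claim links back to the May 2 storm, while the June
# claim finds no qualifying weather within the 14-day window.
linked = pd.merge_asof(
    claims.sort_values("report_date"),
    weather_daily.sort_values("weather_date"),
    left_on="report_date", right_on="weather_date",
    by="zip", direction="backward", tolerance=pd.Timedelta(days=14),
)
print(linked)
```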
Look for leaps, not incremental improvements
In the corporate world, so much of the focus of AI efforts is on automation and incremental improvements over time. Gaining small efficiencies at scale makes sense: these small wins can add up to big savings over time. Such efforts are also more straightforward to justify with a solid cost-benefit analysis (CBA), helping to accelerate the approval process and prioritization in the project portfolio. By contrast, the work I described above was preliminary and exploratory: there was no way to create a meaningful or persuasive CBA because there were simply too many unknowns. The rationale for investing time and money was more strategic: for a large insurance carrier whose largest expense was paying losses, any ability to better predict the roughly 50% of claims tied to weather events was inherently valuable. Rather than making excuses, we judged that a moderate investment in developing a more detailed understanding of how weather drove claims was worth it, given the sums of money involved, even if the outcome was unknown.
We sought to make a bold leap by reducing claims, not just an incremental improvement. At worst, by exploring new approaches we could be assured that our current methods were the best available. A better outcome would be to improve the prediction capabilities to better anticipate weather-related losses so that we could set more realistic plans and have a higher probability of achieving our objectives. The best result would be to not only have more accurate predictions, but also to translate that knowledge into action by seeking to prevent some fraction of these losses in the first place. For example, if a homeowner knew with a fair degree of certainty that a significant wind and rain event was going to occur next week, it might motivate them to properly seal up old windows to prevent water intrusion and lower the damage and cost of repair to their home. A prompt from their insurance carrier might help.
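As a toy illustration of that kind of carrier prompt, the rule below flags a policyholder for a notification when a forecast crosses a wind or rainfall trigger. The thresholds, field names, and message are entirely hypothetical, a sketch of the idea rather than any system we built:

```python
# Toy example: turn a forecast into a proactive policyholder prompt.
forecast = {"policy_id": "H-1042", "forecast_wind_mph": 55, "forecast_rain_in": 3.2}

WIND_ALERT_MPH = 50   # hypothetical trigger
RAIN_ALERT_IN = 2.0   # hypothetical trigger

if (forecast["forecast_wind_mph"] >= WIND_ALERT_MPH
        or forecast["forecast_rain_in"] >= RAIN_ALERT_IN):
    print(f"Notify {forecast['policy_id']}: significant wind and rain expected "
          "next week; consider sealing older windows to limit water intrusion.")
```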
As I’ve written in the past, from an innovation perspective it is valuable to build upon small wins to achieve larger victories as well as pursue a mix of offensive and defensive innovation initiatives. Automation remains a major force that is receiving significant investment and attention, causing firms to re-examine what the role of humans and machines will be going forward in their organizations. But AI can do much more than augment the ability of humans, allow them to shift to higher skilled jobs, or replace people entirely if they do not become “futureproof”. AI has massive potential to use its powerful predictive capabilities to reduce losses and even prevent claims from occurring in the first place. Any reduction in economic losses from severe weather events will be welcome in an era of climate change and larger catastrophic losses such as the estimated $74B of damages caused by Hurricane Ian.
Is your organization making significant investments in AI? If so, how would you categorize its impact? Are projects focused more on incremental improvements (defensive innovation) or large leaps (offensive innovation)? Where do you see overlooked opportunities for AI?