Can AI be a force for good in insurance?
Exploring a sci-fi world where insurance algorithms shape risky behaviors
Most narratives around AI come from sci-fi, media, big tech, and influencers
No single source of information can accurately predict “the real story of AI”
As with all other technologies, AI has benefits and drawbacks for society
The critical insight is that we have a lot of control over shaping AI’s future
A sci-fi story about insurance helps illuminate the pros and cons of AI today
Going beyond the bleak picture of our AI future
In the last edition of Forestview, I summarized the 9 Rules for Humans in the Age of Automation from tech journalist Kevin Roose along with my reactions. While many of the concerns Roose identifies are valid, my feeling was that the overall tone of his book Futureproof was a bit pessimistic. Encoded in Roose's nine rules was a sense of inevitability and the need for humans to prepare themselves to remain relevant in an economy increasingly dominated by AI. Roose isn't alone in spinning a cautionary tale about automation and AI; books and media stories on the subject have trended toward negativity in recent years.
According to noted author and VC investor Kai-Fu Lee, this pessimistic tone isn't surprising. In the book AI 2041, co-authored with former Google colleague turned sci-fi author Chen Qiufan, Kai-Fu asserts that in the last five years AI has become the world's hottest technology after 50+ years of research, including periods of advancement and hype followed by setbacks or AI "winters". Kai-Fu credits the commercial breakthrough of AI applications to the development of deep learning and its associated requirements of large data sets, massive storage, and major leaps in processing power.
AI is now at a tipping point. It has left the ivory tower. The days of slow progress are over. - Kai-Fu Lee
Kai-Fu asserts that speculation about how AI will progress varies widely because AI appears complex and opaque to many, and the main sources of information for non-technical people are sci-fi writers, the media, and influencers. (I would add tech companies to this list.) While tech evangelists often promise unfettered good from a world dominated by AI, most others seek to "drive clicks" and do so through dystopian imagery. Kai-Fu states in AI 2041 that "their predictions often lack scientific rigor…it is no wonder that the general view about AI - informed by half-truths - has turned cautious and even negative". AI is like most technologies - neither inherently good nor evil - and in Kai-Fu's view, AI will evolve over time such that the societal benefits outweigh the negatives, no different than other major technological breakthroughs.
A sci-fi story about insurance and AI in 2041
Kai-Fu and Chen collaborated on AI 2041 to paint a picture of how they see AI developing and becoming part of our daily lives two decades from now. The book is structured as a series of fictional stories grounded in current technological trends, each highlighting critical questions that humans will wrestle with in the intervening period. The first chapter centers on a fictional insurance firm in India called Ganesh Insurance. In Chen's story, Ganesh has created an ecosystem of apps and partnerships, called "the Golden Elephant", that captures and shares large behavioral data sets on customers (with their permission). With this granular data, Ganesh builds powerful predictive algorithms around the likelihood that a customer will file a claim and the losses Ganesh would have to pay, and it offers discounts to families for engaging in less risky behaviors. This model is similar to many Internet of Things (IoT) programs, such as telematics, smart home devices, and wearables, that insurance carriers offer customers today. Ganesh also relies on Aadhaar, India's unique identification system, which is built on biometric information and ties together fingerprints, retina signatures, genetic histories, family background, occupations, credit scores, home-buying history, and tax records. With proper authorization from customers, Ganesh additionally uses social media data and minors' data to personalize its products and services.
Nayana, the protagonist of the Ganesh Insurance story, is a teenage girl living in Mumbai with her parents, grandparents, and brother. She has a crush on a classmate named Sahej and uses social media to engage with him and gauge whether he is interested in her as well. In her online interactions, Nayana notices that the Golden Elephant appears omnipresent across a range of applications. She also notices that the company uses small nudges to encourage healthier behaviors in her family. For example, Ganesh sends reminders to her grandparents to take their medicines and schedule doctors' appointments. Ganesh also convinced her father to give up smoking by showing him how much premium he would save by quitting, and it prompted her mother to reduce the number of sweets her brother eats to lower his risk of diabetes. All of the data points her family provides are fed into Ganesh's proprietary algorithms, which the Golden Elephant then uses to encourage healthier behaviors. In these examples, AI is a force for good, helping Nayana's family stay healthy.
Perhaps more troubling, Ganesh Insurance is also following Nayana's pursuit of Sahej, who is a descendant of the old Dalit caste. While India's constitution outlawed the caste system in 1950, caste discrimination remains prevalent in the country according to a recent survey. Millions of Dalits are part of a group of people long considered "untouchable" and still feel that they are at the bottom of the caste ladder. Many of the social challenges Dalits face were reinforced during the COVID-19 pandemic. In the story, Nayana eventually arranges a meeting with Sahej, who warns her to stay away so she will not be harmed by associating closely with him. Nayana's love for Sahej is strong, and she chooses to follow him to the slum in Mumbai where he lives. Along the way, she is continually pinged with notifications from Ganesh Insurance that her family's premiums are rising by the minute because she is walking closer and closer to an area with high poverty, disease, and pollution.
Find balance in understanding how AI benefits us
How does Ganesh Insurance know that Sahej is a Dalit? It doesn't know directly. But based on his darker skin tone, determined through facial recognition, and the locations of his activity, Ganesh's algorithms associate Sahej with "unhealthy" traits, purely from a claims perspective: in Ganesh's data, people whose profiles "look" like Sahej's file more claims. So, to keep premiums lower for Nayana's family, Ganesh attempts to keep her "healthy" by nudging her away from Sahej. The AI that Ganesh relies upon has not been tuned to keep bias and discrimination out of its data and recommendations. There is a clear ethical concern here; Ganesh has gone too far in its attempts to encourage "healthy" behaviors among its customers.
In summarizing the key points of Chen's story about Ganesh Insurance, Kai-Fu notes that deep learning is mathematically trained to maximize the value of an objective function. According to Kai-Fu, deep learning is "an omni-use technology, meaning it could be applied to almost any domain for recognition, prediction, classification, decision-making, or synthesis". He goes on to say, "the advent of deep learning pushed AI capabilities from unusable to usable for many domains". Critically, in the context of insurance, the major leap that deep learning provided was going beyond previous attempts to encode rules based on human judgments: it performs quantitative optimization on a massive data set where humans only provide the outcome, namely whether a claim occurred or not. Kai-Fu shares that:
“…a deep learning algorithm trained on an ocean of information will discover correlations between obscure features of the data that are too subtle or complex for we humans to comprehend…”
By contrast, humans are much better at optimizing by drawing on a wide range of experiences, abstract concepts, and common sense. Demographic characteristics such as race, gender, or ethnicity may have predictive power in estimating the probability of losses, yet humans seeking to balance fairness and social justice with profitability may judge them off-limits. As Kai-Fu explains, deep learning AI models struggle to balance multiple objective functions: AI tends to have "a maniacal focus on that one corporate goal, without regard to the users' well-being".
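A toy sketch can make this "maniacal focus" concrete. Everything below is hypothetical (synthetic data and a simple logistic model, not anything from the book): the only human input is the outcome label, claim or no claim, and the optimizer loads weight onto whatever features lower the loss, including a "neighborhood" proxy of exactly the kind that trips up Ganesh's algorithms.

```python
import math
import random

# Hypothetical single-objective sketch: humans supply only the label
# (claim / no claim); the optimizer weights any feature that helps.
random.seed(0)
n = 2000

rows, labels = [], []
for _ in range(n):
    age = random.gauss(40, 10)
    neighborhood = random.randint(0, 1)  # 1 = flagged "high risk" area
    # Synthetic data-generating process: the proxy leaks into outcomes,
    # so a purely loss-driven model will learn to use it.
    logit = 0.03 * (age - 40) + 1.5 * neighborhood - 1.0
    p = 1 / (1 + math.exp(-logit))
    rows.append([1.0, (age - 40) / 10, float(neighborhood)])
    labels.append(1.0 if random.random() < p else 0.0)

w = [0.0, 0.0, 0.0]  # [intercept, age, neighborhood] weights
lr = 0.5

# Plain gradient descent on one objective: claim-prediction loss.
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for x, y in zip(rows, labels):
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for j in range(3):
            grad[j] += (p - y) * x[j]
    for j in range(3):
        w[j] -= lr * grad[j] / n

# The "neighborhood" proxy ends up with the largest positive weight:
# the optimizer has no concept of fairness, only of lowering the loss.
print([round(wi, 2) for wi in w])
```

Nothing in the training loop says "discriminate"; the bias arrives purely through correlations in the data, which is precisely why it is so easy to deploy unintentionally.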
So how do we achieve a better balance between the benefits and drawbacks of AI? Kai-Fu outlines some considerations in AI 2041:
Teach AI to have complex objective functions, such as lowering insurance premiums while maintaining fairness
Ensure every objective function must be beneficial to humans (Stuart Russell advocates that humans always be “in the loop” in designing objective functions)
Incentivize companies to develop more holistic objectives, either through approaches such as corporate social responsibility or government regulation
Make the use of third-party AI auditors or "watchdogs" a best practice for firms
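The first idea above, a more complex objective function, can be sketched in a few lines. All names here are hypothetical, as are the choices of mean squared error as a stand-in for claims loss and the demographic parity gap as the fairness metric (one of several fairness definitions in use); the point is simply that fairness can be folded into the quantity the optimizer minimizes.

```python
def prediction_loss(preds, labels):
    # Mean squared error as a stand-in for any claims-loss metric.
    return sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(preds)

def fairness_penalty(preds, groups):
    # Demographic parity gap: the difference in average predicted risk
    # between two groups of customers.
    a = [p for p, g in zip(preds, groups) if g == 0]
    b = [p for p, g in zip(preds, groups) if g == 1]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def combined_objective(preds, labels, groups, lam=2.0):
    # The "complex objective function": accuracy AND fairness, with lam
    # controlling how heavily the fairness term is weighted.
    return prediction_loss(preds, labels) + lam * fairness_penalty(preds, groups)

preds  = [0.2, 0.8, 0.6, 0.3]   # model's predicted claim probabilities
labels = [0.0, 1.0, 1.0, 0.0]   # actual outcomes (claim / no claim)
groups = [0, 0, 1, 1]           # group membership for the fairness check
print(combined_objective(preds, labels, groups))
```

A model trained to minimize this combined quantity can no longer reduce claims loss by widening the gap between groups, which is the mathematical version of Stuart Russell's call to keep human values in the loop when designing objective functions.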
For firms seeking to root out bias and discrimination in their AI efforts to ensure fairness, Kai-Fu outlines these steps:
Firms that use AI should disclose which systems are used and for what purpose
AI engineers should be trained with a set of ethical principles and guidelines
Rigorous testing should be required and embedded in AI training tools
New guidelines and regulations requiring AI audits should be adopted
AI decisions must be accompanied by an explanation of how they were determined, and transparency in AI needs to become a higher priority to aid interpretation
What examples can you think of where AI is being used for good in insurance? Are there any examples where you have seen it used irresponsibly or in ways that raise concerns? What steps should companies take to ensure they are using AI responsibly? What steps should regulators and rating agencies take, if any? What else can be done in your view to help shape AI to be a positive force for good over the next two decades? Share your thoughts below in the comments.