A fair pricing model via adversarial learning

Vincent Grari, Arthur Charpentier, Sylvain Lamprier, Marcin Detyniecki
At the core of the insurance business lies classification between risky and non-risky insureds, actuarial fairness meaning that risky insureds should contribute more and pay a higher premium than non-risky or less-risky ones. Actuaries therefore use econometric or machine learning techniques to classify, but the distinction between a fair actuarial classification and "discrimination" is subtle. For this reason, there is growing interest in fairness and discrimination within the actuarial community (Lindholm, Richman, Tsanakas, and Wuthrich, 2022). Presumably, non-sensitive characteristics can serve as substitutes or proxies for protected attributes. For example, the color and model of a car, combined with the driver's occupation, may lead to an undesirable gender bias in the prediction of car insurance prices. Surprisingly, we will show that (1) debiasing the predictor alone may be insufficient to maintain adequate accuracy. Indeed, the traditional pricing model is currently built in a two-stage structure that considers many potentially biased components, such as car or geographic risks. We will show that this traditional structure has significant limitations in achieving fairness. For this reason, we have developed a novel pricing-model approach. Recently, some approaches (Blier-Wong, Cossette, Lamontagne, and Marceau, 2021; Wuthrich and Merz, 2021) have shown the value of autoencoders in pricing. In this paper, we will show that (2) this can be generalized to multiple pricing factors (geographic, car type), and (3) it is perfectly adapted to a fairness context, since it allows the whole set of pricing components to be debiased. We extend this main idea to a general framework in which a single whole pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium while mitigating the unwanted bias according to the desired metric.
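The proxy effect and the adversarial remedy described above can be sketched in a minimal numpy toy example. Everything here (the synthetic data, the proxy feature `x2` standing in for something like car model, and the logistic predictor/adversary pair) is our own illustrative assumption, not the paper's actual pricing architecture: the adversary tries to recover the sensitive attribute from the predictor's score, and the predictor is penalized in proportion to the adversary's success.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
s = rng.integers(0, 2, n).astype(float)        # sensitive attribute (e.g. gender)
x1 = rng.normal(0.0, 1.0, n)                   # legitimate risk factor
x2 = s + rng.normal(0.0, 0.5, n)               # proxy feature correlated with s
y = (x1 + 0.8 * s + rng.normal(0.0, 1.0, n) > 0).astype(float)  # biased target
X = np.column_stack([np.ones(n), x1, x2])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train(lam, steps=3000, lr=0.2):
    """Logistic predictor vs. logistic adversary reading the predictor's score.

    lam = 0 gives a plain (potentially biased) model; lam > 0 adds an
    adversarial fairness penalty (demographic-parity flavour).
    Returns |corr(prediction, s)| as a crude bias measure.
    """
    w = np.zeros(3)                            # predictor weights
    a = np.zeros(2)                            # adversary weights on [1, p]
    for _ in range(steps):
        p = sigmoid(X @ w)
        A = np.column_stack([np.ones(n), p])
        q = sigmoid(A @ a)                     # adversary's guess of s
        a += lr * A.T @ (s - q) / n            # adversary descends its own loss
        grad_pred = X.T @ (p - y) / n          # predictor's task-loss gradient
        # Gradient of the adversary's loss w.r.t. w, chained through p;
        # the predictor ascends it so that s becomes unrecoverable from p.
        grad_adv = X.T @ ((q - s) * a[1] * p * (1.0 - p)) / n
        w -= lr * (grad_pred - lam * grad_adv)
    p = sigmoid(X @ w)
    return abs(np.corrcoef(p, s)[0, 1])

corr_plain = train(lam=0.0)   # biased baseline: the proxy leaks s into prices
corr_fair = train(lam=2.0)    # adversarially debiased predictor
```

With the fairness weight turned on, the correlation between the predicted price and the sensitive attribute drops relative to the plain model, at some cost in task accuracy; this trade-off is exactly why debiasing only the final predictor, while upstream components stay biased, can be insufficient.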