It has been more than a year since AI’s commercial breakout, and its biases continue to surprise. Although businesses, legislators and consumers are increasingly united in their understanding of AI biases, many technologies remain far from foolproof.
This is clear from the occasional odd trend and cautious think-piece, both of which correctly warn that AI is just as capable of affirming biases as it is of uncovering new information.
However, many organisations that have built AI into their risk evaluation and decision-making processes have found the opposite to be true: embracing AI has the potential to reduce bias in decisioning, but only when certain approaches are taken.
Here, we explain why AI-supported decision-making is something to embrace, and the essential features of ethical models.
Financial Services and Insurance providers are keenly focused on preventing bias.
Access to their products and services can significantly affect a customer’s life. This is precisely why the FCA’s Consumer Duty requires firms to evidence the fair and equitable treatment of customers – from speed of decisioning to preferential products and deals.
Today, most firms ensure fairness (and compliance) by checking against confirmed fraud and risk intelligence databases. These checks always happen at the point of application and, ideally, are repeated at regular intervals via an on-book screening programme.
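As a minimal sketch of that workflow – written in Python, with entirely hypothetical identifiers, intelligence records and screening cadence – the example below checks an applicant against confirmed intelligence at the point of application and flags existing customers who are due an on-book re-screen.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical stand-ins for confirmed fraud / risk intelligence records.
CONFIRMED_FRAUD_IDS = {"A-1042", "A-2318"}   # identifiers with confirmed fraud markers
RESCREEN_INTERVAL = timedelta(days=90)       # assumed on-book screening cadence

@dataclass
class Customer:
    customer_id: str
    last_screened: date

def screen_at_application(applicant_id: str) -> str:
    """Point-of-application check against confirmed intelligence."""
    return "refer_for_review" if applicant_id in CONFIRMED_FRAUD_IDS else "proceed"

def due_for_on_book_screening(book: list[Customer], today: date) -> list[Customer]:
    """Existing customers whose last screen is older than the agreed interval."""
    return [c for c in book if today - c.last_screened >= RESCREEN_INTERVAL]

if __name__ == "__main__":
    print(screen_at_application("A-1042"))   # refer_for_review
    book = [Customer("C-001", date(2024, 1, 5)), Customer("C-002", date(2024, 6, 1))]
    print([c.customer_id for c in due_for_on_book_screening(book, date(2024, 7, 1))])
```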
Effective as these “pre-AI” measures are, the potential for unintended bias is ever present. This is for three key reasons:
1. A snapshot may not be 100% accurate.
Economic change means that “normal” consumer behaviour is evolving quickly. Confirmed fraud aside, data snapshots are becoming increasingly challenging to risk assess; as a result, an applicant may appear to exceed an organisation’s risk appetite and be excluded from preferential products.
In this context, an AI co-pilot – which continually learns about and adjusts the influence of risk markers – becomes a compelling option.
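As a simple illustration of what “adjusting the influence of risk markers” can mean in practice – assuming scikit-learn and purely illustrative marker values – an incrementally trained model can re-weight its risk markers as new confirmed outcomes arrive, rather than relying on a fixed snapshot of rules:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Illustrative risk markers only: [application velocity, address mismatch, device risk score]
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial batch of historical applications with confirmed outcomes (1 = fraud, 0 = genuine).
X_initial = np.array([[0.9, 1.0, 0.8], [0.1, 0.0, 0.2], [0.7, 1.0, 0.6], [0.2, 0.0, 0.1]])
y_initial = np.array([1, 0, 1, 0])
model.partial_fit(X_initial, y_initial, classes=[0, 1])
print("Initial marker weights:", model.coef_)

# As new confirmed outcomes arrive, the same markers are re-weighted incrementally,
# so their influence on decisioning tracks changing behaviour instead of staying fixed.
X_new = np.array([[0.8, 0.0, 0.3], [0.3, 1.0, 0.2]])
y_new = np.array([0, 0])
model.partial_fit(X_new, y_new)
print("Updated marker weights:", model.coef_)
```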
2. Even the best intentions can be biased.
Presented with more grey areas (see above), Financial Services and Insurance organisations may undertake more investigations to ensure fair decisioning.
But even the best-informed and most balanced investigator can experience bias. In these cases, the bias is usually unconscious, but the outcome for the consumer is the same.
It is important to note that ineffective AI models can reflect the biases of their human designers. But, when ethically modelled and maintained, AI-powered predictive analytics can help overcome the decision biases inherent to human nature.
3. Consortium data alone cannot predict the future.
Consortium fraud checks will always be a critical layer in counter-fraud strategies. The intelligence within approved syndicates is indisputably factual, and therefore imperative to fair decisioning.
That said, in isolation consortium data cannot accurately, or ethically, predict the future. As a result, some consumers may be treated unfairly. For example, they may be required to undertake additional verification steps for longer than necessary, or lose access to new product offers.
It is evident that bias is not exclusive to AI decisioning. In fact, AI modelling could solve many of our current challenges with equitable decisioning.
However, to protect consumers and business integrity, a proportionate, vigilant response is key. This can be achieved by only implementing AI decisioning built on principles of broader context, objectivity and accountability.
Any strategy, process or outcome could become skewed if broader context, objectivity and accountability are not hard coded into a counter-fraud approach. And, given the complexity of risk evaluation – paired with a demand for faster, fairer outcomes – an AI co-pilot seems vital to the continued success of fraud teams.
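What “hard coding” objectivity and accountability can look like in practice is routine, automated fairness monitoring. The sketch below – plain Python, with hypothetical group labels, decision logs and tolerance – compares referral rates across customer groups and escalates the model for review if the gap breaches an agreed threshold:

```python
from collections import defaultdict

# Hypothetical decision log: (customer_group, referred_for_extra_checks)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

MAX_REFERRAL_RATE_GAP = 0.10  # illustrative tolerance agreed with governance / compliance

def referral_rates(log):
    totals, referred = defaultdict(int), defaultdict(int)
    for group, was_referred in log:
        totals[group] += 1
        referred[group] += int(was_referred)
    return {g: referred[g] / totals[g] for g in totals}

rates = referral_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > MAX_REFERRAL_RATE_GAP:
    print(f"Referral-rate gap of {gap:.0%} exceeds tolerance - escalate model for review")
```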
Do you need help using AI decisioning in your fraud strategy? Contact a Synectics Fraud Strategy Consultant below to arrange a chat at your convenience.