In a recent BBC interview, the EU's Competition Chief, Margrethe Vestager, expressed her concerns about the potential for AI to amplify bias or discrimination in loan and mortgage decisions. This is an important and valid concern. How can bias be avoided?

Bias in AI can arise from the quality of the input data, how models are developed, and how they are subsequently trained. If bias already exists in credit decisioning, AI models trained on that historical data will replicate it. Legacy credit systems may already be biased as a result of human decisions involving factors such as postcode, gender or race. Steps also need to be taken to ensure that bias is not introduced by the algorithms themselves.

AI developers are well aware of the issues around bias and are working to address them in a number of ways. For example, counterfactual explanations aim to reveal the reasons behind decisions by showing what inputs would have needed to be different for an alternative outcome to be reached. Credit is an excellent example of why understanding the reason an application is accepted or rejected matters, for both borrowers and lenders.
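To make the idea concrete, a counterfactual explanation can be produced by searching for the smallest change to an input that flips the decision. The toy approval rule, threshold and step size below are purely hypothetical, invented for this sketch:

```python
# Illustrative counterfactual search over a toy credit model.
# The scoring rule and figures are hypothetical, not any lender's
# production model.

def approve(income: float, existing_debt: float) -> bool:
    """Toy rule: approve when debt is under 40% of income."""
    return existing_debt < 0.4 * income

def income_counterfactual(income: float, existing_debt: float,
                          step: float = 500.0, max_steps: int = 200):
    """Find the smallest income increase (in `step` increments) that
    would flip a rejection into an approval."""
    if approve(income, existing_debt):
        return 0.0  # already approved, no change needed
    for k in range(1, max_steps + 1):
        if approve(income + k * step, existing_debt):
            return k * step
    return None  # no counterfactual found within the search range

# Yields explanations of the form "your application would have been
# approved had your income been this much higher".
delta = income_counterfactual(income=20_000, existing_debt=9_000)
```

The same search could run over any input variable, which is what lets a borrower see precisely which factor to change.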

This method goes some way towards opening up the ‘black box’ that machine learning can be, by making algorithms more transparent. In other cases, race or gender may simply be removed as variables from models. However, this information can still be inferred, or assumed, from other input data.
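A minimal sketch of how a dropped sensitive attribute can still leak through a proxy variable, using entirely synthetic data (the field names and figures are invented for illustration):

```python
# Sketch: even after removing a sensitive column, a remaining feature
# can act as a proxy for it. All data below is synthetic.

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical records: a retained feature vs. the removed attribute.
postcode_band  = [1, 1, 2, 2, 3, 3, 4, 4]
sensitive_attr = [0, 0, 0, 1, 1, 0, 1, 1]

r = pearson(postcode_band, sensitive_attr)
if abs(r) > 0.5:
    print("postcode_band is a likely proxy: dropping the sensitive "
          "column alone does not remove the bias")
```

A check like this is only a first screen; production fairness audits would test predictability of the sensitive attribute with a full model, not a single correlation.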

Our proven approach to achieving bias-free results is anonymisation, combined with transactional data and behavioural analytics, to determine trends and coefficients that correlate with good money-management behaviour.

Re-imagining credit decisioning with transactional data
Technological advances have led to the introduction of Open Banking and Open Finance, making a new level of data available and new insights possible. Understanding borrowers’ ability to manage and repay credit is hugely important to improving access to credit and ensuring better outcomes, and these advances have transformed what can be achieved in credit assessment.

Harnessing the benefits of these vast amounts of real-time and historical data, which could take days or weeks to collate and analyse manually, requires the computational power and high-speed processing that AI provides. This transactional data provides the basis for removing bias: it allows for the evaluation of how well people manage their money, rather than how they perform against potentially biased static factors.

We designed a new Financial Capability Score (FCS) metric, powered by FIOLA®, our AI-powered, cash flow-based credit decisioning engine with behavioural analytics driven by advanced data science techniques. The FCS metric shifts the focus from missed payments to understanding the borrower's capability to manage money and credit on a monthly basis. It allows for the inclusion of factors such as regular rent payments, enables lenders to price risk accurately to an APR that reflects borrowers’ financial capability, and works without the need for a traditional credit score when one is not available.

Eliminating bias and improving outcomes
To remove potential negative factors, the FIOLA® risk engine was designed to work without any personally identifiable information (PII), blinding our models to sensitive data that could otherwise play an active role in the credit-decisioning process. Applications are assessed on cash-in and cash-out, with behavioural analytics indicating how the borrower prioritises and makes decisions about managing their money.
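In outline, PII-blinding followed by cash-flow assessment might look like the following sketch. The field names and the feature set are assumptions chosen for illustration, not the FIOLA® engine's actual schema:

```python
# Sketch of PII-blind feature extraction from transaction data.
# Field names and features are illustrative assumptions only.

PII_FIELDS = {"name", "address", "date_of_birth", "gender"}

def blind(record: dict) -> dict:
    """Drop PII fields so the model never sees them."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

def cashflow_features(transactions: list[dict]) -> dict:
    """Summarise behaviour from cash-in / cash-out amounts only."""
    cash_in = sum(t["amount"] for t in transactions if t["amount"] > 0)
    cash_out = -sum(t["amount"] for t in transactions if t["amount"] < 0)
    return {
        "cash_in": cash_in,
        "cash_out": cash_out,
        # Share of income left over -- a money-management signal.
        "surplus_ratio": (cash_in - cash_out) / cash_in if cash_in else 0.0,
    }
```

The point of the design is that the scoring model only ever receives the output of `cashflow_features` on blinded records, so sensitive attributes cannot enter the decision directly.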

This helps enable access to credit, and better rates, for borrowers who pay their bills on time but are unable to obtain them under traditional methods. The FIOLA® risk engine is also designed to ensure lenders do not originate a loan unless the borrower passes a range of suitability and affordability checks. Another benefit our application of AI can deliver using open banking data is real-time affordability and vulnerability analysis, providing lenders with early alerts if borrowers’ circumstances or behaviour change suddenly and outreach may be required.
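A minimal sketch of such an early-warning check: flag a borrower whose latest monthly surplus drops sharply against their recent average. The window and the drop threshold are assumptions for the sketch, not production parameters:

```python
# Illustrative vulnerability alert on monthly surplus figures.
# Threshold and baseline window are hypothetical choices.

def needs_outreach(monthly_surplus: list[float],
                   drop_threshold: float = 0.5) -> bool:
    """Alert when the latest month's surplus falls below half the
    average of the preceding months."""
    if len(monthly_surplus) < 2:
        return False  # not enough history to compare against
    *history, latest = monthly_surplus
    baseline = sum(history) / len(history)
    return baseline > 0 and latest < drop_threshold * baseline
```

In practice such a rule would sit alongside richer behavioural signals, but the principle of comparing live cash flow against a borrower's own baseline is the same.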

Transparency and explainability around model development, monitoring and training are also vital to demonstrating the elimination of bias and to developing ethical AI. We provide transparency for the results that drive predictions, for example through feature importance and regression coefficients presented as graphs or tables, helping financial institutions understand the factors contributing to the probability of loan default. We also ensure that our AI adheres to relevant financial regulations and that its predictions and decision-making processes are fair, ethical and compliant with legal requirements.
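To illustrate the kind of transparency a coefficient table gives, the sketch below ranks features of a fitted logistic-style model by the magnitude of their coefficients and converts each to an odds ratio. The feature names and weights are made up for illustration, not real model outputs:

```python
# Sketch: turning fitted regression coefficients into a transparent
# importance table. Weights below are invented for illustration.

import math

coefficients = {               # log-odds weight per standardised feature
    "missed_direct_debits": 0.9,
    "surplus_ratio": -1.4,
    "gambling_spend_share": 0.6,
}

def importance_table(coefs: dict) -> list[tuple[str, float, float]]:
    """Rank features by |coefficient|; odds ratio = exp(coefficient).
    An odds ratio above 1 raises the modelled default probability."""
    rows = [(name, w, math.exp(w)) for name, w in coefs.items()]
    return sorted(rows, key=lambda r: abs(r[1]), reverse=True)

for name, weight, odds in importance_table(coefficients):
    print(f"{name:24s} weight={weight:+.2f} odds_ratio={odds:.2f}")
```

A table like this is what lets a lender see, for instance, that a healthy surplus ratio lowers the modelled default odds, making the prediction auditable rather than opaque.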

Wider benefits
At Finexos, eliminating bias has always been fundamental to the development of our next-generation credit decisioning technology. The company was founded to increase access to affordable credit for all borrowers and to drive financial inclusion by developing ethical AI by design, removing the bias that can already exist in lending today. This benefits borrowers and lenders equally, and ultimately the economy and society, as default rates are reduced, viable lending opportunities that would ordinarily be refused are opened up, and the flow of credit keeps moving through the system.

Speed is also important, as the nature of today’s on-demand lending means borrowers expect, and sometimes require, rapid decisions. For lenders, failure to deliver them may result in the borrower approaching another lender. The use of AI speeds up decisions and significantly reduces lender processing costs, a benefit that can also be passed on to borrowers in the form of lower credit rates.

Our vision is to harness AI to remove existing bias and improve access to credit. The reduction of bias is just one of the ways in which we are working to enable financial inclusion and achieve better credit outcomes for borrowers and lenders.