

Understanding the pitfalls of AI decision making and what to look out for

CHRIS GRIMES


 

Artificial intelligence is expected to add over $13 trillion to annual global economic output by 2030. It is already gaining a strong foothold across multiple industries, including financial services and fintech.

 

While the benefits of AI and machine learning are evident, we must also acknowledge the pitfalls and limitations of these cutting-edge technologies. Let’s take a look at some of the challenges of using AI in the financial sector.

 

How AI and machine learning are prone to human issues

Bias could be a problem with AI as well.

Fintech firms and mortgage lenders have reported lower operational costs, faster processing times, and lower default rates with AI-driven underwriting software. What most lenders fail to mention, however, is that these systems can discriminate against applicants of a particular race, gender, or ethnicity.

 

It’s no fault of the AI itself. Machine learning bases its decisions on the data points fed into it, which means incomplete or biased data will reproduce the same bias in AI-driven programs.

Solution: Enterprises have to watch for biased results throughout the testing phase. Adjusting the algorithms or adding a third-party bias detection suite (such as AI Fairness 360 from IBM) can help mitigate these risks.
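To make this concrete, here is a minimal sketch of the kind of check a lender might run during testing with IBM’s open-source AI Fairness 360 toolkit. The column names, group encodings, and toy data below are illustrative assumptions, not taken from any real underwriting system.

```python
# Minimal sketch: checking loan-approval outcomes for disparate impact
# with IBM's AI Fairness 360 (pip install aif360). Column names and
# group encodings are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy underwriting outcomes: 1 = approved, 0 = denied.
df = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
    "race":     [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = privileged group (example)
    "income":   [85, 92, 40, 77, 60, 38, 55, 42],
})

dataset = BinaryLabelDataset(
    favorable_label=1,
    unfavorable_label=0,
    df=df,
    label_names=["approved"],
    protected_attribute_names=["race"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"race": 0}],
    privileged_groups=[{"race": 1}],
)

# Disparate impact: ratio of favorable-outcome rates between groups.
# Values well below 1.0 (commonly < 0.8) are a red flag worth auditing.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A check like this only surfaces a symptom; the follow-up is tracing which input data drove the skew and retraining or reweighting accordingly.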

 

Customer privacy could be a problem when using alternative data for loan underwriting.

The use of alternative data has done wonders in the lending industry. Consumers with limited credit history can qualify for financial products based on alternative data points such as their social media profiles, bill payment history, online purchases, and even browsing history.

 

But there is a downside as well. Consumer privacy is at risk with the use of alternative data. Current AI- and ML-driven solutions have no set boundaries on the kind of data they use or the extent to which those data points factor into the final results.

Solution: AI solutions have to be upfront about their use of personally identifiable data, and obtaining user consent before touching these datasets is critical. Additionally, AI vendors should disclose how they use consumer data without giving away any trade secrets.
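One way to enforce that boundary in practice is to gate alternative-data features on recorded consent before they ever reach the model. The sketch below is a hypothetical illustration: the data categories, the `ConsentRecord` structure, and the `usable_features` helper are all invented for this example.

```python
# Minimal sketch of consent gating before alternative data enters an
# underwriting model. Categories and structures are hypothetical.
from dataclasses import dataclass, field

ALTERNATIVE_DATA_CATEGORIES = {
    "social_media", "bill_payments", "online_purchases", "browsing_history",
}

@dataclass
class ConsentRecord:
    user_id: str
    granted: set = field(default_factory=set)  # categories the user opted into

def usable_features(features: dict, consent: ConsentRecord) -> dict:
    """Drop any alternative-data feature the user has not consented to."""
    return {
        category: value
        for category, value in features.items()
        if category not in ALTERNATIVE_DATA_CATEGORIES
        or category in consent.granted
    }

consent = ConsentRecord(user_id="u-123", granted={"bill_payments"})
raw = {
    "credit_history": {"score": 640},           # traditional data: always usable
    "bill_payments": {"on_time_ratio": 0.97},   # consented alternative data
    "browsing_history": {"sites_visited": 1200} # no consent: must be excluded
}
print(list(usable_features(raw, consent)))
# ['credit_history', 'bill_payments']
```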

 

Whenever human judgment feeds these systems, whether labelling training data or interpreting a model’s outputs, a few validation habits help keep those judgments honest:

  1. Use multiple people to code the data.

If there is consistency between your interpretation and those of others, it is more likely that there is some truth, by agreement, in your interpretations (a quick way to quantify this agreement follows the list).

 

  2. Have participants review your results.

Ask the people who provided the data whether your interpretations seem to be representative of their beliefs.

 

  3. Verify with more data sources.

This is sometimes called triangulation. If you can find other sources of data that support your interpretations, then you can have more confidence that what you've found is legitimate.

 

  4. Check for alternative explanations.

Consider whether there are other reasons why you obtained your data. If you can rule out or account for alternative explanations, your interpretations will be stronger.

 

  5. Review findings with peers.

Ask others to review your conclusions. Sometimes others will see things that you missed or can identify gaps in your argument that need to be addressed. They also can provide affirmation that your conclusions are sound and reasonable given your data.
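As a quick illustration of the first point, agreement between two coders can be quantified with Cohen’s kappa. The sketch below uses scikit-learn’s `cohen_kappa_score`; the two annotators’ label sequences are made-up example data.

```python
# Minimal sketch: quantifying agreement between two people who
# independently coded the same records (Cohen's kappa).
# Requires scikit-learn; the label sequences are made-up examples.
from sklearn.metrics import cohen_kappa_score

coder_a = ["risk", "ok", "ok", "risk", "ok", "risk", "ok", "ok"]
coder_b = ["risk", "ok", "risk", "risk", "ok", "risk", "ok", "ok"]

kappa = cohen_kappa_score(coder_a, coder_b)
# kappa = 1.0 is perfect agreement; 0.0 is chance-level agreement.
# Values above roughly 0.6 are often read as substantial agreement.
print(f"Cohen's kappa: {kappa:.2f}")
```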

 

New security challenges emerge as lenders store more user data.

With the growing use of data across different sectors, cybercrimes such as data theft and identity theft are on the rise. According to some reports, the cost of cybercrime could reach $6 trillion annually by 2021, posing a substantial economic threat to corporations and individuals alike.

Since AI-driven solutions draw on a wide variety of consumer data, even a small, undetected data breach could wreak havoc for lenders and their clients.

Solution: The first step is to implement enterprise-grade security at every layer where consumer data is stored or processed. Second, independent security audits are critical for any AI- or ML-driven lending suite. Banks and lending institutions also have to instil basic security awareness in their staff; simple preventive steps can sharply cut the risk of a cyber attack or data theft.
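To make “security at every layer” a little more concrete, here is a minimal sketch of encrypting a consumer record at rest using the Fernet recipe from the widely used Python `cryptography` package. The record fields are invented, and in production the key would live in a key management service, never in the code.

```python
# Minimal sketch: encrypting a consumer record at rest with the
# cryptography package's Fernet recipe (pip install cryptography).
# In production the key belongs in a KMS/HSM, never hard-coded.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store and rotate via a key manager
fernet = Fernet(key)

record = {"name": "Jane Doe", "ssn_last4": "1234", "on_time_ratio": 0.97}

token = fernet.encrypt(json.dumps(record).encode("utf-8"))
# `token` is safe to persist: Fernet authenticates as well as encrypts,
# so any tampering is detected when the token is decrypted.

restored = json.loads(fernet.decrypt(token).decode("utf-8"))
assert restored == record
```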

 

Bottom line

AI has had a ripple effect across every industry, but it is time to move from a frenzied infancy to a mature view of its benefits as well as its limitations.