July 1, 2022

Why 2022 Can Be the Year Financial Services Providers Embrace Ethical AI

Scott Zoldi

Nearly two years after a global pandemic sent most banking customers online, the majority of financial institutions appear to be embracing digital transformation. But many still have a long way to go. For example, a recent survey of mid-sized U.S. financial institutions by Cornerstone Advisors found that 90% of respondents have launched, or are in the process of developing, a digital transformation strategy, but only 36% said they are halfway through. I believe that one of the reasons behind the lag in uptake is many banks’ reluctance to use artificial intelligence (AI) and machine learning technologies.

Organizations of All Sizes Can Embrace Ethical AI

The responsible application of explainable, ethical AI and machine learning is critical in analyzing and ultimately monetizing the manifold customer data that is a byproduct of any institution’s effective digital transformation. Yet according to the Cornerstone research cited above, only 14% of the institutions that are halfway or more through their digital transformation journey (5% of total respondents) have deployed machine learning.

Low adoption rates may reflect a C-suite reluctance to use AI that is not entirely unfounded: AI has become deeply mistrusted even among many of the workers who deploy it, with research finding that 61% of knowledge workers believe the data that feeds AI is biased.

Yet ignoring AI isn’t a viable strategy either, because it’s already widely embraced by the business world at large. A recent PwC survey of U.S. business and technology executives found that 86% of respondents considered AI a “mainstream technology” at their company. More importantly, AI and machine learning present the best possible solution to a problem encountered by many financial institutions: after implementing anytime, anywhere digital access – and collecting the high volume of customer data it produces – they often realize they’re not actually using this data to serve customers any better than before.

The impact of this mismatch – expanded digital access and abundant customer data on one side, unmet customer needs on the other – can be seen in FICO research, which found that while 86% of consumers are satisfied with their bank’s services, 34% have at least one financial account with, or engage in “shadow” financial activity through, a non-bank financial services provider. Furthermore, 70% report being “likely” or “very likely” to open an account with a competing provider offering products and services that address unmet needs such as expert advice, automated budgeting, personalized savings plans, online investments, and electronic money transfers.

The solution, which gathered strong momentum throughout 2021, is for financial institutions of all sizes to implement AI that is explainable, ethical, and responsible, incorporating interpretable, auditable, and humble techniques.

Why Ethics by Design Is the Solution

September 15, 2021, saw a major step toward a global standard for Responsible AI with the release of the IEEE 7000-2021 Standard. It provides businesses (including financial services providers) with an ethical framework for implementing artificial intelligence and machine learning by establishing standards for:

  • The quality of data used in the AI system;
  • The selection processes feeding the AI;
  • Algorithm design;
  • The evolution of the AI’s logic;
  • The AI’s transparency.

As the Chief Analytics Officer at one of the world’s foremost developers of AI decisioning systems, I have been advocating Ethics by Design as the standard in AI modeling for years. The framework established by IEEE 7000 is long overdue. As it gains broad adoption, I see three new, complementary branches of AI becoming mainstream in 2022:

  • Interpretable AI distinguishes machine learning models that are interpretable from those that are merely explainable. Explainable AI applies algorithms to machine learning models post hoc to infer which behaviors drove an outcome (typically a score), whereas Interpretable AI specifies machine learning models that provide an irrefutable view into the latent features that actually produced the score. This is an important distinction: interpretable machine learning allows for exact explanations (versus inferences) and, more importantly, this deep knowledge of specific latent features allows us to ensure the AI model can be tested for ethical treatment (the first sketch after this list illustrates the contrast).
  • Auditable AI produces a trail of details about itself – variables, data, transformations, and model processes such as algorithm design and model logic – making it easier to audit (hence the name). Addressing the transparency requirement of the IEEE 7000 standard, Auditable AI is backed by firmly established model development governance frameworks and recording technologies such as blockchain (the second sketch below shows a minimal blockchain-style trail).
  • Humble AI is artificial intelligence that knows when it is unsure of the right answer. Humble AI uses uncertainty measures, such as a numeric uncertainty score, to quantify a model’s confidence in its own decisioning, ultimately giving practitioners more confidence in the decisions produced (a third sketch, after the next paragraph, shows the pattern).
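
To make the interpretable-versus-explainable distinction concrete, here is a minimal, hypothetical Python sketch assuming scikit-learn. The feature names are invented, permutation importance stands in for post-hoc explanation techniques in general, and a logistic regression stands in for glass-box models; none of this reflects FICO’s actual models.

    # Interpretable AI: a glass-box model whose coefficients *are* the
    # scoring logic, contrasted with post-hoc explanation of a black box.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    features = ["utilization", "payment_history", "tenure", "inquiries"]  # hypothetical names

    # Interpretable: the fitted weights can be read directly as exact
    # explanations of how each feature moves the score.
    glass_box = LogisticRegression().fit(X, y)
    for name, coef in zip(features, glass_box.coef_[0]):
        print(f"{name}: exact weight {coef:+.3f}")

    # Explainable: a black box explained after the fact; importances are
    # inferences about what drove outcomes, not the logic itself.
    black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
    result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
    for name, imp in zip(features, result.importances_mean):
        print(f"{name}: inferred importance {imp:.3f}")

The practical difference is that an auditor can verify the glass-box weights directly, while the black-box importances remain estimates whose fidelity must itself be justified.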

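Similarly, the tamper-evident trail behind Auditable AI can be sketched with a simple hash chain. This blockchain-style toy, with hypothetical entry fields, only gestures at the established governance frameworks the standard contemplates.

    # A sketch of an auditable, tamper-evident model-development trail:
    # each record commits to the previous record's hash, blockchain-style.
    import hashlib
    import json
    import time

    def add_record(trail, entry):
        """Append an audit entry chained to the previous record's hash."""
        prev_hash = trail[-1]["hash"] if trail else "genesis"
        record = {"entry": entry, "prev_hash": prev_hash, "ts": time.time()}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        trail.append(record)

    trail = []
    add_record(trail, {"step": "data", "source": "train_v3.csv"})     # hypothetical
    add_record(trail, {"step": "design", "algorithm": "neural_net"})  # hypothetical
    add_record(trail, {"step": "logic", "latent_features": 12})       # hypothetical

    # Any later edit to an earlier record breaks every downstream link.
    print(all(r["prev_hash"] == p["hash"] for p, r in zip(trail, trail[1:])))

Because each record commits to the hash of the one before it, any retroactive change to the development history is immediately visible to an auditor.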
When implemented properly, Interpretable AI, Auditable AI and Humble AI are symbiotic: Interpretable AI takes the guesswork out of what is driving the machine learning, enabling explainability and ethics; Auditable AI records a model’s strengths, weaknesses, and development-stage transparency, ultimately establishing the criteria and uncertainty measures assessed by Humble AI. Together, Interpretable AI, Auditable AI and Humble AI provide financial services institutions and their customers with not only a greater sense of trust in the tools driving digital transformation, but also the benefits those tools can provide.
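
Finally, the uncertainty measures assessed by Humble AI might look like the following minimal sketch, again assuming a scikit-learn-style classifier. The humble_decision function, the distance-from-the-decision-boundary uncertainty score, and the 0.2 threshold are illustrative assumptions, not a production method.

    # A sketch of "humble" decisioning: score a case, attach an
    # uncertainty measure, and defer when the model is unsure.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    model = LogisticRegression().fit(X, y)

    def humble_decision(model, x, uncertainty_threshold=0.2):
        """Return (decision, uncertainty score) for one case.

        Uncertainty here is how close the predicted probability sits to
        the 0.5 decision boundary: 0.0 = fully confident, 1.0 = coin flip.
        """
        p = model.predict_proba(x.reshape(1, -1))[0, 1]
        uncertainty = 1.0 - 2.0 * abs(p - 0.5)  # hypothetical measure
        if uncertainty > uncertainty_threshold:
            # The model "knows it is unsure": defer instead of deciding.
            return "refer_for_review", uncertainty
        return ("approve" if p >= 0.5 else "decline"), uncertainty

    decision, score = humble_decision(model, X[0])
    print(decision, round(score, 3))

In a real decisioning flow, the deferral branch would hand low-confidence cases to a simpler, well-understood fallback strategy or to human review.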

About the author: Scott Zoldi is Chief Analytics Officer at FICO, responsible for the analytic development of FICO’s product and technology solutions, including the FICO Falcon Fraud Manager product, which protects about two thirds of the world’s payment card transactions from fraud. While at FICO, Scott has been responsible for authoring more than 120 patents, with 71 granted and 49 pending. Scott is actively involved in the development of new analytic products utilizing artificial intelligence and machine learning technologies, many of which leverage new streaming AI innovations such as adaptive analytics, collaborative profiling, deep learning, and self-learning models. Scott is most recently focused on the application of streaming self-learning analytics for real-time detection of cybersecurity attacks and money laundering. Scott serves on two boards of directors, Tech San Diego and the Cyber Center of Excellence. Scott received his Ph.D. in theoretical physics from Duke University. Keep up with Scott’s latest thoughts on the alphabet of data literacy by following him on Twitter @ScottZoldi and on LinkedIn.

Related Items:

Europe’s New AI Act Puts Ethics In the Spotlight

Achieving Data Literacy: Businesses Must First Learn New ABCs

AI Bias Problem Needs More Academic Rigor, Less Hype

 
