Navigating the Risks: Understanding and Mitigating Generative AI Bias in Banking and Risk Management
By Maria Matkovska, Quantitative Analytics Sr. Manager, Model Risk, KeyBank
Generative AI refers to a class of artificial intelligence models that learn patterns from their training data to generate new content (text, images, videos, etc.) [1]. While these models offer transformative potential in finance and banking, particularly in fraud detection, personalized banking, and customer service automation, they also introduce new dimensions of uncertainty to banks' risk management frameworks. Understanding these risks is crucial for senior risk managers to ensure that the adoption of generative AI enhances rather than undermines the stability and reliability of banking operations.
This article focuses on four key aspects of generative AI's influence in the financial field, and in banking and risk management in particular:
- Where generative AI biases stem from
- How these biases manifest
- The potential damage these biases could do
- How the related risks can be prevented and mitigated
- Where Do Generative AI Biases Stem From?
Training Data Bias
Generative AI models require vast amounts of training data. If this data is biased or some populations are overrepresented or underrepresented, the model may learn and propagate these biases, leading to unfair or discriminatory outcomes. For instance, a biased dataset could result in a credit scoring model that unfairly penalizes certain demographics.
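As a concrete illustration, a simple representation check on the training data can surface this kind of imbalance before a model is ever trained. The sketch below is a minimal example in Python, assuming a pandas DataFrame with hypothetical `group` and `default` columns; real fair-lending data analyses are considerably more involved.

```python
import pandas as pd

# Hypothetical training set for a credit model: `group` is a protected
# attribute, `default` is the label the model will learn to predict.
df = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "A", "A", "B", "B"],
    "default": [0,   0,   1,   0,   0,   1,   1,   1],
})

# Share of each group in the training data: group B is underrepresented.
print(df["group"].value_counts(normalize=True))

# Label base rates per group: B's defaults are also overrepresented,
# so a model may learn "group B -> high risk" as a shortcut.
print(df.groupby("group")["default"].mean())
```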
Model Architecture Bias
The design and architecture of the generative AI model itself can introduce biases. For instance, if the model architecture (e.g., a retrieval-augmented generation (RAG) pipeline) is not sufficiently complex or flexible, the model may struggle to retrieve and represent diverse perspectives or to accurately capture the information in the data, leading to biased, toxic, hallucinated, or otherwise inaccurate outputs [2].
Tuning and Optimization Process
The optimization process, which tunes hyperparameters to minimize a defined loss function, can introduce bias in generative AI models. Factors such as the learning rate, batch size, and weight initialization affect how the model converges and generalizes, and these choices can result in biased outcomes [2]. The objective function used during training, along with downstream choices such as prompt design, can also contribute to inaccuracy: if the objective is poorly defined or prioritizes certain outcomes over others, the model may produce biased, hallucinated, or toxic outputs that align with those priorities.
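One way to counteract this at training time is to make fairness part of the objective itself. The sketch below is a minimal, hypothetical PyTorch example for a binary classifier with a binary protected attribute; the demographic-parity penalty and the trade-off weight `lam` are illustrative choices, not the only or necessarily the best fairness criterion.

```python
import torch
import torch.nn.functional as F

def fairness_regularized_loss(logits, labels, group, lam=0.1):
    """Task loss plus a demographic-parity penalty (illustrative only).

    logits: raw model outputs, shape (batch,)
    labels: 0/1 targets, shape (batch,)
    group:  0/1 protected-attribute indicator, shape (batch,)
    lam:    hypothetical trade-off weight between accuracy and parity
    """
    task_loss = F.binary_cross_entropy_with_logits(logits, labels.float())

    # Demographic-parity gap: difference in mean predicted score
    # between the two groups (assumes both groups appear in the batch).
    probs = torch.sigmoid(logits)
    gap = (probs[group == 1].mean() - probs[group == 0].mean()).abs()

    return task_loss + lam * gap
```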
- What Do These Biases Manifest As?
Stereotyping and Prejudice
Generative AI models may generate outputs that reflect stereotypes or prejudices present in the training data. For example, a language model trained on biased text data may produce biased, toxic, or discriminatory language in its outputs [3].
Underrepresentation or Marginalization
Groups that are underrepresented or marginalized in the training data may receive less accurate or fair treatment in the outputs generated by the model. For example, if an image model is trained on photos from the internet, it may learn and reproduce societal biases related to race, gender, or other sensitive attributes present in the training data.
Misleading or Unintended Associations
The models could generate outputs that are plausible-sounding but factually incorrect, inconsistent with the input data, or completely fabricated or hallucinated. Also, the model may learn unintended associations between different attributes or concepts, leading to biased or nonsensical outputs [3]. For example, it may associate certain occupations with specific genders or races, even if there is no inherent connection.
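Such associations can be probed directly. The sketch below is a simplified version of the idea behind word-embedding association tests; `embed()` is a hypothetical stand-in for whatever text-embedding model is in use, and the term lists are illustrative.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_gap(embed, target, terms_a, terms_b):
    """Mean similarity of `target` to terms_a minus its similarity to terms_b.

    A gap far from zero suggests the embedding space links the target
    concept more strongly to one group. `embed` is an assumed callable
    mapping a string to a vector.
    """
    t = embed(target)
    sim_a = np.mean([cosine(t, embed(w)) for w in terms_a])
    sim_b = np.mean([cosine(t, embed(w)) for w in terms_b])
    return sim_a - sim_b

# Hypothetical usage: a large positive gap would indicate "engineer"
# is embedded closer to male terms than female terms.
# gap = association_gap(embed, "engineer", ["he", "man"], ["she", "woman"])
```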
- What Damage Can Such Biases Do to Risk Management and Banks?
Inaccurate Risk Assessments and Missed Opportunities
If a generative AI model is biased, it may produce inaccurate risk assessments by unfairly favoring or penalizing certain groups or factors [3]. This can lead to incorrect risk predictions and expose organizations to unexpected losses. A biased risk management process may also overlook valuable opportunities or innovations by excluding certain groups or perspectives from consideration, hindering organizational growth and competitiveness in the marketplace.
Unfair Treatment of Individuals or Groups and Reinforcement of Social Biases
Biased models can result in unfair treatment of individuals or groups, leading to disparities in access to opportunities, resources, or services. For example, biased algorithms may unfairly deny or price loans to certain demographic groups based on factors such as age, race, or gender. Generative AI models trained on biased data may also perpetuate and reinforce existing social biases present in the training data [3]. This can exacerbate inequalities and discrimination in risk management practices, further marginalizing already disadvantaged groups.
Legal and Regulatory Compliance Risks
Biased risk management systems may violate legal and regulatory requirements related to fairness, non-discrimination, and consumer protection, which are particularly relevant in areas such as lending, loan underwriting, and hiring within financial institutions [4]. Bias in AI models could result in non-compliance with laws such as the fair lending laws and expose institutions to legal challenges, regulatory scrutiny, fines, or reputational damage.
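A common first-pass check in this context is the adverse impact ratio, the basis of the "four-fifths rule" used in U.S. fair-lending and employment analysis. The sketch below assumes simple arrays of loan decisions and a binary protected-attribute indicator; the 0.8 threshold is a conventional rule of thumb, not a legal bright line.

```python
import numpy as np

def adverse_impact_ratio(approved, group):
    """Approval rate of the protected group divided by that of the
    reference group. Values below ~0.8 (the four-fifths rule of thumb)
    are commonly treated as a red flag for disparate impact.

    approved: 0/1 array of loan decisions
    group:    0/1 array, 1 = protected group
    """
    approved = np.asarray(approved)
    group = np.asarray(group)
    rate_protected = approved[group == 1].mean()
    rate_reference = approved[group == 0].mean()
    return rate_protected / rate_reference

# Hypothetical decisions: 40% approval for the protected group vs
# 80% for the reference group -> ratio 0.5, well below 0.8.
approved = [1, 1, 0, 0, 0, 1, 1, 1, 1, 0]
group    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(adverse_impact_ratio(approved, group))  # 0.5
```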
Increased Financial Risks
Biased risk assessments may result in misallocation of resources or investments, leading to financial losses for organizations. Inaccurate risk predictions driven by biases can undermine financial stability and sustainability [5].
Loss of Trust and Reputation
Organizations that use biased generative AI models in risk management risk losing the trust of customers, stakeholders, and the public. Biased decision-making can damage an organization’s reputation and erode trust, leading to loss of business and credibility. It can also potentially drive customers to competitors and hinder new mergers and acquisitions [6].
- How Can Banks Get a Handle on and Mitigate Generative AI Bias-Related Risks?
Generative AI models hold immense potential to revolutionize banking operations but also introduce significant risks [6]. It is therefore essential to carefully prepare and preprocess training data, design the model architecture thoughtfully, fine-tune the model to prioritize fairness and equity, and continually evaluate and mitigate biases throughout the development and deployment of generative AI systems [2]. Additionally, combining quantitative and qualitative measures, including human-in-the-loop review and the involvement of diverse stakeholders and subject matter experts in design, tuning, evaluation, and ongoing monitoring, can help identify and address biases more effectively. By adopting robust risk management practices, banks can navigate the complexities of AI integration and build a resilient, forward-looking operational framework [4].
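As one concrete piece of that ongoing-monitoring layer, the sketch below computes per-group approval rates and accuracy on a scored batch so that fairness drift can be tracked over time. It is a minimal illustration assuming a pandas DataFrame with hypothetical `group`, `label`, and `score` columns; production monitoring would add statistical tests, alert thresholds, and human review.

```python
import pandas as pd

def group_metrics(df, threshold=0.5):
    """Per-group approval rate and accuracy for a scored batch.

    df: DataFrame with `group` (protected attribute), `label` (0/1
        ground truth), and `score` (model output in [0, 1]).
    """
    df = df.assign(
        pred=(df["score"] >= threshold).astype(int),
        correct=lambda d: (d["pred"] == d["label"]).astype(int),
    )
    return df.groupby("group").agg(
        n=("pred", "size"),
        approval_rate=("pred", "mean"),
        accuracy=("correct", "mean"),
    )

# Hypothetical scored batch; large gaps between the rows of this
# report would be escalated for human-in-the-loop review.
batch = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 0, 0],
    "score": [0.9, 0.2, 0.7, 0.4, 0.6, 0.3],
})
print(group_metrics(batch))
```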
References:
[1] McKinsey & Company. (2023). What is generative AI? https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai
[2] Saturn Cloud. (n.d.). Bias in generative AI models. Retrieved July 10, 2024, from https://saturncloud.io/glossary/bias-in-generative-ai-models/
[3] National Institute of Standards and Technology. (2024). Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1). U.S. Department of Commerce. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
[4] Deloitte. (2023). Legal issues and generative AI: Risks, regulations, and practical considerations. Deloitte Insights. https://www2.deloitte.com/us/en/pages/consulting/articles/generative-ai-legal-issues.html
[5] European Central Bank. (2024, May 15). Artificial intelligence and financial stability. https://www.ecb.europa.eu/press/financial-stability-publications/fsr/special/html/ecb.fsrart202405_02~58c3ce5246.en.html
[6] Yar, M. (2022, January 3). The reputational risks of AI. California Management Review. https://cmr.berkeley.edu/2022/01/the-reputational-risks-of-ai/
Disclaimer: The views expressed in this article are my own and do not reflect the views of KeyBank.