9 minute read 19 Oct 2023

What banking directors should ask about AI and machine learning risks

Authors
Vidhya Sekhar

EY Americas Financial Services Data and Analytics Leader

Data and analytics professional. Problem solver. Team player. Working mother. Realist.

Bill Hobbs

Managing Director, Financial Services Consulting and Center for Board Matters, Ernst & Young LLP

Client-centric leader finely attuned to detail. Influencer of transformational change. Champion of rising stars. Community servant and youth mentor. Outdoor enthusiast. Husband. Father of five.


Bank risk teams must help boards understand the challenges and opportunities that AI provides and ask hard questions of C-suite leaders.

In brief

  • Boards should be aware of how AI and machine learning are driving digital transformation across financial services and how they are being used by risk management teams. 
  • Credit risk management, anti-money laundering and regulatory compliance are use cases that offer clear value in improving model accuracy and efficiency.
  • As generative AI becomes more prevalent, bank boards must advocate for robust governance and controls, especially since regulatory direction is unclear.

Artificial intelligence and machine learning (AI/ML) have been near the top of the strategic agenda for boards and bank leaders for several years and are likely to remain there. The emergence of generative AI tools capable of producing rich, prompt-based content and code has further fueled this focus. Even before generative AI burst onto the scene, boards were challenged to assess the full range of AI and machine learning risks. These challenges are highlighted in two 2022 surveys performed by Ernst & Young LLP (EY) and the Institute of International Finance (IIF) and are further supported by recent roundtable sessions we’ve held with chief risk officers (CROs) from financial services.

The key AI/ML implementation focus areas for bank risk management teams are credit risk management and fraud detection. Additionally, with generative AI, use cases are being explored in these areas and for broader regulatory compliance and policy frameworks. Generative AI has the potential to bring significant advancements and transform business functions.

However, AI/ML early adopters face increased risks, such as lawsuits arising from the use of web-based copyrighted material in AI outputs, concerns about bias, lack of traceability due to the “black box” nature of AI applications, and threats to data privacy and cybersecurity. As a result, many financial institutions are opting for a cautious approach to AI/ML. They are initially implementing applications in non-customer-facing processes or to aid customer-facing employees where the primary goals are improving operational efficiency and augmenting employee intelligence by offering insights, recommendations and decision-making support.

Lack of clear regulatory direction complicates board oversight. Regulators have expressed concerns about AI use in the business, including the embedding of bias into algorithms used for credit decisions and the sharing of inaccurate information by chatbots. Data privacy and security and the transparency of other models are also on authorities’ radars. Generative AI has amplified these concerns.

With AI usage increasingly democratized, robust, agile governance has become an urgent board priority. Even where companies have yet to define formal controls, boards must be diligent in ensuring that they take a holistic and strategic approach to overseeing AI usage in risk management and overall business operations.

Four things for boards to consider

1. AI and machine learning are central to digital transformation, and CROs expect risks to increase as a result. 

AI/ML are crucial for speeding up digital transformations in financial services over the next three years, alongside modernized platforms, automated processes and cloud technologies. Improvements in generative AI over the last year have only increased this urgency. Directors should be aware that technology risk and project risk are interconnected and can reinforce each other. AI risks could be overshadowed by project risks as banks strive to modernize core functions and migrate to the cloud.

  • Chart description

    Bar chart listing the top 5 ways banks will accelerate digital transformation in the next three years. Categories include modernizing core functions and platforms, customer insights driven by advanced analytics (machine learning, AI), process automation (including intelligent automation), and cloud migration and adoption.

Our research also confirms that the majority of CROs see digital transformation and AI risks continuing to grow. There’s no doubt that generative AI will play a prominent role in ongoing digital transformation, especially in customer-facing operations, which will further increase the risk profile. Boards must continually consider how generative AI will amplify the existing risks of AI. For example, the adoption of large language models will strain computational and data management capabilities and make the explanation of existing AI models even more complex.

  • Chart description

    Bar chart showing that advanced analytics, such as artificial intelligence, has increased in priority for 78% of respondents and decreased in priority for 22%.

Questions for the board to consider:
  • How are risk teams and business leaders identifying and monitoring AI/ML risks within digital transformations and customer-facing deployments?
  • What is the most effective way to identify novel AI/ML risks as they arise?
  • Who is ultimately accountable for managing AI/ML risks?
  • How are digital transformation plans being modified to account for the unique upside of generative AI? What is the plan to scale usage?

2. CROs are embracing advanced technology to optimize risk operations

As with other functions across the business, risk management teams are expanding their use of AI/ML to improve their own work. At some organizations, the rapid pace of adoption means boards must engage management as soon as possible to establish oversight.

According to our survey, CROs are using advanced technologies to:

  • Automate operational tasks: 44%
  • Enhance financial crimes monitoring: 33%
  • Improve client credit decision-making: 37%
  • Identify possible cyber-attacks: 35%

Interestingly, CROs at global systemically important banks (G-SIBs) were more likely to focus on automation (67%) and financial crime monitoring (50%) in their AI/ML deployments than non-G-SIBs. All CROs expect to use these technologies for these activities in the future, indicating that we’re still in the early days.


In the future, we expect to see risk teams using AI to scan and review regulations and for process, risk and control diagnostics. Over time, AI-enabled scenario modeling will be used for market simulation, portfolio optimization and credit risk assessments. Automation of model documentation for consistency, clarity and reproducibility is another way banking CROs will adopt generative AI.

Questions for the board to consider:
  • How do AI/ML challenge existing business risk models?
  • What are the right controls for the use of AI/ML in risk management?
  • How is the risk management team sharing lessons learned from its own use of AI/ML with the rest of the business? How can leading practices be promoted across the enterprise?
  • How might AI/ML strengthen internal model risk management practices? 
  • What is the plan to use AI/ML to fight financial crime beyond fraud?
  • How will risk management teams attract talent and build the capacity to expand adoption of AI/ML?

3. Governance models are largely based on existing risk frameworks, which must evolve

The use of AI/ML is being governed through existing risk models and enterprise risk frameworks, according to our research. Most banks surveyed use model monitoring feedback mechanisms and controls – or are in the process of defining feedback mechanisms – to ensure machine learning models deliver the expected outcomes.

According to survey respondents, the primary techniques used in validating machine learning models in credit risk management include:

  • Ongoing performance monitoring: 52%
  • Monitoring against benchmarks: 47%
  • In-sample/out-of-sample testing: 44%

Source: 2022 annual EY-IIF survey report on machine learning
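To illustrate the last of these validation techniques: in-sample/out-of-sample testing compares a model’s performance on the data used to fit it against data held out before fitting, so that overfitting is visible before a model reaches production. The sketch below is purely illustrative (the synthetic data, the simple logistic model and the learning settings are hypothetical, not any bank’s actual setup):

```python
# Minimal sketch of in-sample/out-of-sample testing for a credit model.
# The synthetic data and logistic model are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features (e.g., income, utilization) and default labels.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out 20% of the data before fitting.
split = 800
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Fit a simple logistic model by gradient descent.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X_train @ w))          # predicted probabilities
    w -= 0.1 * X_train.T @ (p - y_train) / split  # gradient step

def accuracy(X, y, w):
    """Share of correct default/no-default classifications."""
    return np.mean(((X @ w) > 0).astype(int) == y)

in_sample = accuracy(X_train, y_train, w)
out_of_sample = accuracy(X_test, y_test, w)

# A large gap between the two scores signals overfitting and would
# trigger model review under the validation framework described above.
print(f"in-sample: {in_sample:.2f}, out-of-sample: {out_of_sample:.2f}")
```

In practice, banks apply this comparison to far richer models; the point of the mechanism is the same: performance claims are only credible on data the model has not seen.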

Nearly every institution uses a range of mechanisms to avoid bias and discriminatory outcomes for credit risk management and AML. Mechanisms include institutional codes of ethics; auditing, testing and controls; and the exclusion of sensitive attributes from feature analysis, selection and engineering. All these frameworks need to be continuously reviewed and evolved to support generative AI and address the incremental risks associated with this technology – including hallucinations (where the models create fictitious responses), IP rights and bias management.
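One of these mechanisms, the exclusion of sensitive attributes before feature engineering, can be sketched in a few lines. The attribute names below are hypothetical examples, not a prescribed list:

```python
# Sketch: dropping sensitive attributes before feature engineering,
# one of the bias-mitigation mechanisms described above.
# The attribute names are hypothetical examples.
SENSITIVE_ATTRIBUTES = {"gender", "race", "marital_status", "age"}

def select_features(record: dict) -> dict:
    """Return only the fields eligible to become model features."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_ATTRIBUTES}

applicant = {
    "income": 72000,
    "credit_utilization": 0.31,
    "gender": "F",   # excluded from feature engineering
    "age": 44,       # excluded from feature engineering
}

features = select_features(applicant)
# → {"income": 72000, "credit_utilization": 0.31}
```

Note that excluding sensitive attributes alone does not prevent proxy bias (correlated fields can still encode them), which is why institutions pair this control with the auditing and testing mechanisms mentioned above.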

Questions for the board to consider:
  • What skills and controls are necessary to develop risk models internally? 
  • How are third-party risks (e.g., data feeds, model inputs) accounted for in AI/ML governance models?
  • What triggers will signal that existing frameworks need to be updated?

4. Banks are engaging with regulators to establish standards

Board directors have concerns about the regulatory uncertainty surrounding AI and machine learning in risk management and other business functions. To address this, boards may support management in engaging with regulators and participating in industry initiatives to establish adoption standards. With respect to anti-money laundering, CROs are particularly concerned about the complex and opaque nature of some algorithms and regulatory supervisors’ lack of experience with these technologies. Banks may also engage regulators to discuss transparency, bias and ethical issues, and regulatory constraints for credit risk applications.

Questions for the board to consider:
  • What are appropriate reporting standards for AI/ML usage? 
  • What is the plan to proactively engage regulators to define those standards?
  • How can banks help educate supervisors with limited knowledge of AI/ML?

Effective board leadership in the AI era

In a dynamic banking market, board directors have more risks to consider than ever before – and AI/ML should top the list, as our research confirms. Effective board oversight will only become more important as generative AI creates transformative new possibilities in finance, IT, product development, customer service, marketing and other parts of the business. We expect forward-looking banks to embrace AI-driven risk management strategies, enhance the operating model and build ecosystems with robust data governance and ethical, legal and regulatory frameworks for generative AI.

Boards should consider whether the organization has an underlying data and innovation culture. Those with strong data and/or innovation cultures will likely be more successful in their deployment of generative AI. Generative AI should support not only business goals but also company values. Change management and workforce alignment should be included as part of the larger equation.

Relative to the risk function, boards should support CROs and other business leaders in devising long-term and holistic roadmaps for AI usage, including generative AI.

In one of our recent roundtables, CROs pointed to several issues where AI can help:

  • “It is cumbersome to track changes in regulation and identify underlying impacted policies and procedures.”
  • “Risk assessment is a manual activity and is highly subjective and not forward-looking.”
  • “Testing is highly manual, historical and sample-based and does not accurately represent the level of risk.”
  • “We want to improve our ability to identify control gaps and vulnerabilities and to manage our controls.”

As AI becomes more deeply embedded in operations and processes – both within risk management and in the broader business – the priority for boards must be to understand how AI is used by CROs and risk management teams to mitigate other threats to the business.

About the surveys

The 2022 annual EY-IIF survey and report on machine learning: uses in credit risk and AML applications — The IIF surveyed 43 financial institutions across global regions, including G-SIBs, national banks, regional banks and other financial institutions. EY and the IIF analyzed the survey results, which covered a broad range of topics.

The 2022 12ᵗʰ annual EY-IIF survey of global banking CROs — EY, in conjunction with the IIF, surveyed CROs or other senior risk executives from 88 banks in 30 countries around the world from June 2022 through October 2022. Participants were interviewed, completed a survey or both. Participating banks were headquartered in Asia-Pacific (11%), Europe (16%), Latin America (18%), the Middle East and Africa (19%), and North America (36%), and 14% were G-SIBs.

EY and IIF are collaborating on the 2023 EY-IIF survey of global banking CROs as well as the 2023 AI/ML Use in Financial Services survey, which explores these themes further. The surveys will be published in late 2023.

Contact EY professionals to learn more about the resources we offer financial services directors.

EY/IIF global bank risk management survey

The survey reveals CROs’ views on the most urgent issues facing their organizations now and in the next three to five years.


IIF and EY Survey Report on Machine Learning Uses in Credit Risk and AML Applications


Summary

There’s no doubt that AI will become more deeply embedded in more operations and processes – both within risk management and in the broader business. For boards, the priority must be to understand the full range of risks, especially when AI is used by CROs and risk management teams to mitigate other threats to the business.

