Report Summary: Generative Artificial Intelligence in Finance: Risk Considerations by Ghiath Shabsigh and El Bachir Boukherouaa

Recommendation

Generative AI may improve business decisions, and its application, if properly monitored, can make the components of finance – analysis, research, risk assessment – run more efficiently and effectively. But then again, as IMF professionals Ghiath Shabsigh and El Bachir Boukherouaa explain in this informative report, it may not. Systems can misuse data, producing biases and driving erroneous conclusions that could undermine public confidence in the financial system. Financial professionals will find this an important look at generative AI’s double-edged sword.

Take-Aways

  • Generative artificial intelligence (GenAI) can transform the financial sector.
  • Yet it carries significant risks in its applications.
  • Regulators must address GenAI’s hazards to maintain public confidence in the financial system.

Summary

Generative artificial intelligence (GenAI) can transform the financial sector.

When applied to financial services, generative artificial intelligence can enhance decision making, business development and overall well-being through product and service innovation.

“AI is playing an increasingly important role in shaping economic and financial sector developments and is seen as an engine of productivity and economic growth.”

GenAI is a subcategory of machine learning, itself a branch of AI, that can create new content by efficiently processing large quantities of data. GenAI has the capacity to reconfigure financial risk management, regulation and supervision. Several high-profile firms have already embraced this technology, applying GenAI to fraud detection, capital markets analysis, document generation and processing, and software development.

Yet it carries significant risks in its applications.

GenAI in financial services carries unique risks that practitioners and regulators must thoroughly understand. These risks touch several aspects of finance, including:

  • Data privacy – Large language models could leak the confidential and proprietary data used to train them. GenAI system development must consider how best to protect personally identifiable information when models process it for fraud detection and credit evaluation.
  • Embedded bias – Models could misuse data to discriminate against certain populations, leading to flawed decision making and inaccurate risk management. Search engine optimization tools can compound the problem.
  • Robustness – GenAI’s algorithmic models could produce inaccurate signals in unstable data environments. An extreme example would be false yet believable results, which GenAI would then “defend,” in what experts refer to as “hallucination.” Not addressing this flaw could skew risk management decisions.
  • Synthetic data – Artificially generated data is used to train models and to mitigate privacy and confidentiality issues. It is also cost-effective and allows for bespoke training data that incorporates actual events. Challenges lie in data accuracy and completeness, as well as the possible unintended incorporation of human biases.
  • Explainability – The complexity of GenAI models makes it difficult to explain how they arrive at their outputs, complicating efforts to verify their accuracy.
  • Cybersecurity – GenAI models could be vulnerable to exogenous data corruption, such as deliberate poisoning of the data on which they train.
  • Financial stability – GenAI could accelerate systemic risk and support underwriting decisions that lack appropriate guardrails.

Regulators must address GenAI’s hazards to maintain public confidence in the financial system.

GenAI’s innate risks raise significant concerns for the financial industry; missteps could tarnish the sector’s reputation and lessen public confidence in it. Regulation must evolve to meet these challenges.

“GenAI use needs close human supervision commensurate with the risks that could materialize from employing the technology in financial institutions’ operations (for example, the use of AI for analysis or recommendations vs. the implementation of AI systems that have the capacity to make and execute decisions).”

Human oversight to identify and manage such risks will be critical. Macroprudential regulation will need to monitor and assess the safety of these emerging technologies and their application in finance.

About the Authors

Ghiath Shabsigh and El Bachir Boukherouaa are with the International Monetary Fund.