The SEVEN Key Risks of Generative AI for Banks
Singapore's MAS lays it out like a comic book and in the fewest words possible!
This executive summary by Singapore’s MAS reads like a comic book, and that’s not a bad way to learn the seven big risks banks will be taking with Generative AI (GenAI).
👉TAKEAWAYS: GenAI “risk dimensions.”
FAIRNESS AND BIAS
Setting fairness objectives to help identify and address unintentional bias and discrimination.
ETHICS AND IMPACT
Ensuring responsible and ethical outcomes in the use of AI against clearly defined core values and practices. Value misalignment and environmental impact.
ACCOUNTABILITY AND GOVERNANCE
Enabling accountability and governance for the outcomes and impact of data and AI systems. Unclear third-party accountability and inadequate oversight.
TRANSPARENCY AND EXPLAINABILITY
Enabling human awareness, explainability, interpretability, and auditability of data and AI systems. Lack of output accuracy, misleading users, no recourse.
LEGAL AND REGULATORY
Identifying any legal or regulatory obligations that need to be met or may be breached by the use of AI, including issues with compliance, data protection, and privacy rules or related to equality laws. Data sovereignty and ownership, IP infringement and protection.
MONITORING AND STABILITY
Ensuring robustness and operational stability of the model or service and its infrastructure. The hallucination problem! Inadequate model accuracy.
CYBER AND DATA SECURITY
Protecting data and AI systems from cyberattack, unauthorised access, data loss, and misuse or adversarial model manipulation by malicious actors.
The GenAI feedback loop is designed to ensure continuous improvements throughout the lifecycle of the system.
👊STRAIGHT TALK👊
We can all look forward to MAS’s complete white paper on GenAI risks in banking, but this executive summary is more than most need to know about AI risks.
Bankers across the globe, and most definitely consultants producing GenAI papers, are grossly underestimating GenAI’s most significant problem, “explainability.”
MAS points out that Singapore already has guidance for algorithms and digital systems through the Fairness, Ethics, Accountability and Transparency (FEAT) Principles.
The problem is that the FEAT principles are best suited to algorithmic systems. With traditional algorithms, you can test them to prove or disprove bias. With GenAI, bias and prejudice in both inputs and outputs become nearly impossible to track.
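The contrast is worth making concrete. For a fixed, deterministic decision rule, a bias test can be as simple as comparing outcome rates across groups. Below is a minimal, hypothetical sketch of such a demographic-parity check; the approval rule, threshold, and applicant data are all invented for illustration, not taken from MAS or any bank:

```python
# Hypothetical illustration: auditing a fixed loan-approval rule for
# demographic parity. The rule, threshold, and data are invented.

def approve(credit_score: int) -> bool:
    """A deterministic, fully auditable decision rule."""
    return credit_score >= 650

applicants = [
    {"group": "A", "credit_score": 700},
    {"group": "A", "credit_score": 640},
    {"group": "B", "credit_score": 660},
    {"group": "B", "credit_score": 600},
]

def approval_rate(group: str) -> float:
    """Fraction of a group's applicants the rule approves."""
    members = [a for a in applicants if a["group"] == group]
    approved = [a for a in members if approve(a["credit_score"])]
    return len(approved) / len(members)

# Demographic parity: approval rates should be (roughly) equal across groups.
disparity = abs(approval_rate("A") - approval_rate("B"))
print(f"Approval-rate disparity: {disparity:.2f}")
```

Because the rule never changes between runs, the same test can be rerun by a regulator and yield the same answer. No equivalent replayable test exists for a GenAI system whose outputs drift over time, which is exactly the gap in the FEAT approach.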
While algorithms are fixed, GenAI will actually change its answers over time and cannot explain how it arrives at them. Leaving GenAI's ability to hallucinate aside for a moment: how will the FEAT principles be modified to cope?
I’m sorry, but the scenario of being denied a loan by a bank and then being told the bank can’t explain why is neither comforting nor acceptable! And it’s not just loans; even a bank’s lowliest customer chatbot will have the same issue.
Anyone thinking that GenAI adoption in banking will be quick is in for a rude awakening.