Regulating AI In FinServ: Kill Switches And Big Tech On The Hook?
The BIS is not timid in recommending some harsh regulations.
The BIS examines the regulation of AI use in finance and argues that tough new rules will likely be necessary. It's not a question of if, but when.
The BIS acknowledges that AI is transformative and is preparing for a future in which the financial sector rolls out direct-to-consumer AI products and services.
The BIS points out that many existing regulations cover AI for basics like reliability/soundness, accountability, transparency, fairness, ethics, and data protection, but they won’t be enough.
This is where the BIS excels and asks some very hard questions that no one in banking or big tech is prepared to answer.
Should there be a kill switch? The BIS raises a real question that few others have: should corporate governance specify when humans must intervene in AI systems? The answer should be yes.
Then, as though the BIS really wants to start a fight, it suggests that BigTech, the providers of AI, should be under “direct oversight.”
BigTech won’t like this one bit, but the BIS has a point. It’s their AI that may make a mistake!
Interestingly, the BIS is prepared to ruffle some feathers and shake things up a bit!
👊UNIQUE AI REGULATORY CHALLENGES👊
🔹 Governance framework.
The use of AI by financial institutions, particularly in their core business activities, would require a clear allocation of roles and responsibilities across the entire AI life cycle, including specifying the role of human intervention in minimising harmful outcomes from AI systems.
🔹 AI expertise and skills.
Financial authorities may therefore consider clarifying the expertise and skills they expect to be in place at financial institutions that plan to expand AI use in their core business activities.
🔹 Model risk management.
The lack of explainability in AI models can heighten model risk. Where model risk management guidance is already in place, authorities might find it helpful to communicate their expectations around explainability.
🔹 Data governance and management.
Existing regulations (e.g., those for model risk, consumer privacy, and information security) cover many of the relevant elements of data governance/management. Financial authorities may want to assess whether these are enough or need strengthening.
🔹 New/non-traditional players and new business models/arrangements.
To avoid potential regulatory gaps, regulations covering new/non-traditional players that provide financial services would need to be assessed to determine whether they require adjustments to account for cross-sectoral expectations on the use of AI. This should include banking-as-a-service (BaaS) arrangements.
🔹 Regulatory perimeter – third parties.
The concentration of cloud and AI services in a few large global technology firms strengthens the argument for putting direct oversight frameworks in place for these providers, depending on available legal authority.