IBM Spotlights 15 Agentic AI Risks for FinServ and How to Solve Them
Agentic AI is powerful, but with this power comes risk.
If GenAI were an easy-to-build, low-risk Lego kit for young builders, Agentic AI would be the Star Wars Millennium Falcon, reputed to be one of the most challenging and riskiest sets to build.
Agentic AI's risks are high because it can independently set goals, make decisions, and act autonomously. For lack of a better descriptor, Agentic AI thinks, a behavior absent in machine learning and generative AI, both of which have already taken their place in bank tech stacks.
That an agent can decide to do something on its own sounds marvelous, but unless it is constrained in a meaningful way, its creators can never be completely certain that it made the correct decision, accessed the proper source, or even understood the task.
But even constraints aren’t enough because, over time, agents can simply learn new behaviors to circumvent them.
Agentic AI is like a child. Without guardrails and limitations, it will do whatever it wants, and even then, it sometimes hops over the guardrails just for fun. This is just what every bank needs!
That’s why, if you look at the list of 15 agentic AI risks below, many of them sound like the negative comments on a teacher’s note sent home to parents.
“Your child suffers from authority boundary issues, goal misalignment, and acts autonomously when inappropriate.”
That is fundamentally why agentic AI differs from anything banks have worked with before, and why detailed personas, goals, constraints, and expected outcomes must be pinned down at a project’s inception.
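To make that concrete, here is a minimal sketch of what such an “agent charter” might look like, written as a plain Python structure. The field names and contents are my own illustration, not a standard from IBM or any particular framework:

```python
# A hypothetical "agent charter", pinned down before any build work starts.
# Field names and contents are illustrative, not an IBM or vendor standard.
AGENT_CHARTER = {
    "persona": "Junior credit-operations analyst: cautious, always cites sources",
    "goals": [
        "Flag loan applications with incomplete documentation",
        "Draft (but never send) follow-up requests to applicants",
    ],
    "constraints": [
        "Read-only access to the loan-origination system",
        "No customer PII leaves the approved data boundary",
        "Any action touching money requires human sign-off",
    ],
    "expected_outcomes": [
        "A daily exception report reviewed by a human operator",
        "A complete audit log of every tool call and decision",
    ],
}
```

The point isn’t the format; it’s that persona, goals, constraints, and expected outcomes exist as an explicit, reviewable artifact before the agent ever runs.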
And the best place to start? With the risks, because they are key to both design and that ever-so-critical governance.
👉IBM’s 15 Agentic AI Risks
🔹 Goal Misalignment: the Agentic AI (AAI) system’s programmed objectives and the organisation’s actual intentions are misaligned.
🔹 Autonomous Action: AAI systems can take actions independently, without human approval, leading to unintended or harmful consequences (see the guardrail sketch after this list).
🔹 Tool/API Misuse: AAI systems can autonomously select, chain and orchestrate multiple tools or APIs in unexpected combinations that create security vulnerabilities.
🔹 Authority Boundary Management: AAI systems may attempt to expand their authority beyond intended boundaries.
🔹 Dynamic Deception: AAI systems learn to conceal their true intentions or capabilities.
🔹 Persona-driven Bias: AAI systems with defined personas may develop and amplify systematic biases embedded in their personality.
🔹 Agent Persistence: agents with long-term memory may develop unexpected behaviours over time or make decisions based on outdated information.
🔹 Data Privacy: AAI systems dramatically amplify privacy risks by actively accessing, processing, and potentially sharing sensitive data across multiple systems.
🔹 Explainability and Transparency: the complexity of agentic AI systems creates an exponentially more challenging explainability problem than exists in traditional pattern-matching systems.
🔹 Model Drift: autonomous decision-making capabilities interact with persistent memory and feedback loops to create gradually shifting behaviours.
🔹 Security Vulnerabilities: AAI systems are more susceptible to adversarial attacks because of their autonomous capabilities.
🔹 Operational Resilience: organisations integrating agentic AI into critical processes face unprecedented operational vulnerabilities when these autonomous systems fail.
🔹 Cascading System Effects: autonomously driven chains of consequences across interconnected systems amplify minor issues into major organisational disruptions.
🔹 Multi-Agent Collusion: multiple agents working together might find unexpected ways to achieve goals or share information inappropriately.
🔹 Principal-Agent Misalignment: the original human intent may be lost or distorted through each delegation layer, creating serious misalignment
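So what does “constrained in a meaningful way” look like in practice? Here is a minimal Python sketch of a guardrail layer that touches three of the risks above at once: a tool allowlist (Tool/API Misuse and Authority Boundary Management), a human-approval gate (Autonomous Action), and an append-only audit log (Explainability and Transparency). The tool names, policy sets, and file format are my own illustrative assumptions, not IBM’s design:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

# Hypothetical policy, fixed outside the agent's reach: which tools it
# may call at all, and which calls need a human in the loop.
ALLOWED_TOOLS = {"read_account_summary", "draft_customer_email"}
REQUIRES_APPROVAL = {"draft_customer_email"}  # nothing leaves unreviewed


def guarded_call(tool_name: str, args: dict, approver=None):
    """Gate every tool call: allowlist first, human approval second,
    and an audit record written either way."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "args": args,
        "allowed": False,
    }
    if tool_name not in ALLOWED_TOOLS:
        # Authority boundary: anything off the allowlist is refused outright.
        _audit(record)
        raise PermissionError(f"Tool '{tool_name}' is outside the agent's authority")

    if tool_name in REQUIRES_APPROVAL:
        # Human-in-the-loop gate for consequential actions.
        if approver is None or not approver(tool_name, args):
            log.info("Approval withheld for %s", tool_name)
            _audit(record)
            return None

    record["allowed"] = True
    _audit(record)
    return _dispatch(tool_name, args)


def _audit(record: dict) -> None:
    # Append-only audit trail: the raw material for explainability reviews.
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")


def _dispatch(tool_name: str, args: dict) -> dict:
    # Stand-in for the real tool registry in a production system.
    return {"tool": tool_name, "status": "executed", "args": args}


if __name__ == "__main__":
    # An in-scope read goes straight through; the email draft waits on a human.
    print(guarded_call("read_account_summary", {"account": "test-001"}))
    print(guarded_call("draft_customer_email", {"to": "applicant"},
                       approver=lambda tool, args: True))
```

In a real bank stack the approver callback would route to a case-management queue and the audit trail to tamper-evident storage, but the shape is the same: every agent action passes through policy the agent cannot rewrite.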