Cognizant breaks new ground with this GenAI risk report, guaranteed to pique your interest!
I credit them for coming up with new risk categories, such as “audacious overreach,” that you’ve never heard of before!
The risks generative AI poses to banks and insurers are numerous, real, and potentially very costly.
Bankers need to read this report carefully as a counter to GenAI hype.
The GenAI hype machine tries to gloss over these risks, but remember, they don’t pay the regulatory penalties or take a reputational hit when disaster strikes.
And you know it will!
👉TAKEAWAYS
My favorites with a 🔥
Category 1: unintended consequences
1) Misplaced trust: Generative AI is all too capable of producing inaccurate information or biased answers. One way to instill trust is through prompt design strategies.
2) IP infringement: Public LLMs trained on third-party content could expose financial services firms to copyright infringement claims.
3) IP loss: Generative AI systems built on public models and fed sensitive or confidential data could expose a BFSI firm’s proprietary information to competitors.
🔥4) Orphan code: Generative AI may one day enable non-techies to become programmers. The result may be orphan code: code abandoned when its creator leaves but that still must be maintained by the corporate IT function.
Category 2: market evolutions
5) Regulatory reflux: Globally, regulations on data privacy, generative AI use and related issues are still in their infancy. For financial institutions operating across borders, this is an area of great risk.
6) Tool/vendor roulette: Choosing generative AI vendors with staying power is a risky proposition given the technology’s embryonic state. A generative AI platform whose vendor files for bankruptcy in three years could leave the bank with serious maintenance problems.
🔥7) Unsustainable advantage: Many of today’s experiments and pilots are focused on capabilities that will become rapidly commoditized (e.g., chatbots, document summarization tools). If nearly every company is using the same tools and infrastructure, sustainable advantage can rapidly shrink.
Category 3: human nature
🔥8) Audacious overreach: Overly ambitious objectives can lead to both governance challenges and speculative investments. If early returns fall short of expectations, initial excitement can quickly turn into skepticism.
9) Malicious behavior: Every time a new technology tool emerges, cyber criminals figure out how to abuse it—sometimes much faster than the good actors do.
🔥10) Organ rejection: There are many reasons why employees, customers or business partners could be slow to adopt, or even reject, generative AI-based solutions. To minimize rejection, businesses should focus on usability design to ensure these systems augment knowledge workers’ experience and judgment.
👊STRAIGHT TALK👊
Cognizant does a great job with a well-covered topic by “keeping it real.”
They have gone beyond the obvious categories of risk covered ad nauseam and into the gritty world that banks live in.
AI companies will go bust, code needs maintenance, and some banks are so culturally challenged by AI that they may suffer “organ rejection.”
If that isn’t keeping it real, what is?
GenAI hype needs to be counterbalanced by reality, and this report does so quite nicely!
What do you think?
Join our community by subscribing. It will be an exciting journey down the rabbit hole to our future, and you’ll be glad you did!
Sponsor Cashless and reach a targeted audience of over 50,000 fintech and CBDC aficionados who would love to know more about what you do!