Red Teaming AI: Bust Fintech Bots for Better Ethics, Trust, and Equality
Every AI system out there needs to be red teamed even those in fintech
This is my daily post. I write daily but send my newsletter to your email only on Sundays. Go HERE to see my past newsletters.
HAND-CURATED FOR YOU
The news that Grok is in big trouble for its anti-Semitic tirade shows that AIs still need people keeping an eye on them to ensure they stay honest.
That is why this UNESCO paper on Red Teaming for Social Good is something we should all think about and consider contributing to.
And yes, I am serious.
UNESCO just launched a step-by-step Red Teaming Playbook, designed to help non-technical communities test generative AI systems for bias, harm, and vulnerabilities.
This playbook was written to focus on issues affecting women and girls, a great cause, and something we should all support. I think we should add our knowledge of fintech to this effort in support of women in fintech bots.
Women globally have lower access to financial accounts, and economic and social barriers reduce their economic agency. Having AI perpetuate this problem is unacceptable.
Red Teaming
Red teaming involves deliberately stress-testing systems by simulating adversarial attacks or challenging scenarios to identify vulnerabilities, biases, or ethical flaws.
It is quite literally trying to break the AI, inasmuch as it reveals vulnerabilities.
Sound harsh? Not really, most of my readers are sophisticated fintech users and well aware of the damage a malfunctioning AI can do in a financial context.
My suggestion is to throw the next AI you meet some hard gender based questions and see if it responds appropriately.
Follow “The Woz,” Steve Wozniak!
Steve Wozniak, who founded Apple with Steve Jobs, is the most famous fintech Red Teamer ever!
Back in 2019, Steve Wozniak accused the Apple Card's algorithm of gender discrimination, claiming it gave him a credit limit ten times higher than his wife’s despite their shared assets and her higher credit score.
A New York State Department of Finance investigation showed that Goldman’s underwriting did not have gender biases.
While Wozniak’s attempt to Red Team Apple Card was unsuccessful, it is still the stuff of legend!
Break the Next Fintech AI
So let’s say you actually find bias and break an AI. What to do next? My suggestion is to screenshot the offensive answer and post it on X! I guarantee it will get a response more quickly than writing the offending finserv!
Harsh? No what’s harsh is a bot perpetuating bias with a financial institution that is clueless as to the damage it can do.
If you want to help someone, try your best to break the next AI you see.
👉 Red Teaming
🔹 89% of AI engineers report encountering Gen AI hallucinations, including errors, biases, or harmful content
🔹 58% of young women and girls globally have experienced online harassment
🔹96% of deepfake videos were non-consensual intimate content and 100% of the top five ‘deepfake pornography websites’ were targeting women. Malicious actors intentionally trick AI into producing or spreading such content, worsening the already serious issue of technology-facilitated gender-based violence (TFGBV).
🔹 In a survey of 901 women journalists in 125 countries10, including those in prominent and visible positions, nearly three quarters (73%) said they had experienced online violence.