The State Of AI 2025: Capturing Value, Overlooking Responsibility
Value is great, but it's a poor trade-off if responsible AI use isn't a top priority.
This is my daily post. I write daily but send my newsletter to your email only on Sundays. Go HERE to see my past newsletters.
HAND CURATED FOR YOU
QuantumBlack, AI by McKinsey, is out with its annual “State of AI” report, which shows a tremendous focus on capturing value and relative blindness toward making AI responsible.
This is a great read, and it shows how AI adoption is progressing in fits and starts across industries. No one would dare call this “smooth sailing.”
The good news is that organizations are seeing real revenue increases due to AI's use, and the two charts on pages 22 and 23 do a nice job of illustrating AI’s benefits.
The bad news that stood out to me: some of the survey responses make clear that far more focus is needed on building responsible AI, not just on making it profitable.
I will back up this big statement by using McKinsey’s own statistics from the survey of 1,491 participants across 101 countries. Some of the responses to obvious questions should make you shrug your shoulders.
In the end, the race for AI value is shortsighted if irresponsible use of AI damages trust, and I wish the report paid more attention to that.
All of the statistics below come directly from the report.
👉RESPONSIBLE AI or NOT?
🔹 72% of respondents whose organizations use AI report that their CEO is not responsible for overseeing AI governance. Then who is? As I mentioned in my article last week, responsibility for AI should sit with the CEO and go straight up to the board… see the next point.
🔹 17% say their board of directors oversees AI governance. See my article on this, where I discuss why AI strategy and governance must go straight to the board; 17% doesn’t cut it: here
🔹 30% of respondents say employees at their organizations review less than 20% of content created by gen AI before it is used! And the results get even more bizarre:
➣43% review less than 40% of content.
➣Good news: 27% review 100% of content!
➣But wait, it gets worse… of the 1,491 participants responsible for AI, 830 (55%) had no idea what percentage of content was reviewed!
🔹 15% of respondents are trying to mitigate risks associated with “organizational reputation,” compared with 50% that are focusing on AI inaccuracy.
🔹 Fewer than 33% of respondents report that their organizations are following most of the 12 adoption and scaling practices for gen AI.
🔹 14% have created a comprehensive approach to foster trust among customers in gen AI’s use.
🔹 28% have created a comprehensive approach to foster trust among employees in gen AI’s use.
🔹 18% track well-defined KPIs for their gen AI solutions. WHAT?