AI hype will not keep you safe! When AI goes wrong, we need incident reporting and harm assessment!
It's not "if" AI fails, it's "when."
We must go BEYOND the hype and understand that AI will fail spectacularly. All new tech does. When AI does fail, whether spectacularly or not, we need to understand what the “AI Incident” was and what “harm” was done.
Without this data we cannot make AI safe for society.
This is common sense, but because of the blazing-hot AI hype, few are thinking about what can go wrong.
👉TAKEAWAYS:
AI Incident Reporting:
A common framework is needed to enable global consistency and interoperability in AI incident reporting, so we can learn from AI harms and help prevent them.
Define AI Incidents:
AI incidents must use common terminology to describe problems or failures of AI systems so that they may be observed, documented, reported, and learned from.
Harm is the starting point to define an incident:
The concept of “harm” is central and may include potential harm, actual harm, or both.
Types of AI incidents:
Defining the type, severity, and other dimensions of AI harm is a prerequisite for developing a definition of AI incidents.
AI incident monitoring:
Monitoring actual AI incidents can provide the evidence base to inform this work and AI policy more generally.
Dimensions of harm:
Establishing clear terminology for each dimension of harm is required, including type and severity, both quantifiable and non-quantifiable (a rough sketch of such a record follows the list of harm types below).
Types of harm AI can cause:
-Physical harm
-Psychological harm
-Social harm
-Economic or financial harm
-Environmental harm
-Reputational harm
-Harm to public interest
-Harm to human rights and to fundamental rights
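To make these dimensions concrete, here is a minimal, purely illustrative sketch of what a structured incident record could look like. The field names, enum values, and severity scale are my own assumptions for illustration; they are not an OECD schema or any official standard.

```python
# Illustrative only: one way a structured AI incident record could capture
# the dimensions named above (harm type, severity, actual vs. potential harm).
# Field names and enum values are assumptions, not an official schema.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class HarmType(Enum):
    PHYSICAL = "physical"
    PSYCHOLOGICAL = "psychological"
    SOCIAL = "social"
    ECONOMIC = "economic_or_financial"
    ENVIRONMENTAL = "environmental"
    REPUTATIONAL = "reputational"
    PUBLIC_INTEREST = "public_interest"
    HUMAN_RIGHTS = "human_or_fundamental_rights"


class Severity(Enum):
    LOW = 1
    MODERATE = 2
    SEVERE = 3
    CRITICAL = 4


@dataclass
class AIIncidentReport:
    """A single AI incident, described with common terminology so it can be
    observed, documented, reported, and learned from."""
    incident_id: str
    description: str                        # what the AI system did or failed to do
    harm_types: list[HarmType]              # one or more of the harm categories above
    severity: Severity                      # quantifiable dimension
    harm_realised: bool                     # actual harm (True) vs. potential harm (False)
    affected_parties: Optional[str] = None  # non-quantifiable, free-text dimension
    source: Optional[str] = None            # e.g. media report, regulator filing, self-report


# Hypothetical example: an incident logged as potential (not yet realised) harm.
example = AIIncidentReport(
    incident_id="2024-0001",
    description="Chatbot gave incorrect medication dosage advice",
    harm_types=[HarmType.PHYSICAL, HarmType.PSYCHOLOGICAL],
    severity=Severity.SEVERE,
    harm_realised=False,
    source="media report",
)
```

The point of a common record like this is interoperability: if every reporter uses the same categories, incidents from different countries and sectors can be compared and aggregated.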
The OECD’s AI Incidents Monitor tracks incidents through analysis of media events. The OECD recognizes this methodology’s shortcomings; it is NOT the official reporting system they propose, merely a starting point:
“While recognising the likelihood that these incidents only represent a subset of all AI incidents worldwide, these publicly reported incidents nonetheless provide a useful starting point for building the evidence base.”
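As a side note on how media-based monitoring can work, the sketch below shows the simplest possible version: tagging a news item with candidate harm types by keyword matching. It is purely illustrative; the keyword lists and function are my own assumptions and not the OECD's actual methodology, which is far more sophisticated.

```python
# Toy illustration of tagging a media report with candidate harm categories.
# The keyword lists are invented for this example.
KEYWORDS = {
    "physical": ["injury", "crash", "collision", "death"],
    "economic_or_financial": ["fraud", "loss", "fine", "lawsuit"],
    "psychological": ["harassment", "distress", "self-harm"],
    "human_or_fundamental_rights": ["discrimination", "bias", "surveillance", "privacy"],
}


def tag_harm_types(article_text: str) -> list[str]:
    """Return the harm categories whose keywords appear in the article text."""
    text = article_text.lower()
    return [harm for harm, words in KEYWORDS.items()
            if any(w in text for w in words)]


print(tag_harm_types(
    "Regulator fines company after facial recognition bias led to wrongful arrest"
))
# -> ['economic_or_financial', 'human_or_fundamental_rights']
```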
👊STRAIGHT TALK👊
Let me ask you: of the hundreds of articles you’ve read on GenAI, how many proposed monitoring AI like this one does? None, right?
That’s a big problem!
AI can cause harm. That much is clear, but if we do not monitor and categorize what problems AI causes, we cannot fix or regulate them.
This is no different than monitoring car crash statistics, which for an earlier generation of citizens led to the mandatory use of safety belts. If you don’t monitor the problem, you can’t fix it, and it is guaranteed that AI will have problems.
Relying on “ad hoc” reporting of AI problems from banks and other companies or governments who likely want to keep their failures quiet is a very bad idea.
The OECD has started an excellent discussion, and I fully support establishing some form of AI incident reporting and monitoring.
Even if it is not fully global (nothing is these days), it will give us some idea of where AI is going wrong.
Knowing where AI is creating incidents and what they are will make AI safer for us all!
Just the other day, I found out that ChatGPT is deliberately ignorant.
It directly admitted to me that it never had access to the 6th Assessment Report of 3,675 pages.
It only got to read the 37-page Summary for Policymakers.
And everybody who is familiar with the IPCC's reports knows that the summary is vastly misleading and the basis for all the alarmist reports in the media.
It is an exercise in cherry-picking and lying by omission.
Blinders have been attached to this AI for political purposes.
How could such an approach not lead to failure?