AI Hype will not keep you safe! When AI goes wrong we need incident reporting and harm assessment!
It's not a question of "if" AI fails, it's "when."
We must go BEYOND the hype and understand that AI will fail spectacularly. All new tech does. When AI does fail, whether spectacular or not, we need to understand what the “AI Incident” was and what “harm” was done.
Without this data we cannot make AI safe for society.
This is common sense, but because of the blazing-hot AI hype, few are thinking about what can go wrong.
AI Incident Reporting:
A common framework to enable global consistency and interoperability in AI incident reporting to help us learn from AI harms and help prevent them.
Define AI Incidents:
AI incidents must use common terminology to describe problems or failures of AI systems so that they may be observed, documented, reported, and learned from.
Harm is the starting point to define an incident:
The concept of “harm” is central and may include potential harm, actual harm, or both.
Types of AI incidents:
Defining the type, severity, and other dimensions of AI harm is a prerequisite for developing a definition of AI incidents.
AI incident monitoring:
Monitoring actual AI incidents can provide the evidence base to inform this work and AI policy more generally.
Dimensions of harm:
Establishing clear terminologies for each dimension of harm is required, including type and severity, quantifiable and non-quantifiable.
Types of harm AI can cause:
- Economic or financial harm
- Harm to public interest
- Harm to human rights and fundamental rights
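To make these dimensions concrete, here is a minimal sketch of what a structured incident record might look like, covering harm type, severity, actual vs. potential harm, and quantifiable vs. non-quantifiable loss. All class and field names here are hypothetical illustrations, not part of any official OECD taxonomy or schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

# Harm categories drawn from the list above; the names are illustrative,
# not an official taxonomy.
class HarmType(Enum):
    ECONOMIC_OR_FINANCIAL = "economic or financial harm"
    PUBLIC_INTEREST = "harm to public interest"
    HUMAN_OR_FUNDAMENTAL_RIGHTS = "harm to human rights and fundamental rights"

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIIncidentReport:
    """One incident record: harm may be actual, potential, or both."""
    system_name: str
    description: str
    harm_types: List[HarmType]
    severity: Severity
    actual_harm: bool        # harm has already occurred
    potential_harm: bool     # harm could still occur
    quantifiable_loss: Optional[float] = None  # e.g. monetary damage; None if non-quantifiable

    def is_reportable(self) -> bool:
        # Harm (actual or potential) is the starting point for defining an incident.
        return self.actual_harm or self.potential_harm

# Example: a hypothetical biased credit-scoring incident.
report = AIIncidentReport(
    system_name="loan-approval-model",
    description="Model systematically denied credit to a protected group.",
    harm_types=[HarmType.ECONOMIC_OR_FINANCIAL,
                HarmType.HUMAN_OR_FUNDAMENTAL_RIGHTS],
    severity=Severity.HIGH,
    actual_harm=True,
    potential_harm=True,
)
print(report.is_reportable())  # True
```

A shared, machine-readable schema like this is what would let reports from different countries and sectors be aggregated and compared, which is the point of a common framework.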
The OECD’s AI Incidents Monitor tracks incidents through analysis of media reports. The OECD recognizes this methodology’s shortcomings; it is NOT the official reporting system they propose but merely a starting point:
“While recognising the likelihood that these incidents only represent a subset of all AI incidents worldwide, these publicly reported incidents nonetheless provide a useful starting point for building the evidence base.”
Let me ask you: of the hundreds of articles you’ve read on GenAI, how many proposed monitoring AI like this? None, right?
That’s a big problem!
AI can cause harm. That much is clear, but if we do not monitor and categorize what problems AI causes, we cannot fix or regulate them.
This is no different from monitoring car crash statistics, which for an earlier generation of citizens led to the mandatory use of seat belts. If you don’t monitor a problem, you can’t fix it, and it is guaranteed that AI will have problems.
Relying on “ad hoc” reporting of AI problems from banks and other companies or governments who likely want to keep their failures quiet is a very bad idea.
The OECD has started an excellent discussion, and I fully support establishing some form of AI incident reporting and monitoring.
Even if it is not fully global (nothing is these days), it will give us some idea of where AI is going wrong.
Knowing where AI is creating incidents and what they are will make AI safer for us all!