It's not a question of "if" AI fails, but "when."
Just the other day, I found out that ChatGPT is deliberately ignorant.
It directly admitted to me that it never had access to the 6th Assessment Report, all 3,675 pages of it.
It only got to read the 37-page Summary for Policymakers.
And everybody who is familiar with the IPCC's reports knows
that the summary is vastly misleading and the basis for all the alarmist reports in the media.
It is an exercise in cherry-picking and lying by omission.
Blinders have been attached to this AI for political purposes.
How could such an approach not lead to failure?