TERMINATOR: How to Avert AI's Extinction-Level Threat
Terminator is here: the US and the world must move "quickly and decisively."
Note that this document covers the executive summary. The full report, with even more incendiary proposals for Silicon Valley, can be requested here.
The US Government, with international partners, must move “quickly and decisively” to keep progress in advanced AI from creating new categories of Weapons of Mass Destruction-like catastrophic risks, up to and including human extinction.
So, if you thought AI was all about customer service chatbots, think again.
The report puts the blame on AI firms that are operating far beyond any reasonable guardrails in their quest for superhuman artificial general intelligence (AGI), which it suggests could be as little as 18 months away.
“A key driver of these risks is an acute competitive dynamic among the frontier AI labs that are building the world’s most advanced AI systems.”
Silicon Valley will not be pleased: the report recommends making it illegal to train AI beyond a certain level of computing power and creating new regulatory agencies to rein in the threat.
This action plan, developed over the past 13 months by Gladstone, an AI firm working for the US Government, is an essential read.
The report is a “blueprint for intervention.” While many will debate the threat to humanity, its recommendation that AI be put on a tighter leash will make for a safer world for all—even those using customer chatbots.
👉TAKEAWAYS
The report contains five “Lines of Effort” (LOEs); below are a few of the more stunning recommended actions from each.
🔹LOE1 — Establish interim safeguards
-Create a task force to coordinate the implementation and oversight of interim safeguards for advanced AI development.
-Put in place controls on the advanced AI supply chain, which is prone to proliferation.
🔹LOE2 — Strengthen capability & capacity
-Develop an early-warning framework for advanced AI and AGI incidents.
-Develop scenario-based contingency plans.
🔹LOE3 — Support AI safety research
-Develop safety and security standards for responsible AI development and adoption.
🔹LOE4 — Formalize safeguards in law
-Create an advanced AI regulatory agency with rulemaking and licensing powers.
-Establish a criminal and civil liability regime, including emergency powers to enable rapid response to fast-moving threats.
🔹LOE5 — Internationalize advanced AI safeguards
-Build domestic and international consensus on catastrophic AI risks and necessary safeguards.
-Enshrine those safeguards in international law.
Putting AI safeguards into law will help not just with catastrophic risks but also with the everyday risks that AI presents to ordinary citizens. Seeking international agreement on at least some standards will be critical.
👊STRAIGHT TALK👊
“The rise of advanced AI and AGI has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”
That line sets the tone for where we are headed as AI approaches AGI. If it sounds like a worn-out plot from any number of science fiction movies, that’s because it is.
We live in a time when reality mirrors the movies, and the idea that a rogue AGI could go Terminator on us all is now being treated as a genuine policy concern.
What is clear is that AI needs regulation and that the existing regulatory framework is woefully unprepared to deal with computers that think.
Regulation is essential not just to avoid global extinction, but also to ensure that the systems we use for more mundane tasks are unbiased and fair to all.
Regulatory standards can help make AI a better tool for all of society, and international standards should be top of the list.
The question is: can Washington and the rest of the world keep ignoring the need for regulation?
So far, they have, producing only a hodge-podge of national AI regulations and little progress internationally.
This paper will be debated, and that alone is a positive outcome: it starts a larger international discussion.
Thoughts?
🤖 If AGI breaks out in 18 months, then we humans need a collective yet distributed defense mechanism to "cope" with it: say, a smart app on mobiles and computers that gives alerts, news, and support for immediate AI dangers and crimes, based on open standards backed by some international tech org. Like a digital first aid kit. (Also, keep a running prompt: how do we protect ourselves from malicious AI attacks?) ⛑️
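To make the "digital first aid kit" idea above concrete, here is a minimal sketch of what a client for such an alert feed might look like. Everything here is hypothetical: the JSON feed format, the severity levels, and the field names are assumptions, since no open standard for AI-incident alerts exists yet.

```python
import json
from dataclasses import dataclass

@dataclass
class Alert:
    """One AI-incident alert, as a hypothetical standards body might publish it."""
    id: str
    severity: str  # assumed levels: "info", "warning", "critical"
    title: str

# Ordering of the assumed severity levels, lowest to highest
SEVERITY_RANK = {"info": 0, "warning": 1, "critical": 2}

def parse_feed(raw: str) -> list[Alert]:
    """Parse a JSON alert feed (hypothetical open format) into Alert objects."""
    return [Alert(e["id"], e["severity"], e["title"])
            for e in json.loads(raw)["alerts"]]

def filter_alerts(alerts: list[Alert], min_severity: str = "warning") -> list[Alert]:
    """Keep only alerts at or above the user's chosen severity threshold."""
    floor = SEVERITY_RANK[min_severity]
    return [a for a in alerts if SEVERITY_RANK[a.severity] >= floor]

# Example feed payload (invented for illustration)
sample = json.dumps({"alerts": [
    {"id": "a1", "severity": "info", "title": "New frontier model release"},
    {"id": "a2", "severity": "critical", "title": "Active AI-driven phishing wave"},
]})

# A real app would poll a feed URL; here we just parse the sample above.
urgent = filter_alerts(parse_feed(sample))
for a in urgent:
    print(f"[{a.severity.upper()}] {a.title}")
```

In a real app, the feed would be fetched periodically from endpoints run by the backing international org, and "urgent" alerts would trigger push notifications; the sketch only shows the parse-and-filter core that any such client would need.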