Biden's Executive Order on AI is a big step forward!
We must have regulation for AI—“the most consequential technology of our time.”
Two downloads:
The “FACT SHEET”
The full Executive Order:
I support Biden’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and think it is a necessary and timely move to reduce the societal risks of AI—“the most consequential technology of our time.”
I have mostly praise for Biden’s new Executive Order (EO), which should come as a nice surprise to regular readers who are used to me railing against the Fed’s lack of enthusiasm for a central bank digital currency!
While the EO contains some rather large loopholes, we should all hope that Congress builds on it with a comprehensive law covering data privacy and AI.
What follows are a few of the takeaways I found most interesting. This list is not intended to be all-inclusive:
👉Takeaways:
1. Defense Production Act:
The executive order invokes the Korean War-era Defense Production Act to compel major AI companies to notify the government when developing any system that poses a “serious risk to national security, national economic security or national public health and safety.”
This is an interesting and effective legal move because it gives the executive order (EO) actual legal force: its requirements are binding obligations rather than merely being highly recommended. Big tech can’t ignore the EO without fear of retribution.
In July, Biden received voluntary commitments to share information from AI companies including Google, Meta, Microsoft, and OpenAI, but with the Defense Production Act invoked, “self-regulation” is no longer acceptable. With one big caveat….
2. Safety testing does not cover existing models!
The Wall Street Journal reports that according to White House aides, “the new requirements on safety testing are only likely to apply to big tech companies’ next-generation AI systems, and not current versions.”
The EO’s signature provision requires safety testing for foundation models.
“The Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.”
The fact that this does not apply to existing models would seem to be a fairly large loophole, and it is far from ideal. It is also unclear where these “red teams” will come from, though the National Institute of Standards and Technology will develop the testing standards.
3. The end of gov’t buying data from data brokers?
One of the more subtle provisions in the EO is that the government will now take a second look at purchasing personal data through data brokers. This has been a major issue, with Immigration and Customs Enforcement (ICE) and the Internal Revenue Service (IRS) busy buying personal data from brokers.
Many legal experts consider the government’s purchase of data from data brokers a violation of the Fourth Amendment’s protection against unreasonable search and seizure. Interestingly, the government may not be able to seize devices for their data, but it can purchase the same data from brokers!
While positive for personal data privacy, note that there is a large exception carved out for “national security.”
evaluate and take steps to identify commercially available information (CAI) procured by agencies, particularly CAI that contains personally identifiable information and including CAI procured from data brokers and CAI procured and processed indirectly through vendors, in appropriate agency inventory and reporting processes (other than when it is used for the purposes of national security).
4. Watermarking for gov’t communications only?!
While watermarking should be mandatory for all AI-produced content, the EO’s watermarking provisions are limited to government communications. This strikes me as odd, since AI-generated content impacts society in many ways beyond government communications. Interestingly, watermarking all AI-produced content is a requirement in China’s AI law.
This from the Executive Order:
To foster capabilities for identifying and labeling synthetic content produced by AI systems, and to establish the authenticity and provenance of digital content, both synthetic and not synthetic, produced by the Federal Government or on its behalf:
This is from the Fact Sheet. Note the reference to “official content”:
Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.
5. Dangerous biological materials
I found it interesting that biological materials were given a dedicated section. This section is in addition to Homeland Security concerns such as chemical, biological, radiological, nuclear, and cybersecurity risks.
Monitoring “biological synthesis” is specifically designed to reduce the risk that AI could be misused to create synthetic nucleic acids or dangerous pathogens. I confess that I never gave this element of AI enough consideration! Clearly, this is a bigger threat than I imagined.
Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.
6. Advancing equity and civil rights: financial services.
Interestingly, the EO does not mandate audits or enforce standards on AI used in financial services. This surprised me: without audits, it will be relatively easy for financial institutions to claim their systems are free of bias.
In the EO, the Consumer Financial Protection Bureau is merely asked to “consider” using its authority; the order does not mandate a review of underwriting models for bias or disparities. In my view, financial services got off easy in this EO!
To address discrimination and biases against protected groups in housing markets and consumer financial markets, the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau are encouraged to consider using their authorities, as they deem appropriate, to require their respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law and:
(i) evaluate their underwriting models for bias or disparities affecting protected groups; and
(ii) evaluate automated collateral-valuation and appraisal processes in ways that minimize bias.
What does big tech think?
In their statements, spokespeople for OpenAI, Microsoft, and Google said:
“We’re grateful to President Biden, Vice President Harris, and the Biden Administration for their leadership and work to ensure that the federal government harnesses the potential of AI, and that its benefits reach all Americans.” OpenAI
“Another critical step forward in the governance of AI technology.” Brad Smith, vice chair and president of Microsoft
The company looks “forward to engaging constructively with government agencies to maximize AI’s potential—including by making government services better, faster, and more secure,” said Google’s president of global affairs, Kent Walker.
Do I believe them? No, not for one second! It was just a few months ago that big tech was in Washington claiming that self-regulation was sufficient. In addition, they pulled out the “bogeyman” by claiming that regulation would slow AI development and hand the lead to China.
At this point, with the EO already signed, what else could OpenAI or any other big tech company say? They will likely find that the EO crimps their “move fast and break things” ethos, but I’m OK with that. How about you?
Final Thoughts and the AI Regulatory Race with China and the EU
So far, China and the EU have led the race to regulate AI, with China ahead by miles. This EO falls short of actual regulation, but at least the US is out of the gate and in the running.
Readers will not be surprised to find elements of China’s AI law more restrictive. Still, China’s insistence on watermarking all AI-produced content is a good one. Why the US deems that government AI-produced communications merit a watermark while all others do not is beyond me.
One other issue the EO falls short on is protecting intellectual property. I find it ironic that China seems to place more emphasis on this area. Yes, the EO has provisions for IP protection, but they are limited to investigations. I think it should have had a separate section equivalent to that on “Dangerous Biological Materials.” Authors (like your humble correspondent), artists, and other creatives desperately need these protections.
As for the EU, it is critical to understand that the nearly completed “AI Act” is actual legislation with enforcement and fines. It still takes the lead over an EO whose many sections are mere suggestions, depend on systems not yet built, and may not actually be enforceable in law.
The EU, China, and US AI regulatory initiatives have many similarities, and that shouldn’t come as a surprise. All of these very different societies are grappling with the challenges that AI presents to ensure that their citizens are safeguarded against the potential risks of AI. Those risks are universal even if the regulatory approaches are not.
The Biden Executive Order on AI is good for all citizens and represents a major advance in ensuring that AI has a positive influence on society.
Subscribing is 100% free, you’ll be glad you did!
In the unlikely event you don’t like my newsletter, click unsubscribe at any time to “invite danger!”
My work is entirely supported by reader gratitude, so if you enjoyed this newsletter, please do both of us a favor and subscribe or share it with someone. You can also follow me on Twitter or LinkedIn for more. For more about what I do and my media appearances, check out richturrin.com
Rich Turrin is the international best-selling author of "Cashless - China's Digital Currency Revolution" and "Innovation Lab Excellence." He is number 4 on Onalytica's prestigious Top 50 Fintech Influencer list and an award-winning executive previously heading fintech teams at IBM following a twenty-year career in investment banking. Living in Shanghai for the last decade, Rich experienced China going cashless first-hand. Rich is an independent consultant whose views on China's astounding fintech developments are widely sought by international media and private clients.
Please check out my books on Amazon:
Cashless: HERE
Innovation Lab Excellence: HERE