Where the US and China Can Find Common Ground on AI

Earlier this month, OpenAI released its most advanced models yet, saying they had the ability to “reason” and solve complex math and coding problems. The industry-leading startup, valued at some $150 billion, also acknowledged that the new models raise the risk that artificial intelligence could be misused to create biological weapons.

You would think such a consequential risk would set off alarm bells and make stricter oversight of AI a clear priority. But despite almost two years of existential warnings from industry leaders, academics and other experts about the technology’s potential to wreak catastrophe, the US hasn’t enacted any federal regulation. A chorus of voices inside and outside the tech industry dismisses these doomsday warnings as distractions from AI’s more near-term harms, such as copyright infringement, the proliferation of deepfakes and misinformation, and job displacement. But lawmakers have done little to address these current risks, either.

One of the core arguments leveled against regulation is that it will impede innovation and could cost the US the AI race with China. But China has been advancing rapidly despite heavy-handed oversight at home and all-out US efforts to block its access to critical components and equipment. The export controls have hampered China’s progress, but one area where it leads the US is in setting standards for how the most sweeping technology of our time can be built and used.

China’s autocratic regime makes imposing strict rules much easier, however suffocating they may seem for its tech industry. And the government obviously has different motives, including maintaining social stability and party power. But Beijing also sees AI as a priority, so it is working with the private sector to boost innovation while still maintaining supervision.

Despite the political differences, there are lessons the US can learn. For starters, China is tackling near-term concerns through a combination of new laws and court precedents. Cyber regulators rolled out rules on deepfakes in 2022, protecting victims whose likenesses are used without consent and requiring labels on digitally altered content. Chinese courts have also set standards for how AI tools can be used, issuing rulings that protect artists from copyright infringement and voice actors from exploitation.

Broader interim rules on generative AI require developers to share details with the government about how their algorithms are trained and to pass stringent safety tests. (Part of these assessments is to ensure that outputs align with socialist values.) But regulators have also shown balance, rolling back some of the most onerous requirements after feedback from the industry. The revisions signal a willingness to work with the tech sector without relinquishing oversight.

This stands in stark contrast to efforts in the US. Lawsuits over current AI harms are slowly making their way through the courts, but the absence of federal action is glaring. The lack of guidelines also creates uncertainty for business leaders. US regulators could take a page out of China’s playbook: enact narrowly targeted laws focused on known risks while working more closely with the industry to set up guardrails against the far-off existential dangers.

In the absence of federal regulation, some states are taking matters into their own hands. California lawmakers last month approved an AI safety bill that would hold companies liable if their tools are used to cause “severe harm,” such as unleashing a biological weapon. Many tech companies, including OpenAI, have fiercely opposed the bill, arguing that such legislation should be left to Congress. An open letter from AI entrepreneurs and researchers also said the bill would be “catastrophic” for innovation and would let “places like China take the lead in development of this powerful tool.”

It would be wise for policymakers to remember that loud voices in the tech sector were using this line of argument to fend off regulation long before the AI frenzy. And the fact that the US can’t even agree on laws to prevent worst-case AI scenarios, let alone address the more immediate harms, is concerning.

Ultimately, invoking China as an excuse to avoid meaningful oversight doesn’t hold up. Approaching AI safety as a zero-sum game between the US and China leaves no winners. Mutual suspicion and mounting geopolitical tensions mean the two are unlikely to work together to mitigate the risks anytime soon. But it doesn’t have to be this way.

Some of the most vocal proponents of regulation are the pioneers who helped create the technology. A few so-called AI godfathers, including Turing Award winners Yoshua Bengio, Geoffrey Hinton and Andrew Yao, sat down earlier this month in Italy and called for global cooperation across jurisdictions. They acknowledged the competitive geopolitical climate but warned that loss of control over AI, or its malicious use, could “lead to catastrophic outcomes for all of humanity.” They offered a framework for a global system of governance.

Many argue that they may be wrong, but the risks seem too high to write them off entirely. Policymakers from Washington to Beijing should learn from these scientists, who have at least shown it is possible to find some common ground.

