[RAY BOGAN]
A newly released report commissioned by the State Department states that AI is creating entirely new categories of “weapons of mass destruction-like” risks. Those risks, according to the report, include catastrophic events that could lead to human extinction.
The report also warns of the potential consequences if the US government fails to regulate AI.
“The prospect of inadequate security at frontier AI labs raises the risk that the world’s most advanced AI systems could be stolen from their U.S. developers, and then weaponized against U.S. interests,” the report states.
According to Time, which first published the report, the authors make policy proposals to prevent these negative outcomes, including: making it illegal to train AI models using more than a certain level of computing power, making it illegal to publish the inner workings of AI models under open-source licenses, and tightening controls on the manufacture and export of AI chips.
We spoke with multiple members of Congress who said those ideas sound viable. But there are major hurdles to making that happen.
First, as the report states: “development in AI is now so rapid that an ordinary policymaking process could be overtaken by events by the time the resulting policies take effect.”
Lawmakers admit that they’re learning about AI as they go, so there is a learning curve to overcome before they can write effective legislation.
[DICK DURBIN]
“I’m not an expert in the area and that’s one of the problems. The committees of Congress that are given the responsibility of viewing AI are hardly up to the challenge and the task in terms of their own knowledge and information available to them,” Sen. Dick Durbin, D-Ill., said. “You know, that’s what has bothered me the most. When you give this assignment to Congress, you have to be prepared for a long lead-in period.”
[RAY BOGAN]
There is a Senate artificial intelligence working group that has been tasked with taking the lead on the issue. The group is trying to pass major legislation this year.
The second hurdle is big tech opposition. Sen. Josh Hawley, R-Mo., said big tech has too much influence in Congress and they’ll try to squash legislation that isn’t in their favor.
[JOSH HAWLEY]
“I mean, do you think any AI legislation is going to see any time of day on the Senate floor? Not in this Senate. Because why? Who owns the AI technology and is developing it? The same mega corporations, Google, Microsoft, Meta, you know, TikTok, probably, too. I mean, these guys, if they don’t want it, it doesn’t see time on the Senate floor,” Sen. Josh Hawley, R-Mo., said. “It doesn’t matter who’s in charge, Democrats, Republicans, doesn’t matter. All of them bought and paid for by these corporations.”
[RAY BOGAN]
Hawley, along with Sen. Richard Blumenthal, D-Conn., has a bipartisan framework for AI legislation that would establish a licensing regime administered by an independent oversight body and ensure legal accountability for harms.
Blumenthal pointed out the third hurdle: the US is already behind in its AI capabilities. He said a prime example is the chips and semiconductors in Russian weapons used against Ukraine.
[RICHARD BLUMENTHAL]
“The Ukrainians are finding Russian chips, AI components, that are now a fact of modern warfare,” Blumenthal said. “(AI) is exploding exponentially in its defense uses, and ought to deeply concern, in fact, gravely frighten the American people, because we are falling behind in AI technology, particularly in defense uses, and it’s a matter of national security.”
[RAY BOGAN]
Right now there are more than 75 AI-related bill proposals in Congress.