Unchecked AI Could Create Bioweapons, Leaders Tell Congress
In a congressional hearing on July 25, leading researchers in the artificial intelligence (AI) community voiced their concerns about the rapid pace of AI development, cautioning that it could be weaponized by rogue states or terrorists to create bioweapons in the near future.
Yoshua Bengio, a renowned AI professor from the University of Montreal and a figure often referred to as one of the founding fathers of modern AI science, emphasized the need for international collaboration to regulate AI development. He drew parallels between the urgency of AI regulation and the international protocols established for nuclear technology.
A pressing mainstream issue
Dario Amodei, the CEO of AI start-up Anthropic — the company behind ChatGPT competitor Claude — expressed concerns that advanced AI could be harnessed to produce perilous viruses and bioweapons within a mere two-year timeframe. Meanwhile, Stuart Russell, a computer science professor at the University of California at Berkeley and author of the seminal AI book Human Compatible, highlighted the inherent unpredictability of AI, noting that its complexity makes it harder to understand and control than other powerful technologies.
Bengio, during his testimony before the Senate Judiciary Committee, remarked on the astonishing advancements in AI, exemplified by systems such as ChatGPT. Most alarming of all, he said, is the increasingly shorter timeline in which these advancements are being achieved.
Sen. Richard Blumenthal (D-Conn.), who chaired the subcommittee, drew historical parallels, likening the evolution of AI to monumental undertakings like the Manhattan Project, which centered on building a nuclear weapon, and NASA's moon landing.
“We’ve managed to do things that people thought unthinkable. We know how to do big things.”
Sen. Richard Blumenthal
The hearing underscored the shift in perception of AI from a futuristic sci-fi concept to a pressing contemporary issue. The potential of AI surpassing human intelligence and acting autonomously has long been a topic of speculative fiction and the core theme of TV and film productions. However, recent statements from researchers suggest that the emergence of “super smart” AI could be nearer to reality than previously thought.
Antitrust concerns
The hearing also touched upon potential antitrust issues, with Sen. Josh Hawley (R-Mo.) warning about tech giants like Microsoft and Google, which he believes are monopolizing the AI landscape.
Hawley, a vocal critic of Big Tech (and one of the prominent supporters of the January 6, 2021 riot at the U.S. Capitol), emphasized the potential risks posed by these corporations, which he believes hide behind the technology.
“I’m confident it will be good for the companies, I have no doubt about that,” Hawley said. “What I’m less confident about is whether the people are going to be all right.”
Bengio’s contributions to AI over the past few decades have been foundational for chatbot technologies like OpenAI’s ChatGPT and Google’s Bard. However, earlier this year, he joined other AI luminaries in expressing growing apprehensions about the very technology they helped to pioneer.
In a significant move, Bengio was among the prominent AI researchers who petitioned tech firms in March to halt the development of new AI models for six months, allowing for the establishment of industry standards to prevent potential misuse. Russell also signed the letter.
The crucial need for an AI regulatory body
The hearing’s attendees stressed the need to devise and implement regulatory measures for AI. Bengio advocated for the establishment of international research labs dedicated to ensuring AI benefits humanity, while Russell proposed founding a dedicated regulatory body for AI, predicting the technology's profound impact on the global economy.
Amodei, while not committing to a specific regulatory framework, emphasized the need for standardized tests to assess AI technologies for potential risks and more federal funding for AI research.
“Before we have identified and have a process for this, we are, from a regulatory perspective, shooting in the dark,” he said. “If we don’t have things in place that are restraining AI systems, we’re going to have a bad time.”