Canadian AI pioneer brings plea to U.S. Congress: Pass a law now


A giant in the field of artificial intelligence has issued a warning to American lawmakers: Regulate this technology, and do it quickly.

That appeal came at a hearing in Washington on Tuesday from Yoshua Bengio, a professor at the University of Montreal and founder of Mila, the Quebec AI institute.

“I firmly believe that urgent efforts, preferably in the coming months, are required,” said Bengio, one of three witnesses.

The hearing before a U.S. Senate subcommittee came as lawmakers study possible legislation to regulate the fast-evolving technology, touted for its potential world-changing economic and scientific benefits as well as unfathomable risks.

The hearing heard a litany of nightmare scenarios: corrupted elections, with candidates impersonated or voting systems hacked. Bank fraud. Instructions for a biological-weapons attack, available at the push of a button. Nuclear weapons controlled by automated programs that suddenly go rogue.

Bengio suggested a series of possible actions. International co-ordination, he said, is a must. If the U.S. passes laws by itself, he said, rogue actors could simply abuse this technology elsewhere.


He also suggested setting up secure-access international laboratories to research countermeasures against the criminal use of AI, and said social media companies should be forced to confirm users are human, as banks do with clients.

One senator, Amy Klobuchar, expressed concern about people having bank accounts emptied by digital imposters. 

The Minnesota Democrat asked if he would support a law giving people control over their image, name and voice, and he replied: “Certainly. But I would go further.”

He noted the strict laws and severe criminal penalties against counterfeiting money; those deter people, Bengio said, so there should also be penalties for counterfeiting humans.

Finally, he urged limits on releasing open-source software, saying bad actors can easily tweak such systems to nefarious ends.

Paraphrasing his fellow Canadian AI pioneer, Geoffrey Hinton, Bengio asked rhetorically: “If nuclear bombs were software, would you allow open-sourced nuclear bombs?”

A decommissioned intercontinental ballistic missile, seen here in 2019 in Sahuarita, Ariz. Among the many bills under consideration in the U.S. is one to prohibit use of AI in launching a nuclear weapon. (Nicole Neri/Reuters)

Committee chairman Sen. Richard Blumenthal of Connecticut opened the hearing by hailing the tech minds before him as one of the most distinguished panels he’d ever seen in Congress.

That includes Bengio, who, Blumenthal noted, pioneered several foundational computer technologies and is sometimes referred to as one of the godfathers of AI.

The chair, a Democrat, stated the goal of this hearing, the latest in a series he’s led: “To write real laws — enforceable laws.”

Voluntary guidelines

But he also addressed the elephant in the room — doubts about the ability of Congress to act quickly, given its notoriety for gridlock.

Blumenthal said Congress can’t allow a repeat of its experience with social media, when it talked, and talked, for years without passing major laws.

“The emergency here demands action. The future here is not science fiction or fantasy. It’s not even the future. It’s here now,” Blumenthal said.

U.S. policy-makers are studying different guardrails for AI, although it’s far from clear if any of them will become law. Passing a bill would require getting through the Republican-led House of Representatives and also securing the 60 votes needed in the 100-seat Senate to overcome a filibuster.

In the meantime, the White House has published voluntary guidelines.

Last fall, the Biden administration released a first-of-its-kind AI bill of rights, urging that such tools be used safely and transparently, with respect for user privacy, and that institutions such as law enforcement not use algorithms that discriminate.

Just last week, the White House announced new voluntary measures with seven major companies, including Amazon, Google, Microsoft, Meta, and OpenAI.

The companies committed to extensive safety testing before releasing a product, including testing by outside experts; to a system for reporting vulnerabilities; and to watermarking AI-generated images.

U.S. President Joe Biden acknowledged the threats, but also touted the great opportunities he said AI brings in fighting climate change and cancer.

“This is a serious responsibility. We have to get it right,” Biden said, predicting more technological change in the coming decade than over the past 50 years. 


Bill C-27

But those White House guidelines are voluntary, not binding legislation backed by fines and criminal penalties.

In a recent interview with CBC News, Bengio said Canada might actually wind up being the first AI-producing country with significant legislation.

Bill C-27 has gone through two readings in the House of Commons and is expected to be considered by a parliamentary committee this fall before a final Commons vote.

It’s more limited in scope than the White House principles. In reality, it’s an omnibus privacy and data-protection bill that has one section on AI. It would allow a federal minister to demand company records and audit AI systems.

Companies would have to take measures to prevent certain harms. 

Failure to comply could result in fines of up to $25 million or five per cent of a company’s global revenues, as well as prison terms of up to five years.

The Liberal government says it’s a foundational law, upon which future AI rules can be added.

The Conservatives have called the bill too vague, too limited, and too weak in protecting privacy. They were the only party voting against it at second reading.

The committee chair, Sen. Richard Blumenthal, said the goal of the hearing is to help pass ‘enforceable laws.’ (Evelyn Hockstein/Reuters)

What U.S. Congress is considering

In the U.S., there’s no shortage of ideas for legislation. Members of both parties have co-operated on several bills, still in their early stages.

One would allow Americans hurt by AI to sue companies — for instance if their likeness is used in fake videos. 

The bill, introduced last month, specifies that the legal immunities famously protecting social-media companies from lawsuits, under Section 230 of the Communications Decency Act, don’t apply to AI.

Another bipartisan bill would require companies to share information with researchers. One bill would prohibit the malicious impersonation of an election candidate; another would prohibit use of AI in launching a nuclear weapon.

Seemingly on the same page as Bengio, the Democratic leader in the Senate wants legislation soon. 

Chuck Schumer intends to hold public forums this fall, each focused on specific aspects of what he calls his SAFE framework — secure, accountable systems that respect foundational democratic values and can explain their decision-making process.

