How can AI be developed safely? There’s a global summit tackling this right now


Digital officials, tech company bosses and researchers are converging Wednesday at a former codebreaking spy base near London to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.

The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.

Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.

The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.

But U.S. Vice President Kamala Harris may divert attention Wednesday with a separate speech in London setting out the Biden administration’s more hands-on approach.

Britain’s Michelle Donelan, Secretary of State for Science, Innovation and Technology, right, greets Canada’s François-Philippe Champagne, Minister of Innovation, Science and Industry, as he arrives at the AI Safety Summit on Wednesday. (Alastair Grant/The Associated Press)

She’s due to attend the summit on Thursday alongside government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak’s governing Conservative Party.

Canada’s Minister of Innovation, Science and Industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

“The risk is that we do too little, rather than too much, given the evolution and speed with which things are going,” he told Reuters.

WATCH | AI Safety Summit convenes to discuss risks of AI advancement:

Scientists, industry experts and world leaders gathered in London to discuss the future of artificial intelligence and the dangers associated with leaving its advancement unchecked.

Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity.

European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic, Google’s DeepMind and OpenAI and influential computer scientists like Yoshua Bengio, one of the “godfathers” of AI, are also attending.

In all, more than 100 delegates were expected at the meeting, held at Bletchley Park, a former top-secret base for Second World War codebreakers that is seen as a birthplace of modern computing.

Britain’s Michelle Donelan, Secretary of State for Science, Innovation and Technology, sixth right front row, with digital ministers who are attending the AI Safety Summit on Wednesday. (Alastair Grant/The Associated Press)

28 countries agree on need to manage risk

As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”

South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.

Sunak has said the technology brings new opportunities but warned about frontier AI’s threat to humanity, because it could be used to create biological weapons or be exploited by terrorists to sow fear and destruction.

Only governments, not companies, can keep people safe from AI’s dangers, Sunak said last week. However, in the same speech, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first.

WATCH | The ‘godfather of AI’ on the risks involved:

He helped create AI. Now he’s worried it will destroy humanity.

Canadian-British artificial intelligence pioneer Geoffrey Hinton says he left Google because of recent discoveries about AI that made him realize it poses a threat to humanity. CBC chief correspondent Adrienne Arsenault talks to the ‘godfather of AI’ about the risks involved and whether there’s any way to avoid them.

