
AI summit in UK seeks global consensus on mitigating risk

November 1, 2023

At the world's first artificial intelligence summit, tech bosses and representatives from over 50 nations, including the US and China, have agreed to cooperate on designing safeguards for the future.

Elon Musk, left, attends the first plenary session of the AI Safety Summit at Bletchley Park on Wednesday, Nov. 1, 2023, in Bletchley, England
The two-day summit will be followed by two more, in South Korea and France, next year (Image: Toby Melville/Reuters/AP/picture alliance)

A two-day artificial intelligence summit kicked off in the UK on Wednesday, as more than 50 nations signed an agreement to work together on the potential threats posed by the rapidly evolving technology. 

The agreement focused on identifying risks of shared concern posed by AI, building scientific understanding of risks, and developing transnational risk mitigation policies.

The summit called for a "new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community."

It is being held at Bletchley Park, where top British codebreakers once cracked Nazi Germany's "Enigma" code during World War II. The joint agreement is being called the "Bletchley Declaration."

British Prime Minister Rishi Sunak launched the summit. Follow-up AI summits are scheduled to take place next year in South Korea and France.

Wu Zhaohui, China's vice minister of science and technology, told the opening session that Beijing was ready to increase collaboration on AI safety to help build an international "governance framework."

British Prime Minister Rishi Sunak and US Vice President Kamala Harris met in London (Image: Carl Court/Getty Images)

Tech leaders praise AI summit 

While the potential of AI raises many hopes, particularly for medicine, its development is seen as largely unchecked.

Some tech experts and political leaders have warned that the accelerated development of AI poses an existential threat to the world if not regulated.

Among the tech attendees were Sam Altman, CEO of OpenAI, the firm behind ChatGPT, and Elon Musk, the CEO of Tesla and owner of social media company X, formerly Twitter, who described the event as "timely."

"What we're really aiming for here is to establish a framework for insight so that there's at least a third-party referee, an independent referee, that can observe what leading AI companies are doing and at least sound the alarm if they have concerns," Musk told reporters.

"It's one of the existential risks that we face and it is potentially the most pressing one if you look at the timescale and rate of advancement — the summit is timely, and I applaud the prime minister for holding it," he said.

'AI is one of the existential risks that we face' — Elon Musk (Image: Leon Neal/AP Photo/picture alliance)

US seeks policy lead  

US Secretary of Commerce Gina Raimondo announced on Wednesday that the US will launch an AI safety institute to evaluate known and emerging risks of so-called "frontier" AI models.

"I will almost certainly be calling on many of you in the audience who are in academia and industry to be part of this consortium," she said in a speech to the AI Safety Summit in Britain. "We can't do it alone, the private sector must step up."

Raimondo added that she would also commit the US institute to establishing a formal partnership with the UK's AI Safety Institute.

US Vice President Kamala Harris is due to attend the second day of the event, but she caused some surprise among UK officials by giving a speech on AI at the US Embassy in London on Wednesday and by holding meetings with some summit attendees, prompting them to leave Bletchley Park early. Her speech made only a passing reference to the summit itself.

"It's not necessarily a bad thing that the US has announced a policy blitz to coincide with the summit," a source from Britain's technology department told Reuters news agency. "We would obviously prefer it if guests didn't leave early."

On Monday, US President Joe Biden signed an AI executive order requiring developers of AI systems that pose risks to US national security, the economy, public health or safety, to share the results of safety tests with the US government, in line with the Defense Production Act.

The UK used the event to announce its plan to invest 300 million pounds ($364 million) in AI supercomputing, boosting funding from the previously announced 100 million pounds.

Predictive policing: When AI predicts criminal activity (video, 02:49)

mds/wmr (Reuters, AFP, AP)
