At the UK AI Summit, developers and governments agreed on testing to help manage risks

  • Developers agreed on the need to test models before release
  • Governments, including China, signed a statement on Wednesday
  • They agreed to work together to tackle AI safety
  • Sunak to meet with Elon Musk after the summit

BLETCHLEY PARK, England, Nov 2 (Reuters) – Leading artificial intelligence developers have agreed to work with governments to test new frontier models before launch to help manage the risks of the rapidly developing technology, a potential breakthrough at Britain’s inaugural AI safety summit.

Some technology and political leaders have warned that AI poses enormous risks if left unchecked, from eroding consumer privacy to endangering humans and triggering a global catastrophe, and those concerns have set off a race among governments and institutions to design safeguards and regulations.

At the inaugural AI Safety Summit at Bletchley Park, home of Britain’s World War II codebreakers, political leaders from the United States, European Union and China agreed on Wednesday to take a joint approach to identifying risks and ways to mitigate them.

On Thursday, British Prime Minister Rishi Sunak said the United States, the European Union and other like-minded countries had also reached agreement with a select group of companies working on artificial intelligence on the principle that models should be carefully evaluated both before and after they are deployed.

Yoshua Bengio, often referred to as a godfather of artificial intelligence, will help deliver a “state of the science” report to build a common understanding of the capabilities and risks ahead.

“Until now, the only people who have tested the safety of new AI models have been the companies developing them,” Sunak said in a statement. “We shouldn’t rely on them to mark their own homework, as many would agree.”

The way ahead

The summit has brought together around 100 politicians, academics and technology executives to chart a way forward for a technology that could transform the way companies, societies and economies operate, with some hoping to create an independent body to provide global oversight.

In the West’s first attempt to manage the safe development of artificial intelligence, a Chinese vice minister joined other political leaders at a summit on Wednesday that focused on highly capable general-purpose models called “frontier artificial intelligence.”

Chinese Vice Minister of Science and Technology Wu Zhaohui signed the Bletchley Declaration on Wednesday, but China was not present on Thursday and did not add its name to the testing agreement.

Sunak was criticized by some lawmakers in his own party for inviting China after many Western governments scaled back technology cooperation with Beijing, but he said any effort to secure artificial intelligence must include its key players.

He also said it showed the UK could play a role in bringing together the big three economic blocs of the US, China and the EU.

“It shows our ability to convene people and bring them together,” Sunak said at a press conference. “Inviting China was not an easy decision, and many people criticized me for it, but I think it was the right decision in the long run.”

Representatives from Microsoft-backed OpenAI, Anthropic, Google DeepMind, Microsoft (MSFT.O), Meta (META.O) and xAI attended Thursday’s meetings alongside leaders including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris and U.N. Secretary-General Antonio Guterres.

The EU’s von der Leyen said that complex algorithms can never be fully tested, so “above all, we need to make sure that developers act quickly when problems arise, both before and after they bring their models to market.”

The final word on AI over the two days will be a conversation between Sunak and billionaire entrepreneur Elon Musk, scheduled to be broadcast later on Thursday on Musk’s X, the platform formerly known as Twitter.

According to two sources at the summit, Musk told other attendees on Wednesday that governments should not rush to implement AI rules.

Instead, he suggested, companies using the technology are in a better position to spot problems and share their findings with lawmakers charged with writing new rules.

“I don’t know necessarily what fair rules are, but you have to start with insight before oversight,” Musk told reporters on Wednesday.

Reporting by Paul Sandle and Martin Coulter; additional reporting by William James and Jan Strupczewski; editing by Emelia Sithole-Matarise and Susan Fenton
