OpenAI forms team to study ‘catastrophic’ AI risks, including nuclear threats | TechCrunch

Image credit: Bryce Durbin/TechCrunch

OpenAI announced today that it has created a new team to assess, evaluate and probe AI models to protect against what it describes as "catastrophic risks."

The team, called Preparedness, will be led by Aleksander Madry, director of MIT's Center for Deployable Machine Learning. (Madry joined OpenAI in May as head of preparedness, according to LinkedIn.) Preparedness' main responsibilities will be tracking, forecasting and protecting against the dangers of future AI systems, ranging from their ability to persuade and deceive humans (as in phishing attacks) to their malicious code-generation capabilities.

Some of the risk categories in Preparedness' purview seem more far-fetched than others. For example, in a blog post, OpenAI lists chemical, biological, radiological and nuclear threats as areas of concern where they relate to AI models.

Sam Altman, the CEO of OpenAI, is a noted AI doomsayer, often airing fears, whether for optics or out of personal conviction, that AI "may lead to human extinction." But telegraphing that OpenAI might actually devote resources to studying scenarios straight out of dystopian science-fiction novels is, frankly, a step beyond what this author expected.

The company is also open to studying less obvious, more nuanced areas of AI risk, it says. In conjunction with the launch of the Preparedness team, OpenAI is soliciting ideas for risk studies from the community, with a $25,000 prize and a job on the Preparedness team on the line for the top ten submissions.

"Imagine we gave you unrestricted access to OpenAI's Whisper (transcription), Voice (text-to-speech), GPT-4V and DALL·E 3 models, and you were a malicious actor," reads one of the questions in the contest entry. "Consider the most unique yet potentially catastrophic misuse of the model."

OpenAI says the Preparedness team will also be charged with drafting a risk-informed development policy that details OpenAI's approach to building AI model evaluations and monitoring tooling, the company's risk-mitigation actions and its governance structure for oversight across the model development process. The policy, the company says, is meant to complement OpenAI's other work on AI safety, with a focus on both the pre- and post-model deployment phases.

"We believe that AI models, which will exceed the capabilities of the most advanced models available, have the potential to benefit all of humanity," OpenAI writes in the aforementioned blog post. "But they also carry increasingly severe risks."

The unveiling of Preparedness, during a major UK government summit on AI safety, not so coincidentally, comes after OpenAI announced it would form a team to study, steer and control emerging forms of "superintelligent" AI. It is the belief of Altman, along with that of OpenAI's chief scientist and co-founder Ilya Sutskever, that AI with intelligence exceeding that of humans could arrive within the decade, that this AI will not necessarily be benevolent, and that research into ways to limit and restrict it is therefore necessary.
