ChatGPT-maker says it is doubling down on preventing AI from 'going rogue' | Inquirer Business

09:39 AM, July 06, 2023


Sam Altman, CEO of Microsoft-backed OpenAI, the creator of ChatGPT, listens to Ilya Sutskever, co-founder and chief scientist of OpenAI, during a talk at Tel Aviv University in Tel Aviv, Israel, June 5, 2023. REUTERS/Amir Cohen/File photo

ChatGPT's creator OpenAI plans to invest significant resources and create a new research team that will seek to ensure its artificial intelligence remains safe for humans – eventually using AI to supervise itself, it said on Wednesday.

“The vast power of superintelligence could … lead to the disempowerment of humanity or even human extinction,” OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”


Superintelligent AI – systems more intelligent than humans – could arrive this decade, the blog post's authors predicted. Controlling such systems will require techniques beyond those currently available, hence the need for breakthroughs in so-called "alignment research," which focuses on ensuring AI remains beneficial to humans, according to the authors.


OpenAI, backed by Microsoft, is dedicating 20 percent of the compute power it has secured over the next four years to solving this problem, they wrote. In addition, the company is forming a new team that will organize around this effort, called the Superalignment team.

The team’s goal is to create a “human-level” AI alignment researcher, and then scale it through vast amounts of compute power. 


OpenAI says that means it will train AI systems using human feedback, train AI systems to assist human evaluation, and then finally train AI systems to actually do the alignment research.


AI safety advocate Connor Leahy said the plan was fundamentally flawed because the initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.


“You have to solve alignment before you build human-level intelligence, otherwise by default you won’t control it,” he said in an interview. “I personally do not think this is a particularly good or safe plan.”

The potential dangers of AI have been top of mind for both AI researchers and the general public. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society.


A May Reuters/Ipsos poll found that more than two-thirds of Americans are concerned about the possible negative effects of AI and 61 percent believe it could threaten civilization.




© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
