Another safety researcher is leaving OpenAI

  • Miles Brundage, who advises OpenAI leadership on safety and policy, announced his departure.
  • He said he was leaving the company to have more independence and freedom to publish.
  • The AGI Readiness team he oversaw will be disbanded.

Miles Brundage, a senior policy advisor and head of the AGI Readiness team at OpenAI, is leaving the company. He announced the decision on Wednesday in a post on X, accompanied by a Substack article explaining his reasoning. The AGI Readiness team he oversaw will be disbanded, with its members distributed among other parts of the company.

Brundage is the latest high-profile safety researcher to leave OpenAI. In May, the company dissolved its Superalignment team, which focused on the risks of artificial superintelligence, after the departure of its two leaders, Jan Leike and Ilya Sutskever. Other executives who have departed in recent months are Mira Murati, its chief technology officer; Bob McGrew, its chief research officer; and Barret Zoph, a vice president of research.

OpenAI did not respond to a request for comment.

For the past six years, Brundage has advised OpenAI’s executives and board members about how to prepare for the rise of artificial intelligence that rivals human intelligence — something many experts think could fundamentally transform society.

He’s been responsible for some of OpenAI’s biggest innovations in safety research, including instituting external red teaming, which involves outside experts looking for potential problems in OpenAI products.

Brundage said he was leaving the company to have more independence and freedom to publish. He referred to disagreements he had with OpenAI about limitations on research he was allowed to publish and said that “the constraints have become too much.”

He also said that working within OpenAI had biased his research and made it difficult to be impartial about the future of AI policy. In his post on X, Brundage described a prevailing sentiment within OpenAI that “speaking up has big costs and that only some people are able to do so.”

This article was originally published by Darius Rafieyan at Business Insider (https://www.businessinsider.com/another-safety-researcher-is-leaving-openai-2024-10).
