OpenAI's Superalignment Team Disbands Amid Internal Strife

Who’s keeping AI in check if the watchdogs are quitting?
OpenAI's team dedicated to managing the long-term risks of superintelligent AI has disbanded. This comes after high-profile departures, including co-leads Ilya Sutskever and Jan Leike.
Sutskever’s exit followed a failed attempt to oust CEO Sam Altman, which triggered a company-wide revolt. Leike cited disagreements over resource allocation as his reason for leaving.
To call this problematic is an understatement: without a dedicated safety team, OpenAI looks increasingly driven by shareholder profit, like the rest of Big Tech, rather than by what is best for society.
The team’s dissolution raises concerns about OpenAI’s commitment to controlling potentially rogue AI. With these researchers gone, who will ensure AI development remains safe and beneficial? How can we trust AI to not outsmart us if even the experts are stepping back?
Read the full article on Wired.
----
💡 We're entering a world where intelligence is synthetic, reality is augmented, and the rules are being rewritten in front of our eyes.
Staying up to date in a fast-changing world is vital. That is why I launched Futurwise: a personalized AI platform that transforms information chaos into strategic clarity. With one click, you can bookmark and summarize any article, report, or video in seconds, tailored to your tone, interests, and language. Visit Futurwise.com to get started for free!
