OpenAI has disbanded the team responsible for mitigating the safety risks of its advanced AI systems. According to OpenAI, the group will be integrated into other research departments. One of the team's leaders has criticized the company on X.
OpenAI confirmed to Bloomberg that it has disbanded the "Superalignment" team. The team was established in July 2023 to reduce the risks of future AI systems considered "smarter" than humans. Its goal was to solve the major technical challenges of steering and controlling "superintelligent AI" by 2027.
OpenAI told the news agency that the group will no longer exist as an independent division, but will instead be "integrated more deeply into our research efforts to achieve our safety goals." The team's leaders, Ilya Sutskever and Jan Leike, resigned from the company this week. The former is a co-founder of OpenAI and voted to oust CEO Sam Altman last year. He did not give a reason for his departure.
Leike said on X that he had been at odds with OpenAI's leadership for some time over "core priorities" and that a "breaking point" had now been reached. The team reportedly did not receive sufficient resources in recent months to continue its "crucial" research. "Building machines smarter than humans is an inherently dangerous endeavor. OpenAI carries an enormous responsibility on behalf of all of humanity, but safety culture and processes have taken a back seat to shiny products in recent years."
In a post on X, Altman thanked Leike for his contributions to safety research. He also responded briefly to the researcher's posts: "He's right, we still have a lot to do, and we are committed to doing it."