OpenAI CEO Sam Altman Addresses Safety Concerns and Controversy Surrounding GPT-4o Voice Model
Category: Machine Learning | Wednesday, May 22, 2024, 15:48 UTC
OpenAI CEO Sam Altman addresses concerns over the safety of the company's AI technology and the recent controversy over a ChatGPT voice resembling that of actress Scarlett Johansson. Altman defends the safety of the technology and urges developers to seize the current moment to build innovative products. Recent actions, such as dissolving the team responsible for mitigating long-term AI risks, have raised questions about OpenAI's commitment to AI safety. Altman issued a public apology to Johansson but insisted that the voice was not based on hers and that OpenAI's technology creates human-like voices from publicly available text.
On Wednesday, May 22, 2024, OpenAI CEO Sam Altman spoke at a Microsoft event in Seattle to address growing concerns about the safety and potential risks of the company's AI technology. Altman's remarks came on the heels of a fresh controversy over a ChatGPT voice that closely resembled that of actress Scarlett Johansson. Altman, who rose to global prominence with the release of ChatGPT in 2022, is also facing criticism over the departure of the team responsible for mitigating long-term AI risks. Despite these challenges, Altman advised developers not to delay their plans and to take advantage of the current moment to build products using OpenAI's technology.
As a close partner of Microsoft, OpenAI provides the foundational technology behind the company's AI tools, chiefly the GPT-4 large language model. Microsoft has invested heavily in AI, rolling out new products and urging customers to embrace generative AI capabilities. Altman acknowledged that while GPT-4 is not perfect, it is generally considered robust and safe enough for a wide variety of uses, and he emphasized that the company has put significant work into ensuring the safety and robustness of its AI models.
However, recent actions by OpenAI have raised questions about its commitment to AI safety. Last week, the company dissolved its "superalignment" group, the team dedicated to mitigating long-term AI risks. Team co-leader Jan Leike, who announced his departure from OpenAI, criticized the company for prioritizing shiny new products over safety and said he was concerned it was not on the right trajectory to address these issues.
The controversy was further fueled by a public statement from Scarlett Johansson, who expressed outrage over a voice used by OpenAI's ChatGPT that closely resembled her voice in the 2013 film "Her." The voice in question, called "Sky," was featured in last week's release of OpenAI's more human-like GPT-4o model. In response to Johansson's statement, Altman issued a public apology, while maintaining that the voice was not based on hers and that OpenAI's technology creates human-like voices from publicly available text, such as movie scripts. He added that the company regrets not considering the implications of using a voice resembling a well-known individual's without obtaining her permission.
Throughout his remarks, Altman defended the safety of OpenAI's technology and urged developers to keep building innovative products with it, while addressing the recent controversies and reiterating the company's commitment to AI safety going forward.