OpenAI CEO Sam Altman thinks artificial intelligence has incredible benefits for society, but he’s also worried about how bad actors will use the technology.
In an ABC News interview this week, he warned that “there will be other people who will not put some of the safety limits that we have put in place.”
OpenAI released its AI chatbot ChatGPT in late November, and this week it unveiled a more capable successor called GPT-4.
Other companies are rushing to offer ChatGPT-like tools, giving OpenAI plenty of competition to worry about, even with Microsoft as a major investor.
“It’s competitive out there,” OpenAI co-founder and chief scientist Ilya Sutskever told The Verge in an interview published this week. “GPT-4 is not easy to develop…there are a lot of companies that want to do the same, so from a competitive point of view, you can see this as a maturing of the domain.”
Sutskever was explaining OpenAI’s decision (safety being the other stated reason) to reveal little about the inner workings of GPT-4, a move that led many people to question whether the name “OpenAI” still made sense. But his comments were also an acknowledgment of the multitude of rivals nipping at OpenAI’s heels.
Some of those rivals, Altman suggested, may be far less concerned than OpenAI with putting guardrails on their ChatGPT and GPT-4 counterparts.
“One thing that worries me is that…we’re not going to be the only creator of this technology,” he said. “There will be other people who will not put some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate it, how to handle it.”
OpenAI this week shared a “system card” document describing how its testers deliberately tried to get GPT-4 to provide dangerous information, such as how to make a dangerous chemical from basic ingredients and kitchen supplies, and how the company fixed the issues before launch.
Lest anyone doubt the malicious intent of bad actors turning to AI: phone scammers are now using voice-cloning AI tools to sound like loved ones in desperate need of financial help, and succeeding in extracting money from victims.
“I’m particularly concerned that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”
For someone who runs a company that sells AI tools, Altman has been remarkably open about the dangers posed by artificial intelligence. That may have something to do with OpenAI’s history.
OpenAI was established in 2015 as a non-profit organization focused on the safe and transparent development of AI (as the company’s name indicates). It switched to a hybrid “capped-profit” structure in 2019.
Tesla and Twitter CEO Elon Musk, who was also a co-founder of OpenAI and an early major donor, criticized this change, noting last month: “OpenAI was created as an open source (that’s why I named it ‘Open’ AI), non-profit company to act as a counterweight to Google, but now it has become a closed-source, profit-maximizing company effectively controlled by Microsoft.”
In early December, Musk called ChatGPT “scary good” and warned, “We are not far from dangerously strong AI.”
But Altman has warned the public just as much, if not more, even as he presses ahead with OpenAI’s work. Last month, in a series of tweets, he worried about “how people in the future will see us.”
“We also need enough time for our institutions to figure out what to do,” he wrote. “Regulation will be critical and will take time to figure out…having the time to understand what is happening, how people want to use these tools, and how society can co-evolve is essential.”