ChatGPT, created by US company OpenAI, has seen several upgrades since its launch. The AI capabilities of its underlying technology have become so sophisticated that people have begun asking whether AI could replace jobs or be used to spread misinformation. Sam Altman, CEO of OpenAI, has now said he is “a little bit scared” of his company’s invention but remains positive about the good it can do.
In an interview with ABC News, Altman said he believes AI technology comes with real dangers, but can also be “the greatest technology humanity has yet developed”, drastically improving people’s lives.
“We’ve got to be careful here. I think people should be happy that we are a little bit scared of this,” Altman was quoted as saying. He said that if he wasn’t scared, “you should either not trust me or be very unhappy that I’m in this job.”
AI replacing jobs
Altman said AI will likely replace some jobs in the near future, and he is worried about how quickly that could happen. He nevertheless pointed to the bright side: the technology will also improve our lives.
“I think over a couple of generations, humanity has proven that it can adapt wonderfully to major technological shifts,” Altman said, adding, “But if this happens in a single-digit number of years, some of these shifts … That is the part I worry about the most.”
“It is going to eliminate a lot of current jobs, that’s true. We can make much better ones. The reason to develop AI at all, in terms of impact on our lives and improving our lives and upside, this will be the greatest technology humanity has yet developed,” Altman noted.
He also encouraged people to use ChatGPT as a tool rather than a replacement. Altman also discussed the positive effects AI could have on education.
“We can all have an incredible educator in our pocket that’s customised for us, that helps us learn. Education is going to have to change,” he said.
AI use in misinformation
For Altman, one persistent issue with AI language models like ChatGPT is misinformation: the program can give users factually inaccurate information.
“The thing that I try to caution people the most is what we call the ‘hallucinations problem’. The model will confidently state things as if they were facts that are entirely made up,” he said, adding that GPT-4, the latest language model, is more powerful than the one with which ChatGPT was launched.
“The right way to think of the models that we create is a reasoning engine, not a fact database,” Altman said.
“They can also act as a fact database, but that’s not really what’s special about them – what we want them to do is something closer to the ability to reason, not to memorise,” he added.
The company’s chief executive noted that the technology is incredibly potent and potentially hazardous.