Apple CEO Tim Cook has addressed the topic of AI hallucinations, saying he would never claim the technology is 100% foolproof.
Apple unveiled Apple Intelligence at WWDC earlier this week. The company has won widespread praise for making a compelling case for why the average person would want to use AI and what benefits they would see. While Apple is using its own AI models, it is also offering customers the option to tap into OpenAI’s ChatGPT.
One of the most concerning issues with AI, however, is its tendency to hallucinate, the term used to describe when an AI confidently presents wrong information as fact. In an interview with The Washington Post, columnist Josh Tyrangiel asked: “What’s your confidence that Apple Intelligence will not hallucinate?”
“It’s not 100 percent. But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in,” Cook replied. “So I am confident it will be very high quality. But I’d say in all honesty that’s short of 100 percent. I would never claim that it’s 100 percent.”
Cook’s comments echo those of other CEOs and tech experts. Alphabet CEO Sundar Pichai has admitted that AI hallucinations are “expected.”
“No one in the field has yet solved the hallucination problems,” he added. “All models do have this as an issue.”
Pichai went on to say that part of the problem had to do with the fact that researchers still “don’t fully understand” how AI works.
“There is an aspect of this which we call—all of us in the field—call it a ‘black box,’” he said. “And you can’t quite tell why it said this, or why it got it wrong.”
Reports leading up to WWDC indicated that Apple was eager to avoid some of the high-profile missteps its rivals had made, including embarrassing incidents involving AI hallucinations. In fact, those reports indicated Apple has focused on neural network-based AI specifically to address these issues, as we covered previously:
The company has long been aware of the potential of “neural networks” — a form of AI inspired by the way neurons interact in the human brain and a technology that underpins breakthrough products such as ChatGPT.
Chuck Wooters, an expert in conversational AI and LLMs who joined Apple in December 2013 and worked on Siri for almost two years, said: “During the time that I was there, one of the pushes that was happening in the Siri group was to move to a neural architecture for speech recognition. Even back then, before large language models took off, they were huge advocates of neural networks.”
In the meantime, Cook’s admission that he would “never claim that it’s 100 percent” is indicative of the challenges AI firms are facing as they continue to push the technology forward.