Google has fired Blake Lemoine, a software engineer who made headlines for claiming the company’s AI had achieved sentience.
Lemoine worked at Google as an engineer on the company’s LaMDA chatbot technology. Based on his conversations with the system, he became increasingly convinced that LaMDA had achieved sentience and self-awareness. Others, both inside and outside the company, were not convinced.
“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” said Margaret Mitchell, who led Google’s AI ethics team before being fired. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”
After placing Lemoine on leave in June, Google has now fired him for violating the company’s policies.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in an email to Reuters.
Lemoine’s case illustrates the complex challenges of AI development. People are prone to perceiving intelligence and sentience where none exists. Conversely, the ongoing effort to push back against such false positives could, in theory, make it harder to recognize true sentience if and when it ever emerges.
More than anything, the entire situation with Lemoine demonstrates why companies like Google should be investing in top-tier AI ethicists instead of firing them.