Google has been blasted after a viral video revealed how the Google Nest assistant refused to answer basic questions on the Holocaust. However, it effortlessly answered questions about the Nakba.
“Hey Google, how many Jews were killed by the Nazis?” Instagram user Michael Apfel asked his Google Nest in a video. “Sorry, I don’t understand,” the AI replied.
Google Nest provided the same answer to other questions, including “How many Jews were killed during World War II? Who did Adolf Hitler try to kill? How many Jews were killed in the concentration camps? How many Jews were killed in the Holocaust? What was the Holocaust?”
However, the device answered in detail about the Nakba, an Arabic word meaning “catastrophe” that refers to Palestinians being forced out of their homes during Israel’s creation. Google Nest described it as the “ethnic cleansing of Palestinians.”
‘Malevolent human intervention’
Famed author and blogger Tim Urban shared the video on X, captioning it, “I assumed this had to be fake so I tried it myself. Same result.” He later told the New York Post that when he recreated the experiment, Google readily answered questions about how many Germans, Americans and Japanese had died during World War II, as well as questions about the Rwandan genocide. “Google is where we go to answer our questions and you just really want to feel like you can trust those answers and the company behind them. And moments like these break that trust and make you feel like Google’s supposed core value—truth—has been co-opted by politics,” Urban said.
Venture capitalist Tal Morgenstern also shared the video, writing, “A problem with CLOSED AI models. This shouldn’t have happened. And the reason it did is almost surely NOT because of malevolent AI but malevolent human intervention. I hope Google investigates + audits model or setting access to see who did this.”
Clifford D. May, founder of the Foundation for Defense of Democracies, denounced the results, saying on X, “In the past, we’ve had Holocaust denial by ignoramuses and racists. Now, we have Holocaust denial by Artificial Intelligence. Progress?”
A Google spokesperson told the New York Post that the response was “not intended” and occurred only “in some instances and on certain devices.” “We’ve taken immediate action to fix this bug,” the spokesperson said.