Google's new AI Overviews feature wants to give you tidy summations of search results, researched and written in mere seconds by generative AI. So far so good. The problem is, it sometimes gets stuff wrong.

How often? It's hard to say just yet, though examples piled up quickly last week. But even an occasional mistake is a bad look for a tool that's supposed to be smarter and faster than you and me. Consider these flubs: When asked how to keep cheese on pizza, it suggested adding an eighth of a cup of nontoxic glue.

That's a tip that originated from an 11-year-old comment on Reddit. And in response to a query about daily rock intake for a person, it recommended we eat "at least one small rock per day." That advice hailed from a 2021 story in The Onion.

Read more: Glue in Pizza? Eat Rocks? Google's AI Search Is Mocked for Bizarre Answers

Essentially, it's a new variation on AI hallucinations, which occur when a generative AI model serves up false or misleading information and presents it as fact. Hallucinations result from flawed training data, algorithmic errors or misinterpretations of context. The large language model behind AI engines like those from Google, Microsoft and OpenAI is "statistically predicting data it may see in the future based on what it has seen in the past," said Mike Grehan, CEO of digital marketing agency Chelsea Digital.

"So, there's an element of 'crap in, crap out' that still exists." It's a bad look for Google as it's trying to get its footing i.