
“Running with scissors is a cardio exercise that gets your heart rate up and requires concentration and focus,” Google's new AI search feature says. “Some say it can also improve your pores and give you strength.”
Google's AI feature pulled this response from a website called Little Old Lady Comedy, which, as the name suggests, is a comedy blog. But the mistake is so egregious that it's circulating on social media alongside Google's clearly incorrect AI overview. In effect, everyday users are now red-teaming these products on social media.
In cybersecurity, some companies hire “red teams” (ethical hackers) who attempt to compromise their products as if they were malicious actors. If the red team discovers vulnerabilities, the company can fix them before the product ships. Google implemented a form of red teaming before rolling out its AI features in Google Search, which is estimated to process trillions of queries per year.
It's surprising, then, when a company with Google's resources still ships a product with such obvious flaws. That's why mocking the failures of AI products has become a meme, especially in an era when AI is becoming more ubiquitous. We've seen it with ChatGPT's misspellings, video generators that don't understand how humans eat spaghetti, and X's Grok AI news summaries that, like Google's AI, don't understand sarcasm. But these memes can actually serve as useful feedback for the companies developing and testing AI.
Despite the high-profile nature of these flaws, technology companies often downplay their impact.
“The examples we have seen are generally very rare queries and are not representative of most people’s experiences,” Google told TechCrunch in an emailed statement. “We conducted extensive testing before launching this new experience and will use these individual cases as we continue to improve the system as a whole.”
Not every user sees the same AI results, and by the time a particularly bad AI suggestion makes the rounds, the issue may have already been fixed. In one recent example that went viral, Google suggested that if you're making pizza and your cheese isn't sticking, you can make it stickier by adding about an eighth of a cup of glue to the sauce. As it turned out, the AI pulled this answer from an 11-year-old Reddit comment by a user named “f––smith.”
Beyond being an embarrassing mistake, this is also a sign that AI content deals may be overvalued. Google, for example, signed a $60 million deal with Reddit to license content for training its AI models. Reddit signed a similar deal with OpenAI last week, and Automattic, which owns WordPress.com and Tumblr, is rumored to be in talks to sell data to Midjourney and OpenAI.
As Google has acknowledged, many of the errors circulating on social media come from unconventional searches designed to trip the AI up. At least, I hope no one is seriously searching for “the health benefits of running with scissors.” But some of these mistakes are more serious. Science journalist Erin Ross posted on X that Google was serving up incorrect information about what to do if you get a rattlesnake bite.
Ross's post, which received more than 13,000 likes, said the AI recommended applying a tourniquet to the wound, cutting the wound, and sucking out the venom. All of these are things you should not do if you're bitten, according to the U.S. Forest Service. Meanwhile, on Bluesky, author T Kingfisher amplified a post showing Google's Gemini misidentifying a poisonous toadstool as a common white button mushroom. Screenshots of the post spread to other platforms as a cautionary tale.
When a bad AI answer goes viral, the AI may become even more confused by the new content on the topic that results. On Wednesday, New York Times reporter Aric Toler posted a screenshot on X showing a query asking whether a dog has ever played in the NHL. The AI's answer was yes: for some reason, it called Calgary Flames player Martin Pospisil a dog. Now, if you run the same query, the AI pulls up an article from the Daily Dot about how Google's AI keeps thinking dogs play sports. The AI is being fed its own mistakes, poisoning it further.
This is an inherent problem with training large-scale AI models on the internet: sometimes, people on the internet lie. But just as there's no rule saying a dog can't play basketball, there's unfortunately no rule saying big tech companies can't ship bad AI products.
As the saying goes: Garbage in, garbage out.
