
Anthropic’s Super Bowl ad, one of four spots the AI company released Wednesday, opens with the word “BETRAYAL” splashed boldly across the screen. The camera pans to a man earnestly asking a chatbot (clearly meant to depict ChatGPT) for advice on how to talk to his mother.
The bot, portrayed by a blonde woman, offers some classic advice: start by listening; enjoy a nature walk. Then it cuts to an ad for a fictional (I hope!) cougar dating site called Golden Encounters. The spot closes with Anthropic’s message that ads are coming to AI, but not to its own chatbot, Claude.
Another ad features a lanky young man seeking advice on building a six-pack. After he enters his height, age, and weight, the bot serves him an ad for height-boosting insoles.
Anthropic’s ads cleverly target OpenAI users, following that company’s recent announcement that ads are coming to ChatGPT’s free tier. And they caused an immediate stir, generating headlines about Anthropic “mocking,” “skewering,” and “dunking on” OpenAI.
They’re funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t find them funny: they prompted him to write a novel-length rant calling his rival “dishonest” and “authoritarian.”
In the post, Altman explained that the ad-supported tier is meant to shoulder the cost of providing ChatGPT, still the most popular chatbot, free to millions of users.
But the OpenAI CEO claimed the ad was “dishonest” in implying that ChatGPT would distort conversations to insert ads. “We certainly would not run ads in the manner depicted by Anthropic,” Altman wrote in a social media post. “We are not stupid and we know users will reject this.”
In fairness, OpenAI has promised that ads will be separate, labeled, and will not influence the chat itself. But the company has also said it plans to generate conversation-specific ads, which is exactly the behavior Anthropic’s ads lampoon. “Based on the current conversation, we plan to test ads at the bottom of ChatGPT replies when there is a relevant sponsored product or service,” OpenAI explained in its blog post.
Altman then lobbed equally dubious claims at his rival. “Anthropic provides expensive products to the rich,” he wrote, adding that OpenAI “felt strongly that we needed to make AI available to the billions of people who can’t afford to pay for a subscription.”
However, Claude also has a free tier; its subscription plans cost $0, $17, $100, and $200 per month, while ChatGPT’s cost $0, $8, $20, and $200. You could argue the pricing tiers are pretty much the same.
Altman also claimed in his post that “Anthropic wants to control what people do with AI.” He accused Anthropic of blocking Claude Code from being used by “companies they don’t like,” such as OpenAI, and of dictating what people can and cannot use AI for.
To be fair, Anthropic’s entire marketing pitch has been “responsible AI” from the beginning. The company was founded by OpenAI alumni who said their time there convinced them of the importance of AI safety.
Still, both chatbot makers have usage policies and AI guardrails, and both talk about AI safety. OpenAI allows ChatGPT to be used for erotica while Anthropic does not, but OpenAI, like Anthropic, has decided that some content should be blocked, especially where mental health is concerned.
But Altman took the claim that Anthropic tells people what to do to an extreme when he accused the company of being “authoritarian.”
“An authoritarian company is not going to take us there on its own, let alone other obvious risks. It’s a dark path,” he wrote.
Using the word “authoritarian” to describe a cheeky Super Bowl ad is misleading at best, and especially tactless given the current geopolitical climate, in which protesters around the world are being killed by agents of their own governments. Whatever else the ads accomplished, Anthropic clearly struck a nerve with its business rival.