
OpenClaw creator Peter Steinberger posted publicly that Anthropic had banned his account.
The ban did not last long. Hours later, after the post went viral, Steinberger said his account had been restored. Because Steinberger is currently employed by Anthropic rival OpenAI, the post drew hundreds of comments, many of them floating conspiracy theories. Among those comments was one written by an Anthropic engineer, who told the prominent developer that Anthropic had never intended to ban his use of OpenClaw and offered to help.
I’m not sure if the engineer’s intervention is what restored his account. (We’ve contacted Anthropic about this.) But the entire comment thread was enlightening on several levels.
To recap recent history: the ban follows last week’s news that Anthropic’s Claude subscription would no longer cover “third-party harnesses, including OpenClaw,” as the AI model company put it.
OpenClaw users will now have to pay separately for their usage, metered through Claude’s API. Anthropic, which sells its own competing agent, Cowork, is in effect imposing a “claw tax.” Steinberger said he was following the new rules and paying through his own API key, but was banned anyway.
Anthropic said it introduced the pricing change because the subscription was not built for the “usage patterns” of claws. A claw can be far more computationally intensive than plain prompts or simple scripts because it runs continuous inference loops, automatically repeats or retries operations, and can be connected to many other third-party tools.
But Steinberger didn’t accept that explanation. After Anthropic changed its prices, he posted, “Funny how the timing matches up. First copy some popular features into a closed harness, then lock down the open-source one.” Although he didn’t say so explicitly, he may have been referring to features recently added to Claude’s agent Cowork, such as Claude Dispatch, which lets users remotely control agents and assign them tasks. Dispatch launched a few weeks before Anthropic changed its OpenClaw pricing policy.
Steinberger’s frustration with Anthropic surfaced again Friday.
“You had a choice but you made the wrong choice,” one person wrote, implying the situation was partly his own fault for taking a job at OpenAI rather than Anthropic. Steinberger responded, “One welcomed me, the other sent legal threats.”
Ouch.
When several people asked why he uses Claude instead of his employer’s model, he explained that he uses it only for testing, to ensure that OpenClaw updates don’t break things for Claude users.
“You have to separate the two: what I do at the OpenClaw Foundation, where I want to make sure OpenClaw works well for *all* model providers, and what I do at OpenAI, which is to support our future product strategy,” he explained.
Several people also pointed out that the reason he has to test Claude at all is that Anthropic’s model remains a more popular choice among OpenClaw users than ChatGPT, even after the price change. He responded, “We’re working on that.” (Which is a clue as to what his job at OpenAI involves.)
Steinberger did not respond to a request for comment.