Anthropic CEO goes full tech optimist with 15,000-word ode to AI

Anthropic CEO Dario Amodei wants you to know that he is not an AI “doomer.”

At least, that’s my read of the “mic drop” of a roughly 15,000-word essay that Amodei posted to his blog late Friday. (I asked Anthropic’s Claude chatbot whether it agreed, but unfortunately the post exceeded the free plan’s length limit.)

In broad strokes, Amodei paints a picture of a world in which AI’s risks are mitigated and the technology delivers hitherto unrealized prosperity, social uplift, and abundance. He insists this isn’t meant to minimize AI’s downsides; at the outset, Amodei takes aim (without naming names) at AI companies that overhype and generally oversell their technology’s capabilities. But one might argue, as this author does, that the essay leans too far in a techno-utopian direction and makes claims that simply aren’t supported by the facts.

Amodei believes that “powerful AI” could arrive as early as 2026. (By powerful AI, he means an AI that is “smarter than a Nobel Prize winner” in fields like biology and engineering, and that can do things like prove unsolved mathematical theorems, write “very good novels,” and more.) This AI, Amodei says, will be able to control any piece of software or hardware imaginable, including industrial machinery, and will essentially do most of the jobs humans do today, but better.

“(This AI) can engage in any actions, communications, or remote operations, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on,” Amodei wrote. “It has no physical embodiment other than living on a computer screen, but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.”

A lot will have to happen to get to that point.

Even the best AI today can’t “think” in the way we understand it. Models don’t so much reason as replicate the patterns they observe in their training data.

Assuming, for the sake of Amodei’s argument, that the AI industry does “solve” human-like thinking soon, would future AIs be able to keep up with tasks such as running laboratory experiments and manufacturing their own tools? The brittleness of today’s robots suggests it won’t be easy.

But Amodei is optimistic. Very optimistic.

He believes that within the next 7 to 12 years, AI could help cure almost all infectious diseases, eliminate most cancers, treat genetic diseases, and stop Alzheimer’s disease in its earliest stages. Within the next five to 10 years, Amodei believes, conditions such as PTSD, depression, schizophrenia, and addiction will be cured with AI-created drugs or genetically prevented through embryo selection (a controversial opinion), and AI-developed drugs will also exist to “adjust cognitive functions and emotional states” so that “(our brains) behave a little better and have more satisfying daily experiences.”

If this comes to fruition, Amodei predicts that the average human lifespan will double to 150 years.

“My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50 to 100 years into 5 to 10 years,” he wrote. “I call this the ‘compressed 21st century’: the idea that within a few years of the development of powerful AI, we will make all the advances in biology and medicine that we would otherwise have made in the entire 21st century.”

This also seems far-fetched, considering that AI has yet to fundamentally transform medicine, and may not for quite some time, or ever. Even if AI reduces the labor and cost involved in getting drugs through preclinical testing, those drugs can still fail at later stages, just like human-designed drugs. The AI deployed in healthcare today has in many cases been shown to be biased and dangerous, or incredibly difficult to implement in existing clinical and laboratory settings. To suggest that all of these problems and more will be solved in roughly 10 years is, well, eager, to say the least.

But Amodei doesn’t stop there.

He claims that AI could solve world hunger, turn the tide on climate change, and transform the economies of most developing countries. Amodei believes that within 5 to 10 years, AI could bring sub-Saharan Africa’s GDP per capita ($1,701 in 2022) up to China’s GDP per capita ($12,720 in 2022).

These are bold claims, to put it mildly, though they will be familiar to anyone who has listened to disciples of the “Singularity” movement, who expect similar outcomes. Amodei acknowledges that this would require a “massive commitment to global health, philanthropy and political advocacy,” which he says will happen because it is in the world’s best economic interest.

However, I would point out that this has historically not been the case in one important respect. Many of the workers responsible for labeling the datasets used to train AI are paid well below minimum wage, while their employers reap tens or hundreds of millions of dollars in capital from the results.

Amodei briefly touches on the risks of AI to civil society, suggesting that a coalition of democracies secure AI’s supply chain and block adversaries who seek to use AI for harmful purposes from the means of powerful AI production (such as semiconductors). At the same time, he suggests that AI, in the right hands, could be used to “weaken oppressive governments” and even reduce bias in the legal system. (AI has historically exacerbated bias in the legal system.)

“A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone,” Amodei wrote.

So if AI were to take over every job imaginable and perform it better and faster, wouldn’t humans suffer economically? Amodei admits that, yes, they would, and that at that point society will have to have a conversation about “how the economy should be organized.”

But he offers no solutions.

“People want a sense of accomplishment, even competition. In the post-AI world, it is perfectly possible to spend years attempting very difficult tasks with complex strategies, similar to what people do today when they embark on a research project, try to become a Hollywood actor, or start a company,” he wrote. “The fact that (a) an AI could in principle do this job better and (b) this job is no longer an economically rewarded element of the global economy doesn’t seem to matter very much to me.”

Amodei advances the notion that AI is merely a technological accelerator, that humans are naturally oriented toward “the rule of law, democracy, and Enlightenment values.” But in doing so, he ignores many of the costs of AI. AI is expected to have a huge impact on the environment, and it already has. And it’s creating inequality. Nobel Prize-winning economist Joseph Stiglitz and others have pointed out that AI-induced labor disruption could further concentrate wealth in the hands of corporations and leave workers more powerless than ever.

These companies include Anthropic, as Amodei acknowledges. (He mentions Anthropic only six times throughout the essay.) Anthropic is a business, after all, one reportedly worth close to $40 billion. And those who benefit from its AI technology are largely corporations whose sole responsibility is to increase returns for shareholders, not to better humanity.

Indeed, the essay seems cynically timed, considering that Anthropic is in the process of raising billions of dollars in venture capital. OpenAI CEO Sam Altman published a similarly techno-optimistic statement about AI’s potential shortly before OpenAI closed a $6.5 billion funding round. Maybe it’s just a coincidence!

Then again, Amodei is not a philanthropist. He, like any other CEO, has a product to promote. It just so happens that his product will “save the world,” and that those who think otherwise risk being left behind. Or so he would have you believe.