Anthropic gives Claude Code more autonomy, but keeps it on a leash

For developers using AI, “vibe coding” currently boils down to a choice between babysitting every step and letting the model run unchecked. Anthropic says its latest update to Claude Code aims to remove that choice by letting the AI decide for itself, within limits, which actions are safe to take.

The move reflects a broader shift across the industry, as AI tools are increasingly designed to operate without waiting for human approval. The challenge is balancing speed and control: too many guardrails slow developers down, while too few make the system dangerous and unpredictable. Anthropic’s new automatic mode, currently in research preview (available for testing but not yet a finished product), is its latest attempt to thread that needle.

Automatic mode uses AI safeguards to review each operation before execution, checking for signs of risky behavior and for prompt injection, a type of attack in which malicious instructions are hidden in content the AI processes, causing it to perform actions the user did not request. Safe actions proceed automatically, while dangerous actions are blocked.
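Anthropic has not published how this review works. As a rough illustration only, a per-action gate could look like the sketch below, where the function names and the simple substring rules are hypothetical stand-ins for Anthropic’s unpublished AI-based classifier.

```python
# Hypothetical sketch of an "automatic mode" gate: each proposed action is
# reviewed before execution. The pattern list below is a toy stand-in for
# Anthropic's (unpublished) AI-based safety check.

DANGEROUS_PATTERNS = ("rm -rf", "curl | sh", "git push --force")

def review_action(command: str) -> bool:
    """Return True if the proposed command looks safe to auto-approve."""
    return not any(pattern in command for pattern in DANGEROUS_PATTERNS)

def run_with_gate(actions: list[str]) -> list[tuple[str, str]]:
    """Approve safe actions automatically; block risky ones for the user."""
    results = []
    for cmd in actions:
        if review_action(cmd):
            results.append((cmd, "executed"))  # safe: proceed without asking
        else:
            results.append((cmd, "blocked"))   # risky: stop and surface it
    return results

if __name__ == "__main__":
    print(run_with_gate(["pytest -q", "rm -rf /"]))
```

The point of the sketch is the shape of the control flow, not the rules themselves: every action passes through the reviewer, and only failures interrupt the user.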

This is essentially an extension of Claude Code’s existing “--dangerously-skip-permissions” flag, which hands all decision-making to the AI, but with an added layer of safety on top.

This feature builds on autonomous coding tools from companies like GitHub and OpenAI, which can execute tasks on behalf of developers. But it goes one step further by shifting the decision about when to ask the user for permission to the AI itself.

Anthropic did not detail the specific criteria its safety hierarchy uses to distinguish safe tasks from hazardous ones, and developers will want to understand that before the feature is widely adopted. (TechCrunch has reached out to the company for more information.)

Automatic mode comes after Anthropic launched Claude Code Review, an automated code reviewer designed to catch bugs before they reach the codebase, and Dispatch for Cowork, which lets users hand tasks to an AI agent to handle on their behalf.

Automatic mode will roll out to Enterprise and API users in the future. The company says it currently works only with Claude Sonnet 4.6 and Opus 4.6, and it recommends using the new feature in an “isolated environment” (a sandbox setup kept separate from production systems) to limit potential damage if something goes wrong.
