Anthropic Introduces Agentic Code Review Tool to Claude Code

Anthropic launched a new artificial intelligence (AI)-based coding tool on Monday that reviews pull requests (PRs) before they are deployed. Dubbed Code Review, the tool lives in Claude Code and can be activated for specific repositories or for all of them. According to the company, the tool runs an extensive, multi-agent review system designed to be thorough. The AI firm also said the new agentic tool is similar to one it runs internally on "nearly every PR".

Code Review Comes to Claude Code

The San Francisco-based AI startup announced and described the new tool in a post. With the advent of vibe coding platforms and AI assistants that can write and edit code, organisations are seeing an increasing number of code submissions. Faced with this volume, Anthropic says, overworked human reviewers often skim through code rather than analyse it in detail.


This means code with bugs and severe flaws can be approved and merged into critical codebases. Code Review is said to be Anthropic's answer to this problem, and the company describes it as conducting "deep, multi-agent reviews that catch bugs human reviewers often miss".

Activating Code Review for an organisation requires admin approval. Admins must enable the tool in Claude Code's settings, install the GitHub App, and select the repositories where the AI will review PRs. After this, all new PRs are automatically reviewed with no further configuration.

When a PR comes in, Code Review dispatches a team of AI agents that search for bugs in parallel, verify the findings to remove false positives, and rank each bug by severity. Large or complex PRs get more agents and a deeper read, while trivial changes receive a lighter review. A typical review can take up to 20 minutes. Once the analysis is complete, the result is added to the PR as a single overview comment, along with in-line comments for specific bugs.
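The dispatch-verify-rank flow described above can be sketched roughly as follows. This is an illustrative outline only, not Anthropic's actual implementation: the names (Finding, review_chunk, verify, SEVERITY_RANK) are assumptions, and the agent and verification steps are stubbed out where a real system would call a model.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# Hypothetical severity scale used for ranking findings.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    file: str
    line: int
    severity: str
    description: str

def review_chunk(chunk: str) -> list:
    """Stub for one agent reviewing a slice of the diff in parallel.

    A real agent would call a model here; this stub returns no findings.
    """
    return []

def verify(finding: Finding) -> bool:
    """Stub check that would filter out false positives."""
    return True

def review_pr(diff_chunks: list) -> list:
    # 1. Agents search for bugs in parallel, one per diff chunk.
    with ThreadPoolExecutor() as pool:
        raw = [f for chunk_findings in pool.map(review_chunk, diff_chunks)
               for f in chunk_findings]
    # 2. Verify each finding to remove false positives.
    confirmed = [f for f in raw if verify(f)]
    # 3. Rank the confirmed bugs by severity for the overview comment.
    confirmed.sort(key=lambda f: SEVERITY_RANK[f.severity])
    return confirmed
```

With the stubs as written, `review_pr` returns an empty list; the point is the three-stage shape (parallel search, verification, severity ranking) rather than the detection logic itself.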

Anthropic claims Code Review is more comprehensive, and more expensive, than the open-source Claude Code GitHub Action. The tool is charged based on token usage, and the company says an average bill can range between $15 (roughly Rs. 1,380) and $25 (roughly Rs. 2,300). By setting total monthly spend limits across reviews, organisations can control their expenditure, restrict the tool to selected repositories, and track its performance through the analytics dashboard.
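A monthly spend cap of the kind described could be enforced with a simple accumulator. The following is a minimal sketch under that assumption; the class, its names, and the dollar figures are illustrative and do not reflect Anthropic's billing logic.

```python
class ReviewBudget:
    """Tracks token-based review spend against a monthly cap (illustrative)."""

    def __init__(self, monthly_limit_usd: float):
        self.monthly_limit_usd = monthly_limit_usd
        self.spent_usd = 0.0

    def can_run(self, estimated_cost_usd: float) -> bool:
        # Skip a review if it would push total spend past the monthly cap.
        return self.spent_usd + estimated_cost_usd <= self.monthly_limit_usd

    def record(self, cost_usd: float) -> None:
        # Accumulate the actual cost of a completed review.
        self.spent_usd += cost_usd

# Example: a $100 monthly cap allows a $20 review, then records it.
budget = ReviewBudget(monthly_limit_usd=100.0)
if budget.can_run(20.0):
    budget.record(20.0)
```

The design choice here is to gate before running rather than after, so a review that would exceed the cap is never started.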

