Cursor’s new coding model Composer 2 is here: It beats Claude Opus 4.6 but still trails GPT-5.4
Tech

Scoopico
Published: March 20, 2026
Last updated: March 20, 2026 12:28 am
Contents

  • Cursor is pitching long-horizon coding, not just better completions
  • The benchmark gains are substantial, even if GPT-5.4 still leads on one key chart
  • Why the “locked to Cursor” point matters for buyers
  • The bigger picture: Cursor is making an operational argument

Cursor, the San Francisco AI coding platform from startup Anysphere valued at $29.3 billion, has launched Composer 2, a new in-house coding model available inside its agentic AI coding environment that posts dramatically improved benchmark scores over its prior in-house model.

It's also launching Composer 2 Fast, a higher-priced but faster variant, and making it the default experience for users.

Here's the cost breakdown:

  • Composer 2 Standard: $0.50/$2.50 per 1 million input/output tokens

  • Composer 2 Fast: $1.50/$7.50 per 1 million input/output tokens

That's a big drop from Composer 1.5, Cursor's previous in-house model from February, which cost $3.50 per million input tokens and $17.50 per million output tokens; Composer 2 is about 86% cheaper on both counts.

Composer 2 Fast is also roughly 57% cheaper than Composer 1.5.
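The price drops can be checked directly from the per-million-token rates above. A minimal sketch, using only the figures cited in this article:

```python
# Published per-million-token rates from Cursor's announcement,
# as cited in this article (input, output in USD per 1M tokens).
RATES = {
    "Composer 1.5":    {"input": 3.50, "output": 17.50},
    "Composer 2":      {"input": 0.50, "output": 2.50},
    "Composer 2 Fast": {"input": 1.50, "output": 7.50},
}

def pct_cheaper(new: float, old: float) -> float:
    """Percent saved moving from the old rate to the new rate."""
    return (1 - new / old) * 100

for model in ("Composer 2", "Composer 2 Fast"):
    for kind in ("input", "output"):
        saving = pct_cheaper(RATES[model][kind], RATES["Composer 1.5"][kind])
        print(f"{model} {kind}: {saving:.0f}% cheaper than Composer 1.5")
```

Both the input and output rates fall by the same ratio in each case, which is why the article can quote a single percentage per model: roughly 86% for Composer 2 and roughly 57% for Composer 2 Fast.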

There are also discounts for "cache-read pricing," that is, resending tokens the model has already processed: $0.20 per million tokens for Composer 2 and $0.35 per million for Composer 2 Fast, versus $0.35 per million for Composer 1.5.
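Cache-read pricing matters because agentic coding sessions repeatedly resend large, unchanged context. A rough sketch of how a single request's cost breaks down under these rates (the token counts are invented for illustration, not from Cursor):

```python
# (fresh input, cache-read input, output) rates in USD per 1M tokens,
# taken from the figures cited in this article.
PRICING = {
    "Composer 2":      (0.50, 0.20, 2.50),
    "Composer 2 Fast": (1.50, 0.35, 7.50),
}

def request_cost(model: str, fresh_in: int, cached_in: int, out: int) -> float:
    """Estimated dollar cost of one request given token counts."""
    fresh_rate, cache_rate, out_rate = PRICING[model]
    total = fresh_in * fresh_rate + cached_in * cache_rate + out * out_rate
    return total / 1_000_000

# Hypothetical request: 20k fresh input tokens, 80k cache hits, 4k output tokens.
print(f"${request_cost('Composer 2', 20_000, 80_000, 4_000):.4f}")  # $0.0360
```

Under these assumed counts, the cache hits cover 80% of the input at less than half the fresh-input rate, which is where long sessions recoup most of their cost.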

It also matters that this appears to be a Cursor-native release, not a broadly distributed standalone model. In the company’s announcement and model documentation, Composer 2 is described as available in Cursor, tuned for Cursor’s agent workflow and integrated with the product’s tool stack.

The materials provided do not indicate separate availability through external model platforms or as a general-purpose API outside the Cursor environment.

Cursor is pitching long-horizon coding, not just better completions

The deeper technical claim in this release is not merely that Composer 2 scores higher than Composer 1.5. It is that Cursor says the model is better suited to long-horizon agentic coding.

In its blog, Cursor says the quality gains come from its first continued pretraining run, which gave it a stronger base for scaled reinforcement learning. From there, the company says it trained Composer 2 on long-horizon coding tasks and that the model can solve problems requiring hundreds of actions.

That framing is important because it addresses one of the biggest unresolved issues in coding AI. Many models are good at isolated code generation. Far fewer remain reliable across a longer workflow that includes reading a repository, deciding what to change, editing multiple files, running commands, interpreting failures and continuing toward a goal.

Cursor’s documentation reinforces that this is the use case it cares about. It describes Composer 2 as an agentic model with a 200,000-token context window, tuned for tool use, file edits and terminal operations inside Cursor.

It also notes training techniques such as self-summarization for long-running tasks. For developers already using Cursor as their main environment, that tighter tuning may matter more than a generic leaderboard claim.

The benchmark gains are substantial, even if GPT-5.4 still leads on one key chart

Cursor’s published results show a clear improvement over prior Composer models. The company lists Composer 2 at 61.3 on CursorBench, 61.7 on Terminal-Bench 2.0, and 73.7 on SWE-bench Multilingual.

That compares with Composer 1.5 at 44.2, 47.9 and 65.9, and Composer 1 at 38.0, 40.0 and 56.9.
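Laid side by side, those published scores show the generation-over-generation gains are largest on Cursor's own benchmark. A small sketch using only the numbers above:

```python
# Cursor's published scores, as cited in this article.
BENCHES = ["CursorBench", "Terminal-Bench 2.0", "SWE-bench Multilingual"]
SCORES = {
    "Composer 1":   [38.0, 40.0, 56.9],
    "Composer 1.5": [44.2, 47.9, 65.9],
    "Composer 2":   [61.3, 61.7, 73.7],
}

for i, bench in enumerate(BENCHES):
    gain = SCORES["Composer 2"][i] - SCORES["Composer 1.5"][i]
    print(f"{bench}: +{gain:.1f} points over Composer 1.5")
```

The jump is about 17 points on CursorBench, about 14 on Terminal-Bench 2.0, and about 8 on SWE-bench Multilingual.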

The release is more measured than some model launches because Cursor is not claiming universal leadership.

On Terminal-Bench 2.0, which measures how well an AI agent performs tasks in command line terminal-style interfaces, GPT-5.4 still leads at 75.1, while Composer 2 scores 61.7, ahead of Opus 4.6 at 58.0, Opus 4.5 at 52.1 and Composer 1.5 at 47.9.

That makes Cursor’s pitch more pragmatic and arguably more useful for buyers. The company is not saying Composer 2 is the single best model at everything. It is saying the model has moved into a more competitive quality tier while offering more attractive economics and stronger integration with the product developers are already using.

Cursor also included a performance-versus-cost chart on its CursorBench benchmarking suite that appears designed to make a Pareto-style argument for Composer 2.

In that graphic, Composer 2 sits at a stronger cost-to-performance point than Composer 1.5 and compares favorably with higher-cost GPT-5.4 and Opus 4.6 settings shown by Cursor. The company’s message is not simply that Composer 2 scores higher than its predecessor, but that it may offer a more efficient cost-to-intelligence tradeoff for everyday coding work inside Cursor.
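The Pareto-style claim can be stated precisely: one option dominates another when it is at least as cheap and scores at least as high, and strictly better on one axis. A minimal sketch using only the article's output prices and CursorBench scores (Cursor's chart presumably uses its own blended cost metric, which is not reproduced here):

```python
# (cost in USD per 1M output tokens, CursorBench score), per this article.
composer_2 = (2.50, 61.3)
composer_1_5 = (17.50, 44.2)

def dominates(a: tuple, b: tuple) -> bool:
    """True if option a is no costlier and no worse than b, strictly better on one axis."""
    (cost_a, score_a), (cost_b, score_b) = a, b
    return (cost_a <= cost_b and score_a >= score_b
            and (cost_a < cost_b or score_a > score_b))

print(dominates(composer_2, composer_1_5))  # True: cheaper AND higher-scoring
```

On these two axes, Composer 2 strictly dominates Composer 1.5, which is the cleanest version of the efficiency argument Cursor's chart is making.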

Why the “locked to Cursor” point matters for buyers

For readers deciding whether to use Composer 2, the most important question may not be benchmark performance alone. It may be whether they want a model optimized for Cursor’s own product experience.

That can be a strength. According to the documentation, Composer 2 can access Cursor’s agent tool stack, including semantic code search, file and folder search, file reads, file edits, shell commands, browser control and web access.

That kind of integration can be more valuable than raw model quality if the goal is to complete real software tasks rather than produce impressive one-shot answers.

But it also narrows the addressable audience. Teams looking for a model they can deploy broadly across multiple external tools and platforms should recognize that Cursor is presenting Composer 2 as a model for Cursor users, not as a generally available standalone foundation model.

The bigger picture: Cursor is making an operational argument

The significance of Composer 2 is not that Cursor has suddenly taken the top spot on every coding benchmark. It has not. The more important point is that Cursor is making an operational argument: its model is getting better, its pricing is low enough to encourage broader use, and its faster tier is responsive enough that the company is comfortable making it the default despite the higher cost.

That combination could resonate with engineering teams that increasingly care less about abstract model prestige and more about whether an assistant can stay useful across long coding sessions without becoming prohibitively expensive.

Cursor’s broader pricing structure helps frame the competitive pressure around this launch. On its current pricing page, Cursor offers a free Hobby tier, a Pro plan at $20 per month, Pro+ at $60 per month, and Ultra at $200 per month for individual users, with higher tiers offering more usage across models from OpenAI, Anthropic and Google.

On the business side, Teams costs $40 per user per month, while Enterprise is custom-priced and adds pooled usage, centralized billing, usage analytics, privacy controls, SSO, audit logs and granular admin controls. In other words, Cursor is not just charging for access to a coding model. It is charging for a managed application layer that sits on top of multiple model providers while adding team features, governance and workflow tooling.

That model is increasingly under pressure as first-party AI companies push deeper into coding itself. OpenAI and Anthropic are no longer just selling models through third-party products; they are also shipping their own coding interfaces, agents and evaluation frameworks — such as Codex and Claude Code — raising the question of how much room remains for an intermediary platform.

Commenters on X, while unverified and not necessarily representative of the broader market, have increasingly described moving from Cursor to Anthropic’s Claude Code, especially among power users drawn to terminal-first workflows, longer-running agent behavior and lower perceived overhead.

Some of those posts describe frustration with Cursor’s pricing, context loss or editor-centric experience, while praising Claude Code as a more direct and fully agentic way to work. Even treated cautiously, that kind of social chatter points to the strategic problem Cursor faces: it has to prove that its integrated platform, team controls and now its own in-house models add enough value to justify sitting between developers and the model makers’ increasingly capable coding products.

That makes Composer 2 strategically important for Cursor.

By offering a much cheaper in-house model than Composer 1.5, tuning it tightly to Cursor’s own tool stack and making a faster version the default, the company is trying to show that it provides more than a wrapper around outside systems.

The challenge is that as first-party coding products improve, developers and enterprise buyers may increasingly ask whether they want a separate AI coding platform at all, or whether the model makers’ own tools are becoming sufficient on their own.

