Weibo's new open source AI model VibeThinker-1.5B outperforms DeepSeek-R1 on $7,800 post-training budget

Scoopico
Published: November 12, 2025 | Last updated: November 12, 2025 8:27 pm



Contents
  • A Different Training Approach: Spectrum-to-Signal
  • Performance Across Domains
  • Guidance for Enterprise Adoption
  • Weibo's Strategy and Market Position
  • What It Means for Enterprise Technical Decision Makers

Another day in late 2025, another impressive result from a Chinese company in open source artificial intelligence.

Chinese social networking company Weibo's AI division recently released its open source VibeThinker-1.5B, a 1.5-billion-parameter large language model (LLM) that is a fine-tuned variant of rival Chinese tech firm Alibaba's Qwen2.5-Math-1.5B.

It's available now for free download and use by researchers and enterprise developers, even for commercial purposes, under a permissive MIT License on Hugging Face, GitHub, and ModelScope, with a technical report on the open access science publishing site arxiv.org.

And yet, despite its compact size, VibeThinker-1.5B achieves benchmark-topping reasoning performance on math and code tasks, rivaling or surpassing models hundreds of times its size, even outperforming Chinese rival DeepSeek's famed R1, a 671-billion-parameter model that went viral at the start of this year, on formal reasoning benchmarks.

It further eclipses Mistral AI's Magistral Medium and holds its own against Anthropic's Claude Opus 4 and OpenAI's gpt-oss-20B Medium, all while requiring a fraction of the infrastructure and investment.

It also does so having been post-trained on a budget of merely $7,800 USD for compute resources (3,900 GPU hours on Nvidia H800s), far less than the tens or even hundreds of thousands of dollars typically required to fine-tune models of similar or larger scale.

Recall that this isn't the total cost of the model's development, however: LLMs are trained in stages. First comes pre-training, when the model learns basic language structure and general knowledge by predicting the next word across vast amounts of text from the web, books, and articles. This gives it fluency but not much sense of how to follow instructions or hold a conversation.

Post-training comes next, using much smaller, higher-quality datasets, typically collections of example questions, prompts, and expert-written answers, to teach the model how to respond helpfully, reason through problems, and align with human expectations. Even so, Weibo's post-training cost effectiveness on VibeThinker-1.5B is noteworthy and commendable.
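To make that second stage concrete, here is a minimal sketch of a supervised post-training (SFT) step: next-token prediction on curated prompt and answer pairs. It assumes the Hugging Face transformers library and names the base model the article cites; the tiny dataset, learning rate, and single-pair loop are illustrative placeholders, not Weibo's actual recipe.

```python
# Minimal SFT sketch: fine-tune a causal LM on expert-written answers.
# The dataset and hyperparameters below are illustrative, not Weibo's recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Math-1.5B"  # VibeThinker's stated base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Toy stand-ins for "example questions and expert-written answers"
pairs = [("What is 12 * 9?", "12 * 9 = 108.")]

model.train()
for prompt, answer in pairs:
    text = prompt + "\n" + answer + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt")
    # Supervised fine-tuning: the curated answer itself provides the labels
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```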

The open-source release upends assumptions about parameter scale, compute intensity, and the minimum viable size for high-performance LLMs.

A Different Training Approach: Spectrum-to-Signal

VibeThinker-1.5B owes its performance not to scale, but to the training framework behind it: the Spectrum-to-Signal Principle (SSP).

Instead of optimizing a model purely for single-answer correctness (Pass@1), the SSP framework decouples supervised fine-tuning (SFT) and reinforcement learning (RL) into two distinct phases with different objectives:

  • SFT ("Spectrum Phase"): The model is trained to maximize diversity across potential correct answers, improving its Pass@K score. This builds a wide range of plausible solution paths.

  • RL ("Signal Phase"): A second-stage reinforcement learning system (called MaxEnt-Guided Policy Optimization, or MGPO) is used to identify and amplify the most correct paths from this diverse solution pool. MGPO prioritizes problems where the model is most uncertain, using entropy-based weighting to focus learning, as sketched below.
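The paper's full MGPO objective isn't reproduced here, but its core mechanism, weighting each problem by the model's uncertainty, can be sketched. Below is a minimal illustrative version in Python, assuming binary (correct/incorrect) rollout rewards and a group-relative baseline; the function names are hypothetical, not taken from the VibeThinker codebase.

```python
import math

def entropy_weight(pass_rate: float) -> float:
    """Binary entropy H(p): peaks at p = 0.5 (maximum uncertainty) and
    vanishes when the model always succeeds or always fails."""
    p = min(max(pass_rate, 1e-6), 1.0 - 1e-6)
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def mgpo_style_advantages(rewards: list[int]) -> list[float]:
    """Group-relative advantages for one problem's rollouts, scaled by the
    entropy weight so uncertain problems dominate the policy update."""
    pass_rate = sum(rewards) / len(rewards)  # fraction of correct rollouts
    baseline = pass_rate                     # mean reward (rewards are 0/1)
    weight = entropy_weight(pass_rate)
    return [weight * (r - baseline) for r in rewards]

# 8 rollouts, 3 correct: high uncertainty -> near-maximal weight (~0.95)
print(mgpo_style_advantages([1, 1, 1, 0, 0, 0, 0, 0]))
# 8 of 8 correct: no uncertainty and no signal, so advantages are zero
print(mgpo_style_advantages([1, 1, 1, 1, 1, 1, 1, 1]))
```

The entropy term is maximal when the model solves a problem about half the time, so the update concentrates learning on exactly the problems whose answers are still in doubt.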

The authors argue this separation allows small models to explore reasoning space more effectively, achieving signal amplification without relying on massive parameter counts.

VibeThinker-1.5B makes a compelling case that the industry's reliance on parameter scaling as the only route to better reasoning performance may be outdated.

By adopting a diversity-first training pipeline, WeiboAI has shown that smaller, more accessible models can match or even outperform billion-dollar systems in logic-heavy tasks.

The low resource footprint is among the most significant aspects of VibeThinker-1.5B. At under $8,000, the post-training cost is 30–60x lower than for models like DeepSeek R1 and MiniMax-M1, which cost between $294K and $535K to train.

Performance Across Domains

Despite its small size, VibeThinker-1.5B delivers cross-domain reasoning that outpaces many larger open-source and commercial models:

Model              | AIME25 | LiveCodeBench v6 | GPQA-Diamond
VibeThinker-1.5B   | 74.4   | 51.1             | 46.7
GPT-OSS-20B-Medium | 72.1   | 54.9             | 66.0
Claude Opus 4      | 69.2   | 56.6             | 79.6
MiniMax M1 (456B)  | 74.6   | 62.3             | 69.2
DeepSeek R1 (671B) | 70.0   | 65.9             | 71.5
Kimi K2 (1.09T)    | 49.5   | 53.7             | 75.1

VibeThinker was benchmarked against both reasoning-centric models (Magistral, Claude, OpenAI o3-mini) and non-reasoning LLMs (GPT-4.1, Kimi K2, DeepSeek V3). Across structured reasoning benchmarks, the model consistently outperformed non-reasoning models, regardless of size:

  • On AIME24 (math), it beat Kimi K2 (1.09T) by over 10 points (80.3 vs. 69.6).

  • On LiveCodeBench v6, it surpassed Claude Opus 4 (51.1 vs. 47.4).

  • On GPQA, it scored below GPT-4.1 and Claude, but still more than doubled its base model's score (from 16.4 to 46.7).

This supports the authors' claim that size is not the only path to reasoning capability: with proper training design, smaller models can reach or even exceed the performance of far larger systems on targeted tasks.

Notably, it achieves parity with models hundreds of times larger on math and code, though it lags in general-knowledge reasoning (GPQA), where larger models maintain an edge.

This suggests a potential specialization trade-off: while VibeThinker excels at structured logical tasks, it has less capacity for wide-ranging encyclopedic recall, a known limitation of smaller architectures.

Guidance for Enterprise Adoption

The release includes recommended inference settings (temperature = 0.6, top_p = 0.95, max tokens = 40960).
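As a minimal usage sketch under those settings, assuming the Hugging Face repo id WeiboAI/VibeThinker-1.5B and an illustrative prompt (check the model card for the exact hosted name and chat template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WeiboAI/VibeThinker-1.5B"  # assumed repo id; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.6,       # recommended by the release
    top_p=0.95,            # recommended by the release
    max_new_tokens=40960,  # recommended token budget for long reasoning traces
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```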

The model is small enough to be deployed on edge devices, including mobile phones and vehicle-embedded systems, while inference costs are estimated to be 20–70x cheaper than with large models.

This positions VibeThinker-1.5B not just as a research achievement, but as a potential foundation for cost-efficient, locally deployable reasoning systems.

Weibo's Strategy and Market Position

Weibo, launched by Sina Corporation in 2009, remains a cornerstone of China's social media ecosystem. Often described as China's version of X (formerly Twitter), the platform blends microblogging, multimedia content, and trending-topic features with a regulatory environment shaped by tight government oversight.

Despite counting 600 million monthly active users (more than twice that of X), investors aren't optimistic about its advertising revenue growth potential in the near term, and Weibo is navigating intensifying competition from video-first platforms like Douyin, which are drawing younger users and growing time spent elsewhere.

In response, Weibo has leaned into creator-economy monetization, live-streaming, and vertical video, adding tools for influencer engagement, e-commerce integration, and richer analytics for brands.

The platform's role as a digital public square also makes it a focus of regulatory scrutiny. Chinese authorities continue to apply pressure on issues ranging from content governance to data security. In September 2025, Weibo was among the platforms cited in official warnings, highlighting its ongoing exposure to policy risks.

Weibo's push into AI R&D, exemplified by the release of VibeThinker-1.5B, signals a shift in ambition. Beyond being a media platform, Weibo is positioning itself as a player in the next phase of Chinese AI development, using its capital reserves, user behavior data, and in-house research capacity to pursue adjacent technical domains.

What It Means for Enterprise Technical Decision Makers

For engineering leaders and enterprise AI teams, VibeThinker's release has practical implications for everything from orchestration pipelines to cost modeling.

A 1.5B-parameter model that outperforms models 100x larger on math and programming tasks doesn't just save compute; it shifts the architectural balance. It enables LLM inference on constrained infrastructure, reduces latency at the edge, and lowers the barrier to entry for applications that would otherwise have required API access to closed, frontier-scale models.

That matters for enterprise ML leads trying to deploy reasoning-capable agents within existing systems, or for platform owners tasked with integrating LLMs into automated workflows.

It also speaks to teams running reinforcement learning from human feedback (RLHF) pipelines or managing inference optimization across hybrid cloud environments.

The model's post-training methodology, particularly its entropy-targeted reinforcement learning approach, offers a roadmap for teams looking to refine smaller checkpoints instead of relying on large-scale pretraining.

VibeThinker's benchmark transparency and data decontamination steps also address another growing priority in enterprise AI: auditability. While its performance on general-knowledge tests still trails large frontier models, its task-specific reliability makes it an attractive candidate for controlled environments where correctness matters more than coverage.

In short, VibeThinker-1.5B isn't just a research milestone; it's a strong candidate for practical enterprise use, deployment, and learning. It suggests that a new class of compact, reasoning-optimized models is viable for enterprise use cases that were previously the domain of far larger systems. For organizations trying to balance cost, latency, interpretability, and control, it's a welcome addition to the long, growing list of Chinese open source offerings.
