The limits of bubble thinking: How AI breaks every historical analogy
Tech

Scoopico
Published: March 10, 2026
Last updated: March 10, 2026 12:35 am
Contents

  • Why markets keep overshooting
  • The category error we keep making
  • The skeptics are right about the hype, and wrong about what it means
  • Where humans still matter (for now)
  • This time is actually different

It’s always the same story: A new technology appears and everyone starts talking about how it’ll change everything. Capital rushes in, companies form overnight, and valuations climb faster than anyone can justify. Then, months later, the warnings arrive, and people suddenly remember the dot-com crash or crypto.

You’ve probably seen it before. And if you have, you probably think AI is the next bubble. Humans are great at pattern-matching. We’ve evolved to see patterns, so when something familiar emerges, we instinctively map it onto the closest story we already know. We think we’ve seen it before, and we’re confident we know how it ends.

But that instinct can mislead us. AI feels like a bubble because we’re forcing something genuinely discontinuous into a familiar story. The idea that everything that rises quickly must eventually collapse sounds prudent, but sounding prudent doesn’t make it true.

Why markets keep overshooting

Every major technological shift produces the same outward symptoms: Inflated expectations, followed by high-visibility failure. Dot-com, mobile, and crypto all went through a phase where the world lost its sense of proportion.

Why does this keep happening? Because markets don’t have a framework for discontinuous change. Discounted cash flow models assume steady, stable growth, and comparable companies assume the category already exists. So people assume the near future looks like the recent past, but that doesn’t work when the underlying category itself is changing.

Most valuation tools are designed for incremental progress, so analysts look at quarterly forecasts and incremental improvements. They don’t know what to do with step changes, and they can’t model nonlinear adoption.

So when you see capital overshooting or extreme dispersion of outcomes, that’s the market trying to value decade-long bets using quarterly logic. (Which doesn’t work.) And that’s what a bubble actually is: An indication that no one yet knows how to price what’s coming. That uncertainty looks like invalidation, but it just exposes the limits of existing frameworks.
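To make that mismatch concrete, here’s a toy model. All numbers below are hypothetical, chosen only to illustrate the shape of the problem: two cash-flow streams are discounted at the same flat 10% rate, one growing a steady 5% a year and one following an S-curve, and we ask how much of each stream’s 10-year value is visible in the first three years.

```python
# Toy model of the pricing problem described above. All numbers are
# hypothetical and chosen only to illustrate the shape of the mismatch.

import math

def present_value(cash_flows, rate=0.10):
    """Discounted value of a list of annual cash flows at a flat rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

years = range(1, 11)

# Incremental business: cash flows grow a steady 5% a year from a base of 100.
steady = [100 * 1.05 ** (t - 1) for t in years]

# Discontinuous business: a logistic (S-curve) ramp -- almost nothing for
# years, then rapid nonlinear adoption toward 1000 per year.
s_curve = [1000 / (1 + math.exp(6 - t)) for t in years]

def near_term_share(cash_flows, horizon=3):
    """Fraction of the 10-year discounted value visible in the first few years."""
    return present_value(cash_flows[:horizon]) / present_value(cash_flows)

share_steady = near_term_share(steady)    # roughly a third of total value
share_s_curve = near_term_share(s_curve)  # only a few percent of total value
```

Under these made-up numbers, near-term logic captures most of the steady business but almost none of the S-curve one. A market applying the first lens to the second kind of asset has no stable anchor, which is one way capital overshoots while it searches for a price.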

The category error we keep making

When something new arrives, we reach for comparisons.

AI is like electricity.

AI is like computers.

AI is like the internet.

AI is like mobile.

These comparisons are comforting because each of those technologies produced massive, economy-wide change and attracted enormous capital. They changed how work got done.

They also share something deeper. Every one of those technologies extended human capability without replacing human cognition. Electricity powered machines, but humans still decided what to build. Computers processed data, but humans interpreted it. The internet moved information, but humans decided what mattered. Mobile put computing in your pocket, but human attention remained the scarce resource. In every case, human intelligence anchored everything. It was also the bottleneck.

AI is different because it performs cognitive work. And if that makes you uneasy, it should. Because if AI can actually think, then a lot of what we’ve built our careers on, like our expertise and our hard-won skills, might not be as defensible as we thought. The junior engineer who spent years developing intuition now works alongside a tool that has it instantly. So does the financial analyst known for their variance analysis. People aren’t completely sure where value actually lives anymore, and that’s terrifying.

I talk to CFOs every week. Six months ago, they asked me abstract questions like “what is AI?” and “should we have an AI strategy?” Now the questions are concrete: “Which parts of my team’s work no longer need to be done this way?” That shift happened so quickly, it’s already changing how resources get allocated.

For example, a founder I know started using Claude to write SQL queries that used to take her analyst a couple of days. Did she replace the analyst? Of course not. But she removed the bottleneck, and doesn’t have to depend on him anymore for quick answers. Then her analyst’s role changed completely. He went from spending 60% of his time writing queries to 10% checking them and 90% on strategic recommendations. The company didn’t reduce headcount or costs, and the analyst went from supporting three stakeholders to supporting fifteen.

This is where historical comparisons really start to fail. Tools like GitHub Copilot are compressing expertise. A junior engineer can now operate at a level that once required years of work experience. And every time the tool is used, it learns. A hammer doesn’t improve just because you built a house with it, but AI tools do. And when tools get better through use, the rate of improvement compounds. That dynamic doesn’t fit cleanly into any prior technological analogy, which is why the instinct to call this a “bubble” misses the actual point.
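The compounding claim is easy to sketch numerically. The comparison below is deliberately simplified, and the 0.5% per-use gain is an invented number rather than a measurement: a static tool produces the same output on every use, while a tool that improves slightly with each use pulls away multiplicatively.

```python
# Toy comparison: a static tool vs. a tool that improves a little with every use.
# The 0.5% per-use gain is a made-up number, chosen only to show the compounding shape.

GAIN_PER_USE = 0.005  # hypothetical: the tool gets 0.5% better each time it's used

def total_output(uses, gain=0.0):
    """Cumulative output over `uses` uses, with a per-use multiplicative improvement."""
    output, capability = 0.0, 1.0
    for _ in range(uses):
        output += capability       # each use contributes the current capability
        capability *= 1 + gain     # ...and (optionally) raises the next use's output
    return output

hammer = total_output(1000)                  # a hammer: output is linear in uses
ai_tool = total_output(1000, GAIN_PER_USE)   # each use raises the value of the next
```

Even a tiny per-use gain, repeated, produces a curve no fixed tool can match over many uses. That is the dynamic that has no clean precedent in electricity, computers, the internet, or mobile.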

Previous technologies assumed a fixed ceiling on human cognition. They made us faster and stronger, but the limiting factor was always the same: How many smart people could we put on a problem? AI stretches that ceiling way beyond what we’re used to. Before, understanding your business better usually meant one of three things: More data, more analysts, or more experienced leaders. The constraint was how much human attention and judgment you could afford. With AI, that constraint shifts. When analysis that once took days appears in seconds, the new constraint is knowing what to look for. What questions matter? The limiting factor stops being talent and starts being judgment.

The skeptics are right about the hype, and wrong about what it means

Let’s take the strongest version of the bubble argument at face value. Maybe AI actually is overhyped, and most of these companies will fail. Maybe we’re early, and real impact takes another five or ten years. All of that could be completely true, and it still wouldn’t change the core point, which is this:

Even if the majority of AI startups fail, and even if adoption is way slower than expected, AI is still the first technology that can perform knowledge work. That doesn’t disappear because markets overshoot or expectations reset. The skeptics are right that the hype is inflated. But they’re wrong that inflated hype makes the technology irrelevant. We’ve seen this before: The dot-com bubble was real, and Pets.com crashed and burned, but the internet still changed everything. Both things were true at the same time.

The finance leaders I’m working with are beyond arguing about whether AI matters. Now they’re trying to understand which workflows change first, and how fast they need to adapt. That conversation is happening quietly, underneath all the noise.

And the workflows collapsing first share three properties:

  1. They require expertise, but they’re repetitive.

  2. They’re bottlenecks to strategic work.

  3. They’re easy to verify but hard to generate.

These workflows are important enough to pay for, but not so strategic that automating them threatens competitive advantage. They require skill, but that skill doesn’t compound dramatically with repetition, which makes them economically fragile, and explains why they’re already being automated away.

Where humans still matter (for now)

AI is great at recognizing trends, and terrible at knowing which ones actually matter. It can generate variance analysis, but it can’t tell you whether a 12% swing in spend signals healthy growth or a deeper problem. It can draft strategies, but it can’t tell you which strategy fits this market and this team in this exact moment. Judgment under uncertainty, and high-stakes tradeoffs where the downside is catastrophic, remain human responsibilities. For now.

When the constraint is no longer “do we have enough smart people,” the problem becomes one of priority. What deserves attention? What’s worth building next? That’s where I see many founders get stuck. They ask if this is a bubble and if they’re too early, but those aren’t the most useful questions. The right one is: “What can I build in the next year that creates real value, regardless of what valuations do?”

The companies that last will be the ones quietly iterating and embedding AI into actual workflows that solve actual problems. Take CFOs, for example. They’re buying AI because their board wants faster variance analysis, and they’re tired of hiring analysts who quit after six months. That’s a real-world problem that companies need to solve.

And the same is true for investors. The ones who succeed long-term will be those who tolerate uncertainty long enough to see what actually works.

This time is actually different

In the short term, AI will disappoint. Many use cases won’t deliver what they promise, and a lot of companies formed in this wave won’t survive. But the technology will. And, over the long term, AI will reshape every field that depends on knowledge work. Not all at once, and not evenly, but a decade from now, it will be difficult to find a knowledge-based industry that looks the same as it does today.

AI is different because intelligence itself, historically the core constraint on human innovation, has become scalable. That’s an observable fact with measurable consequences. The conversation about bubbles will fade, as it always does, and what will remain are the systems that quietly adapted while everyone else argued about valuations. The skeptics will have been right about the excess and wrong about what actually mattered, because five years from now we’ll probably look back at today’s panic the way we look back at people who dismissed the internet because a handful of companies failed. The winners will be the ones who were building in the meantime.

In time, those are the only stories anyone remembers.

Siqi Chen is co-founder and CEO of Runway.
