AI’s cyborg problem: you have to embrace it to really succeed but 90% of people can’t or don’t want to
Money

Scoopico
Published: May 16, 2026
Last updated: May 16, 2026 10:14 am


Contents

  • The experiment
  • The four qualities
  • Where I come in
  • Fear of other people
  • What’s actually at stake
  • The false productivity trap

A few weeks ago, I became briefly famous for the wrong reasons.

The Wall Street Journal ran a piece about how I use AI in my work as an editor at Fortune — prompting drafts, synthesizing interviews, and accelerating a reporting process that used to take me twice as long. The response was swift, loud, and chaotic. The “journalism community” was divided as editors perked up and reporters recoiled. Strangers on the internet called me lazy. A few journalists told me privately they were doing the same thing and would never admit it. One reader asked to meet for coffee specifically to explain why I was wrong.

I had not expected this. I had expected, maybe, curiosity. What I got instead felt like something older and more personal than a debate about journalism ethics — more like the look you get when a coworker figures out a shortcut and doesn’t share it.

I’ve been trying to understand the reaction ever since. The person who finally gave me a framework for it wasn’t a media critic or a journalism professor. She was a neuroscientist who has spent 30 years wiring AI into human beings.

The experiment

Vivienne Ming’s career began in 1999, when her undergraduate honors thesis — a facial analysis system trained to distinguish real smiles from fake ones, which she proudly told me was partly funded by the CIA for lie-detection research — introduced her to machine learning before most people had even heard the term. She went on to build one of the first learning AI systems embedded in a cochlear implant, a model that learned to hear within a human brain that was also learning to hear. She has since founded companies applying AI to hiring bias, Alzheimer’s research, and postpartum depression. For three decades, her self-appointed mission has been to take a technology most people misunderstand and figure out how to use it to make the world better.

courtesy of Vivienne Ming

Last year, she ran an experiment that got a lot of attention for what she’s called the “cognitive divide” and even a “dementia crisis.” But she told me it clarified something she had long suspected.

Ming recruited teams of UC Berkeley students to use AI tools to predict real-world outcomes on Polymarket — the forecasting exchange where professionals with real money bet on geopolitical events, commodity prices, and economic indicators. The task was specifically designed to be impossible to game from memory: no amount of studying would tell you what a barrel of oil would cost in six months. She wanted to see not whether AI helped, but how humans used it — and what that revealed about the humans themselves.

She also put EEG monitors on some participants.

What the brain scans showed, before she had even fully analyzed the behavioral data, was something out of a Marvel Comic. When most students handed a question to the AI and submitted the answer, their gamma wave activity — the neural signature of cognitive engagement — dropped by roughly 40%. “That would be the equivalent of going from working on a hard math problem to watching TV,” she told me. These were bright students at a top university. With access to the most powerful AI tools in the world, they had become, in her words, “a very expensive copy-paste function that needed health insurance.”

She calls this group the automators. They were the majority.

A second group — the validators — used AI differently: to confirm what they already believed. They cherry-picked supporting evidence, ignored pushback, submitted answers that reflected their priors more than the data. They performed worse than AI operating alone.

Then there was the third group. Small — she estimates 5% to 10% of the general population. When she analyzed their interaction transcripts, something unusual appeared: you couldn’t tell who was making the decisions. The human and the machine were genuinely integrated. The humans would explore — surfacing hypotheses, chasing hunches, venturing into territory the data didn’t obviously support. The AI would ground them, correcting overreach, pulling back toward evidence. The human would update and push further. Round after round.

Ming calls them cyborgs. They outperformed the best individual humans in the study and they outperformed the best AI models running alone. They were roughly on par with Polymarket’s expert markets — professionals with millions of dollars on the line.

Here is the detail that most surprised her: it barely mattered whether the cyborg teams used a state-of-the-art model or a cheap open-source one you could run on a phone. The benchmarks that AI companies obsess over — the ones cited in Senate hearings and investor decks and every major tech announcement — predicted almost nothing about outcomes. What predicted everything was the quality of the human.

Specifically, Ming isolated four traits crucial for cyborg success: curiosity, fluid intelligence, intellectual humility, and perspective-taking. She notes that these same traits, measured in children, predict lifetime earnings and all-cause mortality rates. “There’s a reason these things are predictive of life outcomes, because they change how we engage with the world.”

The four qualities

These four traits reliably predicted whether someone became a cyborg or an automator. They are worth naming carefully, because they matter more than anything else in this story.

  • Curiosity — the disposition to keep searching even when the AI has given you a good enough answer.
  • Fluid intelligence — the ability to reason through novel problems that don’t fit existing templates.
  • Intellectual humility — the willingness to update your beliefs when the machine pushes back, rather than digging in or collapsing entirely.
  • Perspective-taking — the ability to model how others see the world, to explore possibilities that the data doesn’t obviously surface.

These are not incidental or peripheral qualities; they are among the deepest measures of human capability we have. And they are almost entirely absent from the hiring systems and educational frameworks that currently sort people into careers.

courtesy of McKinsey

A week later, I was sitting across from Kate Smaje at McKinsey’s office on the 61st floor of 3 World Trade Center. Smaje is the consulting giant’s global leader of technology and AI, and I started to think she had been eavesdropping on my call with Ming.

Across hundreds of client engagements on every continent and in every major industry, when asked which human skills remain essential and irreplaceable in an AI-augmented world, she arrived at a list of four:

  • Judgment — the ability to decide what matters when you’re drowning in more output than you can process.
  • Conceptual problem-solving — the capacity to create something net new, to see connections that even sophisticated models miss.
  • Empathy — the depth of genuine human-to-human understanding that no machine can replicate.
  • Trust — the scarce resource in a world of AI-generated abundance, built only through human relationships.

They map almost directly onto Ming’s list: judgment onto fluid intelligence, conceptual problem-solving onto curiosity, empathy onto perspective-taking, trust onto intellectual humility.

“I fundamentally believe that the world is going to need really great humans,” Smaje told me, adding that she sees this as the most underappreciated insight in the entire AI transition. Organizations are not failing in the AI transition because they can’t get the technology, she explained. “They’re failing because they didn’t put in place the level of human change that needed to sit around it.”

Where I come in

When Ming described the cyborg profile to me, I told her (with as much intellectual humility as possible) that it sounded like me. In my journalism, I let the AI handle much of the well-posed work — what does this transcript say, how does this connect to that data — while I try to handle the ill-posed work: what is the real story here, what does this mean, why does it matter.

My process isn’t complicated. I use AI to generate first drafts from my notes, to find angles I might have missed, to synthesize large amounts of material quickly. Then I check everything — every quote against the original transcript, every claim against the source. I ask the AI what I’m missing. I push back when it goes in a direction I don’t recognize. I try to stay in control of the ideas. And it’s true, I have been thinking of myself as more and more of a cyborg for months now.

Ming responded with an idea from her new book, Robot-Proof: the distinction between what she calls “well-posed problems” and “ill-posed problems.” With the former, we understand the question and know how to get the answer, and machines, especially AI, are superhuman at solving them. They have been far less effective at tackling ill-posed problems.

“I think most interesting problems in the world are ill-posed,” Ming said, adding that she sees a world struggling to adjust because it’s been built for much easier problems. “We built a whole employment system that’s based on people getting some degree of an education to answer well-posed questions that nowadays are better answered by a machine.” This could explain much of the backlash — and much of the scramble within the C-suite, as boards ask McKinsey leaders like Smaje to suddenly pivot their companies from well-posed to ill-posed problems.

Fear of other people

Ming has a name for what was underneath the response I received. “Most of our fears about AI,” she told me, “are fears about other people.”

Her answer surprised me with its specificity. She wasn’t dismissive of AI risk. She said she worries about autonomous weapons and about hiring, medical, and policing algorithms making civil-rights decisions in milliseconds, built by companies with no fiduciary obligation to the people they affect. These are real concerns.

But the ambient dread — the kind that fills comment sections and manifests as professional outrage when a colleague admits to using a tool differently than expected — that, she argues, is not really about the technology. It is the specific anxiety of watching someone else gain leverage you haven’t figured out how to gain yourself. A cyborg colleague doesn’t just work faster. They implicitly change what the job is, and in doing so, indict the way you’ve been doing it.

Other people I spoke with for this piece had each, in their own way, run into the same wall.

courtesy of Bret Greenstein

A wall of framed Marvel Comics surrounded Bret Greenstein, Chief AI Officer at the consulting firm West Monroe, as he told me about the psychological resistance he most often encounters when helping organizations adopt AI. It’s not confusion or skepticism, but identity. “People identify as ‘the person who makes the PowerPoint’ and ‘the person who fills in the Excel’ and ‘the person who, you know, writes the thing,’” he said. Those identities obscure the fact that in the world of work, you are really a person who makes decisions more than one who does things. He agreed that he may be predisposed to welcome the cyborg future: like me, he has been reading Marvel Comics most of his life and has already seen cyborgs expressed in the form of, say, Tony Stark’s Iron Man.

West Monroe calculated that AI added the equivalent of 320 full-time employees’ worth of output in six months without adding headcount, according to Greenstein. He said that when he showed people what was possible, some lit up. Others shut down — not because the technology was hard, but because it made their sense of professional self suddenly feel unstable.

courtesy of EY-Parthenon

Mitch Berlin, Americas vice chair at EY-Parthenon, the strategy consulting arm of the Big Four giant, told me that he’s largely not seeing resistance, at least in conversations with C-suite leaders. The people he talks to are “pretty on board and excited right now,” he said, citing a recent survey by his firm in which the overwhelming majority see AI as a lever for both growth and productivity. He described the current landscape as a “gap” between “the acknowledgement that it’s there and it’s not going away” and “how do you actually implement it in your organization?” In other words, there aren’t enough cyborgs in the workforce, or they haven’t been identified yet, even by themselves.

courtesy of Gad Levanon

Gad Levanon, chief economist at the Burning Glass Institute and one of the country’s leading labor experts, has watched anti-AI sentiment consolidate along a striking demographic line: “highly educated liberals,” disproportionately in creative and knowledge professions. “Generative AI is a real threat to many professions that many liberals have,” he told me — journalism, design, writing, academia. He wasn’t entirely unsympathetic to the underlying anxiety: these are people watching a tool emerge that targets exactly what they spent years and significant money becoming good at. He, for one, said he welcomed the chance to become a cyborg. “I don’t write easily. Like, it doesn’t come easy to me. And I’m also not a native speaker. So for me, it was a big difference. I usually give it, like, bullet points and ask it to develop the prose out of that.”

Dror Poleg, an economic historian whose forthcoming book focuses on how to thrive in a world of intensifying uncertainty, inequality and volatility, offered a more precise diagnosis. He pointed to remote work as a template for understanding what’s happening with AI resistance now: the technology didn’t create a new reality so much as force people to confront one that had been quietly arriving for years. “AI is like a catalyst, or a forcing function,” he told me, “a bit like COVID forced us to realize things about remote work and the internet that maybe were true five or 15 years before COVID.”

courtesy of Dror Poleg

Poleg argued that for 50 years, the economy’s center of gravity has been shifting from producing tangible things toward intangible ones, meaning “more inequality, more uncertainty, more professions, fewer places to hide, like fewer normal jobs where you can just learn something, and that knowledge will remain useful for the next 20, 30, 40 years, and you’ll just do the same thing.” AI did not create this shift; it simply made visible a trend that has been building for decades and only took on its current form over the last four years.

What’s actually at stake

The stakes beneath the culture war are significant enough to deserve consideration on their own.

Levanon’s reading of the labor data is that the economy is bifurcating in a specific and underreported way. Entry-level white-collar positions — the apprenticeship layer of professional careers — are quietly disappearing, hollowed out first because they are composed almost entirely of what Ming calls well-posed problems: tasks with known methods and computable answers. This is not a prediction about the future. Young college graduates are already feeling it, competing for fewer entry points in professions that once reliably absorbed them. Levanon’s own daughter, a recent graduate, took far longer than expected to find work. Her friends are still looking.

The Microsoft AI Diffusion Report for Q1 2026 quantifies the pace: global AI adoption grew 1.5 percentage points in a single quarter, with the Global North now at 27.5% of the working-age population versus 15.4% in the Global South — a divide widening twice as fast in wealthier economies. Within countries, a similar split is forming among individuals: between those learning to work with these tools and those who haven’t, or won’t.

courtesy of Microsoft

Ming frames this split with more precision than most. She said she agrees with the Jevons paradox argument, a concept increasingly popular on Wall Street and on the lips of Anthropic’s Dario Amodei: efficiency gains tend to increase, not reduce, total demand for the thing made more efficient. The problem, she added, has more to do with resistance to our coming cyborg future. “It’s going to create more jobs, but the thing no one’s saying is, who’s going to be qualified to fill these jobs?”

She sees demand growing for both well-posed (low-pay, low-autonomy) and ill-posed (high-pay, high-creativity) labor, but the supply of the latter, she said, is highly inelastic. Just because there’s more demand for creative problem solvers doesn’t mean workers will get more creative. “We’re acting as though demand automatically produces supply,” she said. “There’ll be lots of jobs. Most of them will be mediocre and have little autonomy. And the ones that people really want will become even more esoteric, and the competition for that elite labor will go up.” After all, she added, there is no six-week job retraining program for cyborgs.

Levanon, who has tracked white-collar labor markets longer than most in his field, sees the same bifurcation arriving in the data. His forecast is for a prolonged period of labor market “softness” — potentially spanning decades — driven not by a collapse in the number of jobs but by what he called “kind of like a race between job elimination and job creation.” He drew an analogy to the manufacturing hollowing of the Midwest in the 1990s and 2000s: devastating for the communities it hit, but invisible to everyone else precisely because it was concentrated in places and populations the professional class didn’t have to look at. “If the manufacturing thing happened to the entire population rather than just the manufacturing communities,” he told me, “it would have been a very, very big shock.”

The false productivity trap

Critics are not wrong to be worried, Ming said. They are wrong about what they are worried about. The automators in her study weren’t bad people making lazy choices; they were doing what most humans do when handed a powerful tool and no framework for using it well. They optimized for the appearance of productivity rather than its substance. The machine lowered their cognitive load, and they accepted the gift without asking what it cost them.

Unprompted, McKinsey’s Smaje warned me about the same problem. “You have to be careful in this environment of not falling into the false productivity trap,” she said. Maybe you are doing so much more than you did before, “but that doesn’t mean that that more and more and more is valuable.” This is a question increasingly coming up in media circles, as the erosion of Google search results leads away from SEO-optimized trending news and toward more original reporting, like the story you’re reading now, from the industry’s supposed “AI guy.”

Ming has been arguing for a generation that education systems need to change — away from passive absorption of well-posed answers, toward active cultivation of exactly these traits. Nothing has changed. She is not sanguine about the timeline. But she is still running experiments, still building companies, still asking what she is missing.

That last part, I think, is the whole point.

Some people really are getting further ahead as cyborgs in this new economy, and I’ve talked to some of them, like the millionaire janitor in Canada who’s using AI agents to read his emails and schedule his appointments, or the three-person startup with agent colleagues that became instantly profitable selling medical aesthetics in Texas.

The backlash I received was, in its way, a gift. Not because it was fair — I don’t think it was — but because it was clarifying. The argument was never really about whether I fact-checked my quotes or disclosed my process. It was about something older: the anxiety of a professional class watching the tools of their trade become accessible to more people, in more configurations, with less gatekeeping than before.

The EEG data suggest that getting mad about it is, neurologically speaking, the equivalent of watching TV.

For this story, Fortune journalists used generative AI as a research tool. An editor verified the accuracy of the information before publishing.
