LangChain's CEO argues that better models alone won't get your AI agent to production

Tech | Scoopico
Published: March 7, 2026 | Last updated: March 7, 2026 1:08 am

As models get smarter and more capable, the "harnesses" around them must also evolve.

This "harness engineering" is an extension of context engineering, says LangChain co-founder and CEO Harrison Chase in a new VentureBeat Beyond the Pilot podcast episode. Whereas traditional AI harnesses have tended to constrain models from running in loops and calling tools, harnesses specifically built for AI agents allow them to interact more independently and effectively perform long-running tasks.

Chase also weighed in on OpenAI's acquisition of OpenClaw, arguing that its viral success came down to a willingness to "let it rip" in ways that no major lab would — and questioning whether the acquisition actually gets OpenAI closer to a safe enterprise version of the product.

“The trend in harnesses is to actually give the large language model (LLM) itself more control over context engineering, letting it decide what it sees and what it doesn't see,” Chase says. “Now, this idea of a long-running, more autonomous assistant is viable.”

Tracking progress and maintaining coherence

While the concept of allowing LLMs to run in a loop and call tools seems relatively simple, it’s difficult to pull off reliably, Chase noted. For a while, models were “below the threshold of usefulness” and simply couldn’t run in a loop, so developers worked around it with graphs and hand-written chains. Chase pointed to AutoGPT, once the fastest-growing GitHub project ever, as a cautionary example: it had the same architecture as today's top agents, but the models weren't good enough yet to run reliably in a loop, so it faded fast.
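
The pattern Chase describes is small enough to sketch: a model in a loop that, each turn, either calls a tool or declares itself done, with tool results fed back into its context. The Python below is a generic illustration rather than LangChain code, and the model call is a toy stub standing in for a real chat-completions API.

import json

def calculator(expression: str) -> str:
    # A trivial example tool the model can call.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def call_model(messages: list[dict]) -> dict:
    # Toy stand-in for a real LLM call: it asks for the calculator once,
    # then turns the tool result into a final answer.
    last = messages[-1]
    if last["role"] == "tool":
        result = json.loads(last["content"])["result"]
        return {"final": f"The answer is {result}"}
    return {"tool": "calculator", "args": {"expression": "6 * 7"}}

def run_agent(task: str, max_steps: int = 20) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):                      # the loop is the harness
        decision = call_model(messages)
        if "final" in decision:                     # the model decides it is done
            return decision["final"]
        tool = TOOLS[decision["tool"]]              # the model picked a tool
        result = tool(**decision["args"])
        messages.append({"role": "tool",            # feed the result back in
                         "content": json.dumps({"result": result})})
    return "step budget exhausted"

print(run_agent("What is 6 * 7?"))   # -> "The answer is 42"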

But as LLMs keep improving, teams can construct environments where models can run in loops and plan over longer horizons, and they can continually improve these harnesses. Previously, “you couldn't really make improvements to the harness because you couldn't actually run the model in a harness,” Chase said.

LangChain’s answer to this is Deep Agents, a customizable general-purpose harness.

Built on LangChain and LangGraph, it has planning capabilities, a virtual filesystem, context and token management, code execution, and skills and memory functions. Further, it can delegate tasks to subagents; these are specialized with different tools and configurations and can work in parallel. Context is also isolated, meaning subagent work doesn’t clutter the main agent’s context, and large subtask context is compressed into a single result for token efficiency.
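
That delegation pattern can be sketched in a few lines. Everything below is an illustrative assumption rather than the Deep Agents API: the subagent configs are hard-coded, the subagent "work" is stubbed, and truncation stands in for an LLM-written summary. The structural point is that subagents run in parallel in their own contexts, and only a compressed result flows back to the main agent.

from concurrent.futures import ThreadPoolExecutor

SUBAGENT_CONFIGS = {
    "researcher": {"tools": ["web_search"], "prompt": "You research prior art."},
    "coder": {"tools": ["bash", "file_edit"], "prompt": "You write and test code."},
}

def run_subagent(name: str, subtask: str) -> str:
    # Each subagent works in its own isolated context window; this stub just
    # fabricates the long transcript that real subagent work would produce.
    config = SUBAGENT_CONFIGS[name]
    return f"[{name}] used {config['tools']} on '{subtask}' ... (long transcript)"

def compress(transcript: str, max_chars: int = 80) -> str:
    # A real harness would ask an LLM to summarize the subagent's work;
    # truncation stands in for that here.
    return transcript[:max_chars]

def delegate(subtasks: dict[str, str]) -> dict[str, str]:
    # Fan subtasks out in parallel; only compressed results ever reach the
    # main agent's context, so subagent work doesn't clutter it.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(run_subagent, name, task)
                   for name, task in subtasks.items()}
        return {name: compress(f.result()) for name, f in futures.items()}

print(delegate({
    "researcher": "survey existing agent harnesses",
    "coder": "prototype a virtual filesystem tool",
}))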

All of these agents have access to file systems, Chase explained, and can essentially create to-do lists that they can execute on and track over time.

“When it goes on to the next step, and it goes on to step two or step three or step four out of a 200-step process, it has a way to track its progress and keep that coherence,” Chase said. “It comes down to letting the LLM write its thoughts down as it goes along, essentially.”
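
Concretely, that can look like a to-do file the agent keeps on its filesystem and re-reads at the start of every turn. The file layout and statuses below are illustrative assumptions, not any particular product's format.

import json
from pathlib import Path

TODO_FILE = Path("agent_workspace/todo.json")

def init_plan(steps: list[str]) -> None:
    # The agent writes its plan down before starting work.
    TODO_FILE.parent.mkdir(parents=True, exist_ok=True)
    plan = [{"step": s, "status": "pending", "notes": ""} for s in steps]
    TODO_FILE.write_text(json.dumps(plan, indent=2))

def complete_step(index: int, notes: str) -> None:
    # After each step the agent records what it did: its thoughts, written down.
    plan = json.loads(TODO_FILE.read_text())
    plan[index]["status"] = "done"
    plan[index]["notes"] = notes
    TODO_FILE.write_text(json.dumps(plan, indent=2))

def next_step() -> dict | None:
    # Re-read at the start of every turn, so a long multi-step task stays
    # coherent even when the raw conversation history no longer fits in context.
    plan = json.loads(TODO_FILE.read_text())
    return next((item for item in plan if item["status"] == "pending"), None)

init_plan(["collect requirements", "draft design", "write code", "run tests"])
complete_step(0, "requirements captured in requirements.md")
print(next_step())   # -> the "draft design" step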

He emphasized that harnesses should be designed so that models can maintain coherence over longer tasks, and should be “amenable” to the model deciding when to compact context at points it determines are “advantageous.”
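
One way to make compaction model-decided rather than harness-decided is to expose it as a tool the agent can call whenever it judges the moment advantageous. The sketch below is an assumption about how such a tool might look; the summarizer is a placeholder for a real LLM call.

def summarize(messages: list[dict]) -> str:
    # Placeholder: a real harness would ask the LLM to write this summary.
    return f"Compact summary of {len(messages)} earlier turns."

def compact_context(messages: list[dict], keep_recent: int = 4) -> list[dict]:
    # Exposed to the agent as a tool: when called, older turns are replaced
    # by a summary while the most recent turns stay verbatim.
    if len(messages) <= keep_recent:
        return messages
    head, tail = messages[:-keep_recent], messages[-keep_recent:]
    return [{"role": "system", "content": summarize(head)}] + tail

history = [{"role": "user", "content": f"turn {i}"} for i in range(12)]
print(len(compact_context(history)))   # -> 5: one summary plus four recent turns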

Giving agents access to code interpreters and Bash tools also increases flexibility. And providing agents with skills, rather than loading every tool up front, lets them pull in information only when they need it. “So rather than hard code everything into one big system prompt,” Chase explained, “you could have a smaller system prompt: ‘This is the core foundation, but if I need to do X, let me read the skill for X. If I need to do Y, let me read the skill for Y.’”
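
A minimal sketch of that idea, with an assumed on-disk layout: the core system prompt stays small and static, and a skill file is read into context only when the task actually calls for it.

from pathlib import Path

CORE_PROMPT = "You are a helpful agent. Read a skill file when a task requires it."
SKILLS_DIR = Path("skills")

def write_example_skills() -> None:
    # Illustrative skill files; in practice these would be curated documents.
    SKILLS_DIR.mkdir(exist_ok=True)
    (SKILLS_DIR / "sql_reporting.md").write_text(
        "Steps for building read-only SQL reports against the analytics DB...")
    (SKILLS_DIR / "pdf_extraction.md").write_text(
        "Steps for pulling tables out of PDFs with the extraction tool...")

def load_skill(name: str) -> str:
    # A tool the agent calls mid-task, so the skill enters context only when
    # it is needed instead of bloating the system prompt up front.
    return (SKILLS_DIR / f"{name}.md").read_text()

write_example_skills()
context = CORE_PROMPT                             # the small, static foundation
context += "\n\n" + load_skill("sql_reporting")   # "if I need to do X, read the skill for X"
print(context)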

Essentially, context engineering is a “really fancy” way of saying: What is the LLM seeing? Because that’s different from what developers see, he noted. When human developers analyze agent traces, they can put themselves in the AI’s “mindset” and answer questions like: What is the system prompt? How is it created? Is it static or is it populated? What tools does the agent have? When it makes a tool call and gets a response back, how is that presented?

“When agents mess up, they mess up because they don't have the right context; when they succeed, they succeed because they have the right context,” Chase said. “I think of context engineering as bringing the right information in the right format to the LLM at the right time.”
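
That framing is straightforward to operationalize by logging, at every model call, exactly what went in and what came out. The wrapper below is a generic sketch with illustrative field names; LangChain's own observability tooling captures traces in far richer form.

import json
import time

TRACE: list[dict] = []

def traced(call_model, messages: list[dict]) -> dict:
    # Wrap any model call so the exact context the LLM saw -- system prompt,
    # tool results, everything -- is recorded alongside its response.
    started = time.time()
    response = call_model(messages)
    TRACE.append({
        "timestamp": started,
        "context_sent": messages,   # literally what the model saw this turn
        "response": response,
    })
    return response

def dump_trace(path: str = "trace.jsonl") -> None:
    # One JSON line per model call, ready to replay when the agent messes up.
    with open(path, "w") as f:
        for record in TRACE:
            f.write(json.dumps(record) + "\n")

# Usage: response = traced(call_model, messages), then dump_trace() after the run.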

Listen to the podcast to hear more about:

  • How LangChain built its stack: LangGraph as the core pillar, LangChain at the center, Deep Agents on top.

  • Why code sandboxes will be the next big thing.

  • How a different type of UX will evolve as agents run at longer intervals (or continuously).

  • Why traces and observability are core to building an agent that actually works.

You can also listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.
