The ‘brownie recipe problem’: why LLMs must have fine-grained context to deliver real-time results
Tech


Scoopico
Published: February 4, 2026
Last updated: February 4, 2026 8:34 pm



Contents
  • Mixing reasoning, real-world state, personalization
  • Avoiding 'monolithic' agent systems

Today’s LLMs excel at reasoning, but can still struggle with context. This is particularly true in real-time ordering systems like Instacart. 

Instacart CTO Anirban Kundu calls it the "brownie recipe problem."

It's not as simple as telling an LLM ‘I want to make brownies.’ To be truly assistive in planning the meal, the model must go beyond that simple directive: it has to know what’s available in the user’s market, account for their preferences (say, organic eggs versus regular eggs), and factor in what’s deliverable in their geography so food doesn’t spoil, among other critical considerations.

For Instacart, the challenge is balancing latency against the right mix of context, ideally delivering an experience in under one second.

“If reasoning itself takes 15 seconds, and if every interaction is that slow, you're gonna lose the user,” Kundu said at a recent VB event. 

Mixing reasoning, real-world state, personalization

In grocery delivery, there’s a “world of reasoning” and a “world of state” (what’s available in the real world), Kundu noted, both of which must be understood by an LLM along with user preference. But it’s not as simple as loading the entirety of a user’s purchase history and known interests into a reasoning model. 

“Your LLM is gonna blow up into a size that will be unmanageable,” said Kundu. 

To get around this, Instacart splits processing into chunks. First, data is fed into a large foundational model that can understand intent and categorize products. That processed data is then routed to small language models (SLMs) designed for catalog context (the types of food or other items that work together) and semantic understanding. 
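The article doesn't detail Instacart's implementation, but the tiered routing it describes can be sketched roughly as follows. All names, categories, and model stand-ins here are hypothetical, not Instacart's actual system:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    goal: str       # e.g. "make brownies"
    category: str   # e.g. "baking"

def foundation_model(query: str) -> Intent:
    # Stand-in for a large foundation-model call that extracts
    # intent and categorizes the request.
    if "brownie" in query.lower():
        return Intent(goal="make brownies", category="baking")
    return Intent(goal=query, category="general")

def catalog_slm(intent: Intent) -> list[str]:
    # Stand-in for a catalog-context SLM: given a category,
    # return items that go together.
    pantry = {"baking": ["flour", "cocoa powder", "organic eggs", "butter"]}
    return pantry.get(intent.category, [])

def plan(query: str) -> list[str]:
    # Route the foundation model's output to the specialized SLM.
    return catalog_slm(foundation_model(query))
```

The point of the split is latency and size: the expensive model runs once to resolve intent, and the small, specialized models handle the narrow follow-up tasks.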

In the case of catalog context, the SLM must be able to process multiple levels of details around the order itself as well as the different products. For instance, what products go together and what are their relevant replacements if the first choice isn't in stock? These substitutions are “very, very important” for a company like Instacart, which Kundu said has “over double digit cases” where a product isn’t available in a local market. 

In terms of semantic understanding, say a shopper is looking to buy healthy snacks for children. The model needs to understand what a healthy snack is and which foods are appropriate for, and appeal to, an 8-year-old, then identify relevant products. And when those particular products aren’t available in a given market, the model also has to find related subsets of products.

Then there’s the logistical element. For example, a product like ice cream melts quickly, and frozen vegetables also don’t fare well when left out in warmer temperatures. The model must have this context and calculate an acceptable deliverability time. 
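One way to model that constraint is to give each item a maximum transit time and let the most fragile item bound the whole order. The limits below are made-up numbers for illustration:

```python
# Hypothetical perishability limits, in minutes of transit time.
PERISHABILITY_MIN = {
    "ice cream": 30,
    "frozen vegetables": 45,
    "canned beans": 24 * 60,
}

def max_delivery_minutes(items: list[str]) -> int:
    # The order can only travel as long as its most fragile item allows;
    # unknown items default to a full day.
    return min(PERISHABILITY_MIN.get(i, 24 * 60) for i in items)
```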

“So you have this intent understanding, you have this categorization, then you have this other portion about, logistically, how do you do it?” Kundu noted.

Avoiding 'monolithic' agent systems

Like many other companies, Instacart is experimenting with AI agents, and has found that a mix of agents works better than a “single monolith” that handles multiple different tasks. A Unix-style philosophy of a modular system built from smaller, focused tools helps address, for instance, different payment systems with varying failure modes, Kundu explained.

“Having to build all of that within a single environment was very unwieldy,” he said. Further, agents on the back end talk to many third-party platforms, including point-of-sale (POS) and catalog systems. Naturally, not all of them behave the same way; some are more reliable than others, and they have different update intervals and feeds. 

“So being able to handle all of those things, we've gone down this route of microagents rather than agents that are dominantly large in nature,” said Kundu. 
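In the Unix spirit Kundu invokes, the microagent approach amounts to many small single-purpose handlers behind a thin dispatcher, rather than one agent that does everything. A hypothetical sketch (the registry, tasks, and agents are invented for illustration):

```python
from typing import Callable

# Registry mapping a task name to its single-purpose agent.
AGENTS: dict[str, Callable[[dict], str]] = {}

def agent(task: str):
    # Decorator that registers a focused microagent for one task.
    def register(fn):
        AGENTS[task] = fn
        return fn
    return register

@agent("payment")
def payment_agent(req: dict) -> str:
    # Each agent owns its own failure modes, retries, and integrations.
    return f"charged {req['amount']}"

@agent("catalog")
def catalog_agent(req: dict) -> str:
    return f"synced {req['store']}"

def dispatch(task: str, req: dict) -> str:
    if task not in AGENTS:
        raise ValueError(f"no agent for task: {task}")
    return AGENTS[task](req)
```

The design choice is isolation: an unreliable third-party POS feed or a flaky payment gateway degrades only its own agent, not the whole system.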

To manage agents, Instacart has integrated with Anthropic’s Model Context Protocol (MCP), which standardizes and simplifies the process of connecting AI models to different tools and data sources.

The company also uses Google’s Universal Commerce Protocol (UCP) open standard, which allows AI agents to directly interact with merchant systems. 

However, Kundu's team still deals with challenges. As he noted, it's not about whether integration is possible, but how reliably those integrations behave and how well they're understood by users. Discovery can be difficult, not just in identifying available services, but understanding which ones are appropriate for which task.

Instacart has had to implement MCP and UCP in “very different” cases, and the biggest problems it has run into are failure modes and latency, Kundu noted. “The response times and understandings of both of those services are very, very different. I would say we spend probably two-thirds of the time fixing those error cases.”

