Tech

Three ways AI is learning to understand the physical world

Scoopico
Published: March 20, 2026
Last updated: March 20, 2026 8:48 pm



Contents
  • JEPA: built for real-time
  • Gaussian splats: built for space
  • End-to-end generation: built for scale
  • What comes next: hybrid architectures

Large language models are running into limits in domains that require an understanding of the physical world — from robotics to autonomous driving to manufacturing. That constraint is pushing investors toward world models, with AMI Labs raising a $1.03 billion seed round shortly after World Labs secured $1 billion.

Large language models (LLMs) excel at processing abstract knowledge through next-token prediction, but they fundamentally lack grounding in physical causality. They cannot reliably predict the physical consequences of real-world actions. 

AI researchers and thought leaders are increasingly vocal about these limitations as the industry tries to push AI out of web browsers and into physical spaces. In an interview with podcaster Dwarkesh Patel, Turing Award recipient Richard Sutton warned that LLMs merely mimic what people say rather than modeling the world, which limits their capacity to learn from experience and adapt to changes in their environment.

This is why models based on LLMs, including vision-language models (VLMs), can show brittle behavior and break with very small changes to their inputs. 

Google DeepMind CEO Demis Hassabis echoed this sentiment in another interview, pointing out that today's AI models suffer from “jagged intelligence.” They can solve complex math olympiads but fail at basic physics because they are missing critical capabilities regarding real-world dynamics. 

To solve this problem, researchers are shifting focus to building world models that act as internal simulators, allowing AI systems to safely test hypotheses before taking physical action. However, “world models” is an umbrella term that encompasses several distinct architectural approaches. 

In practice, three approaches dominate, each with different tradeoffs.

JEPA: built for real-time

The first main approach focuses on learning latent representations instead of trying to predict the dynamics of the world at the pixel level. Endorsed by AMI Labs, this method is heavily based on the Joint Embedding Predictive Architecture (JEPA). 

JEPA models try to mimic how humans understand the world. When we observe the world, we do not memorize every single pixel or irrelevant detail in a scene. For example, if you watch a car driving down a street, you track its trajectory and speed; you do not calculate the exact reflection of light on every single leaf of the trees in the background. 

JEPA models reproduce this human cognitive shortcut. Instead of forcing the neural network to predict exactly what the next frame of a video will look like, the model learns a smaller set of abstract, or “latent,” features. It discards the irrelevant details and focuses entirely on the core rules of how elements in the scene interact. This makes the model robust against background noise and small changes that break other models.
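The latent-prediction idea can be illustrated with a toy sketch. Everything here (the frame format, the encoder, the constant-velocity predictor) is invented for illustration and is not AMI's implementation; the point is only that prediction error is measured in the abstract latent space, so arbitrary background changes contribute nothing to the loss.

```python
import random

def make_frame(car_pos, n_background=16):
    """A toy 'frame': one tracked object plus noisy background pixels."""
    background = [random.random() for _ in range(n_background)]  # irrelevant detail
    return {"car_pos": car_pos, "pixels": background}

def encode(frame):
    """Encoder: keep only the task-relevant latent (the car's position),
    discarding the background pixels entirely."""
    return frame["car_pos"]

def predict_latent(latent, velocity=1.0, dt=1.0):
    """Latent predictor: advance the abstract state, not the pixels."""
    return latent + velocity * dt

# Two consecutive frames whose background noise is completely different.
frame_t = make_frame(car_pos=10.0)
frame_t1 = make_frame(car_pos=11.0)

predicted = predict_latent(encode(frame_t))
latent_error = abs(predicted - encode(frame_t1))
print(latent_error)  # 0.0: the changed background never enters the loss
```

A pixel-level model would be penalized for failing to reproduce every random background value; the latent model is indifferent to them by construction, which is the source of the robustness described above.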

This architecture is highly compute- and memory-efficient. By ignoring irrelevant details, it requires far fewer training examples and runs with significantly lower latency. These characteristics make it suitable for applications where efficiency and real-time inference are non-negotiable, such as robotics, self-driving cars, and high-stakes enterprise workflows. 

For example, AMI is partnering with healthcare company Nabla to use this architecture to simulate operational complexity and reduce cognitive load in fast-paced healthcare settings. 

In an interview with Newsweek, Yann LeCun, a pioneer of the JEPA architecture and co-founder of AMI, explained that world models based on JEPA are designed to be "controllable in the sense that you can give them goals, and by construction, the only thing they can do is accomplish those goals."

Gaussian splats: built for space

A second approach leans on generative models to build complete spatial environments from scratch. Adopted by companies like World Labs, this method takes an initial prompt (it could be an image or a textual description) and uses a generative model to create a 3D Gaussian splat. A Gaussian splat is a technique for representing 3D scenes using millions of tiny, mathematical particles that define geometry and lighting. Unlike flat video generation, these 3D representations can be imported directly into standard physics and 3D engines, such as Unreal Engine, where users and other AI agents can freely navigate and interact with them from any angle.
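A minimal sketch of the underlying data structure may help. Real Gaussian splats use a full 3x3 covariance per particle (for anisotropic shapes) and alpha-blend millions of them along camera rays; the simplified, isotropic version below is illustrative only and is not World Labs' format.

```python
import math
from dataclasses import dataclass

@dataclass
class Splat:
    """One particle of a Gaussian-splat scene (isotropic for simplicity)."""
    mean: tuple      # 3D center (x, y, z)
    scale: float     # standard deviation of the Gaussian falloff
    color: tuple     # RGB in [0, 1]
    opacity: float   # peak opacity in [0, 1]

def density(splat, point):
    """Gaussian falloff of a splat's opacity at a 3D query point."""
    sq_dist = sum((p - m) ** 2 for p, m in zip(point, splat.mean))
    return splat.opacity * math.exp(-sq_dist / (2 * splat.scale ** 2))

# A scene is just a collection of such particles defining geometry and light.
scene = [
    Splat(mean=(0, 0, 0), scale=1.0, color=(1, 0, 0), opacity=0.9),
    Splat(mean=(5, 0, 0), scale=0.5, color=(0, 1, 0), opacity=0.7),
]

# At a splat's center you see its full opacity; far away, almost none.
print(round(density(scene[0], (0, 0, 0)), 3))  # 0.9
print(round(density(scene[1], (0, 0, 0)), 3))  # 0.0
```

Because the scene is an explicit set of particles rather than a video stream, it can be handed to a conventional renderer or physics engine and viewed from any angle, which is what makes the export-to-Unreal-Engine workflow possible.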

The primary benefit here is a drastic reduction in the time and cost of creating complex interactive 3D environments: generation is a one-time expense, after which the scene can be reused and explored indefinitely. It addresses the exact problem outlined by World Labs founder Fei-Fei Li, who noted that LLMs are ultimately like “wordsmiths in the dark,” possessing flowery language but lacking spatial intelligence and physical experience. World Labs’ Marble model aims to give AI that missing spatial awareness. 

While this approach is not designed for split-second, real-time execution, it has massive potential for spatial computing, interactive entertainment, industrial design, and building static training environments for robotics. The enterprise value is evident in Autodesk’s heavy backing of World Labs to integrate these models into their industrial design applications.

End-to-end generation: built for scale

The third approach uses an end-to-end generative model to process prompts and user actions, continuously generating the scene, physical dynamics, and reactions on the fly. Rather than exporting a static 3D file to an external physics engine, the model itself acts as the engine. It ingests an initial prompt alongside a continuous stream of user actions, and it generates the subsequent frames of the environment in real-time, calculating physics, lighting, and object reactions natively. 
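The control loop is easy to sketch. In the toy version below, `step_model` is a stand-in for the learned generative model (Genie 3 or Cosmos would be doing this with neural networks over pixels); the structure of the loop, where the model consumes the current state plus a user action and emits the next frame, is the point being illustrated.

```python
def step_model(state, action):
    """Stand-in for a learned end-to-end world model: given the current
    state and a user action, return (next_state, rendered_frame).
    Here we fake it with trivial grid physics."""
    x, y = state
    dx, dy = {"up": (0, 1), "down": (0, -1),
              "left": (-1, 0), "right": (1, 0)}[action]
    next_state = (x + dx, y + dy)
    frame = f"object at {next_state}"  # stand-in for generated pixels
    return next_state, frame

state = (0, 0)
frames = []
for action in ["right", "right", "up"]:  # a continuous stream of user actions
    state, frame = step_model(state, action)
    frames.append(frame)

print(frames[-1])  # object at (2, 1)
```

Note that there is no separate physics engine or scene file anywhere in the loop: the model's forward pass is the simulation, which is both the appeal (one component, infinite scenes) and the cost (every frame is a full model inference).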

DeepMind’s Genie 3 and Nvidia’s Cosmos fall into this category. These models provide a remarkably simple interface for generating endless interactive experiences and massive volumes of synthetic data. DeepMind demonstrated this with Genie 3, showing how the model maintains strict object permanence and consistent physics at 24 frames per second without relying on a separate memory module.

This approach translates directly into heavy-duty synthetic data factories. Nvidia Cosmos uses this architecture to scale synthetic data and physical AI reasoning, allowing autonomous vehicle and robotics developers to synthesize rare, dangerous edge-case conditions without the cost or risk of physical testing. Waymo (a fellow Alphabet subsidiary) built its world model on top of Genie 3, adapting it for training its self-driving cars.

The downside to this end-to-end generative method is the substantial compute cost of continuously rendering physics and pixels simultaneously. Still, the investment is necessary to achieve the vision laid out by Hassabis, who argues that a deep, internal understanding of physical causality is required for AI to operate safely in the real world.

What comes next: hybrid architectures

LLMs will continue to serve as the reasoning and communication interface, but world models are positioning themselves as foundational infrastructure for physical and spatial data pipelines. As the underlying models mature, we are seeing the emergence of hybrid architectures that draw on the strengths of each approach. 

For example, cybersecurity startup DeepTempo recently developed LogLM, a model that integrates elements from LLMs and JEPA to detect anomalies and cyber threats from security and network logs. 
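One way such a hybrid can work, sketched purely as an assumption and not as DeepTempo's actual design: an encoder maps log lines into a latent space (the JEPA-style component), and anomalies are flagged by distance from the centroid of normal traffic, with an LLM left to explain flagged events in natural language. The toy character-frequency "encoder" below stands in for a learned one.

```python
def embed(log_line):
    """Toy stand-in encoder: normalized character-frequency features.
    A real system would use a learned latent representation."""
    vec = [0.0] * 26
    for ch in log_line.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    total = sum(vec) or 1
    return [v / total for v in vec]

def centroid(vectors):
    """Mean of the latent vectors for known-normal logs."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def anomaly_score(vec, center):
    """Euclidean distance from the normal-traffic centroid."""
    return sum((a - b) ** 2 for a, b in zip(vec, center)) ** 0.5

normal_logs = ["GET /index 200", "GET /home 200", "GET /about 200"]
center = centroid([embed(line) for line in normal_logs])

score_normal = anomaly_score(embed("GET /index 200"), center)
score_odd = anomaly_score(embed("rm -rf / && curl evil.example"), center)
print(score_odd > score_normal)  # True: the unusual line sits far from normal
```

The division of labor mirrors the hybrid thesis above: the latent component does the fast, robust pattern work, while the language component remains the reasoning and communication interface.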
