Tech

Prompt Security's Itamar Golan on why generative AI security requires building a category, not a feature

By Scoopico
Published: November 27, 2025
Last updated: November 27, 2025, 4:33 pm



VentureBeat recently sat down (virtually) with Itamar Golan, co-founder and CEO of Prompt Security, to talk through the GenAI security challenges organizations of all sizes face.

We talked about shadow AI sprawl, the strategic decisions that led Golan to pursue building a market-leading platform rather than competing on features, and a real-world incident that crystallized why protecting AI applications isn't optional anymore. Golan offered an unvarnished view of the company's mission to empower enterprises to adopt AI securely, and how that vision led to SentinelOne's estimated $250 million acquisition in August 2025.

Golan's path to founding Prompt Security began with academic work on transformer architectures, well before they became foundational to today's large language models. His experience building one of the earliest GenAI-powered security features using GPT-2 and GPT-3 convinced him that LLM-driven applications were creating an entirely new attack surface. He founded Prompt Security in August 2023, raised $23 million across two rounds, built a 50-person team, and achieved a successful exit in under two years.

The timing of our conversation couldn't be better. VentureBeat analysis shows shadow AI now costs enterprises $4.63 million per breach, 16% above average, yet 97% of breached organizations lack basic AI access controls, according to IBM's 2025 data. VentureBeat estimates that shadow AI apps could double by mid-2026 based on current 5% monthly growth rates. Cyberhaven data shows 73.8% of ChatGPT workplace accounts are unauthorized, and enterprise AI usage has grown 61x in just 24 months. As Golan told VentureBeat in earlier coverage, "We see 50 new AI apps a day, and we've already cataloged over 12,000. Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models."

The following has been edited for clarity and length.

VentureBeat: What made you recognize that GenAI security needed a dedicated company when most enterprises were still figuring out how to deploy their first LLMs? Was there a specific moment, customer conversation, or attack pattern you saw that convinced you this was a fundable, venture-scale opportunity?

Itamar Golan: From an early age, I was drawn to mathematics, data, and the emerging world of artificial intelligence. That curiosity shaped my academic path, culminating in a study of transformer architectures, well before they became foundational to today's large language models. My passion for AI also guided my early career as a data scientist, where my work increasingly intersected with cybersecurity.

Everything accelerated with the release of the first OpenAI API. Around that time, as part of my previous job, I teamed up with Lior Drihem, who would later become my co-founder and Prompt Security's CTO. Together, we built one of the earliest security features powered by generative AI, using GPT-2 and GPT-3 to generate contextual, actionable remediation steps for security alerts. This reduced the time security teams needed to understand and resolve issues.

That experience made it clear that applications powered by GPT-like models were opening an entirely new and vulnerable attack surface. Recognizing this shift, we founded Prompt Security in August 2023 to address these emerging risks. Our goal was to empower organizations to ride this wave of innovation and unleash the potential of AI without it becoming a security and governance nightmare.

Prompt Security became known for prompt injection defense, but you were solving a broader set of GenAI security challenges. Walk me through the full scope of what the platform addressed: data leakage, model governance, compliance, red teaming, whatever else. What capabilities ended up resonating most with customers that may have surprised you?

From the beginning, we designed Prompt Security to cover a broad range of use cases. Focusing solely on employee monitoring or prompt-injection protection for internal AI applications was never enough. To truly give security teams the confidence to adopt AI safely, we needed to protect every touchpoint across the organization, and do it all at runtime.

For many customers, the real turning point was discovering just how many AI tools their employees were already using. Early on, companies often found not just ChatGPT but dozens of unmanaged AI services in active use, completely outside IT's visibility. That made shadow AI discovery a critical part of our solution.
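The discovery problem described here can be illustrated with a toy egress-log scan. Everything in this sketch (the domain list, the log format, the function name) is hypothetical; a real discovery engine catalogs thousands of AI services and inspects traffic at runtime rather than parsing static logs:

```python
# Hypothetical domain list and "user host" log format, for illustration only.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(proxy_log_lines, sanctioned):
    """Return AI services seen in egress traffic that IT never approved."""
    found = set()
    for line in proxy_log_lines:
        user, host = line.split()  # assume simple "user host" records
        if host in KNOWN_AI_DOMAINS and host not in sanctioned:
            found.add(host)
    return found

logs = ["alice chat.openai.com", "bob intranet.corp", "carol claude.ai"]
print(find_shadow_ai(logs, sanctioned={"chat.openai.com"}))  # → {'claude.ai'}
```

Even this toy version shows why discovery matters: the sanctioned list is what IT thinks is in use, and the diff against observed traffic is the shadow inventory.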

Equally important was real-time sensitive-data sanitization. Instead of blocking AI tools outright, we enabled employees to use them safely by automatically removing sensitive information from prompts before it ever reached an external model. It struck the balance organizations needed: strong security without sacrificing productivity. Employees could keep working with AI, while security teams knew that no sensitive data was leaking out.
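The redaction idea can be sketched in a few lines. The patterns below are illustrative stand-ins, not Prompt Security's implementation; a production sanitizer would rely on much richer detectors (NER models, secret scanners, customer-defined entity rules):

```python
import re

# Toy detection patterns for illustration; real detectors are far broader.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders before the
    prompt is forwarded to an external model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(sanitize_prompt("Email jane.doe@acme.com about SSN 123-45-6789"))
# → Email [REDACTED_EMAIL] about SSN [REDACTED_SSN]
```

The typed placeholders matter: the model still sees that an email or ID was present, so responses stay coherent, but the actual value never leaves the organization.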

What surprised many customers was how enabling safe usage, rather than restricting it, drove faster adoption and trust. Once they saw AI as a managed, secure channel instead of a forbidden one, usage exploded responsibly.

You built Prompt Security into a market leader. What were the two or three strategic decisions that actually accelerated your growth? Was it focusing on a specific vertical?

Looking back, the real acceleration didn't come from luck or timing: It came from a few deliberate decisions I made early. Those decisions were uncomfortable, expensive, and slowed us down in the short term, but they created enormous leverage over time.

First, I chose to build a category, not a feature. From day one, I refused to position Prompt Security as "just" protection against prompt injection or data leakage, because I saw that as a dead end.

Instead, I framed Prompt as the AI security control layer for the enterprise, the platform that governs how humans, agents, and applications interact with LLMs. That decision was fundamental, allowing us to create a budget instead of fighting for one, sit at the CISO table as a strategic layer rather than a tool, and build platform-level pricing and long-term relevance instead of a narrow point solution. I wasn't trying to win a feature race; I was building a new category.

Second, I chose enterprise complexity before it was comfortable. While most startups avoid complexity until they're forced into it, I did the opposite: I built for enterprise deployment models early, including self-hosted and hybrid; covered real enterprise surfaces like browsers, IDEs, internal tools, MCPs, and agentic workflows; and accepted longer cycles and more complex engineering in exchange for credibility. It wasn't the easiest route, but it gave us something competitors couldn't fake: enterprise readiness before the market even knew it would need it.

Third, I chose depth over logos. Rather than chasing volume or vanity metrics, I went deep with a smaller number of very serious customers, embedding ourselves into how they rolled out AI internally, how they thought about risk, policy, and governance, and how they planned long-term AI adoption. Those customers didn't just buy the product: they shaped it. That created a product that reflected enterprise reality, produced proof points that moved boardrooms and not just security teams, and built a level of defensibility that came from entrenchment rather than marketing.

You were educating the market on threats most CISOs hadn't even considered yet. How did your positioning and messaging evolve from year one to the acquisition?

In the early days, we were educating a market that was still trying to understand whether AI adoption extended beyond a few employees using ChatGPT for productivity. Our positioning focused heavily on awareness, showing CISOs that AI usage was already sprawling across their organizations and that this created real, immediate risks they hadn't accounted for.
As the market matured, our messaging shifted from "this is happening" to "here's how you stay ahead." CISOs now fully recognize the scale of AI sprawl and know that simple URL filtering or basic controls won't suffice. Instead of debating the problem, they're looking for a way to enable safe AI use without the operational burden of tracking every new tool, website, copilot, or AI agent employees discover.

By the time of the acquisition, our positioning centered on being the safe enabler: a solution that delivers visibility, protection, and governance at the speed of AI innovation.

Our research shows that enterprises are struggling to get approval from senior management to deploy GenAI security tools. How are security departments persuading their C-level executives to move forward?

The most successful CISOs are framing GenAI security as a natural extension of existing data protection mandates, not an experimental budget line. They position it as protecting the same assets (corporate data, IP, and user trust) in a new, rapidly growing channel.

What's the most serious GenAI security incident or near miss you encountered while building Prompt Security that really drove home how critical these protections are? How did that incident shape your product roadmap or go-to-market approach?

The moment that crystallized everything for me happened with a large, highly regulated company that launched a customer-facing GenAI support agent. This wasn't a sloppy experiment. They had everything the security textbooks recommend: WAF, CSPM, shift-left, regular red teaming, a secure SDLC, the works. On paper, they were doing everything right.

What they didn't fully account for was that the AI agent itself had become a new, exposed attack surface. Within weeks of launch, a non-technical user discovered that by carefully crafting the right conversation flow (not code, not exploits, just natural language) they could prompt-inject the agent into revealing information from other customers' support tickets and internal case summaries. It wasn't a nation-state attacker. It wasn't someone with advanced skills. It was essentially a curious user with time and creativity. And yet, through that single conversational interface, they managed to access some of the most sensitive customer data the company holds.

It was both fascinating and terrifying: realizing how creativity alone could become an exploit vector.

That was the moment I truly understood what GenAI changes about the threat model. AI doesn't just introduce new risks, it democratizes them. It makes systems hackable by people who never had the skill set before, compresses the time it takes to discover exploits, and massively expands the damage radius once something breaks. That incident validated our original approach, and it pushed us to double down on protecting AI applications, not just internal use. We accelerated work around:

• Runtime protection for customer-facing AI apps

• Prompt injection and context manipulation detection

• Cross-tenant data leakage prevention at the model interaction layer
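One way to read the cross-tenant item above is that isolation has to be enforced at the data layer, not left to the model's judgment. A minimal sketch under assumed names (`Ticket` and `build_context` are hypothetical, not a real Prompt Security API):

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    tenant_id: str
    text: str

def build_context(retrieved: list, requesting_tenant: str) -> str:
    """Filter retrieved records to the requester's tenant before they enter
    the model's context window. Since the model never sees cross-tenant
    data, no conversation flow can talk it into revealing any."""
    return "\n".join(t.text for t in retrieved if t.tenant_id == requesting_tenant)

tickets = [Ticket("acme", "printer broken"), Ticket("globex", "vpn issue")]
print(build_context(tickets, "acme"))  # → printer broken
```

The design choice matters for exactly the incident described above: a guardrail in the system prompt can be argued around, but data the model was never given cannot be leaked.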

It also reshaped our go-to-market. Instead of only talking about internal AI governance, we began showing security leaders how GenAI turns their customer-facing surfaces into high-risk, high-exposure assets overnight.

What's your role and focus now that you're part of SentinelOne? How has working inside a larger platform company changed what you're able to build compared to running an independent startup? What got easier, and what got harder?

The focus now is on extending AI security across the full platform, bringing runtime GenAI protection, visibility, and policy enforcement into the same ecosystem that already secures endpoints, identities, and cloud workloads. The mission hasn't changed; the reach has.

Ultimately, we're building toward a future where AI itself becomes part of the defense fabric: not just something to secure, but something that secures you.

The bigger picture

M&A activity continues to accelerate for GenAI startups that have proven they can scale to enterprise-grade security without sacrificing accuracy or speed. Palo Alto Networks paid $700 million for Protect AI. Tenable acquired Apex for $100 million. Cisco bought Robust Intelligence for a reported $500 million. As Golan noted, the companies that survive the next wave of AI-enabled attacks will be those that embedded security into their AI adoption strategy from the beginning.

Post-acquisition, Prompt Security's capabilities will extend across SentinelOne's Singularity Platform, including MCP gateway protection between AI applications and more than 13,000 known MCP servers. Prompt Security will also deliver model-agnostic coverage across all major LLM providers, including OpenAI, Anthropic, and Google, as well as self-hosted and on-prem models, as part of the company's integration into the Singularity Platform.
