For the past 40 years, Henry and Margaret Tanner have been crafting leather shoes by hand in their small workshop in Boca Raton, Florida. "No shortcuts, no cheap materials, just honest, top-notch craftsmanship," Henry says in a YouTube advertisement for his business, Tanner Footwear.
What's even more remarkable?
Henry has been able to do all this despite his mangled, twisted hand. And poor Margaret only has three fingers, as you can see in this photo of the couple from their website.
An AI-generated image recently deleted from the Tanner Footwear website.
Credit: Tanner Footwear
I discovered Tanner Footwear through a series of YouTube video ads. Having written about men's fashion for years, I was curious about these bespoke leather shoemakers. In a typical YouTube ad for Tanner Footwear, a video of an older man, presumably Henry, is imposed over footage of "handmade" leather shoes, as he wearily intones, "They don't make them like they used to, but for 40 years we did…Customers say our shoes have a timeless look, and that they're worth every penny. But now, you won't have to spend much at all because we're retiring. For the first and last time, every last pair is 80 percent off."
I believe the Tanner Footwear "retirement" sale is every bit as real as the photos of Henry and Margaret Tanner. Outside of this advertisement, I've found no online presence for Henry and Margaret Tanner and no evidence of the Tanner Footwear business existing in Boca Raton. I reached out to Tanner Footwear to ask if its namesake owners exist, where the company is located, and if it is really closing soon, but I have not received a response.
Unsurprisingly, Reddit users have spotted nearly identical YouTube video ads for other phony mom-and-pop shops, showing that these misleading ads aren't a one-off. As one Reddit user said, "I've seen ads like this in German with an AI grandma supposedly closing her jewelry store and selling her 'hand-made' pieces at a discount." When I asked YouTube about the Tanner Footwear ads, the company suspended the advertiser's account for violating YouTube policies.

A screenshot of a Tanner Footwear ad featuring a likely AI "actor."
Credit: Tanner Footwear / YouTube
These ads are part of a growing trend of YouTube video advertisements featuring AI-generated content. AI video ads exist on Instagram and TikTok too, but I focused my investigation on YouTube, the original and most well-established video platform, which is owned by Google.
While AI has legitimate uses in advertising, many of the AI video ads I found on YouTube are deceptive, designed to trick the viewer into buying leather shoes or diet pills. While reliable stats on AI scams are hard to find, the FBI warned in 2024 that cybercrime utilizing AI is on the rise. Overall, online scams and phishing have increased 94 percent since 2020, according to a Bolster.ai report.
AI tools can quickly generate lifelike videos, pictures, and audio. Using tools like these, scammers and hustlers can easily create AI "actors," for lack of a better word, to appear in their ads.
In another AI video ad Mashable reviewed, an AI actor pretends to be a financial analyst. I received this advertisement repeatedly over a series of weeks, as did many Reddit and LinkedIn users.
In the video, the anonymous financial analyst promises, "I'm probably the only financial advisor who shares all his trades online," and that "I've won 18 of my last 20 trades." Just click the link to join a secret WhatsApp group. Other AI actors promise to help viewers discover an amazing weight loss secret ("I lost 20 pounds using just three ingredients I already had in the back of my fridge!"). And others are just straight-up celebrity deepfakes.

An AI-generated financial advisor that appeared in YouTube advertisements.
Credit: YouTube / Mashable Photo Composite
Celebrity deepfakes and deceptive AI video ads
I was surprised to find former Today host Hoda Kotb promoting sketchy weight loss tricks on YouTube, but there she was, casually speaking to the camera.
"Girls, the new viral recipe for pink salt was featured on the Today show, but for those of you who missed the live show, I'm here to teach you how to do this new 30-second trick that I get so many requests for on social media. As a solo mom of two girls, I barely have time for myself, so I tried the pink salt trick to lose weight faster, only I had to stop, because it was melting too fast."

Sadly, pink salt won't magically make you skinny, no matter what fake Hoda Kotb says. (AI-generated material)
Credit: YouTube
This fake Kotb promises that even though this weight loss secret sounds too good to be true, it's definitely legit. "This is the same recipe Japanese celebrities use to get skinny. When I first learned about this trick, I didn't believe it either. Harvard and Johns Hopkins say it's 12 times easier than Mounj (sic)…If you don't lose at least four chunks of fat, I'll personally buy you a case of Mounjaro pens."
Click the ad, and you'll be taken to yet another video featuring even more celebrity deepfakes and sketchy customer "testimonials." Spoiler alert: This video culminates not in the promised weight loss recipe, but in a promotion for Exi Shred diet pills. Representatives for Kotb didn't respond to a request for comment, but I found the original video used to create this deepfake. The real video was originally posted on April 28 on Instagram, and it was already being used in AI video ads by May 17.
Kotb is just another victim of AI deepfakes, which are sophisticated enough to slip past YouTube's ad review process.
Sometimes, these AI creations appear real at first, but pay attention, and you'll often find a clear tell. Because the Kotb deepfake used an altered version of a real video, the fake Kotb cycles through the same facial expressions and hand movements repeatedly. Another dead giveaway? These AI impersonators will often inexplicably mispronounce a common word.
The AI financial analyst promises to livestream trades on Twitch, only it mispronounces livestream as "give-stream," not "five-stream." And in AI videos about weight loss, AI actors will trip up over simple phrases like "I lost 35 lbs," awkwardly pronouncing "lbs" as "ell-bees." I've also seen phony Elon Musks pronounce "DOGE" like "doggy" in crypto scams.
However, there isn't always a tell.
Can you tell what's real? Are you sure?

Can you tell what's real?
Credit: Screenshot courtesy of YouTube
Once I started investigating AI video ads on YouTube, I began to scrutinize every single actor I saw. It's not always easy to tell the difference between a carefully airbrushed model and a glossy AI creation, or to separate bad acting from a digitally altered influencer video.
So, every time YouTube played a new ad, I questioned every little detail: the voice, the clothes, the facial tics, the glasses. What was real? What was fake?
Surely, I thought, that's not Fox News host Dr. Drew Pinsky hawking overpriced supplements, but another deepfake? And is that really Bryan Johnson, the "I want to live forever" viral star, selling "Longevity protein" and extra virgin olive oil? Actually, yes, it turns out they are. Remember, plenty of celebrities really do appear in commercials and YouTube ads.
Okay, but what about that shiny bald guy with a super-secret formula for lowering cholesterol that the pharmaceutical companies don't want you to know about? And is that girl-next-door type in the glasses really selling software to automate my P&L and balance sheets? I genuinely don't know what's real anymore.
Watch enough YouTube video ads, and the overly filtered models and influencers all start to look like artificial people.

Can you tell which of these videos are real?
Credit: YouTube / TikTok / Mashable Photo Composite
How to identify AI-generated videos
To make things more complicated, most of the AI video ads I found on YouTube didn't feature characters and sets created from scratch.
Rather, the advertisers take real social media videos and alter the audio and lip movements to make the subjects say whatever they want. Henry Ajder, an expert on AI deepfakes, told me that these types of AI videos are common because they're cheap and easy to make with widely available synthetic lip synchronization and voice cloning tools. These more sophisticated AI videos are almost impossible to definitively identify as AI at a glance.
"With just 20 seconds of a person's voice and a single photograph of them, it is now possible to create a video of them saying or doing anything," Hany Farid, a professor at the University of California, Berkeley, and an expert in artificial intelligence, said in an email to Mashable.
Ajder told me there are also several tools for "the creation of entirely AI-generated influencer-style content." And just this week, TikTok announced new AI-generated influencers that advertisers can use to create AI video ads.

TikTok now offers several "digital avatars" for creating influencer-style video ads.
Credit: TikTok
YouTube is supposed to have safeguards against deceptive ads. Google's generative AI policies and YouTube's rules against misrepresentation prohibit using AI for "misinformation, misrepresentation, or deceptive activities," including for "Frauds, scams, or other deceptive practices." The policies also forbid "Impersonating an individual (living or dead) without explicit disclosure, in order to deceive."
So, what gives?
Consumers deserve clear disclosures for AI-generated content
For viewers who want to know the difference between reality and unreality, clear AI content labels in video advertisements could help.
When scrolling YouTube, you may have noticed that certain videos now carry a tag, which reads "Altered or synthetic content / Sound or visuals were significantly edited or digitally generated." Instead of placing a prominent tag over the video itself, YouTube typically puts this label in the video description.
You might assume that a video advertisement on YouTube generated by AI would be required to use this disclosure, but according to YouTube, that's not actually the case.
Using AI-generated material doesn't violate YouTube ad policies (indeed, it's encouraged), nor is disclosure generally required. In fact, YouTube only requires AI disclosures for ads that use AI-generated content in election-related videos or political content.

The synthetic content label in the description of an AI short film on YouTube.
Credit: YouTube
In response to Mashable's questions about AI video ads, Michael Aciman, a Google Policy Communications Manager, provided this statement: "We have clear policies and transparency requirements for the use of AI-generated content in ads, including disclosure requirements for election ads and AI watermarks on ad content created with our own AI tools. We also aggressively enforce our policies to protect people from harmful ads, including scams, regardless of how the ad is created."
There's another reason why AI video ads that violate YouTube's policies slip through the cracks: the sheer volume of videos and ads uploaded to YouTube each day. How big is the problem? A Google spokesperson told Mashable the company permanently suspended more than 700,000 scam advertiser accounts in 2024 alone. Not 700,000 scam videos, but 700,000 scam advertiser accounts. According to Google's 2024 Ads Safety Report, the company stopped 5.1 billion "bad ads" last year across its expansive ad network, including almost 147 million ads that violated its misrepresentation policy.
YouTube's solution to deceptive AI content? More AI, of course. While human reviewers are still used for some videos, YouTube has invested heavily in automated systems that use LLM technology to review ad content. "To address the rise of public figure impersonation scams over the last year, we quickly assembled a dedicated team of over 100 experts to analyze these scams and develop effective countermeasures, such as updating our Misrepresentation policy to suspend the advertisers that promote these scams," a Google representative told Mashable.
When I asked the company about specific AI videos described in this article, YouTube suspended at least two advertiser accounts; users can also report deceptive ads for review.
However, while celebrity deepfakes are a clear violation of YouTube's ad policies (and federal law), the rules governing AI-generated actors and ads in general are far less clear.
AI video isn't going away
If YouTube fills up with AI-generated videos, you won't have to look far for an explanation. The call is very much coming from inside the house. At Google I/O 2025, Google introduced Veo 3, a breakthrough new model for creating AI video and dialogue. Veo 3 is an impressive leap forward in AI video creation, as I've reported previously for Mashable.
To be clear, Veo 3 was released too recently to be behind any of the deceptive videos described in this story. On top of that, Google includes a hidden watermark in all Veo 3 videos for identification (a visible watermark was recently introduced as well). However, with so many AI tools now available to the public, the volume of fake videos on the web is certain to grow.
One of the first viral Veo 3 videos I saw was a mock pharmaceutical ad. While the faux commercial was meant to be funny, I wasn't laughing. What happens when a pharmaceutical company uses an AI actor to portray a pharmacist or doctor?
Deepfake expert Henry Ajder says AI content in ads is forcing us to confront the deception that already exists in advertising.
"One of the big things that it's done is it's held up a looking glass for society, as kind of how the sausage is already being made, which is like, 'Oh, I don't like this. AI is involved. This feels not very trustworthy. This feels deceptive.' And then, 'Oh, wait, actually, that person in the white lab coat was just some random person they hired from an agency in the first place, right?'"
In the United States, TV commercials and other advertisements must abide by consumer protection laws and are subject to Federal Trade Commission regulations. In 2024, the FTC passed a rule banning the use of AI to impersonate government and business agencies, and Congress recently passed a law criminalizing deepfakes, the "Take It Down" Act. However, many AI-generated videos fall into a legal gray area with no explicit rules.
It's a complicated question: If an entire commercial is made with AI actors and no clear disclosure, is that advertisement definitionally deceptive? And is it any more deceptive than hiring actors to portray fake pharmacists, paying influencers to promote products, or using Photoshop to airbrush a model?
These aren't hypothetical questions. YouTube already promotes using Google AI technology to create advertising materials, including video ads for YouTube, to "save time and resources." In a blog post, Google touts how its "AI-powered advertising solutions can assist you with the creation and adaptation of videos for YouTube's wide range of ad formats." And based on the success of Google Veo 3, it seems inevitable that platforms like YouTube will soon allow advertisers to generate full-length ads using AI. Indeed, TikTok recently announced exactly this.
"With just 20 seconds of a person's voice and a single photograph of them, it is now possible to create a video of them saying or doing anything."
The FTC says that whether or not a company must disclose that it's using "AI actors" depends on the context, and that many FTC regulations are "technology neutral."
"Generally speaking, any disclosures that an advertiser needs to make about human actors (e.g., that they're only an actor and not a medical professional) would also be required for an AI-generated persona in a similar situation," an FTC representative with the Bureau of Consumer Protection told Mashable by email.
The same is true for an AI creation providing a "testimonial" in an advertisement. "If the AI-generated person is providing a testimonial (which would necessarily be fake) or claiming to have particular expertise (such as a medical degree or license or financial experience) that affects consumers' perception of the speaker's credibility, that could be deceptive," the representative said.
The FTC Act, a comprehensive statute that governs issues such as consumer reviews, prohibits the creation of fake testimonials. And in October 2024, the FTC regulation titled "Rule on the Use of Consumer Reviews and Testimonials" specifically banned fake celebrity testimonials.
However, some experts on deepfakes and artificial intelligence believe new regulations are urgently needed to protect consumers.
"The current U.S. laws on the use of another person's likeness are, at best, outdated and were not designed for the age of generative AI," Professor Farid said.
Again, the sheer volume of AI videos, and the ease of making them, will make enforcement of existing rules extremely difficult.
"I would go further and say that in addition to needing federal regulation around this issue, YouTube, TikTok, Facebook, and the others need to step up their enforcement to stop these types of fraudulent and misleading videos," Farid said.
And without clear, mandatory labels for AI content, deceptive AI video ads could soon become a fact of life.