When AI looks like AI: The marketing case for knowing when to say no
JJ
The public is getting better at noticing synthetic AI visuals and audio. Is it time to hit pause on these types of AI-generated assets, or are we just doing it wrong?
Last week I was scrolling through my YouTube feed and stopped dead at an ad for a snack product. The spokesperson looked like they'd been sculpted from matte plastic. The skin had that tell-tale AI smoothness. It was too perfect, too even, completely devoid of the micro-texture that makes a human face feel real. The hands and eyes were subtly wrong. I stared at it longer than I would have stared at a good ad, but not for the right reasons.
It raised two questions that I haven't been able to shake. First: was this a budget problem, a skills problem, or is some level of obvious AI "fakeness" quietly becoming an accepted aesthetic, in the way that Pixar's stylized sheen became the defining look of animated cinema for a generation? And second: has the industry's rush to deploy AI-generated imagery outpaced the technology's ability to actually deliver on the implicit promise of photo-realism?
I think we're at an inflection point. And the industry is not handling it gracefully.
The Uncanny Valley Has a PR Problem
There's a well-documented psychological phenomenon called the "uncanny valley," that deeply uncomfortable feeling you get when something looks almost human, but not quite. Robotics researchers identified it decades ago, but it has become a defining challenge of AI-generated marketing in 2025 and 2026.
South Korea's GS25 convenience store released an AI-generated campaign that drew immediate public backlash; "creepy," "soulless," and "plastic" were among the reactions, and it became a case study in what happens when a brand outsources emotional nuance to a generator. Coca-Cola, with all its resources, has now stumbled twice. The comparison between its 2024 and 2025 holiday campaigns reads like a strategy shift from technical ambition to aesthetic course correction, driven by the need to resolve the very same uncanny valley problem. Yet critics still found the end product "crummy looking" despite a year of significant advances in AI video technology.
This is not a small-brand problem. This is an industry-wide reckoning with the gap between what AI image and video tools promise and what they actually deliver when the ask is photorealistic human beings.
Are Consumers Getting Smarter, or Are We Getting Sloppier?
Here's where the data gets genuinely interesting, and a little contradictory.
A Clutch survey found that 57% of consumers couldn't identify AI-generated photos when tested, even though 66% were confident they could spot them beforehand. After the test, their confidence dropped to 56%. On the surface, that might sound like good news for marketers using AI. But read it again: confidence dropped. People walked away from the test more suspicious than when they started. And suspicion, once it enters the frame, is corrosive.
The same study found that 84% of consumers want brands to disclose AI imagery, and trust drops sharply when disclosure is missing. Meanwhile, research from the Nuremberg Institute for Market Decisions found that ads described as AI-generated were perceived more negatively than identical ads presented as human-made, with participants showing less inclination to click or engage with the product.
So even when consumers can't tell the difference, the mere knowledge that AI was involved changes their behavior. That's a strategic problem that no improvement in render quality is going to solve.
The Style Question: Sloppy Marketing or a New Aesthetic?
Back to my plastic snack spokesperson. I keep coming back to that first question: is there an emerging AI aesthetic that audiences might eventually accept the way they accept animation or CGI?
It's not a crazy premise. Every visual medium develops its own signature look, and sometimes that look becomes the point rather than a limitation. Early CGI in film was laughable compared to live action, and then Toy Story made the artificiality part of the magic. Could there be a version of that for AI-generated advertising?
There are already cases that suggest yes — but with a crucial caveat. Dior's collaboration with virtual influencer Noonoouri, known for her animated, illustrative style rather than hyperrealism, reportedly exceeded human influencer benchmarks by 42% in campaign ROI while cutting content production costs by 76%. The key was that Noonoouri doesn't try to be real. She's a fashion cartoon, and she's transparent about it. Experts in the space now recommend stylized realism over photorealistic replication specifically to avoid the uncanny valley effect.
That's a meaningful distinction. The brands getting punished aren't the ones leaning into a clearly synthetic aesthetic; they're the ones attempting photorealism and falling short. There's a massive difference between "this is obviously a stylized AI character" and "this is supposed to look like a real person but something is deeply wrong."
The "AI Phobia" in the Room
I want to name something that doesn't get talked about enough: the irrational end of the anti-AI backlash. I call it AI phobia, and I see it regularly among senior marketing colleagues who have written off all AI-generated visual and audio content as categorically unacceptable. The fear is real, but it's often rooted in a misunderstanding of what the technology can and can't do, and in which contexts.
Not all AI-generated visual work is a plastic spokesperson hawking snacks. There are AI-generated campaigns out there in artistic, illustrative, or abstract styles that are extraordinary, work that would be prohibitively expensive or technically impossible to achieve by traditional means. Notably, the ability to detect AI imagery diminishes considerably among consumers aged 40 and above, which means the generation most likely to be making marketing decisions is also the most likely to overestimate how much the average consumer can spot. That mismatch leads to overcorrection.
The solution isn't to ban AI from your creative workflow. It's to be strategic about where you deploy it and honest about what it can currently deliver.
So, Should We Hit Pause?
Here's my honest take: no, but with some real caveats.
We should absolutely pause on AI-generated photorealistic human beings in consumer-facing advertising until the technology genuinely closes the gap. The uncanny valley in AI advertising isn't a glitch, it's a signal that emotional intelligence is failing. When a brand outsources emotional nuance to a generator, the audience notices instantly. Using AI to produce a "real-looking" person who ends up looking like a mannequin isn't a budget saving; it's a trust cost. And trust, once lost, is expensive to rebuild.
But that's a very specific use case. For concepting, storyboarding, and internal comping, AI is a gift. For stylized, illustrative, or clearly non-photorealistic work, it can be exceptional. For abstract, environmental, or product-focused visuals that don't involve people, the quality bar is already high enough that the risk is much lower. The rule of thumb I'm applying in 2026: if the AI asset needs to fool the eye, it probably isn't ready for final media, and careful judgment is required. If it's making no such claim, proceed with strategy and disclosure.
The Covenant Still Applies
I've written before about the need for an AI covenant with customers. This is a published framework for how and when you'll use AI, and what you'll disclose. That thinking applies here more urgently than ever. When AI is paired with strong human insight, cultural clarity, and transparent intent, it stops feeling uncanny and starts feeling inevitable. That's the creative moment the best marketers are operating in right now.
The brands getting hurt aren't the ones using AI. They're the ones using it carelessly to chase the cost savings without asking whether the output actually serves the audience.
The technology will get better. It always does. But in the meantime, the smartest move is to match the tool to the task, stop asking AI to fake things it can't convincingly fake, and be honest with your audience about what they're looking at. They're watching more closely than you think.
