

Aside from the odd six-fingered hand and otherworldly movements, AI-generated imagery and video are increasingly indiscernible from real life. Alongside this, ad targeting has become ever more granular, serving up tailor-made versions of reality to different groups and feeding into our biases.
In this uncertain space where we begin to question what’s real and what’s not, what happens to audience trust and how can brands navigate the challenge?
“AI-generated imagery and hyper-targeted content are making reality feel editable,” says Pooja Rawat, chief strategy officer at Edelman APAC. “As synthetic visuals become indistinguishable from the real, audiences won’t just question what they see; they’ll question the intent behind what they’re shown by brands.”
“There’s a lot of panic right now about the ‘reality wars’,” Abigail Kwek, group strategy director at R/GA, tells LBB. “As if audiences are one AI video away from losing all sense of what’s real. But for brands, the risk isn’t abstract. It’s practical. The real problem isn’t that people will spot AI-generated content. It’s that without clear signals, they’ll stop trusting anything at all. When you know anything can be made, you don’t look harder. You tune out.”
This shift is already happening. “People are leaning back into physical spaces and real-world experiences because they’re easier to trust,” Abigail says. “But brands can’t only exist offline. If you want to earn attention in feeds, you have to earn belief first. Audiences love fiction. We just don’t like being quietly steered. The moment AI shifts from telling stories to optimising realities, especially when different people are shown different versions of the truth, trust doesn’t just fade over time. It drops.”
To mitigate this, intent and consistency are key, states Ricardo Casal, CCO of GUT Miami. “People will stop trusting how things look, and start trusting who they come from. People will care more about transparency, about knowing when AI is used, and about whether there’s a real human or brand willing to stand behind the work.”
“In a media landscape of personalised realities, the brands that win will act as stable reference points, not shape-shifting mirrors,” agrees Callum McCahon, chief strategy officer at Born Social. “As feeds fill with technically flawless but emotionally hollow content, we’re seeing imperfection becoming a genuine trust signal. Rough edges, rawness, and even typos act as ‘proof of life’ in an increasingly synthetic environment.”
And when searching for intent, audiences will look for signals including “provenance markers on AI assets, editorial guardrails, context disclosures, and the presence of real humans who can be seen, named, and engaged with,” Pooja says. “Credibility becomes an infrastructure, not a tone of voice.”
“Brands that simply spin bespoke realities for every micro-segment will win in the short term and likely lose in the long run, because you can’t build lasting trust on shifting ground,” says Annie Hou, global head of data and AI at McCann US. “The new signals of trust are roots, resonance, and responsibility – staying true to the brand, creating work that genuinely means something, and behaving with integrity.”
Trinh Nguyen Khanh, strategic planner at Happiness Saigon, notes, “AI isn’t the threat here, feeling powerless is. Most audiences aren’t anti-AI, and they won’t reject brands simply for using it. They’re reacting to what the ‘reality wars’ expose: how easily blurred truth can make them feel vulnerable. It’s the sense that when reality becomes easier to manufacture, they themselves become easier to sway, nudge, or mislead without realising it.”
From a communications perspective, Trinh says, “AI won’t kill trust, but it will relocate it. Belief will move away from what brands show and towards what they consistently do. Product quality, service behaviour, community actions, and long-term choices become the proof points marketing can no longer simulate.”
“AI isn’t collapsing reality but exposing how fragile and negotiable it has always been,” Zoe Chen, strategy director at VIRTUE Asia, says. “Long before generative AI, we were already navigating competing versions of the world, shaped by culture, belief systems, fandoms, politics and algorithms. What’s changed is speed and scale. Our 24/7 media lets us watch these ‘worlds’ form, collide and shift in real time, and AI supercharges that by making creation and distribution almost instantaneous.
“Messages, narratives and versions of reality can now be produced and amplified by anyone, at speed, making it harder to hold on to a shared sense of what’s real,” Zoe says. “This is breeding mistrust and leaving audiences exhausted.”
With few clear regulations currently in place, brands are operating in a largely self-policed environment. This raises a critical question: what responsibility do brands hold in ensuring AI is used ethically, transparently, and in ways that protect consumers?
“With no external rulebook, brands had better become their own regulators,” warns Trinh. “That responsibility lives in their decisions: where AI is used and isn’t, and which shortcuts are refused even when no one is watching. Brands shouldn’t aim to own truth, but they are accountable for the realities they put into the world. The question isn’t, ‘is this realistic enough?’ but ‘is this consistent with how we actually behave through our product and service?’ In a world where reality is malleable, judgment becomes the real creative advantage.”
It’s about “using AI to move faster, not to pretend,” says Callum. “In the age of synthetic reality, credibility beats believability.”
“AI is changing how many versions of reality a brand can put out at once,” Zoe notes. “This makes it dangerously easy to become lazy: to let systems optimise media, messaging and imagery, watch the metrics rise, and mistake efficiency for progress. We’re already seeing different responses to this pressure. Aerie has chosen to limit fragmentation by rejecting AI-generated bodies, anchoring itself to a consistent, human version of beauty (its Instagram bio reads: ‘Real people only. No retouching. No AI. 100% Aerie Real.’).
“Duolingo, meanwhile, has faced criticism after replacing human translators with AI, with users reporting content becoming more robotic and less culturally sensitive, a reminder that scaling fast can pull brands away from lived reality,” she says. “From now on, credibility won’t just come from how advanced a brand’s AI is, but from how intentionally it’s used. That means being clear about what you will and won’t optimise for, when AI adds value and when human judgment matters more. The brands that win will be the ones led by a human point of view, not one generated and optimised by AI.”
“The opportunity is to get ahead of governance,” Pooja says. “Set your own internal rules when it comes to AI use, watermark synthetic content before you’re forced to, and treat transparency like a positive UX – ensuring it is designed, tested and optimised.”
This also raises a deeper responsibility question, she says: “If brands can shape not just perception but versions of reality, what role should they play? Reflection is passive, curation is selective, but stance is active. The most resonant brands won’t mirror reality; they’ll declare their version and be accountable to it.”
“On top of this, there’s a lot of grey area around ownership, likeness, and consent,” Ricardo says. “Brands can get burned even when they’re trying to do the right thing. The answer isn’t to freeze or play it ultra safe. The answer is to take smart risks, be clear about how AI is being used, and treat ethics as part of the creative process and not just a legal step at the end.”
Chris Cardetti, CSO at BarkleyOKRP, puts it well: “Producing AI content is as much an editorial decision as it is a budgetary decision. Does it fit the values of your brand? Will your consumer care or be put off? Will this erode trust or just deliver news? In transactional parts of the consumer journey, AI can scale ideas. But in other parts, AI must enable, not create.”
On this, VaynerMedia APAC’s strategy director (consulting), Cheryl Teng, says, “AI offers PhD-level reasoning but internship-level context. If we feed it zero original thought, we simply get a faster version of nothing – because zero times zero is still zero. In a slippery world, the brands that win will be those brave enough to amplify curiosity, not just output.”
“The era of AI content has actually made real-world actions by brands more important,” says Chris. “In an era where you can message 1000 things in 100 channels to a tiny group of consumers, brands must try to stay true to a single, unifying idea to attract and keep consumers. Use AI, but don't let it let you go flat. Look to real-world actions. Stay true to a unifying idea.”
“The real opportunity for brands is to build a common reality,” Annie says. “A shared structure of truths, values and promises that everyone can stand on, while using data and AI to meet people as individuals. That means creating a stable, truthful frame for how the brand shows up, then letting people explore, play and grow within it – different paths through the same house. Reflect the brand’s reality clearly, curate it thoughtfully, and be explicit about the beliefs that shape the brand world.”
“And, yes, niche audiences will get niche realities,” Pooja says. “The task isn’t to avoid that; it’s to prevent incoherence and ensure overarching consistency in the larger brand narrative. Personalisation must evolve into one truth, rendered differently: localisation without distortion; segmentation without alternate worlds. The operating question for 2026: does personalisation create relevance or alternate realities? If the answer is the latter: Pause. Rebuild. Re-anchor.”
Rather than a threat, Cheryl sees reality bubbles as an opportunity for brands to earn “radical relevance – if they show up with intent.”
“For decades, marketing has made the mistake of chasing the middle ground because it felt safe,” she says. “Think about the Air Force’s failed attempt in the 1950s to design a cockpit for the ‘average pilot’, only to discover none of the 4,000 pilots actually fit. But truth lives on the edges, in cohorts defined by real human tensions and quirks, like ‘first-time moms struggling to reclaim their identities’. AI now allows us to understand and serve these niche needs at a scale that once required an army, enabling genuine inclusivity.
“Today, deep relevance is what defines ‘premium’. In a world where AI manufactures high-fidelity visuals instantly, craft alone is no longer the differentiator. What matters is human signal. As audiences grow sceptical of synthetic perfection, they seek the judgment, taste, and intent that prove a soul is behind the message. For brands, the goal is to be a human anchor of truth.”