A troubling trend from Brighton SEO
At a recent Brighton SEO conference, one talk stood out – for all the wrong reasons. In front of a room of marketers and SEO professionals, a speaker explained how businesses could use AI to create a fake expert as a mouthpiece for their content.
As a digital-first PR agency, this immediately set alarm bells ringing. We help our clients build credibility, authority and trust. So the idea of creating a fictional person to front your brand’s expertise felt like a dangerous step in the wrong direction.
We discussed it with peers, our internal team and a legal expert client. The consensus? This isn’t clever marketing. It’s a credibility crisis waiting to happen.
Fake experts are already making it into the media
A recent article in PR Moment, ‘Hoodwinked by AI’, explored the rise of fake expert commentators, and journalist Rob Waugh’s Press Gazette feature unearthed worrying examples of AI ‘experts’ being quoted in mainstream media – sometimes hundreds of times – before being exposed as fakes.
As a PR agency, we’re anticipating an increase in journalists requesting video or phone interviews with our spokespeople to make sure they’re the real deal.
The rise of AI ‘influencers’
AI-generated personalities are already making waves, particularly in the influencer space. Some major global brands are trialling virtual ambassadors. It’s not hard to see why: these AI personas can represent the brand flawlessly, never tire, never go off-message, and never end up in the tabloids for inappropriate behaviour.
But is this the future we want? Where audiences connect with avatars instead of real people? Where brands build trust on a foundation of code and composite images?
AI has incredible potential. At Brighton SEO, we saw impressive proprietary tools for everything from content analysis to site audits. But creating a fake expert isn’t a smart innovation. It’s a trust issue in disguise.
Why fake experts are bad for your brand
From a comms and PR perspective, this trend is deeply problematic. Google’s algorithm updates increasingly favour content that demonstrates E-E-A-T – Experience, Expertise, Authoritativeness and Trustworthiness.
If your ‘expert’ doesn’t exist, you fail on all four counts.
Fake experts also risk:
- Damaging relationships with journalists who publish quotes assuming they’re real
- Undermining genuine subject matter experts within your organisation
- Leading to reputational crises when the truth comes out
- Inviting legal action, depending on how they’re used and promoted
As our Account Director Samantha Clark puts it:
“Using fake experts would erode trust in the brand and the agency – and could seriously backfire if the journalist found out about it. It would damage the credibility of any genuine experts the brand does have.”
The legal risks of fake personas
We asked James O’Connell, Partner at law firm Mayo Wynne Baxter, to outline the potential legal consequences. His response should give any brand serious pause for thought:
“Using fake AI experts is very often a deliberate attempt to manipulate people over matters of importance...

“Well, people don’t like being manipulated at the best of times, and if things go wrong they usually go looking for someone to blame or to pay recompense – in which case the person who manipulated them based upon a deceit is much more likely to be the scapegoat.

“Mistrust, anger and even outrage are the usual responses where dishonesty and deceit are brought into a situation where there is an expectation of, at the very least, impartiality and professionalism, such as in the media.

“Starting with a deliberate mistruth designed to win trust means that whatever the fake expert goes on to say had better be correct.
The risks include:
- In serious cases, the AI fakery could lead to a criminal conviction for misleading consumers under the Consumer Protection from Unfair Trading Regulations 2008 or, in the most serious cases, to prosecution for fraud under the Fraud Act 2006.
- From a regulatory standpoint, the Advertising Standards Authority (ASA) requires all advertisements to be honest and not misleading. Using a fake expert in a commercial context may result in a banned campaign or further action from Trading Standards.
- If broadcast, Ofcom’s Broadcasting Code demands accuracy and transparency, particularly in news and current affairs content.
- The Online Safety Act 2023 may also apply to online platforms spreading such content, particularly where it causes harm or contributes to misinformation.
- And if real people’s data or likenesses are used to train or shape the AI-generated expert, there could be data protection implications under the UK GDPR.
- Finally, the entity generating fake human experts runs the risk of a defamation claim if a real expert, because of their name or what is written, is at risk of suffering reputational damage as a consequence of the AI fake expert’s comments.
“In summary, people using fake AI experts will no doubt get a short-term gain, but there will be a much wider long-term loss of trust and credibility.

“Those most at risk from the use of such a deceptive tactic are those who are using it to sell things or manipulate people on matters where feelings run high. Talk about a slippery path.”
What about anonymity and avatars?
There’s a long tradition of anonymity in journalism – think Honest John or agony aunts. But as our Director of Strategy and Content, Chris Hatherall, pointed out:
“The difference is, the advice is still human. It may be anonymous, but it’s not fake. It’s based on real-world knowledge and experience. That’s what makes it trustworthy.”
Chris also raised an interesting point about DEI. AI avatars could allow brands to present a more diverse public face. But there’s a risk that these personas simply mask a lack of diversity in the business itself. A diversity-washed avatar doesn’t solve the root issue – it just papers over the cracks.
AI should support credibility – not fake it
We’re not anti-AI. Far from it. We use it every day to speed up admin tasks, explore ideas and streamline analysis. But using it to pretend your brand is something it’s not is fundamentally dishonest.
Your customers and partners deserve better. Journalists and media platforms deserve better. And your brand’s long-term reputation depends on it.
Let’s use AI to enhance our credibility, not manufacture it.
Let’s lead with transparency, not trickery.
Let’s keep the ‘human’ in human connection.
Want to build genuine authority and trust in your brand? Talk to our team about how we can support your objectives today.