The disinformation storm is now hitting companies harder

Businesses need new playbooks for dealing with online falsehoods as AI intensifies the risks

For major companies, managing the fallout from internal missteps — from botched responses to political crises to executive misconduct — has always been tricky. But safeguarding reputations is only getting tougher.

It was difficult enough when corporate crises were more tethered to true events or decisions. Now, falsehoods and disinformation can derail companies as online fakery can morph and multiply quicker than ever before. The use of artificial intelligence to produce false material such as deepfake videos more easily has only heightened the risks.

Governments have long complained about disinformation undermining elections, inciting unrest and deepening societal divides. Businesses are now in the crosshairs, too. “Over the last couple of years, disinformation has been on the rise and now it is spreading like knotweed,” said Julian Payne, global chair of crisis and risk at consultancy Edelman.

Arla Foods, the owner of the UK’s biggest dairy co-operative, has learnt the hard way. After it recently announced a trial of a feed additive intended to reduce methane emissions from dairy cows, some customers pushed to boycott its milk products. A social media storm ensued, featuring unfounded and bizarre claims that the additive was part of a plot to depopulate the world by creating fertility issues, and conspiracy theories linking it to Bill Gates. UK regulators have approved the additive, Bovaer, as safe, while its manufacturer has blamed “mistruths and misinformation” for the frenzy.

In a separate instance a few years ago, US retailer Wayfair was targeted by a campaign making wildly spurious allegations that its cabinets “listed with girls’ names” had children hidden in them as part of a child trafficking ring.

A survey by Edelman of almost 400 top communications and marketing executives found that eight in 10 worry about the impact of disinformation on their businesses. Fewer than half feel prepared to tackle these risks.

And the threat is not just outright disinformation — the deliberate spread of falsehoods to deceive. Companies also need to be mindful of misinformation, which is spread accidentally, and malinformation, which exaggerates truths or changes their context to cause harm. Threats can take many forms — fabricated news, phoney social media accounts and fake text, audio or video content. Online superspreaders, often aided by AI, intensify these attacks, allowing disinformation to ricochet unpredictably. After all, lies spread faster than the truth.

Boycotts can hit finances, but the damage to reputations is just as significant. Employee safety is also an issue. In 2020, unfounded 5G conspiracies that linked the new mobile phone technology to health risks led to attacks on telecom engineers working for BT’s Openreach in the UK.

While hate-spewing internet warriors are not new, the speed at which generative AI is now used to create both fake messages and the accounts behind them means companies have to “increase their capabilities to defend themselves”, said Payne. Companies had gone from having a day to “get your ducks in a row” to four hours, he added. “Then it’s like taking a water pistol to a raging inferno.” 

Boardrooms have been slow to accept disinformation as a priority, just as they once were with cyber security. “We are seeing a similar journey,” said one executive who helps companies track such campaigns. Top ranks “are often out of touch with the polarising dynamics of digital world conversations”.

So what should a company do? Bosses should consider whether their policies on geopolitics, climate change or other issues make them likely targets. Finding so-called “tripwires” and testing crisis responses can highlight weaknesses. 

Instinctively, calling out falsehoods quickly after an attack makes sense, but this can amplify the fabricated message, pushing it from obscure corners of the internet into the open.

Mapping and tracking disinformation sources must become strategic imperatives, said Jack Stubbs, chief intelligence officer at Graphika, which specialises in understanding online communities. “Your operating environment is this online information space,” he said.

Equally important is building credibility in advance. If executives have repeatedly lied to shareholders or the public, it will only undermine any crisis response. The hope is that consistent transparency means they are more likely to be believed when it counts. In a digital world where false narratives spread faster than ever, companies can no longer rely on outdated playbooks.

anjli.raval@ft.com

