
AI deepfakes are cheap, relatively easy to create, and can damage your company’s reputation, so it’s important to develop a comprehensive defense and response strategy now.

The threat of deepfakes is a big problem, made even bigger by easy access to AI tools and services, says Ari Lightman, a professor of digital media at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, in an email interview. “Part of the problem is that it’s hard to know the intent,” he notes. “In many cases, they are deliberately designed to deceive for political, ideological or financial reasons – in other cases, the intent is harder to know.”

Deepfake technology has evolved rapidly, making it increasingly difficult to distinguish between real and manipulated content, says Rob Rendell, global head of fraud market strategy and fraud prevention at financial crime compliance support provider NICE Actimize, in an email interview. “This poses serious risks to various aspects of society, including politics, business and personal reputation,” he explains. “Developments in deepfakes and AI have triggered a wave of misinformation and confusion, with many consumers falling victim to AI-generated phone calls.”

The technology has now been democratized to the point where virtually anyone with a standard computer or smartphone and an internet connection can create a passable fake, says Arik Atar, senior threat intelligence researcher at security technology provider Radware, via email. “We are rapidly approaching an era where audiovisual content is no longer inherently trustworthy.”


Multiple threats

Deepfakes can harm companies in a number of ways. “They can damage the reputation of the company or its executives by spreading false information or creating fake videos or audio recordings,” says Rendell. “Deepfakes can also be used to impersonate employees, executives or customers, leading to fraudulent activity or harmful interactions with the intended parties.”

Rendell points out that a deepfake generally falls into one of four basic categories:

Face swap. Replacing one person’s face in videos or pictures with another.

Speech synthesis. Generating realistic speech from text, allowing the creation of fake audio recordings.

Contextual manipulation. Changing the context of a video or audio clip to alter its meaning or impact.

Full body deepfakes. Creating completely fake videos of people engaging in activities they never participated in.


Faced with a combination of social media posts, public polarization and loss of trust, brands are now struggling to monitor their online brand perception while countering misconceptions, Lightman says. “In many cases, AI can misconstrue satire as information and result in real-world consequences,” he notes. Meanwhile, attackers using deepfake AI often succeed in tricking employees into exposing potentially confidential information.

Prevention tactics

Preventing or quickly neutralizing deepfakes requires a multi-pronged approach, Rendell says. “This may include implementing authentication mechanisms to verify the authenticity of media content, educating employees and customers about the existence of deepfakes and how to detect them, and developing advanced detection technologies to identify and contain the spread of deepfake content.”
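One concrete form the first of those mechanisms can take is provenance checking: comparing a received file against digests the company has published for its official media. The sketch below is a minimal illustration in Python; the registry file name and format are hypothetical assumptions, and a digest mismatch only shows that the file is not a byte-identical copy of an official release, not that it is a deepfake.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical registry: a JSON file mapping official media filenames to
# their SHA-256 digests, published by the company over a trusted channel.
REGISTRY_PATH = Path("official_media_registry.json")

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_authentic(media_path: Path) -> bool:
    """Return True only if the file's digest matches the published registry."""
    registry = json.loads(REGISTRY_PATH.read_text())
    expected = registry.get(media_path.name)
    return expected is not None and expected == sha256_of_file(media_path)

if __name__ == "__main__":
    print(is_authentic(Path("ceo_statement_q2.mp4")))
```

Standards efforts such as C2PA take the same idea further by embedding signed provenance metadata in the media file itself, so a consumer can verify origin without a separate registry.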

Both manual and automated methods can help detect deepfakes by analyzing unnatural motion, visual artifacts, audio distortions, contextual inaccuracies and other signatures, Atar says. “AI-based detection systems can identify fakes in large datasets, but it’s an arms race as deepfake creators learn to overcome imperfections.” He warns that some security experts now estimate that current deepfake detection methods will become unreliable within 12 to 18 months.
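To make that concrete, the sketch below implements one crude version of the visual-artifact analysis Atar mentions: it flags frames whose frame-to-frame change is a statistical outlier for the clip, which can surface splices or temporal glitches. It assumes the opencv-python and numpy packages are installed; production detectors rely on trained neural models, so treat this purely as an illustration of the signal, not a working deepfake detector.

```python
import cv2          # pip install opencv-python
import numpy as np

def suspicious_frames(video_path: str, z_threshold: float = 3.0) -> list[int]:
    """Flag frame indices whose inter-frame difference is a statistical outlier.

    Abrupt spikes in frame-to-frame change can indicate splices or temporal
    glitches; this is a crude heuristic, not a deepfake detector.
    """
    cap = cv2.VideoCapture(video_path)
    diffs = []
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diffs.append(float(cv2.absdiff(gray, prev_gray).mean()))
        prev_gray = gray
    cap.release()

    diffs = np.array(diffs)
    if diffs.size == 0:
        return []
    z_scores = (diffs - diffs.mean()) / (diffs.std() + 1e-9)
    # diffs[i] compares frame i and frame i + 1, so report the later frame.
    return [i + 1 for i, z in enumerate(z_scores) if abs(z) > z_threshold]
```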


Damage limitation

Rendell says IT leaders can take proactive steps now and in the near future to mitigate the impact of deepfakes by implementing robust anti-fraud measures across all of their transaction channels. “This includes having multiple layers of fraud controls in place at every stage of a transaction, from initiation to completion, and ensuring those controls are functioning in real time.”

By continuously monitoring transactions for suspicious activity, identifying anomalies and taking quick action when necessary, companies can effectively mitigate their overall risk of financial fraud, Rendell says. “In addition, investing in advanced technologies such as AI-powered fraud detection systems and biometric authentication methods can further strengthen a company’s ability to detect and prevent deepfake-enabled fraud attempts.”
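As a minimal illustration of that continuous-monitoring idea, the sketch below keeps running per-account statistics with Welford's online algorithm and flags transactions that deviate sharply from an account's history. The threshold, warm-up count, and per-account keying are illustrative assumptions; real fraud platforms combine many such signals with trained models and biometric checks.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RunningStats:
    """Welford's online algorithm for streaming mean and variance."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self) -> float:
        return (self.m2 / self.n) ** 0.5 if self.n > 1 else 0.0

class TransactionMonitor:
    """Flags transactions whose amount deviates sharply from the account's history."""

    def __init__(self, z_threshold: float = 4.0, warmup: int = 10):
        self.stats: dict[str, RunningStats] = defaultdict(RunningStats)
        self.z_threshold = z_threshold
        self.warmup = warmup  # minimum history before flagging anything

    def check(self, account_id: str, amount: float) -> bool:
        """Return True if the transaction looks anomalous for this account."""
        s = self.stats[account_id]
        anomalous = (
            s.n >= self.warmup
            and s.std() > 0
            and abs(amount - s.mean) / s.std() > self.z_threshold
        )
        s.update(amount)
        return anomalous

monitor = TransactionMonitor()
for amount in [120, 95, 110, 130, 105, 99, 115, 125, 102, 118, 50_000]:
    if monitor.check("acct-42", amount):
        print(f"review transaction of {amount} before completing it")
```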

Undoing the damage caused by a deepfake attack can be challenging, says Damir J. Brescic, CISO at security technology and services provider Inversion6, via email. “Companies may need to invest in public relations efforts to restore their reputation, compensate affected parties, and work with law enforcement to hold attackers accountable,” he explains. “It’s important to take a proactive approach to cybersecurity and invest in the necessary tools and training to prevent deepfake attacks in the first place.”

Final thought

The key to effectively defending against deepfakes is to act quickly and decisively, like a fire crew that must contain a blaze before it spreads, says Atar. The longer false information circulates unchecked, the more damage it does, he notes. “Companies need a rapid response manual that is ready at the first sign of smoke.”
