CEO of world’s biggest ad firm targeted by deepfake scam | Technology



The head of the world’s biggest advertising group was the target of an elaborate deepfake scam that involved an artificial intelligence voice clone. The CEO of WPP, Mark Read, detailed the attempted fraud in a recent email to leadership, warning others at the company to look out for calls claiming to be from top executives.

Fraudsters created a WhatsApp account with a publicly available image of Read and used it to set up a Teams meeting that appeared to be with him and another senior WPP executive, according to the email obtained by the Guardian. During the meeting, the imposters deployed a voice clone of the executive as well as footage of them. The scammers impersonated Read off-camera using the meeting’s chat window. The scam, which was unsuccessful, targeted an “agency leader”, asking them to set up a new business in an attempt to solicit money and personal details.

“Fortunately the attackers were not successful,” Read wrote in the email. “We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes.”

A WPP spokesperson confirmed in a statement that the phishing attempt was unsuccessful: “Thanks to the vigilance of our people, including the executive concerned, the incident was prevented.” WPP did not respond to questions on when the attempt took place or which executives besides Read were involved.

Once primarily a concern related to online harassment, pornography and political disinformation, the use of deepfakes in the corporate world has surged over the past year. AI voice clones have fooled banks, duped financial firms out of millions and put cybersecurity departments on alert. In one high-profile example, an executive of the defunct digital media startup Ozy pled guilty to fraud and identity theft after it was reported he used voice-faking software in a bid to fool Goldman Sachs into investing $40m in 2021.

The attempted fraud on WPP likewise appeared to use generative AI for voice cloning, but it also relied on simpler techniques, such as taking a publicly available image and using it as a contact display picture. The attempt is representative of the many tools that scammers now have at their disposal to mimic legitimate corporate communications and imitate executives.

“We have seen increasing sophistication in the cyber-attacks on our colleagues, and those targeted at senior leaders in particular,” Read said in the email.

Read’s email listed a number of bullet points to look out for as red flags, including requests for passports, money transfers and any mention of a “secret acquisition, transaction or payment that no one else knows about”.

“Just because the account has my photo doesn’t mean it’s me,” Read said in the email.

WPP, a publicly traded company with a market cap of around $11.3bn, also stated on its website that it has been dealing with fake sites using its brand name and is working with relevant authorities to stop the fraud.

“Please be aware that WPP’s name and those of its agencies have been fraudulently used by third parties – often communicating via messaging services – on unofficial websites and apps,” a pop-up message on the company’s contact page states.

Many companies are grappling with the rise of generative AI, pivoting resources toward the technology while simultaneously facing its potential harms. WPP announced that it was partnering with chip-maker Nvidia to create advertisements with generative AI, touting it as a sea change in the industry.

“Generative AI is changing the world of marketing at incredible speed. This new technology will transform the way that brands create content for commercial use,” Read said in a statement last May.

In recent years, low-cost audio deepfake technology has become widely available and far more convincing. Some AI models can generate realistic imitations of a person’s voice using only a few minutes of audio, which is easily obtained from public figures, allowing scammers to create manipulated recordings of almost anyone.

The rise of deepfake audio has targeted political candidates around the world, but it has also reached less prominent targets. A school principal in Baltimore was placed on leave over audio recordings that sounded like he was making racist and antisemitic comments, only for the audio to turn out to be a deepfake created by one of his colleagues. Bots have impersonated Joe Biden and former presidential candidate Dean Phillips.




