“Deepfaking” is a technique that uses artificial intelligence and machine learning to create and manipulate synthetic audio-visual media of real people. It has been weaponized by cybercriminals and activists to perform a digital “face swap”, whereby one person’s face is convincingly replaced with the image of another. The result can then be enhanced with fake audio and machine-learned “lip syncing” cues. Such manipulated media has traditionally been used to generate fake pornographic content and to distribute political misinformation, or “fake news”, online. Deepfake technology now has the potential to target the corporate sphere by generating and proliferating imagery or footage that can be used in extortion, cyber fraud and brand sabotage, all of which can severely disrupt business operations.
Although attacks of this type on corporate entities are rarely reported publicly, they are expected to increase significantly in frequency over the medium to long term as the technology required to conduct them becomes cheaper and more accessible. According to the cybersecurity firm Symantec, at least three companies were targeted in 2019 by attacks using deepfake technology to con employees into making fraudulent financial transfers. In one reported incident, a company employee, convinced he was speaking with a C-suite executive over the phone, misguidedly wired $10 million to cybercriminals. Similarly, in an extortion incident in March 2019, a group of Israeli perpetrators stole $8.8 million by impersonating the French foreign minister over phone calls, Skype and email, using deepfake technology to bolster the con.
WEAPONIZED MISINFORMATION AND BRAND SABOTAGE
Weaponized misinformation is the most advanced means by which deepfake technology can be used to target and tarnish a business’s reputation. A 2019 report by the information integrity company New Knowledge found that 78% of consumers believe disinformation significantly damages brand reputation, highlighting the major repercussions of brand sabotage for business operations. Deepfake technology gives cybercriminals a social engineering advantage over their potential victims, which is essential to spreading disinformation and committing cyber extortion.
HOW NYA, A GARDAWORLD COMPANY, CAN HELP
Our Digital Trace service, operated from our 24/7 global operations center, provides enhanced analysis of and insight into potential threat actors and the cybercriminal modus operandi used in social engineering attacks. These attacks, especially those using deepfake technology, rely on human error, so the key to overcoming such a crisis is to prepare for it. Our cyber-consulting service trains clients and their employees to identify and address gaps in a company’s security plan that threat actors using social engineering and deepfake technology could exploit to target its assets and personnel. Through simulated crisis exercises and expert cybersecurity consultation, clients can improve their organizational resilience and mitigate negative impacts on business continuity.
Inform yourself about rising threats. Sign up to receive our 2020 Risk Maps and Global Risk Overview Report as soon as they’re available.
Contact us to learn more about our Travel Security and Crisis Management services.