Individuals, businesses, and governments are unprepared for the coming wave of deepfake attacks by malicious cyber criminals. For the unaware, “deepfake” refers to artificial-intelligence-generated false media that pretends to be the authentic version of what it emulates. In plain English, it is fake pictures, videos, and audio of real people, or the creation of fake people in those same media.
Drawing on several of the seven traditional patterns of artificial intelligence, deepfakes use algorithms that mimic voice, mannerisms, facial expressions, body language, and lip movements to look deceptively real, creating audio and video clips of events that never occurred. These clips spread on social media and in the news, and most viewers are entirely unaware that they are not authentic.
Fearing their use in the upcoming national election, Sen. Rob Portman of Ohio introduced the Deepfake Report Act of 2019 last summer as a proposed amendment to the National Defense Authorization Act. Currently pending as S. 2065, the bill assigns eight tasks to the Department of Homeland Security:
“(1) an assessment of the underlying technologies used to create or propagate digital content forgeries, including the evolution of such technologies;
(2) a description of the types of digital content forgeries, including those used to commit fraud, cause harm, or violate civil rights recognized under Federal law;
(3) an assessment of how foreign governments, and the proxies and networks thereof, use, or could use, digital content forgeries to harm national security;
(4) an assessment of how non-governmental entities in the United States use, or could use, digital content forgeries;
(5) an assessment of the uses, applications, dangers, and benefits of deep learning technologies used to generate high fidelity artificial content of events that did not occur, including the impact on individuals;
(6) an analysis of the methods used to determine whether content is genuinely created by a human or through digital content forgery technology and an assessment of any effective heuristics used to make such a determination, as well as recommendations on how to identify and address suspect content and elements to provide warnings to users of the content;
(7) a description of the technological counter-measures that are, or could be, used to address concerns with digital content forgery technology; and
(8) any additional information the Secretary determines appropriate.”
Texas, one of the few states attempting to legislate, regulate, or criminalize deepfakes, makes the publication of a deepfake with the intent to influence an election or injure a candidate a misdemeanor under Tex. Elec. Code § 255.004.
However, deepfakes threaten more than elections; a narrow view of their utility is a massive mistake. Indeed, deepfakes are a potential weapon of mass destruction in business. Like spear phishing, they advance the same types of threats, but they shift the traditional battlefield from email to telephones, voicemail, and now, with remote work, video conferences.
Consider the following scenarios:
Stock Price Manipulation: A deepfake of an investment bank’s CEO announces a massive data breach and fraud, perpetrated by cyber attacks, that resulted in monetary losses. A follow-on deepfake of an SEC spokesperson states that the same investment bank failed to follow SOX cybersecurity guidance, violated the New York SHIELD Act, and will face action by the Federal Trade Commission. Released in tight succession, these two videos would undoubtedly cause the investment bank’s stock price to plummet, at least temporarily, as investors lose confidence in its longevity.
Basic Misinformation: Employees at a company all receive the same or similar voicemails from a C-suite executive informing them of a suspected malware incident and advising them to take certain actions on their computers. The recording states that the warning was not sent via email for “security reasons.” Before the company’s CISO/CSO/CTO can verify what is happening, as he or she may be one of the few to realize that the voicemail, its contents, and its manner of delivery are amiss, it is too late. The criminals behind the deepfake have caused employees to take specific, desired actions on their computers, giving the criminals easy access to their desired endpoints.
Presently, options for combating deepfakes are limited but evolving (see, for example, Facebook’s 2019 deepfake detection competition). Within a business, the best defensive tools are the “old-faithful” tenets of advice:
1. Employee Training: Advise your employees that deepfakes exist and provide examples that are imperceptibly different from the authentic version. Tell your employees the types of instructions and materials they will never receive from you (based on the nature of your business) but that deepfakes may seek to falsify.
2. Monitor Online Activity: Monitor the internet through Google Alerts, Twitter feeds, and social media postings to find any videos or audio recordings that falsely represent your business, so you can mitigate potential damage; a minimal monitoring sketch follows this list.
3. Invest in Cybersecurity: Whether you rely on qualified in-house talent or MSSPs/MSPs, those individuals should stay abreast of the latest and most effective software, and combinations thereof, to filter out and detect unwanted materials before they travel across your business’s network to its employees.
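As an illustration of point 2, the following is a minimal sketch, in Python, of how a business might automate that monitoring: it polls a Google Alert delivered as an RSS feed and flags new items that mention the company or its executives. The feed URL, the watch terms, and the use of the third-party feedparser package are assumptions for illustration, not a prescribed toolchain.

import time
import feedparser  # pip install feedparser

ALERT_FEED_URL = "https://www.google.com/alerts/feeds/EXAMPLE_FEED_ID"  # placeholder: paste your own alert's RSS URL
WATCH_TERMS = ["acme bank", "jane doe"]  # hypothetical company and executive names to watch

seen_links = set()  # remember items already reported

def check_feed():
    """Fetch the alert feed once and report entries that match a watch term."""
    feed = feedparser.parse(ALERT_FEED_URL)
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        link = entry.get("link")
        if link not in seen_links and any(term in text for term in WATCH_TERMS):
            seen_links.add(link)
            print(f"Review possible impersonation: {entry.get('title')} -> {link}")

if __name__ == "__main__":
    while True:           # simple polling loop; in practice, route hits to email or a ticketing system
        check_feed()
        time.sleep(3600)  # check hourly

Hits from a loop like this are only a starting point; someone still has to review each flagged item and begin takedown or response procedures.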