As deepfake technology becomes more advanced and accessible, it poses a growing threat to cybersecurity, particularly in the form of disinformation, social engineering, and phishing attacks. In response to this evolving risk landscape, several authoritative organizations have released guidance and resources to help understand and mitigate these threats.
NSA, FBI, and CISA Joint Cybersecurity Information Sheet (CSI) on Deepfake Threats
Released on September 12, 2023, this alert introduces a joint publication by the NSA, FBI, and CISA titled “Contextualizing Deepfake Threats to Organizations”.
The official PDF document outlines the risks associated with synthetic media, including how malicious actors may use deepfakes for:
- Impersonating executives in video or audio calls
- Influencing public opinion and trust
- Manipulating or fabricating evidence
- Enabling sophisticated social engineering or phishing attacks
The guide encourages organizations to adopt a proactive approach by enhancing media authentication processes, promoting user awareness, and incorporating deepfake detection capabilities into their cybersecurity strategies.
University of Florida’s Deepfake Phishing Awareness
The University of Florida’s IT Security Office expands on the concept by providing educational resources on how deepfakes are used in phishing. It emphasizes the growing risk of attackers using voice and video impersonation in business email compromise (BEC) and similar scams. The page offers practical steps to recognize potential deepfake-based phishing attempts, including:
- Being cautious with urgent or unusual voice/video messages
- Verifying identities through multiple channels
- Training staff to identify inconsistencies in speech, tone, or lip-syncing
To prevent and mitigate deepfake attacks, organizations can adopt the following measures:
Verification Protocols
- Multi-channel validation: Always verify requests (especially financial or sensitive ones) using multiple communication methods (e.g., follow up a video call with a phone call).
- Out-of-band authentication: Use known, trusted channels, especially for executive or high-value communications (see the approval-workflow sketch after this list).
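As a rough illustration of the out-of-band principle, the Python sketch below (all names, channels, and request IDs are hypothetical, not taken from any product) refuses to approve a sensitive request until a confirmation arrives on a trusted channel other than the one the request came in on.

```python
from dataclasses import dataclass, field

# Channels treated as independent and trustworthy for confirmations (assumption).
TRUSTED_CHANNELS = {"phone_callback", "in_person", "signed_email"}

@dataclass
class SensitiveRequest:
    """A high-value request (e.g., a wire transfer) awaiting approval."""
    request_id: str
    origin_channel: str                              # channel the request arrived on
    confirmations: set = field(default_factory=set)  # channels that have confirmed it

    def confirm(self, channel: str) -> None:
        """Record a confirmation received on a given channel."""
        self.confirmations.add(channel)

    def is_approved(self) -> bool:
        """Approve only if at least one confirmation came from a trusted channel
        that is different from the channel the request originated on."""
        out_of_band = self.confirmations & (TRUSTED_CHANNELS - {self.origin_channel})
        return len(out_of_band) >= 1

# Example: a transfer requested over a video call is approved only after a
# callback to the executive's known phone number confirms it.
req = SensitiveRequest(request_id="TX-1042", origin_channel="video_call")
print(req.is_approved())        # False: no out-of-band confirmation yet
req.confirm("phone_callback")
print(req.is_approved())        # True: confirmed on an independent, trusted channel
```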
AI Detection Tools
- Implement deepfake detection software that analyzes facial expressions, blinking patterns, audio mismatches, and inconsistencies in lighting or shadows.
- Examples: Microsoft Video Authenticator, Deepware Scanner, Reality Defender, Truepic.
- Monitor for synthetic media using content authentication tools such as (see the provenance-check sketch after this list):
  - Content Provenance and Authentication (CPA) standards
  - C2PA (Coalition for Content Provenance and Authenticity)-enabled tools
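As a minimal sketch of how a provenance check might be wired into a media-intake workflow, the example below shells out to the open-source c2patool CLI from the C2PA project (assuming it is installed on the PATH and prints a JSON manifest for files that carry Content Credentials; exact flags and output vary by version, and the file path is purely illustrative). Media without verifiable credentials is flagged for closer review rather than blocked outright.

```python
import json
import subprocess
from pathlib import Path

def check_content_credentials(media_path: Path) -> dict:
    """Try to read C2PA Content Credentials from a media file via c2patool.

    Assumes `c2patool <file>` exits non-zero when no manifest is present and
    otherwise prints the manifest store as JSON (behavior may differ by version).
    """
    result = subprocess.run(
        ["c2patool", str(media_path)],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return {"verified": False, "reason": result.stderr.strip() or "no manifest found"}
    try:
        manifest = json.loads(result.stdout)
    except json.JSONDecodeError:
        return {"verified": False, "reason": "unparseable tool output"}
    return {"verified": True, "manifest": manifest}

if __name__ == "__main__":
    report = check_content_credentials(Path("incoming/interview_clip.mp4"))
    if report["verified"]:
        print("Content Credentials present; review the signer and edit history.")
    else:
        print(f"Treat as unverified media: {report['reason']}")
```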
Email & Communication Filtering
- Use advanced threat protection (ATP) systems and email filters to detect phishing and spoofing attempts.
- Monitor for anomalous patterns in video or audio calls (e.g., an executive's voice appearing from an unknown IP address); a simple allow-list check is sketched after this list.
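One way to act on the second point is a simple allow-list check on call metadata. The sketch below uses hypothetical field names and example addresses (it is not tied to any particular conferencing platform's API) and flags calls where a claimed executive identity joins from a network not previously associated with that person.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Hypothetical baseline: networks each executive normally connects from.
KNOWN_NETWORKS = {
    "ceo@example.com": [ip_network("203.0.113.0/24"), ip_network("10.20.0.0/16")],
}

@dataclass
class CallEvent:
    claimed_identity: str  # identity asserted by the caller
    source_ip: str         # IP address the call connected from

def is_anomalous(event: CallEvent) -> bool:
    """Flag a call if the claimed identity joins from an unrecognized network."""
    networks = KNOWN_NETWORKS.get(event.claimed_identity)
    if networks is None:
        return True  # no baseline for this identity at all
    addr = ip_address(event.source_ip)
    return not any(addr in net for net in networks)

# A "CEO" call from an address outside the known ranges should be escalated
# for out-of-band verification before anyone acts on its content.
print(is_anomalous(CallEvent("ceo@example.com", "198.51.100.7")))   # True
print(is_anomalous(CallEvent("ceo@example.com", "203.0.113.42")))   # False
```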
Digital Watermarking & Content Authentication
- Encourage the use of digital watermarks or cryptographic signatures on legitimate media (a signing sketch follows this list).
- Platforms and media creators can implement tools that verify the authenticity and origin of a video or image.
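To make the idea of cryptographic signatures on legitimate media concrete, here is a hedged sketch using the Python cryptography package: the publisher signs the SHA-256 digest of a file with an Ed25519 key and distributes the signature alongside it, and recipients verify the file against the publisher's public key. Key distribution, revocation, and the file name used here are all outside the scope of the sketch.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path: str) -> bytes:
    """SHA-256 digest of a file's bytes, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# Publisher side: sign the digest of the original video with a private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(file_digest("press_statement.mp4"))  # shipped with the file

# Recipient side: verify the received file against the published signature.
try:
    public_key.verify(signature, file_digest("press_statement.mp4"))
    print("Signature valid: the file matches what the publisher signed.")
except InvalidSignature:
    print("Signature check failed: treat the media as untrusted.")
```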