The X handle of Aam Aadmi Party (AAP) recently shared a video purportedly showing Bollywood actor Pankaj Tripathi in an advertisement urging people not to vote for the Bharatiya Janata Party (BJP). However, the video turned out to be an edited version or a deepfake created using artificial intelligence (AI). In the original footage, the actor is raising awareness about avoiding fraudulent links received through the Unified Payments Interface (UPI). AAP has since deleted the video from X.
A few months ago, scammers even used deepfake videos of Ashishkumar Chauhan, managing director and chief executive officer (MD&CEO) of the National Stock Exchange (NSE), to recommend stocks for investment.
Deepfake technology leverages artificial intelligence (AI), machine learning (ML) and large language models (LLMs) to produce hyper-realistic but fabricated images, audio and videos.
Deepfakes are used in movies and TV shows to create realistic special effects and make educational content more engaging and interactive.
On the flip side, deepfake scams are a growing concern as they use AI and ML to create highly realistic but fake videos, images, and audio. These scams can be used for various malicious purposes, including disinformation, financial fraud, and social engineering attacks.
Scammers exploit deepfake tools to manipulate content, making it appear that someone is endorsing or performing actions they never did.
As we have seen over the past few years, deepfakes created with new tools are becoming increasingly sophisticated. And that is precisely why deepfakes pose significant risks, including the potential for disinformation, fraud and social engineering attacks.
Deepfake frauds refer to scams or crimes that involve the use of deepfake technology, where AI and ML tools are employed to manipulate videos, audio recordings, or images in such a way that they appear to be real but are actually fake. These fakes can mimic the voices, faces, and actions of real individuals, making it incredibly difficult for the average person to distinguish between what is real and what is fake.
Let us first look at some common frauds committed using deepfakes. These include impersonation of individuals (like Mr Tripathi, the RBI governor or the NSE chief), social engineering and phishing, fake news or misinformation and, last but not least, blackmail and extortion.
Fraudsters can use deepfakes to create and send a message that appears to come from a trusted colleague or family member, convincing the victim to provide sensitive information or carry out fraudulent transactions.
Cybercriminals are also creating fake videos (mostly using photos or videos posted on social media by the victims or by their near and dear ones) to blackmail individuals. Deepfake can be exploited to produce explicit content involving individuals without their consent, leading to significant privacy breaches and emotional distress. Cybercriminals threaten to release compromising content (deepfake) that is not real but could damage the person’s reputation, sometimes beyond repair.
As AI and ML tools continue to evolve, scams or frauds using deepfakes are becoming more sophisticated and harder to detect. This poses a significant challenge for individuals and organisations seeking to protect themselves from deepfake threats.
How To Identify and Protect Yourself from Deepfakes
With rapid technological development and the easy availability of AI and ML tools, criminals are creating new and more realistic deepfake content. It also means there are no permanent solutions or guidelines for distinguishing deepfake content from the original.
However, you can use the pointers below to identify deepfakes and avoid falling victim to deepfake frauds. They include examining the quality of the media, checking the source and context, looking for emotional manipulation, using tools to detect deepfakes and tightening the security of your digital world.
1. Examine the quality of the media
Pay attention to unnatural blinking, awkward facial expressions, lip-sync issues, synthetic hair, abnormal skin colours, awkward head and body movements, inconsistent lighting, robotic-sounding voices and distorted or misaligned visuals that may indicate manipulation.
Inconsistent lighting and shadows: Deepfakes often struggle with natural-looking lighting and shadows. If the lighting on a person’s face seems off, it may be a red flag.
Unusual eye movement: Look for odd or unnatural eye movements, such as inconsistent blinking, or faces that seem too still or unresponsive.
Mouth and lip syncing: Deepfakes sometimes fail to sync their mouth movements perfectly with spoken words. Pay close attention to these details.
Audio distortions: In deepfake audio, there may be unnatural tones or glitches, such as robotic-sounding voices or abrupt shifts in intonation.
2. Check the source and context
Always ensure the genuineness of digital content before sharing or acting on it. Check if the content is from a reputable and trustworthy source. Be sceptical of unsolicited messages or requests.
Verify the authenticity of the media: If you receive a video or voice recording from an unfamiliar or unexpected source, especially if it includes a request for money or sensitive information, verify the source before taking any action.
Cross-check Information: Look for the same content on trusted platforms or websites. If it’s something shocking or controversial, see if other reputable news outlets or people are talking about it.
Examine URLs and social media accounts: Fraudsters may impersonate legitimate websites or accounts. Ensure the website URL is correct and the social media profiles are verified.
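For technically inclined readers, the domain check described above can even be automated. Below is a minimal sketch in Python using only the standard library; the allow-list of trusted domains is a hypothetical example you would replace with sites you have verified yourself.

```python
from urllib.parse import urlparse

# Hypothetical allow-list: domains you have independently verified
TRUSTED_DOMAINS = {"rbi.org.in", "nseindia.com"}

def is_trusted(url: str) -> bool:
    """Return True only if the URL's host is a trusted domain or a genuine subdomain."""
    host = (urlparse(url).hostname or "").lower()
    # An exact match or a real subdomain passes; a lookalike such as
    # "nseindia.com.evil.example" fails, because its registered domain differs.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted("https://www.nseindia.com/market-data"))   # True
print(is_trusted("http://nseindia.com.evil.example/login"))  # False
```

The key design point is to compare the full hostname, not merely check whether a trusted name appears somewhere in the URL, since fraudsters routinely embed legitimate-looking names inside longer deceptive addresses.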
3. Look for emotional manipulation
Deepfakes often prey on your emotions, such as fear or excitement. They may try to scare you into making decisions quickly (like transferring money) or create a sense of urgency. Be cautious if you feel pressured.
4. Use technology to detect deepfakes
There are emerging tools and apps designed to detect deepfakes. These tools analyse inconsistencies in videos or audio files that may not be obvious to the human eye.
Some examples of deepfake detection tools include Microsoft Video Authenticator (it analyses images and videos for signs of manipulation), Deepware Scanner (an app that identifies deepfakes on smartphones) and InVID, which is a browser plugin that helps verify the authenticity of videos.
5. Be cautious with personal information
Always remember that social media is not, and will never be, your friendly neighbourhood katta or meeting spot. So be cautious when sharing personal information online. Fraudsters use deepfakes to gain trust and often use your data for targeted attacks. This means you should limit the amount of personal data you share online, especially high-resolution photos and videos that could easily be exploited to create deepfakes.
Enable strong privacy settings: Use privacy settings on social media and other platforms to control who can access your content.
Use multi-factor authentication (MFA): Enable MFA on all your accounts to protect them from hacking attempts that rely on deepfake-based social engineering.
Educate yourself and others: Stay informed about the latest developments in deepfake technology and educate family members on recognising and detecting deepfake content.
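As an aside for technically curious readers, the six-digit codes produced by authenticator apps for MFA typically follow the time-based one-time password (TOTP) algorithm standardised in RFC 6238. A minimal sketch in Python, using only the standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestamp: float = None, digits: int = 6, period: int = 30) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if timestamp is None:
        timestamp = time.time()
    # Count the number of 30-second intervals since the Unix epoch
    counter = int(timestamp) // period
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code is derived from a shared secret and the current time, entering it proves possession of that secret at that moment; a scammer armed only with a deepfake of your face or voice still cannot log in without it.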
How To Respond to a Deepfake Incident
If you find that you have been targeted by a deepfake scam, take immediate action, including reporting the incident to law enforcement agencies (LEAs), such as the cyber police, as well as to the affected party or parties.
If the deepfake involves the impersonation of a colleague, family member, or public figure, let them know so they can take necessary action, such as issuing a public disclaimer or warning.
If your financial accounts or emails were compromised, contact your bank, email provider and other relevant organisations to secure your accounts and your personal and financial details.
While deepfake frauds are becoming more sophisticated, awareness and vigilance can help protect you from becoming a victim. By examining the quality of content, verifying sources, using detection tools and strengthening your personal security practices, you can reduce your chances of falling prey to such scams.
Stay Alert, Stay Safe!