Bollywood star Dharmendra’s passing has brought many of his memorable stories, films and songs back into the spotlight on social media. Among them are several video clips created using artificial intelligence (AI). These AI-generated clips reminded me of a famous song picturised on Dharmendra in the film Izzat:
“Kya miliye aise logon se jinki fitrat chhupi rahe,
Nakli chehra saamne aaye, asli surat chhupi rahe.”
(Why meet people whose true nature stays hidden? Only the fake face comes forward, while the real one remains concealed.)
This is exactly the right description of deepfakes created using AI! The technology can produce a convincing but entirely false version of a person, masking the real identity or truth behind a carefully manufactured image or video.
India is seeing a rapid rise in new-age cyber frauds driven by deepfake technology — AI-generated videos, voices and images that appear completely real. Cybercriminals are using these artificially created identities to pose as family members, celebrities, government officials, senior company executives and even romantic partners. Their objective is straightforward: trick people into sending money, sharing private information, or giving access to their accounts.
As deepfake-based scams grow across the world, Indian users now face a wave of ‘fake faces creating real problems’.
The origins of deepfakes can be traced back to the early days of morphed celebrity images which were initially viewed as harmless digital entertainment. Deepfakes emerged from the same idea — swapping faces in movie clips, cloning voices for fun, or creating AI-driven memes. But, as with many new technologies, criminals quickly saw an opportunity. They adopted AI tools and transformed deepfakes into a powerful method to deceive and defraud a growing number of people.
Global reports show a steep increase, with deepfake fraud rising by around 700% in early 2025 and financial losses exceeding US$200mn (million) in just one quarter. Similar trends are now visible in India, where police in several states have started issuing warnings after multiple high-profile incidents.
One of the most worrying cases reported this year involved an accountant in a tech company in Bengaluru. He received a video call from someone who looked and sounded exactly like his boss. During the call, the ‘boss’ urgently asked him to transfer money to a vendor. The accountant was about to follow the instructions, but a slight mismatch in the caller’s lip movements made him suspicious. He checked with his real boss and discovered that the call was a deepfake impersonation attempt.
In another case, from Mumbai, a woman received a WhatsApp voice call in which the caller had perfectly copied her nephew’s voice. The ‘nephew’ claimed he had been arrested after an accident — a deepfake twist on the long-running ‘relative in distress’ scam. The caller demanded ₹1 lakh immediately to secure ‘bail’. Thankfully, the woman double-checked with her family before sending the money and realised it was a scam.
These scams succeed because they target the people we trust the most — our families, our employers, celebrities we admire and even our own natural instincts. Today’s cybercriminals can clone a person’s voice using just a few seconds of audio taken from Instagram or YouTube. They can also generate a realistic face with AI tools and build a convincing fake identity at almost no cost.
Using these artificial identities, scammers apply for jobs, open bank accounts, trick people into sending money, or persuade employees to approve fraudulent payments. Around the world, there have even been cases where remote job applicants used deepfake videos to clear interviews and gain access to company systems, or where AI-created versions of senior executives requested large fund transfers.
AI-generated identities are now enabling a new wave of employment-related fraud. Job candidates are increasingly using AI to create fake photo IDs, fabricate employment histories, and even generate real-time answers during interviews. Gartner research, cited by CNBC, warns that by 2028, around one in four job applicants worldwide could be entirely fake — a trend that poses serious risks for employers and the wider digital ecosystem.
In May this year, the US Department of Justice (DoJ) revealed how more than 300 US companies unknowingly hired impostors linked to North Korea for remote IT roles. These individuals reportedly used stolen American identities, remote access tools, and network masking techniques to conceal their true locations. The DoJ stated that millions of dollars earned through these fraudulent jobs were ultimately funnelled to North Korea’s weapons programme — a striking example of how deepfake-style identity fraud can fuel not just cybercrime, but global security threats.
The emotional damage can be as severe as the financial loss. Many victims feel embarrassed, confused, or betrayed, especially in romance scams or cases where a criminal pretends to be a family member.
Experts warn that nobody — not even tech-savvy users — is completely safe. Modern deepfakes are built to fool the human brain, not just the inexperienced.
How To Identify Deepfakes: Warning Signs
Even though deepfake technology is becoming more advanced, there are still clear clues that something may not be genuine. Every Indian smartphone user should keep the following in mind:
1. Look for Visual or Audio Glitches
Pay attention to anything that feels unnatural — robotic or distorted voices, strange pauses, lip movements that do not match the words, odd blinking, overly smooth or plastic-like skin, or facial features that seem to ‘float’ or shift.
2. Apply a Basic ‘Sense Check’
Ask yourself whether the situation makes sense. Would a family member suddenly call you from an unknown WhatsApp number and ask for urgent money? Would a Bollywood celebrity personally reach out about an investment opportunity? If it feels unlikely, treat it as suspicious.
3. Be Cautious When There Is Pressure or Urgency
Fraudsters often push victims to act quickly. If someone demands immediate payments, one-time passwords (OTPs), bank or card details, or rapid UPI transfers, pause and reassess before responding.
4. Verify Identity through a Different Channel
Always confirm through an independent method. Call the real person using a number you already have, or check with another family member, colleague, or your company’s human resources (HR) team before taking any action.
5. Notice Issues in Background or Lighting
Deepfake videos may exhibit inconsistent shadows, blurred edges, or lighting that does not align with the rest of the scene, particularly around the face.
6. Do Not Trust Only Voice or Video
Both can now be faked with high accuracy. Always rely on additional verification before sending money or sharing sensitive information.
Protection Tips: How You Can Avoid Becoming a Victim
1. Use Multi-factor Authentication (MFA)
Enable MFA on your email accounts, banking services, UPI apps, and social media platforms. This adds an extra layer of security even if your password is compromised.
2. Set a Family ‘Safe Word’
Create a code word that only close family members know. In any urgent situation — real or fake — ask for this word to confirm the caller’s identity. You can use this in two ways. The first is to agree on a simple question-and-answer format. For example, in your mother tongue, you might say, “This year, we will get more mangoes from our farm.” The correct reply could be something unrelated, such as, “We will have a bumper crop of grapes.” The idea is to keep the answer unpredictable so that an AI-generated or AI-assisted caller cannot guess it.
The second method is to refer to a past event that never happened. You might say something like, “Can we meet again this year at the same resort we went to last July?” The real family member knows there was no such trip and will immediately correct you. A deepfake caller, however, may pretend the meeting actually happened and agree to meet again. This simple check can expose an impersonator very quickly.
3. Be Careful with Social Media Content
Avoid posting long videos or clear audio clips publicly. Cybercriminals often use such readily available material to clone voices or create deepfake content.
4. Never Send Money Based Solely on a Call or Video
Even if the voice or face looks convincing, verify the request through a known phone number or an official channel before transferring money.
5. Protect Personal Identity Documents
Do not share Aadhaar, PAN, bank account details, or other sensitive documents on unsecured apps or unknown websites. Deepfake scams often succeed when fake videos are combined with stolen personal data.
6. Educate Elderly Family Members
Parents and grandparents may be more vulnerable because they trust phone calls and respond quickly to emotional appeals. Explain these scams to them regularly.
7. Report Suspicious Calls Immediately
If you receive a call that appears fraudulent, file a complaint with your local cyber cell or report it on www.cybercrime.gov.in. Prompt reporting can help prevent further incidents.
If You Suspect a Deepfake, Do This Immediately
• Stop Responding
End the call or conversation as soon as something feels suspicious.
• Block the Number or Account
Prevent the cybercriminal or fraudster from contacting you again through the same channel. You can easily block the number from your mobile’s contact list as well as from WhatsApp.
• Contact Your Bank or Service-provider
If you have shared any personal information, alert your bank, mobile operator, or relevant service-provider without delay.
• File a Police Complaint
Report the incident to your local police’s cybercrime unit or through the national cybercrime portal.
• Preserve All Evidence
Save call logs, messages, screenshots and any other details that may help investigators.
Don’t Let Fake Faces Create Real Damage
Deepfakes are rapidly becoming a dangerous tool for cybercriminals, especially in a digitally connected country like India. But with awareness, good verification habits and careful online behaviour, users can protect themselves from these AI-driven tricks.
The technology may be getting smarter, but we can get smarter too.
Staying alert, double-checking unusual requests and educating family members remain the most effective defences against AI and deepfake fraud.
If something feels off, pause. The safest choice is usually the one you make after proper verification.
Stay Alert, Stay Safe!