As we get ready to say goodbye to 2019, cybercriminals too are preparing to launch more sophisticated attacks on electronic devices, mainly mobile handsets. In 2019, we saw a number of users of financial apps, especially the unified payments interface (UPI), getting confused between push and pull transactions and losing money.
The New Year, however, will be more challenging for most ordinary users due to the increased use of ‘deepfakes’ and ransomware.
As the name suggests, deepfakes are media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks.
They often combine and superimpose existing media onto source media using machine-learning techniques known as autoencoders and generative adversarial networks (GANs). One example of deepfakes is the morphed videos and images being circulated widely through social media, especially WhatsApp, in India.
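The adversarial idea behind GANs can be illustrated with a toy example. The sketch below, plain NumPy and purely for intuition (it is nothing like real deepfake software), pits a tiny one-layer generator against a logistic discriminator on one-dimensional data: the discriminator learns to tell real samples from fakes, while the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

# "Real" data: 1-D samples the generator must learn to mimic
# (standing in for real faces or voices).
def sample_real(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator g(z) = w*z + b maps random noise to a fake sample;
# discriminator d(x) = sigmoid(v*x + c) scores how "real" x looks.
w, b = 1.0, 0.0
v, c = 0.1, 0.0

lr, n = 0.05, 64
for _ in range(2000):
    z = rng.normal(size=n)
    fake, real = w * z + b, sample_real(n)

    # Discriminator step: raise d(real) towards 1, push d(fake) towards 0.
    dr, df = sigmoid(v * real + c), sigmoid(v * fake + c)
    v += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: adjust w, b so that d(fake) rises towards 1,
    # i.e. the fakes fool the discriminator (chain rule through g).
    df = sigmoid(v * fake + c)
    w += lr * np.mean((1 - df) * v * z)
    b += lr * np.mean((1 - df) * v)

fakes = w * rng.normal(size=10_000) + b
print(f"mean of generated samples: {fakes.mean():.2f}")  # drifts toward the real mean
```

Real deepfake systems play exactly this game, but with deep convolutional networks over images or audio instead of two scalar-parameter models, which is what makes the fakes so convincing.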
Unfortunately, the majority of users believe such morphed videos and images and spread them like wildfire.
Deepfakes are being widely used to create fake news or for propaganda. For example, in May this year, US president Donald Trump shared a video with a caption ‘(Nancy) Pelosi stammers through news conference’.
However, this video was a deepfake of an original video of Ms Pelosi, speaker of the US House of Representatives. It was a speech she gave during a press conference, aired on the Fox News show Lou Dobbs Tonight.
There was one more such video of Ms Pelosi. Both videos were deepfaked to make her appear to be slurring her speech.
Deepfake audio, or even video, can be used to call gullible users and request them to transfer funds. This is a more sophisticated version of hacking into email accounts and seeking money from the user's friends and acquaintances through ‘emergency’ emails.
The email account of Kumar Ketkar, a veteran editor, was hacked in 2010 and the hackers sent emails asking for money. Fortunately, the recipients called Mr Ketkar and found out that it was a hoax created by cyber-criminals. (Read: A Trojan from Nigeria)
Commenting on the arms race between technologies that create fake videos and technologies that detect them, security expert and public-interest technologist Bruce Schneier said, “I do not know who will win this arms race, if there ever will be a winner. But the problem with fake videos goes deeper: they affect people, even if they are later told that they are fake, and there always will be people that will believe they are real, despite any evidence to the contrary.”
It is anticipated that, over the next few years, deepfakes will be deployed to impersonate high-level targets at enterprises in order to scam employees into transferring money to fraudulent accounts.
Earlier this year, the chief executive of a UK energy company was scammed into paying €220,000 into a Hungarian bank account by a fraudster who used audio deepfake technology to impersonate the voice of the chief of the company's parent firm.
How can we protect ourselves from deepfakes? For starters, watch the video or listen to the audio carefully. If the scamsters are not too sophisticated, or have limited resources, you may be able to spot deficiencies in the video or audio.
For example, in such videos, check whether the eyes move and blink naturally; if they do not, the video is very likely a deepfake. For audio, check whether the speech flows like a normal conversation or sounds slurred, like a script being read out.
If the scammers are more sophisticated and have large resources (remember, only state actors enjoy such luxuries), these defects won’t exist.
But, in any case, never, ever send money to a friend or relative on the strength of a video or audio message alone. Make a phone call to that person and confirm that all is well and that he/she is not in any financial trouble.
Wish you a very happy and safe digital New Year!