Fraud Alert: AI & ML, the Double-edged Swords!
Technology, at the base level, is a great equaliser. It does not differentiate between rich and poor users. When you dial a number from your mobile, the call gets connected and you can speak with the other person; the technology delivers the same result for everyone. However, the moment we combine technology with new-age tools like artificial intelligence (AI) and machine learning (ML), and programs such as large language models (LLMs) built with these tools, the end results vary significantly. Humans (read: the rich and resourceful) with easy access to new-age tools will always remain ahead of those with negligible or no access. In the same example of mobile calls, the poor person may still be using a basic 2G or 3G handset, while the rich person may have a device with the latest technology, like 5G, to make clear calls. 
 
In an essay written with Nathan Sanders that appeared in The Conversation, public-interest technologist Bruce Schneier says, "AI tools are more likely to make the already powerful even more powerful. Human + AI generally beats AI only: The more human talent you have, the more you can effectively make use of AI assistance. The richest campaigns will not put AIs in charge, but they will race to exploit AI where it can give them an advantage."
 
"But while the promise of AI assistance will drive adoption, the risks are considerable. When computers get involved in any process, that process changes. Scalable automation, for example, can transform political advertising from one-size-fits-all into personalised demagoguing—candidates can tell each of us what they think we want to hear. Introducing new dependencies can also lead to brittleness: Exploiting gains from automation can mean dropping human oversight, and chaos results when critical computer systems go down," the authors say in the essay.
 
For a country like India, whose gigantic population consists mainly of young people, direct exposure to AI, ML and LLMs poses more serious challenges than benefits. For example, the control of fire is among the most important discoveries in human history. When used with good intentions, fire helps us perform many tasks and create wonderful things. However, in the hands of an anti-social element, fire becomes a destroyer of everything it touches. 
 
It is the same story for new-age technology tools. AI, ML and LLMs have revolutionised our online experiences and interactions, improved efficiencies and enhanced user experiences (well, partly; it still remains a pain to explain a consumer issue to a robotic chat assistant or over an interactive voice response (IVR) system!). 
 
However, these new-age powerful technologies can be misused by cybercriminals to exploit unsuspecting individuals or carry out much more sophisticated cyberattacks. Therefore, understanding how these tools are used maliciously and learning to protect yourself is crucial in today's digital landscape. 
 
Cybercriminals use AI, ML and LLMs for phishing and spear-phishing attacks, deepfake scams, automated social engineering attacks, credential stuffing and password cracking, fake reviews and manipulation, malware generation and, last but not least, to exploit personal data. 
 
Phishing and Spear-phishing Attacks
Criminals leverage AI to create highly convincing phishing emails or messages. Using ML algorithms, they analyse publicly available data to tailor communications that seem genuine, making it harder to detect fraudulent attempts.
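One practical habit against such emails is to look at where a link actually points, not at its display text. The snippet below is a minimal Python sketch, using only the standard library's `urllib.parse`, that extracts the real host behind a link; the phishing-style URL is an invented example for illustration.

```python
from urllib.parse import urlparse

def link_host(url):
    """Return the actual host a link points to, regardless of its display text."""
    return urlparse(url).hostname

# The visible text of a phishing link often says one thing while the URL
# says another. Note the digit '1' standing in for the letter 'l' in this
# invented lookalike domain.
print(link_host("https://secure-login.examp1e-bank.com/verify"))
# → secure-login.examp1e-bank.com — not your bank's real domain
```

Comparing this extracted host against the genuine domain you know (typed by hand, not clicked) is the same verification that tip 2 below recommends doing through a separate, trusted channel.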
 
Deepfake Scams
AI-powered deepfake technology enables criminals to create realistic fake videos or audio recordings. These can be used to impersonate individuals, tricking others into transferring money or divulging sensitive information.
 
Automated Social Engineering
LLMs can generate persuasive and context-aware messages, allowing criminals to automate large-scale social engineering attacks. These messages often mimic human-like responses, making them more credible.
 
Credential Stuffing and Password Cracking
Machine learning algorithms can analyse stolen credentials to predict likely passwords or identify patterns. This speeds up brute-force attacks, increasing their success rate.
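A back-of-the-envelope calculation shows why short, predictable passwords fall so quickly to such attacks. The sketch below (plain Python, illustrative numbers only) compares the search space of an 8-character all-lowercase password with a 16-character password drawn from letters, digits and symbols; a pattern-predicting algorithm shrinks the effective space even further.

```python
import math

def search_space_bits(alphabet_size, length):
    """Bits of entropy for a password drawn uniformly at random
    from an alphabet of the given size."""
    return length * math.log2(alphabet_size)

# 8 characters, lowercase letters only (26 symbols)
weak = search_space_bits(26, 8)        # roughly 37.6 bits
# 16 characters, mixing upper/lower case, digits and 32 symbols (94 total)
strong = search_space_bits(94, 16)     # roughly 104.9 bits

print(f"weak:   {weak:.1f} bits")
print(f"strong: {strong:.1f} bits")
```

Every added bit doubles the work an attacker must do, so the gap between ~38 and ~105 bits is astronomical; and this assumes random passwords, whereas ML models trained on leaked credentials exploit the fact that human-chosen passwords are far from random.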
 
Fake Reviews and Content Manipulation
AI tools are used to generate fake reviews or manipulate content to deceive users into trusting fraudulent websites, products, or services.
 
Malware Generation
Criminals use AI to create more sophisticated malware that can adapt to evade detection by traditional security measures.
 
Exploitation of Personal Data
Using ML, criminals analyse massive datasets to extract sensitive personal information. This data can then be exploited for identity theft, blackmail, or fraud.
 
This brings us to the most crucial question: How can one safeguard oneself from becoming a victim of these new-generation AI tools? As AI, ML and LLMs are evolving technologies, we also need to adopt a flexible approach when dealing with them. 
 
Here are a few suggestions...
 
1. Stay informed: Regularly educate yourself about emerging threats and techniques. Awareness is your first line of defence against cybercrime.
 
2. Verify communications: Be cautious of unexpected messages or emails. Verify the sender's identity through a separate, trusted channel before responding or taking action.
 
3. Use strong and unique passwords: Employ complex passwords for your accounts and avoid reusing them. If you are comfortable with it, use a password manager to generate and store passwords securely.
 
4. Enable multi-factor authentication (MFA): Add an extra layer of security to your accounts by enabling MFA which requires multiple forms of verification.
 
5. Be sceptical of media content: Question the authenticity of videos, images and audio recordings, especially if they seem suspicious or out of character for the person involved.
 
6. Keep software/ apps updated: Ensure that your devices and applications are up to date with the latest security patches to protect against vulnerabilities.
 
7. Use security tools: Invest in reputable antivirus software and firewalls. Also, consider browser extensions that detect phishing attempts.
 
8. Limit data sharing: Be mindful of the personal information you share online, especially on social media platforms. Criminals often gather data from these sources to craft targeted attacks.
 
9. Educate others: Share knowledge about AI-driven scams with friends and family. A community that is aware and vigilant is harder to exploit.
 
10. Report suspicious activity: If you suspect a scam or fraud attempt, report it to the relevant authorities or platforms. This helps prevent others from becoming victims.
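A password manager does the heavy lifting for tip 3, but the idea behind it can be shown in a few lines. The snippet below is a minimal sketch using Python's standard-library `secrets` module, which draws from the operating system's cryptographically secure random source; the 16-character length and the character classes are illustrative choices, not a prescription.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password mixing letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Redraw until the result contains at least one character from each
    # class, rejecting weak all-letter or all-digit outputs.
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())    # a fresh 16-character password each run
print(generate_password(24))  # longer is stronger
```

The point is not to type such passwords from memory but to generate a different one per account and let a manager remember them, which is exactly what defeats the credential-stuffing attacks described earlier.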
 
While AI, ML and LLMs offer incredible benefits, their misuse by criminals is a growing concern. By understanding how these technologies are exploited and adopting proactive measures, you can protect yourself and your loved ones from falling victim to sophisticated cyber threats. 
 
Stay informed, stay cautious and use technology to enhance your security, not compromise it!
 