In the epic 1975 Hindi film Sholay, there is a powerful moment when Thakur Baldev Singh is asked why he wants to bring two notorious miscreants, Jai and Veeru, to neutralise the dreaded dacoit Gabbar Singh. The jailor reminds him that both Jai and Veeru are themselves seasoned criminals. Thakur replies with calm certainty, “Loha lohe ko kaatta hai,” meaning iron cuts iron.
This thought fits perfectly into our times. Today, the iron is Artificial Intelligence, or AI for short. AI has quietly moved into our everyday life. It is present in our phones through assistants like Siri and Google Assistant, in the systems that run our cars, in smart washing machines that choose the right wash cycle, and in voice-controlled devices like Alexa. It works in our workplaces and even in our homes. It can process and analyse massive amounts of data to make decisions at a speed and scale that would be humanly impossible. It helps doctors detect illnesses more accurately. It guides farmers in deciding when to sow their crops. It keeps traffic moving in large cities. It helps us find products online and even suggests what we might want to watch in the evening on platforms like Netflix.
But just as in the old days a sword could be used to protect a village or to plunder it, AI is a double-edged sword. For every helpful and noble use of AI, there is someone trying to twist it into a tool for harm. Cybercriminals have realised that AI can make their work easier, faster, and far more effective. They no longer need to spend days writing malicious code or weeks planning an attack. Now they can let AI do the heavy lifting.
They can create fake voices so convincing that even family members cannot tell the difference. They can generate deepfake videos that appear to show real events which never happened. They can produce scam emails that feel personal, using the right tone and details to fool even cautious people. These are forms of social engineering attacks that exploit human psychology by targeting the limbic system, the part of the brain that controls emotions and instinctive reactions. By triggering fear, trust, or urgency, attackers push people into making quick decisions without thinking through the consequences. They can break passwords by analysing stolen data and spotting patterns in record time. They can scan networks and systems for weak points and prepare a customised attack within minutes. Even worse, if one attempt fails, their AI learns from it, adapts and comes back stronger, thanks to machine learning (ML).
The good news is that we are not helpless in this new battlefield. The same AI that criminals use to attack can be used by defenders to protect. AI can monitor huge amounts of online activity in real time and raise an alarm the moment it notices something unusual. It can filter out dangerous emails before they land in our inbox. It can stop suspicious banking transactions before money leaves the account. It can check if a voice is genuine or computer generated and whether a face belongs to a living person or is just a still image. AI can also look at the history of attacks and predict the next likely move, giving security teams precious time to prepare.
Today the battle is often AI against AI. On one side are criminals training their machines to break in and steal. On the other are security teams training their machines to block and mislead. In banking, when an attacker’s AI tries to guess thousands of passwords in seconds, the defender’s AI can spot the pattern instantly, shut the door and even feed the attacker false information to waste their time. On social media platforms, AI is learning to detect deepfakes before they can go viral and damage reputations. These battles happen in milliseconds, and most of us never even see them.
Even with these powerful tools, the truth is that the weakest link is still us. Many attacks work not because the criminal’s AI is so advanced, but because we gave it an opening. We click on suspicious links without thinking. We trust messages that sound urgent but are fake. We use passwords so simple that even a half-trained AI could guess them in seconds. This is why technology alone is not enough. Our defence must also include strong neurocognitive habits, the mental skills that help us recognise threats and make sound decisions under pressure. Through deliberate practice, we can build a kind of neuroencoding, training our brains to respond automatically and wisely when faced with suspicious situations, avoiding costly consequences. We must use strong and unique passwords. We should pause and think before clicking on links or downloading attachments. We must verify strange messages, even if they seem to come from someone we know. We should keep our devices and software updated so that security gaps are closed. AI can help protect us, but it cannot replace our own awareness, because when we fail, the consequences can be severe and far-reaching.
The future will blur the line between our physical and digital worlds even more. Our homes will be run by smart devices. Our vehicles will be connected to the internet. Our workplaces and public services will depend on digital systems and networks. This will open new opportunities for attackers, but it will also give defenders new ways to protect us. AI might become like a personal digital bodyguard, always watching, always ready to act before we even notice a threat. But the more we rely on such guards, the greater the risk that we forget how to defend ourselves without them.
The lesson from Sholay remains true. AI is our iron. The criminals have theirs. The side with the sharper edge and the wiser mind will win. We must keep our AI sharper, faster and smarter than theirs. That means investing in technology, training people, and always staying alert to new threats.
And if we fail to do that, the day may come when an AI pops up on our screen saying, “Gabbar is calling. Do you want to accept?” Out of habit, we might just click “Yes” without a second thought. By then, the consequences will already be in motion, the game will be over, and the only thing left to ask will be, “Ab tera kya hoga, Kalia?”
Copyright © 2025 Living Media India Limited. For reprint rights: Syndications Today