When a finance executive at British engineering firm Arup joined a routine Skype meeting in Jan 2024, familiar faces filled the screen. The company leaders he had interacted with countless times spoke with authority, blinked naturally, and smiled reassuringly.
Except the people on screen were not real. The AI-crafted doubles duped the firm, which has worked on the Statue of Unity and India’s rail projects, out of nearly $25 million. The deepfakes were so flawless that 15 transfers from the Hong Kong office went through before anyone realised the con. Months later, a deepfake surfaced closer to home: at the India office of a global chipmaker, a man used artificial intelligence (AI) to impersonate a real job candidate in an online interview.
“He synced facial movements and tone quite well, but we detected the use of deepfake tech, and he was out,” says Naveen Sharma, co-founder of Kroop AI, which built Vizmantiz, a detection tool for synthetic videos and audio.
Deception turns domestic
India’s deepfake saga began in 2020, when deepfake videos of politician Manoj Tiwari “speaking fluent Haryanvi” to appeal to voters went viral ahead of the Delhi assembly polls. By mid-2023, the menace had turned personal: in Kerala, a 73-year-old man lost Rs 40,000 to a WhatsApp deepfake call, seemingly from a friend pleading for urgent help from Dubai.
India recorded a staggering 280% year-on-year increase in deepfake incidents in Q1 2024, particularly in the lead-up to national elections, reported Sumsub, a global identity verification provider. A McAfee survey in Nov 2024 found 75% of Indians had seen deepfake content in the past year, and 45% reported that they knew someone who had been duped by a deepfake fraud. “The term ‘deepfake’ covers both synthetic content created from scratch and manipulated content that alters existing videos,” said Sharma.
“Both forms distort truth: one invents it, the other rewrites it.”
Tracing AI fingerprints
Forensic experts are now learning to read what AI cannot hide. “Deepfake audio is often too clean, lacking normal background noise,” says Dr Surbhi Mathur, head of the Centre of Excellence in Multimedia Forensics at National Forensic Sciences University (NFSU). “AI faces also lack natural light variations and photo response non-uniformity (PRNU), the ‘fingerprint’ left by camera sensors. Facial micro-expressions often lack natural blinking patterns, or the way someone moves their face or their hands near the face.”
Sandeep Shukla, director of the International Institute of Information Technology, Hyderabad, says, “There are tools that claim over 90% accuracy in deepfake detection. However, as these rely on neural network-based deep learning tech, there’s no guarantee that every form of media manipulation will be detected.”
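The PRNU check Mathur describes can be illustrated in a few lines. The sketch below is a toy demonstration, not any lab’s actual pipeline: the “scene”, the synthetic camera fingerprint, and the function names are all invented for illustration, and a real system would use a far more sophisticated denoiser than a box filter. The idea is simply that a genuine photo’s noise residual correlates with the camera’s known fingerprint, while AI-generated pixels carry no such sensor signature.

```python
import numpy as np

def noise_residual(img, k=3):
    """High-frequency residual: the image minus a k x k mean-filtered copy.
    Real PRNU work uses stronger denoisers; a box filter keeps this self-contained."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    smooth = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - smooth / (k * k)

def prnu_correlation(img, fingerprint):
    """Normalised correlation between the image's residual and a camera fingerprint."""
    r, f = noise_residual(img).ravel(), fingerprint.ravel()
    r = (r - r.mean()) / (r.std() + 1e-9)
    f = (f - f.mean()) / (f.std() + 1e-9)
    return float(np.mean(r * f))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 64)
scene = 100 * np.outer(np.sin(3 * x), np.cos(2 * x))  # smooth, camera-independent content
fingerprint = rng.normal(0, 1, scene.shape)           # stand-in for one camera's PRNU pattern

real_photo = scene + fingerprint                      # carries this sensor's fingerprint
ai_image = scene + rng.normal(0, 1, scene.shape)      # noisy, but not *this* camera's noise

corr_real = prnu_correlation(real_photo, fingerprint)
corr_fake = prnu_correlation(ai_image, fingerprint)
print(f"real: {corr_real:.2f}  fake: {corr_fake:.2f}")
```

In this toy setup the genuine photo scores a high correlation and the synthetic image scores near zero, which is the gap a forensic tool thresholds on; as Shukla cautions, generators that learn to mimic sensor noise can narrow it.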
Police and judges must be trained in detection and its limits, he urges. “And, when guilt is proven, the punishment must be high enough to act as a deterrent.”
Faces you know, scams you know
The surge in deepfake content is driven by mass appeal, says Mathur. “Scammers exploit trusted faces for massive financial gain.” Since 2023, the Deepfakes Analysis Unit (DAU), established under the Misinformation Combat Alliance with support from the corporate affairs and IT ministries, has tracked hundreds of AI-created scams.
Fabricated “endorsements” for investment schemes and gaming apps have used the faces of Ratan Tata, N R Narayana Murthy, Rahul Gandhi, Nirmala Sitharaman, Virat Kohli, and even doctors Naresh Trehan and Devi Shetty for bogus health cures.
A viral Ratan Tata “investment video” was found to be 83.8% AI-generated.