India witnesses surge in fraudulent activities led by AI
Feb 2024 | Experian in the News
Posted by The Statesman

According to a recent study, India has experienced a significant upswing in fraudulent activity, aided by the arrival of generative AI.

Nearly 64% of respondents reported an increase in fraud losses over the past year, according to the study, which was conducted by Forrester Consulting and commissioned by Experian.

The study said that 67% of these respondents are struggling to keep pace with the rapidly evolving landscape of fraud threats.

Financial services organisations reported the biggest increase in losses. Several factors are driving this surge in fraud attacks: persisting financial pressure on consumers, numerous data breaches leaking sensitive information onto the dark web, and the arrival of publicly accessible generative AI, which has lowered the technical skill required to commit fraud.

“As technology advances, the landscape of fraud undergoes shifts, creating new complexities. The rise of advanced technology transforms fraud dynamics, challenging established securities, and breaking down barriers,” the study said.

The volume of fraud attacks has gone up in almost every category. Synthetic identity attacks, where real and fake information are combined to form new identities, are on the rise, particularly in the financial services sector.

This is closely followed by identity theft attacks and account takeover attacks. In the eCommerce sector, friendly fraud attacks, where customers dispute legitimate charges, have grown the most for merchants, followed by synthetic identity attacks.

The biggest challenge limiting businesses’ ability to prevent fraud is a lack of device fingerprinting for fraud identification (62%). The second is a growing number of referrals, which adds delays and costs (56%).

The top fraud-related priority is improving the explainability of ML models (65%), which shows how important ML has become to fraud prevention. Transparent ML models allow for human oversight so that unintentional bias can be identified. This capability is essential for ensuring ethical AI use and for complying with future AI regulatory frameworks.