
๐‡๐จ๐ฐ ๐€๐ˆ ๐ˆ๐ฌ ๐๐ž๐ข๐ง๐  ๐”๐ฌ๐ž๐ ๐ญ๐จ ๐‚๐จ๐ฆ๐›๐š๐ญ ๐๐š๐ฒ๐ฆ๐ž๐ง๐ญ ๐…๐ซ๐š๐ฎ๐

Advancements in AI capabilities can be used both to harm and to protect.

Malicious actors are increasingly using accessible and affordable AI-powered tools to automate fraud attempts and reach more victims at scale. Generative AI can improve the quality of written scams and even create deepfake videos or voice clones, making social engineering attacks far more convincing.

These techniques exploit human behavior and emotions, manipulating people into sharing sensitive information or unintentionally compromising security.

As banks have strengthened their cyber defenses in recent years, fraudsters have shifted their focus toward customers. The rise of real-time payments has further narrowed the window to detect and block fraudulent transactions, increasing the pressure on prevention systems.

____

But companies are fighting back using AI.

AI-powered fraud detection tools are already preventing millions of dollars from falling into the hands of criminals. According to recent surveys, 42% of issuers and 26% of acquirers say they used AI to save more than $5 million from fraud attempts over the past two years.

The rapid availability of accessible and affordable generative AI and LLM-based tools has driven widespread adoption across industries. As of early 2025, 78% of organizations were using AI in at least one business function, up sharply from 55% just two years earlier.

While adoption is still in its early stages, these technologies are already transforming how the payments industry detects, prevents, and responds to fraud.
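To make the idea of AI-assisted fraud prevention concrete, here is a minimal, purely illustrative sketch of real-time transaction scoring. The feature names, weights, and threshold are hypothetical stand-ins for a trained model, not any vendor's actual system; the point is that a score can be computed in milliseconds, inside the narrow window real-time payments allow.

```python
import math

# Toy illustration only: a linear model with hand-set weights standing in
# for a trained fraud model. Feature names and weights are hypothetical.
WEIGHTS = {
    "amount_zscore": 1.4,   # unusually large amount vs. the customer's history
    "new_payee": 0.9,       # first payment to this recipient
    "odd_hour": 0.6,        # transaction outside the customer's usual hours
    "geo_mismatch": 1.1,    # device location far from the usual region
}
BIAS = -3.0
BLOCK_THRESHOLD = 0.80  # hold the payment for review above this risk score

def fraud_score(features: dict) -> float:
    """Return a 0-1 risk score via a logistic over weighted features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def decide(features: dict) -> str:
    """Approve the payment or hold it for review based on the score."""
    return "hold_for_review" if fraud_score(features) >= BLOCK_THRESHOLD else "approve"

# A routine payment scores low; a large, odd-hours payment to a new payee scores high.
routine = {"amount_zscore": 0.1, "new_payee": 0.0, "odd_hour": 0.0, "geo_mismatch": 0.0}
suspect = {"amount_zscore": 3.0, "new_payee": 1.0, "odd_hour": 1.0, "geo_mismatch": 1.0}
print(decide(routine), decide(suspect))
```

Production systems replace the hand-set weights with models trained on labeled transaction histories, but the decision flow, score then gate, is the same shape.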

#fintech #payment #fraud

Jan 12 at 7:01 AM