Fraud has become big business, to say the least.
In an interview with PYMNTS' Karen Webster, Featurespace's chief operating officer Tim Vanderham said the money from illicit gains far exceeds the revenues of the world's largest companies, “when you think about the billions of dollars coming out of fraud around the world.”
The conversation took place against the backdrop of a Wall Street Journal article detailing the rise of fraud dens, which essentially function as well-equipped business centers, with separate departments for training fraudsters, "onboarding" unsuspecting victims, and tracking the KPIs used to determine whether a particular scam will succeed.
Along the way, scammers have proven adept at using artificial intelligence to build relationships and trust with their victims, prey on human emotions to steal personal savings and retirement funds, and siphon money from bank accounts at incredible speed, especially through authorized push payments.
In the United States alone, Vanderham said, the $2.7 billion in fraud reported a few years ago is only a fraction of the actual total, mainly because people are embarrassed to report that they fell prey to scams. Criminal gangs, meanwhile, use the stolen funds to finance other crimes, such as human trafficking and drug trafficking.
AI vs. AI
For banks and service providers tasked with stopping fraudsters, fighting AI with AI is a challenge.
“When it comes to using AI and machine learning, they're not being held to the same standards,” Vanderham said.
Financial institutions (FIs) are bound by ethical concerns and a series of regulations that are still being developed.
But the data that flows through financial services systems every day, and a collaborative approach to harnessing and analyzing that data, could go a long way toward modeling what "true human behavior" looks like and building profiles of individuals' tendencies and transactions, he said.
Vanderham said Featurespace's model uses behavioral analytics and collaboration to understand, for example, how the transaction behavior of a consumer in London differs from that of another individual living in South Africa, or to uncover whether a new payment to Hong Kong is a red flag because it comes from someone who has never transacted there before.
The data "can help banks and financial institutions address warning signs," Vanderham said, which in turn can drive education and awareness for end users and allow them to perform additional verification to ensure transactions are legitimate and headed to the right place.
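To make the idea concrete, here is a minimal, purely illustrative sketch of profile-based transaction screening: each customer's past destinations and amounts form a behavioral profile, and a payment to a never-before-seen country or for an unusually large amount raises flags. All names, thresholds, and logic here are invented for illustration; production systems like Featurespace's rely on far richer machine-learning models.

```python
from collections import defaultdict

class BehaviorProfile:
    """Toy per-customer profile: known destinations and amount history."""

    def __init__(self):
        self.destinations = set()  # countries this customer has paid before
        self.amounts = []          # history of legitimate transaction amounts

    def observe(self, destination, amount):
        """Record a legitimate transaction in the customer's profile."""
        self.destinations.add(destination)
        self.amounts.append(amount)

    def risk_flags(self, destination, amount):
        """Return the ways this transaction deviates from the profile."""
        flags = []
        if destination not in self.destinations:
            flags.append("new_destination")
        if self.amounts:
            mean = sum(self.amounts) / len(self.amounts)
            if amount > 3 * mean:  # crude threshold, for illustration only
                flags.append("unusual_amount")
        return flags

profiles = defaultdict(BehaviorProfile)

# A London customer who regularly pays UK and French counterparties
for dest, amt in [("GB", 120.0), ("GB", 80.0), ("FR", 95.0)]:
    profiles["customer_london"].observe(dest, amt)

# A first-ever large payment to Hong Kong raises both flags ...
print(profiles["customer_london"].risk_flags("HK", 5000.0))
# ... while a routine domestic payment raises none
print(profiles["customer_london"].risk_flags("GB", 100.0))
```

In a real deployment, a flagged transaction would not be blocked outright but routed to the kind of additional verification Vanderham describes, such as asking the customer to confirm the payment before it settles.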
Featurespace has been investing in advanced algorithms to underpin its fraud prevention efforts: last year, the company launched TallierLTM, the world's first large transaction model, which uses generative AI to improve fraud value detection by up to 71%.
“What OpenAI did for language and speech, we're creating for the payments environment: modeling what real behaviors and transactions look like,” Vanderham said.
Public-private collaboration will be important to drive regulatory and technological evolution.
“We need to make sure we're using advanced data algorithms and machine learning on this data to combat fraud and enable consumers to transact more freely,” Vanderham noted.
"We're ready to take on these fraudsters," he told Webster, "to take them down and beat them at their own game" by leveraging AI and machine learning as the most prominent line of defense (and attack) against these criminals.