Visa Inc. (NYSE: V) is using artificial intelligence and machine learning to detect and counter fraud.
The company explained on Friday that criminals use AI bots to perform brute-force attacks, repeatedly submitting online transactions with combinations of primary account numbers, card verification values (CVVs) and expiration dates until they get an approval response.
This method, called an enumeration attack, causes USD$1.1 billion in fraud losses every year.
The company prevented USD$40 billion in fraudulent activity from October 2022 to September 2023, nearly double the amount prevented the previous year. Scammers employ various tactics, including using AI to generate primary account numbers (PANs) and test them repeatedly, said James Mirfin, global head of risk and identity solutions at Visa.
A PAN, usually 16 digits but sometimes up to 19, is a card identifier found on payment cards.
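PANs also carry a built-in checksum: the final digit satisfies the Luhn algorithm (ISO/IEC 7812), which is one reason randomly generated numbers tend to fail basic validation before ever reaching an issuer. A minimal sketch of that check in Python (the test number below is a widely published Visa test PAN, not a real card):

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the card number passes the Luhn checksum."""
    digits = [int(ch) for ch in pan if ch.isdigit()]
    total = 0
    # Walk digits from the right; double every second digit,
    # subtracting 9 when the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True (standard Visa test number)
print(luhn_valid("4111111111111112"))  # False (checksum broken)
```

Passing Luhn is necessary but nowhere near sufficient, which is why attackers still need to test candidate numbers against live authorization systems.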
“We look at over 500 different attributes around [each] transaction, we score that and we create a score—that’s an AI model that will actually do that,” said Mirfin. “We do about 300 billion transactions a year.”
Visa assigns a real-time risk score to each transaction, helping it detect and block enumeration attacks in card-not-present transactions, where a purchase is processed remotely without a physical card passing through a reader or terminal.
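Visa's production model is proprietary and scores over 500 attributes per transaction, far beyond anything reproducible here. But the core intuition behind enumeration detection can be illustrated with a hypothetical velocity heuristic: a single source submitting many distinct card numbers that all decline within a short window looks very different from a legitimate shopper. A toy sketch, with invented thresholds:

```python
import time
from collections import deque
from typing import Optional


class EnumerationDetector:
    """Toy velocity check: score a source higher as it submits more
    distinct declined card numbers inside a sliding time window.
    Thresholds are illustrative, not Visa's actual parameters."""

    def __init__(self, window_s: float = 60.0, max_distinct_pans: int = 5):
        self.window_s = window_s
        self.max_distinct_pans = max_distinct_pans
        self.events: dict[str, deque] = {}  # source -> deque of (timestamp, pan)

    def score(self, source: str, pan: str, approved: bool,
              now: Optional[float] = None) -> float:
        now = time.time() if now is None else now
        q = self.events.setdefault(source, deque())
        if not approved:
            q.append((now, pan))
        # Evict events that have aged out of the sliding window.
        while q and now - q[0][0] > self.window_s:
            q.popleft()
        distinct = len({p for _, p in q})
        # Risk approaches 1.0 as distinct declined PANs hit the threshold.
        return min(1.0, distinct / self.max_distinct_pans)


detector = EnumerationDetector()
# Five different card numbers declined in five seconds: classic enumeration.
for i in range(5):
    risk = detector.score("merchant-1", f"411111111111{i:04d}",
                          approved=False, now=float(i))
print(risk)  # 1.0
```

A real system would combine many such signals (device fingerprint, BIN distribution, geography, approval ratio) into a learned model rather than a single hand-tuned rule.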
Additionally, Visa uses AI to rate the likelihood of fraud for token provisioning requests, tackling fraudsters who use social engineering and other scams to illegally provision tokens and perform fraudulent transactions. In the last five years, the firm has invested USD$10 billion in technology to reduce fraud and increase network security.
All manner of scams now involve AI
Mirfin also warned that cybercriminals have been turning to generative AI and other technologies, including voice cloning and deepfakes, in their scams.
“Romance scams, investment scams, pig butchering – they are all using AI,” he said.
In pig butchering, scammers build relationships with victims before convincing them to invest their money in fake cryptocurrency trading or investment platforms.
“If you think about what they’re doing, it’s not a criminal sitting in a market picking up a phone and calling someone,” said Mirfin.
“They’re using some level of artificial intelligence, whether it’s a voice cloning, whether it’s a deepfake, whether it’s social engineering. They’re using artificial intelligence to enact different types of that.”
Scammers use generative AI tools like ChatGPT to produce more convincing phishing messages. Cybercriminals using generative AI can clone a voice with less than three seconds of audio, according to U.S.-based identity and access management company Okta.
This cloned voice can then trick family members into believing a loved one is in trouble or deceive banking employees into transferring funds from a victim’s account. Additionally, hackers and other cybercriminals have exploited generative AI tools to create celebrity deepfakes to deceive fans, said Okta.
“With the use of Generative AI and other emerging technologies, scams are more convincing than ever, leading to unprecedented losses for consumers,” Paul Fabara, chief risk and client services officer at Visa, said in the firm’s biannual threats report.
Regulatory response to AI is slow
The regulatory response has been slow, as policymakers continue to grapple with the complexities of the technology.
There is currently no U.S. federal regulation dedicated solely to AI. However, various government bodies have taken steps to regulate specific aspects of the technology. For instance, the Federal Trade Commission (FTC) actively protects consumers in matters involving AI, issuing guidance on its use in advertising and marketing and bringing enforcement actions against biased algorithms.
Additionally, the National Institute of Standards and Technology (NIST) has developed an AI Risk Management Framework to address AI risks, emphasizing transparency, explainability, and accountability in AI development and deployment.
According to Vancouver-based artificial intelligence company Verses AI (CBOE: VERS) (OTCQX: VRSSF), policymakers and government agencies must understand the intricate nature of AI, recognizing its interconnection with human culture, identity, and human rights. This understanding opens up the potential to harness AI for positive societal impact.
To enhance policymakers’ understanding and inform decision-making processes, the company proposes establishing an International AI Regulation Agency. This agency could also act as a global governance board, studying the worldwide effects of AI and promoting international cooperation on AI regulation, drawing inspiration from the Intergovernmental Panel on Climate Change.
Verses is a supporter of Mugglehead news coverage
Follow Joseph Morton on Twitter
joseph@mugglehead.com
