It’s 1993. Bill Clinton just took office, Janet Jackson and Whitney Houston are topping the charts, and Visa is experimenting with neural networks designed to cut down on credit card fraud.
“It was simpler AI at the time, as you can imagine. I call it good old-fashioned AI—rule-based, simple math equations and so on,” Rajat Taneja, Visa’s president of technology, told Tech Brew.
Fast-forward more than three decades and the payments giant is still at it with a new generation of AI. Taneja said the company’s long history with the technology is a key asset as it attempts to make the best use of a new class of language models at a time when AI is seemingly on the tip of every business exec’s tongue.
Much of the company’s AI operation is still focused on combating fraud—Taneja said these models saved Visa $40 billion last year—but the company has also begun to apply large language models (LLMs) to writing code, customer support, and marketing personalization.
The journey: Years after the early foray into machine learning, Visa began its latest effort to consolidate data and ramp up AI around a decade ago, retooling for the deep-learning revolution of the mid-2010s. The company has sunk $3.3 billion into AI and data infrastructure over the past 10 years, according to Taneja.
Taneja said his team was early to the current language model era, which ultimately traces back to a seminal 2017 research paper on transformer models. Starting around five years ago, Visa began using generative AI to create synthetic data—generated output meant to imitate, in this case, fraudulent payments—to train the company’s deep authorization model, which scores transactions based on risk. This helped “overcome the AI’s well-known problem of cold start,” he said.
“If you’re building a new model for risk and you don’t have existing data to tell you what kind of transactions are taking place, can you mimic and generate sample data using other data?” Taneja said.
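For readers who want a concrete picture of the cold-start idea Taneja describes, here is a deliberately simplified, hypothetical sketch. It is not Visa's system: the features, thresholds, and the stand-in logistic regression model are all illustrative assumptions, showing only how synthetic "fraud-like" records could bootstrap a risk scorer before any real fraud labels exist.

```python
# Hypothetical sketch (not Visa's actual system): bootstrap a risk model with
# synthetic fraud-like transactions when no labeled fraud data exists yet.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Legitimate transactions we do have: [amount, hour_of_day, distance_from_home_km]
legit = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.8, size=1000),   # typical purchase amounts
    rng.normal(loc=14, scale=4, size=1000) % 24,     # daytime-heavy activity
    rng.exponential(scale=10, size=1000),            # mostly close to home
])

# Synthetic fraud: shift the legitimate distribution toward assumed fraud patterns
# (large amounts, odd hours, far from home) instead of waiting for real labels.
synthetic_fraud = np.column_stack([
    rng.lognormal(mean=5.5, sigma=1.0, size=1000),
    rng.normal(loc=3, scale=2, size=1000) % 24,
    rng.exponential(scale=500, size=1000),
])

X = np.vstack([legit, synthetic_fraud])
y = np.concatenate([np.zeros(1000), np.ones(1000)])

# A placeholder classifier stands in for a real deep authorization model.
risk_model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a suspicious-looking transaction: $900, 3am, 1,200 km from home.
print(risk_model.predict_proba([[900.0, 3.0, 1200.0]])[0, 1])
```

Once real fraud labels start accumulating, the synthetic records would typically be phased out or down-weighted in favor of observed data.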
The team also tapped transformers to help predict risk for account-to-account transactions and detect enumeration attacks, in which cybercriminals buy card numbers off the dark web and test them in mass quantities.
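To make the enumeration-attack pattern concrete, here is a hypothetical, heavily simplified illustration. Visa reportedly uses transformer-based models for this detection; the sliding-window velocity check below is only a minimal stand-in, and every name and threshold in it is an assumption made for the example.

```python
# Hypothetical illustration only: a crude velocity check for card-testing
# ("enumeration") traffic. Real systems are far more sophisticated; this
# just flags merchants seeing many distinct card numbers in a short window.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
DISTINCT_CARD_THRESHOLD = 25  # illustrative threshold, not a real parameter

attempts = defaultdict(deque)  # merchant_id -> deque of (timestamp, card_number)

def record_attempt(merchant_id: str, card_number: str, ts: float) -> bool:
    """Return True if this merchant's recent traffic looks like card testing."""
    window = attempts[merchant_id]
    window.append((ts, card_number))
    # Drop attempts that have aged out of the sliding window.
    while window and ts - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    distinct_cards = {card for _, card in window}
    return len(distinct_cards) >= DISTINCT_CARD_THRESHOLD
```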
“This was all in the time before the big hoopla around generative AI,” Taneja said. “And of course, that hoopla is well-justified.”
The latest undertakings: These days, Taneja’s team is applying LLMs across three buckets that might sound familiar if you’ve been reading Tech Brew over the past year or so: developer tools; automating business functions and aiding customer service agents; and personalized marketing aimed at improving the shopping experience.
“A lot of the hard work [of shopping] is done by us, and we think we will move much like autonomous cars to a world where commerce can be benefited by a lot of self-driving,” Taneja said. “AI can help take the burden of shopping out and bring the joy of shopping back so registering for warranties, price matches, service calls—all of that it can help. Packages could show up before you even thought about it, because it knew it was someone’s birthday in your house and what you would want to buy.”
The equipment: Visa currently taps a mix of models for its various AI functions, including some from OpenAI and Anthropic and open models from IBM and Mistral, Taneja said. The company is also evaluating Meta’s Llama open model and Google Gemini.
The company runs most of its workloads in its own data centers, with redundancy and backup generators built in, given Visa’s classification as a critical or strategic service provider under US and European regulations, according to Taneja.
Some parting thoughts: Visa operates in a highly regulated industry and has had to shape its AI strategy around compliance. But Taneja said future regulations should aim to boost AI innovation.
“We need good regulations. We need a collaboration between regulators, governments, academia, industry like us. We need tool kits that they provide,” he said. “It’s important to have that with a good connection between all these parties that have got great knowledge and experience and expertise.”
Taneja also thinks there should be more openness and transparency in general around how models operate.
“The debate between closed models and open-source models—that’s going to be a huge thing in the coming years, because you cannot have a model that’s kind of completely closed,” he said. “You have to have visibility into it, even if somebody is not giving you the underlying training features.”