It's 1993. Bill Clinton just took office, Janet Jackson and Whitney Houston are topping the charts, and Visa is experimenting with neural networks designed to cut down on credit card fraud. "It was simpler AI at the time, as you can imagine. I call it good old-fashioned AI—rule-based, simple math equations and so on," Rajat Taneja, Visa's president of technology, told Tech Brew.

Fast-forward more than three decades and the payments giant is still at it with a new generation of AI. Taneja said the company's long history with the technology is a key asset as it attempts to make the best use of a new class of language models at a time when AI is seemingly on the tip of every business exec's tongue. Much of the company's AI operation is still focused on combating fraud—Taneja said these models saved Visa $40 billion last year—but the company has also begun to apply large language models to writing code, customer support, and marketing personalization.

The journey: Years after the early foray into machine learning, Visa began its latest effort to consolidate data and ramp up AI around a decade ago, retooling for the deep-learning revolution of the mid-2010s. The company has sunk $3.3 billion into AI and data infrastructure over the past 10 years, according to Taneja.

Taneja said his team was early to the current language model era, which ultimately traces back to a seminal 2017 research paper on transformer models. Starting around five years ago, Visa began using generative AI to create synthetic data—generated output meant to imitate, in this case, fraudulent payments—to train the company's deep authorization model, which scores transactions based on risk. This helped "overcome the AI's well-known problem of cold start," he said.

Keep reading here.—PK
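To make the synthetic-data idea concrete, here is a minimal, hypothetical sketch of the general pattern: fit a simple generative model to a handful of confirmed fraud records, sample synthetic fraud-like transactions from it, and train a risk scorer on the augmented data. The feature layout, the per-feature Gaussian "generator," and the logistic-regression scorer are all illustrative assumptions for brevity; this is not Visa's deep authorization model or its actual generative pipeline, which are not public.

```python
# Illustrative sketch only: easing the "cold start" problem by augmenting a
# small set of labeled fraud examples with synthetic ones before training a
# transaction risk scorer. All names and numbers here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend transaction features: [amount_zscore, hour_of_day_norm, merchant_risk]
legit = rng.normal(loc=[0.0, 0.5, 0.2], scale=0.3, size=(5000, 3))
fraud = rng.normal(loc=[2.5, 0.9, 0.8], scale=0.4, size=(40, 3))  # very few labeled fraud cases

# Fit a trivial generative model (independent per-feature Gaussians) to the
# known fraud cases and sample new synthetic fraud-like records from it.
mu, sigma = fraud.mean(axis=0), fraud.std(axis=0) + 1e-6
synthetic_fraud = rng.normal(loc=mu, scale=sigma, size=(2000, 3))

# Train a risk scorer on real legitimate data plus real and synthetic fraud.
X = np.vstack([legit, fraud, synthetic_fraud])
y = np.concatenate([np.zeros(len(legit)), np.ones(len(fraud) + len(synthetic_fraud))])
scorer = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new incoming transaction (estimated probability it is fraudulent).
new_txn = np.array([[2.1, 0.95, 0.7]])
print(f"risk score: {scorer.predict_proba(new_txn)[0, 1]:.3f}")
```

In practice the generator would be a far richer model and the scorer a deep network, but the shape of the idea is the same: synthetic examples stand in for the labeled fraud data that doesn't yet exist.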