I’m of the view that most technology is, at its essence, agnostic. Whether it is used for good or ill depends on the intent of the user.
Artificial intelligence, which has exploded into the public eye over the past year, certainly falls into this category. In the field of finance, this dazzling technology offers both promise and peril. That’s the takeaway from a conversation I had with Craig Lewis, a professor of finance at Vanderbilt University, who previously served as chief economist for the U.S. Securities and Exchange Commission (SEC). We talked at a recent conference on disinformation at Cambridge University in England.
Disinformation (I host a podcast on this topic) can be defined as the manufacture and dissemination of false narratives with the deliberate intention of deceiving others. Some of the most famous accounting scandals in recent history fall neatly into this category. Two notorious examples: Enron Corp., once a Houston-based energy giant, and WorldCom, once America’s second-largest long-distance telephone company, collapsed more than two decades ago after executives were caught cooking the books in sleazy attempts to con investors.
Similarly, Bernie Madoff, who ran the biggest Ponzi scheme in history before being outed in 2008, can also be considered a manufacturer of disinformation for his years of conning gullible investors into believing that his asset management business could generate consistent, above-average returns.
In Cambridge, I asked Lewis whether artificial intelligence could have rooted out these fraudsters sooner. “It’s really interesting,” Lewis says. “I think the natural language processing component of fraud detection and financial results is kind of right in its infancy.” This, Lewis says, could have an impact when it comes to documents that corporations are required to file with the SEC, like 10-Ks, the comprehensive annual reports.
“When you think about the management discussion and analysis section in a 10-K, it’s a company’s attempt to explain what’s going on. There is a lot of opportunity to try to position and interpret your performance through the way you represent the text in your financial statements. So in addition to having these quantitative metrics based on ratios, for example, you can also try to take unstructured data like text and put structure around it and analyze the structured data itself.”
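To make that idea concrete, here is a minimal sketch, in Python, of one simple way to “put structure around” MD&A text: reduce a passage to a few numeric features, such as the share of negative or hedging words, that can sit alongside traditional financial ratios. The word lists and sample sentence below are invented for illustration; real systems use large financial dictionaries (such as Loughran-McDonald) or trained language models.

```python
import re
from collections import Counter

# Tiny, invented word lists for illustration only. Real tools use large
# financial sentiment dictionaries or trained models.
NEGATIVE = {"decline", "impairment", "loss", "litigation", "weakness"}
UNCERTAIN = {"may", "might", "could", "approximately", "believe"}

def structure_mdna(text: str) -> dict:
    """Turn unstructured MD&A prose into simple structured features."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return {
        "word_count": total,
        "negative_ratio": sum(counts[w] for w in NEGATIVE) / total,
        "uncertainty_ratio": sum(counts[w] for w in UNCERTAIN) / total,
    }

sample = ("We believe revenue may decline next year, and we could face "
          "litigation costs of approximately $10 million.")
print(structure_mdna(sample))
```

Features like these, tracked across filings and compared against a company’s quantitative ratios, are the kind of structured signal Lewis is describing.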
Meaning that as AI improves, along with people’s ability to engage it through carefully crafted prompts, our insight into a company’s operations and management’s thinking could meaningfully improve. (A prompt is the description of a task we want an AI program, say ChatGPT, to perform.)
“ChatGPT is an incredibly transformative idea,” Lewis says. “And one of the things that you can do with something like that (in terms of preparing a 10-K, for example) is you could literally draft the text and then push it through ChatGPT and ask it to write it more elegantly, like a Wall Street Journal article. If your intentions are good, this could help somebody come up with even clearer, more transparent ways of describing performance.”
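As a rough illustration of that workflow (my sketch, not anything Lewis or the SEC endorses), a filer’s internal tooling might pass draft disclosure language to a chat model and ask for a plainer rewrite. The snippet below uses OpenAI’s Python client; the model name and instructions are placeholders, and any output would still need careful human and legal review for accuracy.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = ("Revenues was down on account of market headwinds "
         "and costs overrun in the segment.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[
        {"role": "system",
         "content": ("Rewrite draft financial-disclosure text clearly and "
                     "plainly, as a newspaper would. Do not add, remove, "
                     "or change any facts or figures.")},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```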
But Lewis cautions that “clever earnings management schemes arise from operational problems and a desire by management to hide them. It is unclear how generative AI would be very helpful at the margin. I think ‘solutions’ would be idiosyncratic to the firms’ internal control systems.”
Meanwhile, let’s return to my earlier point that technology is agnostic. When current SEC Chairman Gary Gensler was a professor at MIT, he wrote a paper warning of the dangers of AI. Gensler described AI as a double-edged sword, “providing previously unseen predictive powers enabling significant opportunities for efficiency, financial inclusion, and risk mitigation.” But, he added, “Broad adoption of deep learning, though, may over time increase uniformity, interconnectedness, and regulatory gaps.”
One risk, he noted, was that AI-powered trading algorithms could cause markets to crash if they were programmed to think and operate in the same way. “There simply are not that many people trained to build and manage these models, and they tend to have fairly similar backgrounds,” Gensler wrote. “In addition, there are strong affinities among people who trained together: the so-called apprentice effect.”
Others call this dynamic the “herd effect,” when markets stampede in one direction or the other. We’ve already seen what can happen when algos are designed to do the same thing. In 2010 — a lifetime ago in terms of both technology and the markets — an event known as a “flash crash” occurred, sending markets into a steep plunge and causing hundreds of billions of dollars to evaporate in minutes before recovering nearly as quickly.
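The mechanics are easy to demonstrate with a toy model. The Python sketch below is an invented simulation, not a model of the actual 2010 event: it gives every trading algorithm the identical rule, sell everything after a sharp dip, and measures how much deeper the plunge gets as more of them share that rule.

```python
import random

random.seed(1)  # reproducible run

def simulate(n_algos: int, steps: int = 250) -> float:
    """Toy market with identical momentum algorithms.

    The price follows a small random walk. If the previous move was a
    drop of more than 0.5%, every algorithm still holding inventory
    dumps it at once, and each sale knocks another 0.1% off the price.
    Returns the worst peak-to-trough drawdown over the run.
    """
    price = last = peak = 100.0
    holders, worst = n_algos, 0.0
    for _ in range(steps):
        move = random.gauss(0.0, 0.004) * price
        if holders and (last - price) / last > 0.005:
            move -= holders * 0.001 * price  # synchronized selling
            holders = 0                      # inventory is gone
        last, price = price, max(price + move, 0.01)
        peak = max(peak, price)
        worst = max(worst, (peak - price) / peak)
    return worst

for n in (0, 100, 500):
    print(f"{n:>3} identical algos -> max drawdown {simulate(n):.1%}")
```

With no shared rule, the simulated market just wanders; with hundreds of identical sellers, one ordinary dip becomes a cliff. That, in miniature, is Gensler’s uniformity worry.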
Regulations were put in place after the crash, following the well-worn path of regulators fighting the last war rather than trying to anticipate the next one. It is this anticipation that we should now focus on with regard to artificial intelligence. AI’s power has advanced faster than our ability to understand it, much less control it.