The Truth About 'Lie Detector' AI and Machine Learning Disclosure Analysis

In most cases, effective communication is intuitive – you know it when you hear it. As an IR professional, you are probably a natural communicator in your day-to-day life and investors are, after all, just people. But what does an effective message look like when your audience is a machine?

Investors are increasingly adopting advanced tools that use AI and machine learning to analyze corporate disclosures, looking for hidden meaning in the numbers, the words, and even non-verbal behavior. These tools are applied to your earnings calls, roadshows, and even private phone calls with investors. So how do you communicate effectively in this new disclosure landscape?

In this article, I will provide insights from the academic literature to explain what some of these tools are, why investors use them, and what IR professionals can do about them.

Much of the content in corporate disclosures is also available in the financial statements, which suggests something most IR professionals are familiar with: how something is said can be as important as what is said. And investors looking for an edge over their competitors use a variety of methods to analyze the words and voice signals in disclosures.

Text analysis

Language analysis methods in a disclosure range from simple word counting to complex probabilistic modeling. Word count can be used to measure the frequency of specific words, like “income” or “risk,” as well as categories of words, like personal pronouns. Specific dictionaries can also be used for features such as tone, which compare the number of positive and negative words in a disclosure.

Although these approaches are simple to use, a limitation is that they may not capture the context of the disclosure. For example, consider the simple sentence: “Bad debt expenses have decreased”. While this clearly conveys positive information, a simple word-counting approach would likely classify each of the words as negative, resulting in a measure of tone that doesn’t capture the meaning of the sentence.
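The limitation above is easy to see in code. Here is a minimal sketch of dictionary-based tone scoring; the word lists are purely illustrative (not any real finance dictionary), and the scoring rule is the simple positive-minus-negative count described above:

```python
# Illustrative word lists -- a real tone dictionary would be far larger.
NEGATIVE = {"bad", "debt", "decreased", "risk", "loss"}
POSITIVE = {"record", "growth", "improved", "strong"}

def tone(text: str) -> float:
    """Score tone as (positive words - negative words) / total words."""
    words = [w.strip(".,").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

# The sentence conveys good news, yet word counting scores it as negative:
print(tone("Bad debt expenses have decreased"))  # -0.6
```

Because "bad," "debt," and "decreased" are each counted as negative with no regard for context, the positive meaning of the sentence is lost entirely.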

More sophisticated approaches attempt to overcome these issues using tools such as machine learning. These methods typically start by collecting a large sample of historical data, such as transcripts of a company’s previous earnings calls.

This data is used to “train” a model to look for relationships between groups of words and stock price movements. The model can then be used in real time to predict stock prices from the words in new disclosures.
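As a rough illustration of the train-then-predict idea, the toy sketch below associates each word in a set of hypothetical historical transcript snippets with the average return that followed, then scores new text by those associations. Real systems use far more sophisticated models; the data and the scoring rule here are invented for illustration only:

```python
from collections import defaultdict

# Hypothetical training data: (transcript snippet, next-day return).
history = [
    ("record revenue and strong margins", 0.03),
    ("weak demand and supply headwinds", -0.04),
    ("strong growth despite headwinds", 0.01),
    ("revenue miss and weak guidance", -0.02),
]

# "Training": average the return observed whenever each word appeared.
totals, counts = defaultdict(float), defaultdict(int)
for text, ret in history:
    for word in set(text.split()):
        totals[word] += ret
        counts[word] += 1
word_score = {w: totals[w] / counts[w] for w in totals}

def predict(text: str) -> float:
    """Score new disclosure text by its words' historical associations."""
    known = [w for w in text.split() if w in word_score]
    return sum(word_score[w] for w in known) / len(known) if known else 0.0

print(predict("strong revenue"))   # positive score
print(predict("weak guidance"))    # negative score
```

Even this toy version shows the core weakness discussed below: the scores are driven entirely by historical co-occurrence, so words that appear in new circumstances carry no signal at all.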

Although machine learning approaches have some advantages over simpler methods, they also have limitations. Machine learning involves complex calculations and algorithmic analysis, which can be expensive to implement. The complexity of these approaches can also increase the risk of user error. Combined with high frequency trading strategies, the speed and efficiency of analysis means that these failures can be catastrophic.

And because they are based only on historical data, the algorithms can fail when new circumstances arise. Even at the best of times, the complexity of statistical models can make it difficult, if not impossible, for investors to know what the model is responding to, unlike simpler measurements or qualitative analysis.

Acoustic analysis

In addition to the words you use, investors also listen for voice characteristics such as pitch, volume, and how quickly you speak. These cues can be informative about a speaker’s personality, emotional state, and intentions, and research has shown they can predict firms’ financial performance, stock market returns, and even CEOs’ career outcomes.

Non-verbal communication has always been important, but the availability of free, open-source tools such as Praat and Python libraries has made it possible to quantify these signals with greater accuracy and speed than ever before. While it is clear that there is useful information in voice signals, it is important to note that this area is still new and investors are still learning how to use this data.
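To give a sense of how such tools quantify a voice signal, here is a minimal, self-contained sketch of pitch (fundamental frequency) estimation by autocorrelation; tools such as Praat use refined versions of this basic idea. The sample rate and frequency bounds are illustrative assumptions:

```python
import math

RATE = 16000  # samples per second (assumed)

def estimate_pitch(samples, rate=RATE, fmin=60, fmax=400):
    """Return the frequency (Hz) whose lag maximizes autocorrelation."""
    best_lag, best_corr = 0, float("-inf")
    # Search lags corresponding to plausible speaking pitches.
    for lag in range(rate // fmax, rate // fmin + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return rate / best_lag

# Synthesize 0.1 s of a 120 Hz tone (roughly a low speaking pitch).
tone = [math.sin(2 * math.pi * 120 * t / RATE) for t in range(1600)]
print(round(estimate_pitch(tone)))  # 120
```

Pitch is only one of many features; volume (intensity) and speaking rate are extracted with similarly simple signal-processing primitives, which is why these measurements have become so cheap and fast to compute.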

While there is evidence that individual signals like pitch are associated with stock price movements, the general consensus is that these signals should be considered in combination with each other.

With little guidance on how to combine these metrics, investors can rely on a purely statistical machine learning approach or off-the-shelf software. These approaches are more expensive, but the biggest problem, especially for proprietary software, is that they are opaque and difficult for users to understand. Without being able to see what’s going on under the hood, investors have to trust developers’ claims.

For example, many voice analysis tools are marketed as effective lie detectors and claim that they can determine if a CEO is misleading. While this idea may sound appealing to investors, such claims should be treated with skepticism; the proprietary nature of these tools prevents independent verification, and the evidence overwhelmingly shows that voice cues cannot reliably detect deception.

Even so, research consistently shows that stock prices and key performance indicators are associated with managers’ vocal characteristics, and investors are likely to continue to analyze and trade on these indices.

What we can do about it

The disclosure landscape is changing and a common concern I hear from IR professionals is not knowing how to respond. How do you communicate effectively when even subtle changes in voice pitch can swing stock prices? Before canceling your earnings calls, here are some recommendations.

  1. Don’t try to beat the algorithms: While it’s important to know how investors analyze your information, you don’t have to become a machine learning expert. In many cases, even investors using these algorithms don’t know what’s going on under the hood, so it’s not worth guessing. In addition, machine learning involves learning, so the algorithms adapt over time. By the time you figure out what an algorithm was doing last quarter, it has probably already changed.
  2. Be clear and concise: When the algorithms process your disclosures, the goal is to extract additional information to gain an advantage over other investors. One of the best ways to reduce this advantage is to make your communications easy for key investors to understand, using plain language and simple formatting.
  3. Tailor non-verbal communication to your message: A common assumption in acoustic analysis is that speakers do not consciously control their vocal signals, which therefore “leak” information. But that’s not entirely true. Effective speakers use their pitch, volume, and rate to communicate, clarify, and persuade. Therefore, when preparing to speak with investors, consider the specific audience and message: keeping them in mind can help you tailor your vocal cues to your goals.
  4. Practice (but not too much): Like any other skill, effective communication is something that can be learned and improved upon. Rehearsal and presentation coaching can help identify opportunities to improve your performance. That said, recognize that everyone has their own style and there are many ways to communicate effectively. While everyone can improve, don’t try to change too much: if your performance doesn’t feel authentic to you, it won’t feel authentic to your audience.
  5. Ask for help: As new technologies change the way investors analyze your information, the same technology is being developed to help companies improve their communication. Collaborating with experts and researchers in this field can help IR professionals maintain their edge and communicate effectively in this ever-changing disclosure environment.

Blake Steenhoven is Assistant Professor of Accounting at the Smith School of Business at Queen’s University, Canada

This article originally appeared in the Fall 2022 issue of IR review.
