Problems With Probability: Anyone for Tennis?

Technology has always been an integral part of human history, and this is reflected in the names we give to historical periods, such as the Bronze Age, the Industrial Revolution and, more recently, the Information Age. In the Information Age, technology has become so embedded in our daily lives that modern society could not continue without it. While the advent of Artificial Intelligence (AI) may mark an inflection point in the use of technology, in truth, much of what is hyperbolically termed AI today consists of technologies that many of us have been using for decades.

However, as using technology has become increasingly instinctive, we have started to forget its limitations. Specifically, we are willing to accept outputs generated by a computer without question, even though we would question the same outputs if they were produced by a human. For example, large numbers of people continue to trust satellite navigation systems over the evidence of their own eyes. Given how long these systems have been around, we would expect such blind faith to be declining, yet accidents ascribed to slavishly following directions from a satellite navigation system continue to occur in both developed and emerging countries.

Drivers are not the only ones who choose to believe computer outputs despite clear evidence to the contrary. In a recent tennis match, Victoria Azarenka got into a spat with the chair umpire over an electronic line call that ruled a ball in. While she pointed to a mark on the court showing that the ball was out, the chair umpire could not investigate the matter further because the rules of the sport do not permit the technology's calls to be challenged or overruled. These rules are less surprising when we consider that electronic line calling was first introduced in tennis to allow players to challenge calls made by human officials. This created a culture in which the technology always has the last word, irrespective of other available evidence.

Importantly, technology in its current form (whether we choose to term it AI or not) simply reflects the ability to process probabilities faster and more accurately than humans can. However, probability is not certainty. For example, even if there was a 99% chance that the ball Azarenka complained about was in, there remains a 1% chance that she was correct and that the ball was out. Given that there was physical evidence to consider, a case can be made that the umpire should have been permitted to examine it.
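To make the arithmetic concrete, the short sketch below shows how even a small per-call error rate adds up over a match. The 99% accuracy figure is taken from the example above; the number of close line calls per match and the assumption that calls are independent are purely hypothetical, chosen only for illustration.

```python
# A minimal sketch of the probability arithmetic above.
# The 99% per-call accuracy comes from the example in the text;
# the 200 calls per match is a hypothetical figure for illustration only.

per_call_accuracy = 0.99   # assumed chance that any single call is correct
calls_per_match = 200      # hypothetical number of close line calls in a match

# Probability of at least one wrong call across the match,
# assuming each call is independent of the others.
p_at_least_one_error = 1 - per_call_accuracy ** calls_per_match

print(f"Chance a single call is wrong: {1 - per_call_accuracy:.0%}")
print(f"Chance of at least one wrong call in {calls_per_match} calls: "
      f"{p_at_least_one_error:.0%}")
```

Under these illustrative assumptions, a 1% chance of error on any single call becomes roughly an 87% chance of at least one wrong call over the course of a match, which is precisely why probability should not be treated as certainty.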

Correct line calls in a tennis match may seem rather trivial. However much the outcome matters to those affected by it (such as the players and professional gamblers), a line call is not a life-or-death situation. Nevertheless, what transpired on the tennis court illustrates the negative consequences that can arise when we uncritically accept output generated by a computer. As decisions made by computers infiltrate ever deeper into all aspects of life, we would do well to ask whether we are questioning technological outputs as carefully as we should. This is particularly true when a general fascination with a specific technology, as we are currently seeing with AI, creates pressure to start using it.

As investors, we would be irresponsible if we ignored new technologies and their potential impact on the businesses that we invest in. Furthermore, we would be short-sighted if we did not at least consider whether new technology could be harnessed to improve our investment processes. However, we would be downright foolish if we were to implement and trust a technology to make decisions on our behalf without ever questioning its output.

We should always remember that AI is simply assessing probability based on historical patterns. The output therefore does not represent certainty, and we should not use it as though it does. This is very difficult for us to master because human beings crave certainty (which explains why fortune tellers are still in business). In addition, as we become more familiar with a new technology, we become less critical of it. For example, the first time a colleague presented you with a spreadsheet that did all the work for you, you were cautious and compared its output with what you produced independently. After several years of continuous use, you have come to accept the spreadsheet, with its complicated macros, as infallible. Such uncritical acceptance of output can become a problem even when the underlying technology deals with mathematical certainties. When the technology is assessing probabilities (grey areas), it can have dangerous consequences.

Therefore, as we allow technology ever deeper into our lives, we should also remember to consider the other evidence in front of us. In other words, we should allow for the possibility that the computer output could be wrong. This may be easier to do when we are making a non-routine or more important decision. However, we need to adopt a questioning stance even when relying on technology for routine decisions, lest we miss the exception before us.

Ironically, the technology that Victoria Azarenka questioned was developed from the technology used to adjudicate cricket matches. However, the rules of cricket do not automatically accept the output of the technology in all cases. Instead, certain judgments are left to the discretion of the human umpire when the technology's margin of error is too wide. Therefore, when using technology in investing and in our daily lives, it seems that we should apply the rules of cricket, rather than those of tennis, to increase our chances of making better decisions.