Machine learning is another tool, not The Tool

As we head into a new decade, the potential and increasing application of artificial intelligence (AI) to assist with investment decision-making and improve the accuracy of our investment choices continues to receive a lot of attention.

Vaughan Henkel

Absolute Return Strategies

 
  • The primary benefit of machine learning for fundamental investment analysts is the ability to access less-widely disseminated data.
  • In a world where the future will not look like the past, human intervention is critical and our interaction with machine outcomes to inform investment decisions is key.
  • Machine learning is one tool of many for the fundamental investment analyst.

Machine learning is one tool of many for the fundamental investment analyst

Disruptive technologies have several benefits, changing the way we interact with data, apply our human knowledge and invest money. However, we should apply caution when relying on machines to do the thinking, especially when the future does not necessarily look like the past.

 

Machine learning is an application of AI that provides machines with the ability to automatically learn and improve from experience without being explicitly programmed.

 

A catch-all phrase for the analysis of large datasets to make better forecasts, machine learning essentially uses statistics to derive relationships between data without first specifying a model. The goal is to automate decision-making by learning from data and determining a structure, without human assistance. Here we unpack some of the benefits and pitfalls for fundamental investors.
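
To make this concrete, here is a minimal, illustrative sketch in R (our own assumption of language and packages, not code from the article) in which a k-nearest-neighbour model infers a non-linear relationship from simulated data without that relationship ever being written into the program; the data and tuning choices are invented for the example.

```r
# Minimal sketch: the model "learns" the shape of y as a function of x purely
# from the data; no functional form is specified up front.
library(caret)   # assumed modelling package

set.seed(1)
x   <- runif(300, 0, 10)
y   <- sin(x) + rnorm(300, sd = 0.2)          # true relationship, unknown to the model
dat <- data.frame(x = x, y = y)

fit <- train(y ~ x, data = dat, method = "knn", tuneLength = 5)
predict(fit, newdata = data.frame(x = c(1, 5, 9)))   # predictions track sin(x) closely
```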

 

One tool of many

The STANLIB Absolute Returns team has a broad and deep toolbox to inform investment decisions. We believe that machine learning is a key tool and that we benefit from the ability of a machine to process data points that are not easily available and have never before been considered as part of the decision-making process to forecast, for example, the sustainability of company returns. We also benefit through saving time, allowing us to focus on fundamental research to complement a machine’s automated intelligence.

 

‘Over-fitting’ is potentially the biggest challenge to the tool: a variable appears to have predictive power in the historical sample, but there is no fundamental causality behind the relationship. The combination of fundamental research and machine learning should therefore result in a better outcome.
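
As a generic illustration of that over-fitting risk (a sketch we add here, not the team's own testing), the snippet below fits a parsimonious linear model and a deliberately over-flexible polynomial to the same noisy sample; the flexible model "discovers" relationships that do not generalise to unseen data.

```r
# Over-fitting in miniature: an over-flexible model fits noise in the training
# sample and performs worse on data it has not seen.
set.seed(42)
x <- runif(60, -1, 1)
y <- 0.5 * x + rnorm(60, sd = 0.3)               # true signal is weakly linear
train_df <- data.frame(x = x[1:40],  y = y[1:40])
test_df  <- data.frame(x = x[41:60], y = y[41:60])

simple  <- lm(y ~ x, data = train_df)            # parsimonious model
complex <- lm(y ~ poly(x, 15), data = train_df)  # far too flexible

rmse <- function(model, d) sqrt(mean((d$y - predict(model, d))^2))
c(simple_test = rmse(simple, test_df), complex_test = rmse(complex, test_df))
# The complex model usually shows the lower in-sample error but the higher
# out-of-sample error: its extra "relationships" are noise, not causality.
```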

 

For example, if you were a keen fisherman in search of the world’s best salmon fishing waters, would you want to scour all the world’s oceans or be steered directly to the teeming waters of Scotland?

 

While the first option would ultimately deliver a result, the second would clearly be better, faster and significantly more efficient.

 

Since human analysis, consideration and manual intervention remain an ongoing requirement, we argue that machine learning should be just one useful tool in the arsenal, rather than the ultimate power tool in and of itself. As the well-known analogy of a butterfly flapping its wings in the Amazon and setting off a ripple of financial crises reminds us, establishing genuine causality is crucial.

 

Our perspective as fundamental analysts

We, as fundamental analysts, are excited about the benefits machines can offer, the most important being the ability to access less-widely disseminated data (i.e. data other than direct financial data). This is best illustrated by HSBC’s approach of analysing company investor conference calls for comments on employee performance.

 

HSBC analysts have demonstrated a relationship between companies’ positive comments about their employees and improved investment returns.

 

In the report titled “Who talks about workers? Machine learning analysis of conference calls”, they identified the companies whose management spoke about workers in a positive light (analysing 27 million sentences for 978 companies). The result was that the top 10% of companies on this metric outperformed on a total-return basis.
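
The report’s natural-language pipeline is far more sophisticated than anything shown here; purely to illustrate the idea, the toy R sketch below scores transcript sentences for positive worker-related commentary. The keyword lists, the score_transcript function and the sample sentences are all invented for illustration and are not HSBC’s methodology.

```r
# Toy illustration: what share of worker-related sentences in a call transcript
# use positive language? (Hypothetical keyword lists.)
worker_terms   <- c("employee", "employees", "workers", "our people", "staff")
positive_terms <- c("proud", "talented", "commitment", "reward", "invest in")

score_transcript <- function(sentences) {
  mentions <- grepl(paste(worker_terms, collapse = "|"), sentences, ignore.case = TRUE)
  positive <- grepl(paste(positive_terms, collapse = "|"), sentences, ignore.case = TRUE)
  sum(mentions & positive) / max(sum(mentions), 1)
}

calls <- list(
  companyA = c("We are proud of our talented employees.",
               "Margins were under pressure this quarter."),
  companyB = c("Headcount was reduced to protect margins.")
)
sapply(calls, score_transcript)   # higher score = more positive worker commentary
```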

 

The outperformance was 1.7% CAGR for the period 2012-2019. Furthermore, Alexandria Contextual Text Analytics (an external AI database that extracts context and sentiment from unstructured content) indicates that 75% of data inputs are supplied by newswires and regulatory agencies, meaning this data is widely available to all market participants. To extract alpha, we believe it is the remaining 25% of data inputs that will reveal relevant relationships and thus investment opportunities.

 

However, we note that the primary assumption that the past is indicative of the future may need to be challenged. Under this premise, structural changes in relationships cannot be catered for, and human intervention is required to monitor and adjust for such dynamics. Probably the best example of this is the 30-year bull run in US Treasury bonds, which began when Federal Reserve Chairman Paul Volcker aggressively hiked rates to bring inflation under control in the early 1980s. We may be coming to the end of this cycle of declining US 10-year yields, and this will have an impact on US market returns over the next 10 years.

 

Using machines to predict 10-year stock market returns

To further illustrate the pitfalls and benefits of applying machine models, we recently attempted to forecast 10-year US stock market returns using five different machine learning models. We used the Shiller-Goyal data from 1948 and eight indicators to drive the prediction: the price-earnings (PE) ratio, price to dividends, price to book, the cyclically adjusted price-earnings (CAPE) ratio, total-return CAPE, inflation, unemployment and the US 10-year bond yield. The models use the data from 1948 to 1991 as a training set, so the assumption, again, is that the future is similar to the past. The models were implemented in the R programming language. Each of the five models interprets the data in a slightly different way, as listed below; an illustrative R sketch of the setup follows the list:

 

 

  • glmnet – essentially a linear model with limits (regularisation) to prevent overfitting
  • KNN – uses a nearest-neighbour algorithm
  • MARS – takes non-linearities into account
  • XGBoost – a tree model taking non-linearities and interactions into account
  • SVM – very flexible, so may easily overfit
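
A hedged sketch of how such an exercise might be set up in R with the caret package is shown below. The file name, column names (tr_index, pe, cape, us10y and so on) and the forward-return construction are assumptions for illustration, not the team’s actual code or data schema.

```r
library(caret)   # assumed modelling framework

shiller <- read.csv("shiller_goyal_monthly.csv")   # assumed local copy of the Shiller-Goyal data
h <- 120                                           # forecast horizon: 120 months = 10 years

# Target: forward 10-year CAGR of an (assumed) total-return index column
shiller$fwd10y_cagr <- c((shiller$tr_index[-(1:h)] / head(shiller$tr_index, -h))^(1 / 10) - 1,
                         rep(NA, h))

# Training window as described above: 1948 to 1991
train_set <- subset(shiller, year >= 1948 & year <= 1991 & !is.na(fwd10y_cagr))

# The eight indicators driving the prediction (column names assumed)
form <- fwd10y_cagr ~ pe + price_to_div + price_to_book + cape +
                      tr_cape + inflation + unemployment + us10y

# caret method names corresponding to the five models listed above
methods <- c(glmnet = "glmnet", knn = "knn", mars = "earth",
             xgboost = "xgbTree", svm = "svmRadial")

ctrl   <- trainControl(method = "cv", number = 5)  # simple cross-validation
models <- lapply(methods, function(m)
  train(form, data = train_set, method = m, trControl = ctrl))
```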

The results that follow show which indicators are the most important in determining or forecasting 10-year returns for each model. The MARS model relies only on CAPE, while the other models spread importance more evenly across the indicators.
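
Continuing the sketch above, caret’s varImp() is one way to reproduce this kind of comparison of indicator importance across the fitted models.

```r
importance <- lapply(models, varImp)   # variable importance for each fitted model
importance$mars      # MARS: importance typically concentrated in one predictor (here, CAPE)
importance$xgboost   # XGBoost: importance spread across several indicators
```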

 

From an accuracy perspective, the best model is XGBoost (measured against the training data from 1948 to 1991) and the worst is the KNN model, where the accuracy of using the factors to predict returns, as measured by R-squared, is low.
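
One simple way to make that accuracy comparison within the sketch is to compute each model’s R-squared against the 1948-1991 training data; the article does not publish its exact accuracy calculation, so this is illustrative only.

```r
# R-squared of each model's fitted values against the training data
r2 <- sapply(models, function(m) {
  preds <- predict(m, newdata = train_set)
  cor(preds, train_set$fwd10y_cagr)^2
})
sort(r2, decreasing = TRUE)   # a flexible tree ensemble such as XGBoost tends to rank highest
```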

In summary, using current data for these indicators and the preferred XGBoost model, the forecast for the US market’s 10-year CAGR is 8.9%, given the significant importance this model assigns to the CAPE indicator and the PE ratio.


The MARS model, on the other hand, which uses only CAPE, suggests the forecast return would be -9.9% CAGR for the next 10 years.
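
To turn fitted models into point forecasts like those quoted above, current values of the eight indicators are passed to predict(); the indicator levels below are placeholders for illustration, so the output will not reproduce the 8.9% or -9.9% figures.

```r
# Current indicator levels (placeholder values, not the article's inputs)
current <- data.frame(pe = 22, price_to_div = 55, price_to_book = 3.5,
                      cape = 30, tr_cape = 32, inflation = 0.02,
                      unemployment = 0.036, us10y = 0.018)

c(xgboost = predict(models$xgboost, newdata = current),   # the preferred model
  mars    = predict(models$mars,    newdata = current))   # the CAPE-driven model
```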


Model selection is clearly very important in determining the accuracy of the outcome, as there is a trade-off between flexibility and overfitting. We conclude that CAPE, the US 10-year yield and the PE ratio are the most useful determinants of the future 10-year return of the US market. Machine learning is a beneficial tool for producing quantifiable outcomes that aid investment decision-making, applying mathematical concepts from the 1960s and 1970s with computing power now large enough to do them justice. It is another tool in our arsenal, not the only tool, and it must be used carefully to avoid the assumption that machine learning replaces all analysis.


Human interaction remains critical, and our analysis clearly highlights that while machines can do incredible work, it is the combination of people and machines that will drive even better results.
