Making sense of the APRA super fund report card
There have been several reviews into the performance of Australia’s superannuation funds in recent years. Many of the reviews found significant issues. The banking royal commission found that funds run by banks often pay money to other parts of the bank for services such as buying and selling bonds, rather than doing it themselves or through brokers who would get better prices.
In the wake of these damning findings, the Australian Prudential Regulation Authority (APRA) has been tasked with rating the performance of super funds. Its recently published first report looked at 80 MySuper funds, which are the no-frills, basic super option for Australians who don’t select their own fund (which in itself suggests a lack of interest). Collectively, these accounts hold $900 billion in assets, or around one third of all superannuation savings.
Funds that persistently underperform, particularly those that charge high fees, have no place in a compulsory industry. However, the methodology APRA has adopted, and its findings, are significant because if a product fails the test for two consecutive years, the super fund will be prohibited from accepting new members – effectively a death knell. From next year, the number of super funds captured in the review will also increase.
The focus of the APRA review is on super funds where the trustee dictates the investment strategy. Any funds where members control the investment strategy, like self-managed super funds, are (understandably) excluded.
In many ways, the APRA report raises more questions than it answers. Reviewing performance figures against any benchmark is never straightforward. We have reflected on some of the key issues anyone reviewing the APRA results should consider.
APRA assesses funds with at least 5 years of performance data, extending the review out to a maximum of 8 years. In statistical terms, this period is not long enough to deliver compelling conclusions[i]. Any super fund whose performance is more than 0.5% per annum below its benchmark is labelled as failing.
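To illustrate why such a short window rarely supports firm conclusions, here is a minimal sketch of the t-statistic described in footnote [i], applied to 8 years of made-up annual excess returns. The figures are purely illustrative assumptions, not APRA’s data or methodology:

```python
import statistics

# Hypothetical annual excess returns (fund minus benchmark) over 8 years.
# These figures are invented for illustration only.
excess_returns = [0.012, -0.008, 0.015, 0.004, -0.011, 0.009, 0.002, 0.006]

n = len(excess_returns)
mean_excess = statistics.mean(excess_returns)
std_error = statistics.stdev(excess_returns) / (n ** 0.5)

# t-statistic: how many standard errors the average excess return sits from zero.
t_stat = mean_excess / std_error

print(f"Average excess return: {mean_excess:.2%} p.a.")
print(f"t-statistic: {t_stat:.2f}")  # roughly 1.1 here -- well short of the +/-2 hurdle
```

Even with a respectable average excess return, the small number of observations and the year-to-year noise leave the t-statistic well below the conventional threshold of 2.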
Predicting future performance from past performance is difficult. ASIC reviewed over 100 academic papers from the last 40 years and concluded:
“Good past performance seems to be, at best, a weak and unreliable predictor of future good performance over the medium to long term.”
In addition, ASIC concluded that performance comparisons can be quite misleading if not done properly. Importantly, returns are only meaningful if they are adjusted for risk/volatility or if the comparison is "like with like".
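As one hedged illustration of what "adjusting for risk" can mean in practice, the sketch below compares two hypothetical funds on a simple Sharpe-ratio basis (excess return per unit of volatility). This is a common adjustment, not a method prescribed by ASIC or APRA, and the return, volatility and risk-free figures are invented:

```python
def sharpe_ratio(annual_return: float, volatility: float, risk_free_rate: float = 0.02) -> float:
    """Excess return per unit of volatility -- one simple way to adjust for risk."""
    return (annual_return - risk_free_rate) / volatility

# Hypothetical funds: Fund A earns more, but takes on far more volatility.
fund_a = sharpe_ratio(annual_return=0.09, volatility=0.14)
fund_b = sharpe_ratio(annual_return=0.07, volatility=0.06)

print(f"Fund A Sharpe ratio: {fund_a:.2f}")  # ~0.50
print(f"Fund B Sharpe ratio: {fund_b:.2f}")  # ~0.83 -- lower return, better risk-adjusted result
```

On raw returns Fund A looks superior, yet on this risk-adjusted basis Fund B delivered more return for each unit of risk taken – the kind of distinction a headline league table hides.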
Is past performance meaningful? Some investors may select a super fund based on its past 5 years of performance. However, past performance offers little insight into a fund’s future returns. For example, most US funds in the top quartile (25%) of returns over the previous 5 years did not maintain a top-quartile ranking in the following 5 years. In fact, as the graph below demonstrates[ii], the probability of a fund remaining in the top quartile was only 1 in 5.
This explains why we consistently see the phrase ‘past performance is no guarantee of future performance’ when super funds publish their returns.
Is the benchmark meaningful? Risk is consistently overlooked or ignored when benchmarking performance, often because it is inherently difficult to make apples-with-apples comparisons. Looking at industry super funds in Australia, the definition of a ‘growth’ fund varies greatly. Some super funds have only 65% exposure to growth assets (i.e., listed property and shares) in their ‘growth’ fund, whilst others have closer to 90%.
Many super fund portfolios also have between 20% and 60% allocated to “alternatives” which generally comprise private equity, hedge funds, commodities, infrastructure, distressed debt and unlisted real estate. Unlike publicly listed assets, which have daily pricing available, most of these alternative assets are illiquid, so their pricing is only updated periodically. There have been documented examples over time of super fund trustees deferring the downward revaluation of alternative assets to enhance their performance figures.
The APRA performance test neither measures the return members actually achieve, nor adequately measures the risks a fund took in achieving its returns. The test simply compares each fund’s returns with a benchmark built from its own strategic asset allocation.
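To make the mechanics concrete, here is a simplified sketch of how a benchmark built from a fund’s strategic asset allocation might be compared with its actual return, with a shortfall of more than 0.5% per annum marked as a ‘fail’. The weights, index returns and the way the threshold is applied are illustrative assumptions, not APRA’s published methodology:

```python
# Hypothetical strategic asset allocation: (weight, annualised index return)
# assigned to each asset class. All figures are illustrative only.
strategic_allocation = {
    "australian_shares":    (0.30, 0.078),
    "international_shares": (0.35, 0.092),
    "fixed_interest":       (0.25, 0.031),
    "cash":                 (0.10, 0.015),
}

# Benchmark return: the weighted average of the index returns.
benchmark_return = sum(weight * index_return
                       for weight, index_return in strategic_allocation.values())

actual_fund_return = 0.058   # hypothetical annualised net return
margin = actual_fund_return - benchmark_return

# A fund more than 0.5% p.a. below its benchmark is labelled as failing.
result = "FAIL" if margin < -0.005 else "PASS"

print(f"Benchmark return: {benchmark_return:.2%}")
print(f"Fund return:      {actual_fund_return:.2%}")
print(f"Margin:           {margin:+.2%} -> {result}")
```

Note what the comparison does not capture: how much volatility the fund took on, or whether the strategic asset allocation itself suited the members – it only measures the gap between the fund and a benchmark of its own asset mix.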
Asset allocation - Even within a growth asset allocation, the weighting towards Australian, international and emerging-market shares can vary greatly. Some investors may want to focus on franking credits, so they will have a higher allocation to Australian shares. During a period when international shares do particularly well, that portfolio with a higher Australian allocation will underperform. But if the portfolio delivered the outcome the investor was targeting, is that a bad result?
A growing number of investors want to incorporate sustainable principles focused on environmental, social and governance (ESG) factors into their portfolio. These types of portfolios are not benchmark aware, meaning they are indifferent to what stocks are in, say, the ASX 300 index – they select stocks purely on how they rate against a set of criteria. If ESG stocks underperform the market over a period, again, is this necessarily a failure in the eyes of the investor, especially if the returns are still adequate to meet their financial objectives?
A ‘fail’ does not necessarily mean poor returns - The chart below compares the actual performance of each fund in the APRA performance test. The chart also shows the asset size of each fund[v].
The chart highlights it is not just poor performing funds which have failed the test. In part, this reflects lifecycle funds, where the overall fund can fail the test despite some of the multiple underlying options producing good returns.
Observing that some funds which have delivered strong returns for their members have failed the test, while other lower-returning funds have passed, is intuitively hard to reconcile and could lead to consumer confusion and disengagement – the opposite of the desired outcome.
In addition, while a number of small funds failed the test, size isn’t a key indicator of the test result. Large funds also failed, showing that increased scale isn’t a panacea for good member outcomes.
Indeed, the distribution of ‘failed funds’ appears to be quite random when viewed by return and size – two factors many would assume to be indicators of success.
Investment philosophy – the Stewart Partners investment approach is based on research published by Nobel laureates identifying the factors that drive portfolio returns. The graph[iii] below shows that in share portfolios, the size, relative price and profitability factors have delivered long-term premiums around the world.
However, whilst these factors are evident over the long term, there can be short- and medium-term periods when they do not appear. The graph[iv] below shows how often these premiums are negative over 1-, 5- and 10-year periods. The ‘Market’ results compare the stock market to fixed interest returns.
It is understandable these factors are not always evident, because if they were constant, there would be no premium available.
After a 5- or 10-year period where a particular factor does not deliver a premium, the APRA results may encourage investors to look elsewhere. But as the graph below shows, this would be a mistake. The subsequent 10-year performance of these factors after a period of underperformance has historically been remarkably strong. The longer a factor fails to deliver a premium, the more you should consider including this tilt in your portfolio, as its expected return has increased!
Summary
There is no silver bullet performance test which enables us to perfectly compare one investment product with another, as we need to consider asset allocation, appropriate levels of risk for each investor, and the extent to which we purposely stray from a ‘market benchmark’ in pursuit of other objectives like sustainable investing.
We hope the APRA results help to eliminate funds that habitually underperform, but we are wary of selecting a super fund based on these results taken at face value.
Author: Rick Walker
You are welcome to share this article - and all our articles - with family and friends who may benefit from reading it.
[i] When reviewing data, academics look for a t-statistic greater than +2 or less than -2 to demonstrate that a result is statistically significant. A t-statistic measures how many standard errors an estimate is away from zero. Whilst we haven’t seen the APRA data, based on our experience talking to academics over the years, the probability of the t-statistic clearing that hurdle over such a short period is very low.
[ii] Source: Mutual Fund Landscape 2021, Dimensional Fund Advisors
[iii] All returns in USD except Australian stock returns. Source: Dimensional and Fama/French indices.
[iv] Data for the period June 1927 to December 2019 for market, size and relative price, and July 1963 to December 2019 for profitability. Source: Dimensional Fund Advisors.
[v] Source: https://www.firstlinks.com.au/really-best-way-remove-super-underperformers