Risk vs. Return

How to compare asset allocations using risk and return measures

Investment risk is the likelihood that your investment will perform poorly. This could mean that your portfolio suffers a loss, or that your portfolio does not perform as well as you expect.

Because no one can perfectly predict the future, we estimate a portfolio's future risk using several different historical measures.

Risk is usually stated as a percentage over a time period such as "15% per year." The risk characterized by the percentage depends on the risk measure being used. For example, one risk metric could measure the maximum percentage loss that a portfolio has suffered over the past 10 years. Another risk metric could average the monthly declines in a portfolio over the course of a year.

How do we measure risk?

Below are several popular ways to measure risk.

Standard deviation. This measures how much a portfolio's returns vary around the portfolio's longer-term average annual return.

Examples:

  • A portfolio that consistently returned 5% per year for four years has a standard deviation of 0.0%.
  • A portfolio returning +5%, then +10%, then -10%, then -5% has a standard deviation of 9.1%.
  • A portfolio returning +4%, then +5%, then +6%, then +5% has a standard deviation of 0.8%.

U.S. stocks typically have a standard deviation between 15% and 20%.

U.S. investment-grade bonds typically have a standard deviation between 5% and 10%.
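
As an illustration, the 9.1% figure above is the sample standard deviation (dividing by n - 1) of the four annual returns. Below is a minimal Python sketch, assuming annual return data; in practice, standard deviation is often computed from monthly returns and then annualized.

  import statistics

  # Annual returns from the second example above, as decimals.
  returns = [0.05, 0.10, -0.10, -0.05]

  # Sample standard deviation (n - 1 in the denominator).
  std_dev = statistics.stdev(returns)
  print(f"Standard deviation: {std_dev:.1%}")   # 9.1%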

Maximum drawdown. This measures the largest decrease in a portfolio's value, from peak to trough. It should be measured over a period of at least 7 years that includes an overall market drop. If you only look at maximum drawdown over a period when asset prices are rising, you will get a small percentage that says little about the portfolio's risk during a downturn.

Examples:

  • A portfolio that consistently returned +5% per year for four years has a maximum drawdown of 0%.
  • A portfolio returning +5%, then +10%, then -10%, then -5% has a maximum drawdown of 14.5%.
  • A portfolio returning +4%, then +5%, then +6%, then +5% has a maximum drawdown of 0%, since there were no losses during the period.
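
A maximum drawdown calculation only needs the running peak of the portfolio's value. The Python sketch below reproduces the 14.5% figure from the second example, assuming one portfolio value per year.

  # Annual returns from the second example above.
  returns = [0.05, 0.10, -0.10, -0.05]

  value = 1.0          # portfolio value, normalized to 1.0 at the start
  peak = 1.0           # highest value seen so far
  max_drawdown = 0.0   # largest peak-to-trough decline seen so far

  for r in returns:
      value *= 1 + r
      peak = max(peak, value)
      max_drawdown = max(max_drawdown, (peak - value) / peak)

  print(f"Maximum drawdown: {max_drawdown:.1%}")   # 14.5%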

Downside deviation. This measures how much a portfolio's returns fall below a minimum acceptable return, often set to 0%. This is similar to standard deviation, except that only returns below the threshold are considered. The reasoning behind this is to avoid penalizing a portfolio for its positive returns.
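
Conventions for downside deviation vary; a common one squares only the shortfalls below the minimum acceptable return and averages them over all periods. A sketch under that assumption:

  import math

  returns = [0.05, 0.10, -0.10, -0.05]   # hypothetical annual returns
  mar = 0.0                              # minimum acceptable return (assumed 0%)

  # Returns above the MAR contribute zero; only shortfalls are squared.
  shortfalls = [min(r - mar, 0.0) ** 2 for r in returns]

  downside_dev = math.sqrt(sum(shortfalls) / len(returns))
  print(f"Downside deviation: {downside_dev:.1%}")   # about 5.6%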

Beta. This measures how sensitive a portfolio's return is to movements in a benchmark. The benchmark is typically an index of stocks, such as the S&P 500. A beta of 1.0 means that the portfolio's returns typically move in step with the underlying index. A beta of 1.1 means that the portfolio will typically vary 1.1% for every 1.0% change in the benchmark.
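
Beta is the slope of the portfolio's returns regressed on the benchmark's returns, which works out to the covariance of the two return series divided by the variance of the benchmark. A sketch using made-up monthly returns:

  # Hypothetical monthly returns, for illustration only.
  portfolio = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
  benchmark = [0.015, -0.01, 0.025, 0.01, -0.015, 0.03]

  n = len(benchmark)
  p_mean = sum(portfolio) / n
  b_mean = sum(benchmark) / n

  # Beta = covariance(portfolio, benchmark) / variance(benchmark).
  covariance = sum((p - p_mean) * (b - b_mean) for p, b in zip(portfolio, benchmark)) / (n - 1)
  variance = sum((b - b_mean) ** 2 for b in benchmark) / (n - 1)
  beta = covariance / variance

  print(f"Beta: {beta:.2f}")   # about 1.26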

How do we measure return?

A portfolio's return is its increase in value from year to year. For example, if a portfolio has a value of $10,000 on Dec 31, 2014 and a value of $11,200 on Dec 31, 2015, then the one-year annual return is 12.0%. When measuring return over more than one year, there are different methods to calculate the average or aggregate return.

Two popular ways to measure return are as follows:

Average return. This simply takes the average (arithmetic mean) of all the returns in the time period. 

Examples:

  • A portfolio that consistently returned +5% per year for four years has an average return of 5.0%.
  • A portfolio returning +5%, then +10%, then -10%, then -5% has an average return of 0.0%.
  • A portfolio returning +10%, then +20%, then +10%, then +20% has an average return of 15.0%.

Compound annual growth rate. This is also called CAGR, compound return, or year-over-year growth rate. This method assumes that you reinvest your returns from the prior year. The CAGR is the single constant annual return that, compounded over the period, would produce the same ending value as the portfolio's actual year-by-year returns.

Examples:

  • A portfolio that consistently returned +5% per year for four years has a CAGR of 5.0%.
  • A portfolio returning +5%, then +10%, then -10%, then -5% has a CAGR of -0.3%.
  • A portfolio returning +10%, then +20%, then +10%, then +20% has a CAGR of 14.9%.
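
The sketch below reproduces both measures for the second example: the arithmetic average is 0.0%, but compounding the same returns leaves the portfolio slightly below its starting value, so the CAGR is about -0.3%.

  # Annual returns from the second example above.
  returns = [0.05, 0.10, -0.10, -0.05]

  # Average (arithmetic mean) return.
  average = sum(returns) / len(returns)

  # CAGR: compound $1 through every year, then take the n-th root of the result.
  growth = 1.0
  for r in returns:
      growth *= 1 + r
  cagr = growth ** (1 / len(returns)) - 1

  print(f"Average return: {average:.1%}")   # 0.0%
  print(f"CAGR: {cagr:.1%}")                # -0.3%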

Note that return calculations can be very sensitive to the time period chosen. For example, a 5-year return of 12% per year sounds attractive until you realize that 7 years ago the portfolio fell in value by 50%. The 7-year return for that same portfolio would be much lower than 12% per year.

How do we compare Risk vs. Return?

One way to combine risk and return is to use a "risk-adjusted return," or a score that combines risk and return measures.

Several companies also provide ratings and scores that compare a portfolio's risk and return and then generate a single number or score for each portfolio.

Below are several ways to compare risk vs. return using a single number or value.

Alpha (or Jensen's alpha). This percentage shows the extra return that a portfolio generated above its expected return, while accounting for the portfolio's volatility. The expected return is usually based on a broad market benchmark such as the S&P 500.
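
A common way to compute Jensen's alpha uses a CAPM-style expected return: the risk-free rate plus beta times the benchmark's excess return. A sketch with hypothetical inputs:

  # All inputs below are hypothetical annual figures, for illustration only.
  portfolio_return = 0.12   # the portfolio earned 12%
  benchmark_return = 0.10   # the benchmark (e.g., S&P 500) earned 10%
  risk_free = 0.02          # e.g., a Treasury-bill rate
  beta = 1.1                # the portfolio's beta versus the benchmark

  # Expected return under CAPM, then alpha as the return earned beyond it.
  expected_return = risk_free + beta * (benchmark_return - risk_free)
  alpha = portfolio_return - expected_return

  print(f"Alpha: {alpha:.1%}")   # 1.2%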

Sharpe ratio. This is a unitless ratio of % return to % risk, so a higher Sharpe ratio is better. Return is defined as the excess return over a benchmark, where the benchmark is a riskless investment such as a Treasury bill or cash equivalent. The risk measure is standard deviation. This measure was first described by William Sharpe in 1966.
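
A sketch with hypothetical annual returns, using the arithmetic mean return over the period and an assumed 2% risk-free rate:

  import statistics

  returns = [0.12, 0.08, -0.05, 0.15, 0.06]   # hypothetical annual returns
  risk_free = 0.02                            # assumed risk-free rate

  excess_return = statistics.mean(returns) - risk_free
  risk = statistics.stdev(returns)            # sample standard deviation
  sharpe = excess_return / risk

  print(f"Sharpe ratio: {sharpe:.2f}")   # about 0.68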

VizMetrics V-Score. This is a risk-adjusted scoring system that rates each portfolio from 0 to 100. A score of 100 is best and zero is worst. A score of 100 means that a portfolio has outperformed an optimal global portfolio which uses 14 global asset classes. A score of zero means that a portfolio has done worse than the worst of the 14 asset classes.

Morningstar Star Rating. This is a scoring system that rates each portfolio from 5 stars (highest) to 1 star (lowest). The star ratings are assigned within specific peer categories (e.g., international portfolios, stock portfolios, bond portfolios, etc.) so it's possible for a lower-performing portfolio to get a high star rating, as long as it is better than the other portfolios in its category.

Lipper Leaders. This is a scoring system that rates each portfolio from 5 (highest) to 1 (lowest). The ratings are based on peer groups, similar to the Morningstar Star Rating system.

Modigliani-Modigliani (or M-squared). M-squared is a risk-adjusted performance measure, shown as an annual total return %. It is derived from the Sharpe ratio, but M-squared is in units of percent return. Since M-squared is shown in units of percent, this measure can be easier to understand than the unitless Sharpe ratio. M-squared inputs include the portfolio's standard deviation, its total return, the risk-free rate (such as the 3-month T-bill rate), and the standard deviation of a benchmark portfolio (such as the S&P 500). This measure was first described by Nobel Prize winner Franco Modigliani and his granddaughter Leah Modigliani.
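
In its common form, M-squared rescales the portfolio's Sharpe ratio to the benchmark's volatility and adds back the risk-free rate, which turns the unitless ratio into a percent return. A sketch with hypothetical inputs (the benchmark standard deviation and risk-free rate below are assumptions):

  # Hypothetical annual figures, for illustration only.
  sharpe = 0.68          # the portfolio's Sharpe ratio (from the sketch above)
  benchmark_std = 0.16   # benchmark's standard deviation (e.g., a stock index)
  risk_free = 0.02       # risk-free rate (e.g., 3-month T-bill)

  # M-squared: the return the portfolio would have earned at the benchmark's risk level.
  m_squared = risk_free + sharpe * benchmark_std

  print(f"M-squared: {m_squared:.1%}")   # about 12.9%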

Treynor ratio (or reward-to-volatility or Treynor measure). This unitless ratio compares return and risk. Return is defined as the excess return over a benchmark, where the benchmark is a riskless investment such as a Treasury bill or cash equivalent. The risk measure is beta. This measure was first described by Jack Treynor.

Sortino ratio. This unitless ratio compares return and risk. Return is defined as the excess return over a minimum acceptable return. The minimum acceptable return can be set to 0%, to the risk-free (cash) return rate, or to some other "hurdle rate." The risk measure is downside deviation. This measure was first described by Frank Sortino.
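
A sketch using the same hypothetical returns as the Sharpe example, with the minimum acceptable return assumed to be 0%; the only change from the Sharpe calculation is that risk is measured with downside deviation:

  import math

  returns = [0.12, 0.08, -0.05, 0.15, 0.06]   # hypothetical annual returns
  mar = 0.0                                   # minimum acceptable return (assumed 0%)

  excess_return = sum(returns) / len(returns) - mar

  # Downside deviation: average the squared shortfalls below the MAR, then take the root.
  downside_dev = math.sqrt(sum(min(r - mar, 0.0) ** 2 for r in returns) / len(returns))

  sortino = excess_return / downside_dev
  print(f"Sortino ratio: {sortino:.2f}")   # about 3.2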