Technically Speaking, July 2017

MTA becomes CMT ASSOCIATION

Beginning July 15th, after nearly 50 years of service to the financial industry, the Market Technicians Association is becoming the CMT Association and updating the organization’s brand. As members of the organization, we cannot thank you enough for creating this remarkable home for the advancement of technical analysis. Committed volunteers and engaged participants are the reason our discipline has the professional respect it enjoys today. As we strive to uphold the vision of our founders while adapting to a rapidly changing industry, we hope that you will help us extend the reach of the organization by sharing this news with your colleagues, clients, and professional contacts.

The Association’s leadership, including Board Members, founding members, and senior staff, recommended that members approve changing our organization’s name. Having carefully considered all implications of the legal name change, the leadership felt it was imperative to reduce the number of acronyms in the industry. Pending approval by a vote of the Membership, you’ll see the new look and name everywhere we appear in public: our website, publications, digital webcasts, LinkedIn, Facebook, and Twitter. Very soon you’ll see new signage and materials at your local chapters as well. The new name better matches what we’ve become since the early 1970s: the preeminent global designation for financial professionals committed to advancing the discipline of technical analysis.

When our association began, we operated for years without a credentialing body, exam process, or charter. Since our legal incorporation in 1973, we’ve kept the name Market Technicians Association. Although we slightly altered our colors in late 2015 as we built out the new website, we did not consolidate our two acronyms, MTA and CMT. Today, the CMT Program is central to the value of the Association as well as the professional identity of nearly all Members. The new name reflects our continued commitment to our global members and our unique capacity to advance the discipline of technical analysis among industry professionals. Aligning with best practice allows us to consolidate the MTA and CMT acronyms, bringing renewed clarity and recognition to our CMT charterholders worldwide. Beyond these visible changes, the association will continue to operate in its current structure with no change in staff or volunteer leadership. Your contacts for all ongoing projects and initiatives will remain unchanged, as will the mission and goals of the association. The CMT Association will continue to be a place for collegial discourse and the exchange of ideas among like-minded professionals. Whether you are a charterholder or not, your Membership status will not change.

Our brand goal was to align with industry best practice. In our field, the notable organizations are all designation-centric (CFA Institute, CFP Board, CAIA Association, etc.). We also aimed to reduce confusion in the marketplace around our multiple acronyms, better matching our name to our core value proposition and the users we serve. A small team of staff and volunteers worked with professional designers to find something that appeared crisp, approachable, professional, modern, and connected.

The “M” of the logo creatively represents a bar chart, stylized and emphasized through color to connote the importance of Markets within our name. The study of price behavior is about markets, in contrast to the study of fundamentals, which is about the assessment of companies. Our decision to use the acronym in the logo was inspired by the diversity of our members. A “CMT” is a portfolio manager, a research analyst, a financial advisor, an asset allocator, a quantitative trading system developer, and many more things – but always a professional committed to advancing the discipline of technical analysis and upholding the highest ethical standards in the industry.

We hope you like this new look for the CMT Association! Look out for more updates and a broader industry presence as we continually try to better serve our members with the preeminent global designation and highest member value in all our programming and initiatives.

What's Inside...

NORTHERN CALIFORNIA MTA CHAPTER MEETING - JUNE 22, 2017, SAN FRANCISCO, CA

Steven Moffitt, PhD presented to an enthusiastic crowd during the meeting...

BUYING OUTPERFORMERS IS TOO LATE

Editor’s note: this paper was originally published in the Research section at Optuma.com.

ABSTRACT: We are told two rules in...

THINK & TRADE LIKE A CHAMPION: THE SECRETS, RULES & BLUNT TRUTHS OF A STOCK MARKET WIZARD

“I have always said that trading triggers the same neurological impulses as going into the jungle unarmed and having a...

“MAYDAY!” CALL FOR EUR/USD?

Editor’s note: this analysis was originally prepared on May 1, 2017.

Students of market price action, commonly known as market...

EASY AND SUCCESSFUL MACROECONOMIC TIMING

Editor’s note: This paper was originally published at MathInvest.com and is reprinted with permission.

Abstract: When the economy takes a...

THE DIFFERENCE BETWEEN STATISTICS AND STRATEGY

Editor’s note: this article was originally published in the Pension Partners blog on June 1, 2017 and is reprinted with...

CHART OF THE MONTH

For additional perspective on the average drawdown the following chart is reposted from Pension Partners June Market Webinar.

NORTHERN CALIFORNIA MTA CHAPTER MEETING - JUNE 22, 2017, SAN FRANCISCO, CA

Steven Moffitt, PhD, presented to an enthusiastic crowd at the meeting of the Northern California MTA chapter at the Le Meridien Hotel in the heart of the San Francisco financial district. The meeting began when Rick Leonhardt, CMT, co-chair of the local chapter, provided an update on the MTA, including recent changes and the current growth of the CMT program. After this overview, another of the chapter’s co-chairs, Rick Lehman, introduced the event’s speaker.

Steven D. Moffitt holds a PhD in Statistics and an MA in Mathematics and is currently an Adjunct Professor of Finance in the Stuart School of Business at the Illinois Institute of Technology. He has assisted in the development of an index arbitrage system for one of the largest index arbitrageurs in the United States; developed and tested risk management systems for the largest options clearing firm and for the then fifth-largest bank in the United States; served as director of research for a large Commodity Trading Advisor; conducted seminars on trading and risk management; selected trading advisors for two money managers; and developed trading software and systems. Steven recently published a two-volume book entitled “The Strategic Analysis of Financial Markets”.

Dr. Moffitt’s presentation was entitled “A New Look at Technical and Fundamental Predictors of Market Crashes”. Three specific crash detectors and/or predictors were examined in detail:

  1. The Bond Stock Earnings Yield Differential (BSEYD) Model
  2. The Log Periodic Power Law (LPPL) Model
  3. The Shiryaev-Roberts Crash Detection Model

Each model was discussed thoroughly, along with its benefits and drawbacks. This was followed by a question-and-answer session that further elaborated on the elements of the three models and their application in trading systems.

After the presentation, guests enjoyed a private cocktail reception at which the lively discussion of Technical Analysis continued. Many area Technical Analysis practitioners and educators were in attendance, which made for an interesting and informative afternoon for all.

Contributor(s)

BUYING OUTPERFORMERS IS TOO LATE

Editor’s note: this paper was originally published in the Research section at Optuma.com.

ABSTRACT: We are told two rules in finance: “buy low and sell high” and “past performance is not a guarantee of future returns”. Yet many advisers and investors will recommend the best performing securities based on that very assumption. This paper will show that, to maximize returns, there has to be a different way to examine when a security should be bought. We do this by using Relative Rotation Graphs (RRG) to test whether absolute returns can be improved by responding to the relative trend performance on the RRG charts. The paper will also explain the basic concepts behind the RRG and give the results from many tests.

INTRODUCTION

It’s a commonly used adage that investors should buy low and sell high. Everyone who is interested in the capital appreciation of their investments tries to live by that rule to maximize capital growth. Another often-quoted saying in the financial services industry is that past performance does not guarantee future returns. Yet it is common for a Financial Advisor to sit with a client and recommend the funds that have performed best over a specific period of time, basing their advice on the premise that an outperforming fund will continue to outperform. This is essentially buying high and hoping that past performance will lead to future returns.

It is the purpose of this paper to show how buying the best performing securities hurts portfolio returns, and that investing in the worst performing securities (buying low) leads to incrementally better returns. A key step in this process is quantifying the performance of one security against a benchmark. This is typically done by dividing the value of the security by the value of the benchmark. We call this “Relative Strength”. When the Relative Strength line is rising, the security is outperforming the benchmark on a capital returns basis.

Once we have the Relative Strength for the security, we then need to compare multiple securities to identify the candidates that we would want to include in our portfolio. The issue with the typical values for the Relative Strength is that there has not been a simple, meaningful way to rank securities by their relative performance. A higher priced security will have a higher relative value when divided by the index than a lower priced security. Does that mean the returns are better? Not necessarily.

Relative Rotation Graphs (RRG®) were developed to solve this issue by normalizing the Relative Strength calculations and providing a view of them on a single chart. We will use the RRG heavily in this paper.

The final step in this paper will be to explore the quantitative results across thousands of securities to prove the idea and to see what other filters we can add to improve our results. 

RELATIVE STRENGTH

Relative Strength is derived by mathematically comparing a security with a benchmark. A typical example is to divide the price of an equity by the price of an index. For example, in the image below we have a chart with Microsoft (MSFT) as the black line and the S&P 500 Index as the green line. At the bottom of the chart, the blue shaded value is the result of dividing MSFT by the S&P 500.

The chart reveals that the capital growth of MSFT is greater than the capital growth of the broader S&P 500 each time the shaded area is increasing. Conversely, when the shaded area is falling, MSFT is underperforming the S&P 500.
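
For readers who want to reproduce the calculation, it is a simple element-wise division of two aligned price series. Below is a minimal Python (pandas) sketch; the input Series names are hypothetical, and any two aligned close-price series will do:

    import pandas as pd

    def relative_strength(security: pd.Series, benchmark: pd.Series) -> pd.Series:
        """Divide a security's closes by the benchmark's closes, bar by bar.
        A rising result means the security is outperforming the benchmark
        on a capital-returns basis; a falling result means it is lagging."""
        aligned = pd.concat([security, benchmark], axis=1, join="inner")
        return aligned.iloc[:, 0] / aligned.iloc[:, 1]

    # e.g., rs = relative_strength(msft_close, spx_close)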

It’s important to remember that an index like the S&P 500 is, in essence, an average of the capital values of its constituent equities. In practice, indices are more complex than a simple average: they include factors that are adjusted every time equities are added and removed, to maintain a smooth historical dataset. Still, at any time there will be constituent equities that are outperforming the index, and by definition there must also be constituent equities that are underperforming.

Historically, one of the first Relative Strength papers was published by H.M. Gartley in 1945.[1] Robert Levy revived the concept in 1967[2], but it was not popularized until 1993, when the Journal of Finance published “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency” by Narasimhan Jegadeesh and Sheridan Titman.[3]

WHY IS THIS IMPORTANT?

In today’s investment environment, investors can easily choose direct equity investment, commodities, currencies, indexes, or ETFs. In fact, ETFs give investors a simple way to gain exposure to different asset classes and international investments.

In a direct equity portfolio, the Portfolio Manager needs to provide alpha over their chosen benchmark, or investors are likely to transfer their funds into an index fund. Alpha is the measure of active returns over the benchmark’s returns. If a Portfolio Manager can overweight the outperforming equities in their portfolio, and underweight (or sell) the underperformers, then they can provide alpha over the index. Overweighting an equity means the Portfolio Manager looks at the equity’s weight in the benchmark and chooses to hold a higher weight of that equity in their portfolio. Many of the world’s larger portfolios mirror the constituents of their chosen benchmark and then overweight and underweight positions to provide extra alpha.

It’s important to note that in a strong Bull market, all the index constituents may be rising in absolute value, but some will still be performing better than the index (average). Using Relative Strength allows the Portfolio Manager to identify the equities that are outperforming on a relative basis.

We need more than binary over/under performing.

The issue with Relative Strength has always been that the actual numerical value of the division has no meaning beyond the binary positive/negative result. How can we identify the very best outperformer from all the outperformers?

If we look at the values of Microsoft divided by the S&P 500 and Google (Alphabet) divided by the S&P 500, we can see the problem. The Google value is more than ten times bigger than the Microsoft value. Does that mean Google should be in the portfolio?

For Relative Strength to be truly useful, we must have a way to define the trend of the relative strength (Relative Trend) and normalize these values so that we can actually rank the equities by their relative performance.

Relative Trend is defined as the trend (the rising or falling angle) of the security’s Relative Strength line, i.e., the security divided by the benchmark.

INTRODUCING RELATIVE ROTATION GRAPHS

The Relative Rotation Graph (RRG) is a unique way to view multiple securities on a single chart, showing their current and historical Relative Trend and, by extension, their relative performance. The RRG was developed by Julius de Kempenaer, a sell-side analyst in The Netherlands. de Kempenaer would visit Portfolio Managers and be asked for his opinion on equities. Often, they were equities that he had not analysed before. What was even more difficult was that the Portfolio Managers would want to know how that one equity compared to others.

It was from the need to be able to answer those questions that de Kempenaer created the concept of the RRG. Early incarnations of the RRG were manually built in Excel, but today a number of software packages have included RRGs.

The following is a weekly RRG of the ten S&P 500 Sectors with the S&P 500 index as the benchmark. That is, we are looking at the relative performance of the ten sectors compared to each other. From an equity selection perspective, we would do this if we were looking for the Sector with the best Relative Trend so we can examine the equities within it.

The values are plotted on a grid that is centered at 100,100. The x axis (JdK RS-Ratio) is a measure of the normalized Relative Trend. The y axis (JdK RS-Momentum) is the Rate of Change of the Relative Trend or its Momentum. This gives an indication of the velocity of the Relative Trend.

The grid has been colored to highlight the four main quadrants, with 100,100 in the center (a small classification sketch in code follows the list below).

  • Leading: Securities in the Leading quadrant are in a relative uptrend versus the benchmark, and the trend is getting stronger. A portfolio weighted toward these securities is expected to outperform the benchmark.
  • Weakening: Securities in the Weakening quadrant still have a positive Relative Trend, but the momentum of the trend has changed. The falling momentum is an early warning that the Relative Trend is about to change. A Portfolio Manager should watch these securities, as there is a high probability that they could start to underperform the benchmark.
  • Lagging: Securities in the Lagging quadrant are in a relative downtrend versus the benchmark, and the trend is getting weaker. These securities are expected to underperform the benchmark.
  • Improving: Securities in the Improving quadrant still have a negative Relative Trend, but the momentum of the trend is improving. The rising momentum is an early indication that the Relative Trend is about to change.
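
The JdK RS-Ratio and RS-Momentum calculations themselves are not published, so the sketch below should be read as an illustrative stand-in, not de Kempenaer’s formula: it normalizes Relative Strength with a rolling z-score re-centered at 100, takes its one-period change as a momentum proxy, and classifies the resulting point into the four quadrants described above.

    import pandas as pd

    def rrg_coordinates(security: pd.Series, benchmark: pd.Series,
                        window: int = 14) -> pd.DataFrame:
        """Illustrative RRG-style coordinates (NOT the proprietary JdK formulas).
        x: Relative Strength as a rolling z-score, re-centered at 100.
        y: one-period change of x (the velocity of the Relative Trend)."""
        rs = security / benchmark
        z = (rs - rs.rolling(window).mean()) / rs.rolling(window).std()
        x = 100 + z
        y = 100 + x.diff()
        return pd.DataFrame({"rs_ratio": x, "rs_momentum": y})

    def quadrant(x: float, y: float) -> str:
        """Classify a point on the 100,100-centered RRG grid."""
        if x >= 100:
            return "Leading" if y >= 100 else "Weakening"
        return "Improving" if y >= 100 else "Lagging"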

Each of the securities on the RRG has an arrow showing the most recent observation for that security. The tail shows the reader where the security has been: the multiple marks on the tail are the previous observations. On a weekly RRG, the mark closest to the arrowhead is the observation from the previous week, and so on. This gives the viewer a sense of where the security has been in the past.

As analysts, we have all heard about the concept of Sector Rotation and have known that performance is cyclical by observing sine waves on charts. What de Kempenaer was finally able to visualize is that there is a predominant clockwise rotation of Relative Trend that securities follow. It was the first time analysts had graphical proof that many of today’s outperforming securities would eventually rotate and underperform the benchmark.

In the weekly RRG of the S&P 500 Sectors above, the red Energy Sector (S5ENRS) was well into the Leading quadrant six weeks ago (the length of the tail). While it still has positive Relative Trend at the time this chart was drawn, the trajectory shows that it is rotating into the Weakening quadrant.

ROTATION PROBABILITIES

Before we dig in and find the best place on the RRG to add positions to our portfolio, we first need to establish that the rotations do in fact occur. When you use software that allows you to scroll through the history, this is visually evident (and we encourage interested readers to try that). In this paper, without animated charts, we need to rely on our statistics.

To test this, we examined over 20,000 rotations. Every time an equity exited a quadrant, we noted which quadrant it entered. In the table below, the first column is the quadrant the equity exited. The subsequent columns show the probability that the equity rotated into each quadrant, e.g., when a security was in Leading, 92% of the observations moved into Weakening.
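
The counting procedure can be reproduced in a few lines of Python. This is a sketch of the bookkeeping only (the quadrant series would come from an RRG calculation such as the one above); it collapses consecutive repeats so each quadrant visit counts once, then tallies where each exit led:

    from collections import Counter

    def transition_probabilities(quadrants):
        """quadrants: chronological list of labels, one per bar.
        Returns P(entered quadrant | exited quadrant)."""
        runs = [q for i, q in enumerate(quadrants)
                if i == 0 or q != quadrants[i - 1]]   # one entry per visit
        counts = {}
        for prev, nxt in zip(runs, runs[1:]):
            counts.setdefault(prev, Counter())[nxt] += 1
        return {q: {n: c / sum(ctr.values()) for n, c in ctr.items()}
                for q, ctr in counts.items()}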

Here are all the results

The statistics show us that once an equity is in the Leading or Lagging quadrant, the probability that it will continue on the standard clockwise rotation is very high. While the progression from the Weakening and Improving quadrants does not have the same level of certainty, the probability of standard rotation is still higher than random.

Statistically, from many observations, we can see that the clockwise rotation has a high probability of continuing.

Note: This study does not consider how long the security stays in each quadrant. That is a topic for more study.

BUY HIGH AND SELL LOW

Now that we have laid all the groundwork, we can examine our assertion that there is a high probability that securities with positive Relative Trend (outperformers) will soon rotate around and underperform, and that the best time to add securities to the portfolio is when they have negative Relative Trend, since there is a high probability that they will soon begin to outperform the benchmark.

Testing the Theories on S&P 500 Equities

To examine this, we set up a test on the S&P 500 constituents from October 2001 to October 2009. The testing framework we use is fully explained in Appendix 1. At this point, it is important to understand that we are not doing traditional backtests, as the results of a backtest are at the mercy of the portfolio construction rules. A Signal Test finds every single occurrence of the signal and then measures the security’s performance from the signal.

In our tests, we display 21 periods forward and back which is approximately a calendar month in each direction. We like to keep the tests short to ensure that we do not overlap multiple signals. The same tests can be done on any timeframe.
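
Optuma’s Signal Tester is a commercial tool, but the core idea is easy to sketch. Assuming a close-price Series and an aligned boolean signal Series (both hypothetical inputs), a minimal Python version measures the unmanaged forward return a fixed number of bars after every signal and averages them:

    import pandas as pd

    def signal_test(close: pd.Series, signal: pd.Series, horizon: int = 21) -> dict:
        """Average unmanaged forward returns over `horizon` bars after each signal."""
        fwd = close.shift(-horizon) / close - 1.0   # forward % return per bar
        returns = fwd[signal.fillna(False)].dropna()
        return {"signals": len(returns),
                "mean_return": float(returns.mean()),
                "median_return": float(returns.median())}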

In our first set of tests, we wanted to measure the performance if we simply took a position in one of the S&P 500 equities as it changed quadrant and then held it for one month.

Many of the scripts in the rest of this paper reference the quadrants by number. These have been shown in the image below as a reference.

Here is the Optuma script. The JDKRS function, which is set with the SPX benchmark, has the quadrant as one of its outputs. We want to know when it changes to the Leading (0) quadrant.

JDKRS (INDEX=SPX:WI).Quadrant Change To 0
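
In pandas terms, the same signal is a change detection on a quadrant-code series. A hypothetical translation (JDKRS is an Optuma function; quad is an assumed Series holding one quadrant code per bar, with 0 = Leading):

    # Hypothetical equivalent: fire on the bar the quadrant code becomes 0 (Leading).
    entered_leading = (quad == 0) & (quad.shift(1) != 0)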

The following is an image of the Signal Tester output that we are using. The first test is testing the signal when any security enters the Leading quadrant.

There are a number of observations that we can make about these results:

  1. The blue bar at the top shows us the dispersion of the signals throughout our testing period. If all our signals were clustered at one time, our signal would not have practical use as it would be too dependent on a single event in time.
  2. Under the title, we can see that there are nearly 26,000 signals that were tested in under 8 seconds. It is important in any test that we have a large sample set to work with.
  3. The Green Band is the mean return from all the signals that have been analyzed. Note that this is the absolute return, not the relative returns. It shows the period 21 days prior and 21 days after the signal. It is interesting to see the strong move up into the signal. This is as expected, since the relative outperformer must have had a significant absolute rise to outperform the benchmark.
  4. The Orange Band is the S&P 500 index for the same tests. Whenever we get a signal, we also take a sample of the Index and measure the returns over the same period. This allows us to see if we have Alpha over the index for our tests.
  5. The “Profit Analysis” allows us to see inside the average. Since the average is made up of thousands of signals, it is important for us to know the dispersion of the signals around the average. This graphic shows us that the dispersion is “high and tight”, meaning that we can have a high level of confidence in the average.
  6. The “Monte Carlo Simulation” is the result of taking random signals and measuring the compounded returns. This is done 20,000 times to give us a simulation of what we could expect if we used this strategy. Our goal is to see an increase in the Probability of Gain.
  7. Finally, the Statistics Panel is full of important information that we need to examine. We will dig into that a lot more as we explore the results.

In this following image, we ran the tests for all four quadrants and arranged the four result sets so they are in the RRG quadrant that corresponds to their test.

What is immediately obvious is where the green line was prior to the change of quadrant. For example, with the Lagging test (bottom left), the average price of the equities was falling before the signal.

The image shows us that the average return over 21 days is greater when securities enter the Lagging quadrant. What’s even more important is that the average return is greater in the first few days. The initial few days of the Leading test are flat, as seen in the image below.

It should be noted that all quadrants produce gains. These tests of index constituents are subject to Survivorship Bias, and further tests need to be done with Survivorship-Bias-free data. (Survivorship Bias occurs when we consider today’s index members over a timeframe in the past. Some of today’s equities may not yet have been included; others that were in the index at the time may have since underperformed and been removed. The result is that Survivorship Bias gives the results a positive bias.)

The next thing we need to do is dig into the statistics of our four tests. We are going to set them up a bit differently so we can compare them.

Immediately we can see that historically, equities entering the Lagging quadrant—where they have negative relative trend—have the highest returns. They also offer the largest units of reward for every unit of risk.

The following example gives a small insight into why this is happening. Here we have set the bars to use a RRG Quadrant color scheme. By the time the security is marked as having negative Relative Trend, the next cycle of the market is pushing it back up.

Notice that 21 bars after the red bars the market is usually higher, and 21 bars after the green it is usually lower. Because normalization requires a level of smoothing, a delay in the signal is introduced. It may be that the phenomenon works as a function of timing the next cycle.

This allows us to see, from over 25,000 observations, that we can gain incremental capital improvement by focusing on the underperformers. These are very simple tests. We need to ask whether there is more we can do to improve the results.

Introducing RRG Headings

One idea we have been working with at Optuma is to consider the “angle” and “heading” of the arrows on the RRG. If we consider the RRG as a compass, with the angles on the image below, we can start to explore very different relative relationships. When we did this, one of the first things we noticed was that heading was very important: any time the arrow was heading Northeast (45 degrees) on the chart, it gave positive returns.

Conversely, any time it was pointing Southwest (225 degrees), it gave negative returns. We know from our previous tests that selecting equities in the Lagging quadrant gives the best results. In the next test, we want to see if we can improve those results by selecting only the securities that are in the Lagging quadrant and pointing Northeast on the RRG.

Following is the script that we used. Our goal is to find only the equities that are in the Lagging quadrant (not just entering it, as in the last test) and that have rotated past the 45-degree heading:

Here are the results

As you can see from this test, we can secure better returns with lower risk and volatility by layering in the heading of the equity on the RRG. This extra condition filters out more than half the signals we had in our first set of quadrant tests. That is exactly what we want to see: our goal is to use the probabilities and analysis to filter out the signals with a lower probability of gain, leaving us with the high probability signals. Can we take this further?

Another observation we have made is that securities further away from the 100,100 origin on the chart have the potential to create more alpha. To test that, we add an extra condition to our script: the vector distance of the security from the origin must be greater than 2.
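
Both filters are straightforward geometry on the RRG plane. A sketch under stated assumptions: heading is measured here as a compass bearing (0 degrees = North, 90 = East) from the previous observation to the current one, “passed the 45-degree heading” is read as any Northeast-pointing bearing, and distance is the straight-line distance from the 100,100 origin; Optuma’s exact definitions may differ.

    import math

    def rrg_heading(x0, y0, x1, y1):
        """Compass bearing of the arrow from (x0, y0) to (x1, y1):
        0 = North, 45 = Northeast, 90 = East, 225 = Southwest."""
        return math.degrees(math.atan2(x1 - x0, y1 - y0)) % 360

    def passes_filters(x_prev, y_prev, x, y):
        in_lagging = x < 100 and y < 100
        northeast = 0 <= rrg_heading(x_prev, y_prev, x, y) <= 90  # assumed band
        far_enough = math.hypot(x - 100, y - 100) > 2             # vector distance
        return in_lagging and northeast and far_enough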

Here are the results added to our table

The volatility of this signal is a bit higher, and we would expect that, since securities further from the origin on the RRG are more volatile. The results, however, show that the extra volatility is producing better results from almost half the number of signals.

The process of adding extra layers has allowed us to fine-tune our strategy and gain better results. Our goal is always to increase the returns and Risk Reward and/or reduce the volatility.

Can we produce similar results with Sectors?

SPDR Sectors

Another test to consider is the nine major SPDR ETFs over the same period. Using the SPDRs has some advantages: they mitigate the Survivorship Bias issues that we have with direct equity tests. Again we are looking for the results over a month to see if investing in the underperforming SPDR sectors would give us alpha. As expected, there are fewer signals, which also reduces the volatility of the signals. The Monte Carlo analysis is not used because of the lower number of signals.

Below are the images and the statistics (including our 45-degree stats).

Again, we see confirmation that investing in the SPDRs when they are outperforming the SPY gives us negative returns. When we look at the statistics, we see that the Improving quadrant is even better than the Lagging. One thing to consider is the difference between the Green (sector) and Yellow (SPY) lines on the images above. The Lagging signal is much further from the index signal, which shows that we gain significant Alpha by investing in Sectors when they are in the Lagging quadrant.

The final column in the Statistics table again shows that layering the heading into the quantification of the signal yields even better results.

Does this only apply to equities?

Our research has shown that this same behavior occurs in all asset classes.

It could be argued that a fund with an active portfolio manager should mitigate this rotation effect. The next chart shows the history of the Fidelity Magellan Fund against the S&P 500. The chart reveals that there are periods where the fund’s Relative Trend is positive and then rotates into underperformance.

This next RRG chart contains a number of commodities with the CRB Index as the benchmark. The rotations hold in commodities.

The image below shows the current top 100 US ETFs with the average of the 100 being calculated and used as the benchmark. While it is cluttered, the rotation of the ETFs is evident in the tails.

The same principle is evident in currencies. The calculation is a little different as the benchmark is the base currency.

CONCLUSION

We’ve been able to use the analysis of thousands of historical signals, and their display on the RRG, to show that the practice of investing in the highest relative performer will not, on average, yield the best results. All equities, currencies, commodities, and even bond yields rotate around an appropriate benchmark, and the probability of a standard clockwise rotation is significant. Every relative outperformer will eventually rotate away and underperform. Our tests have shown that strategies that invest in the underperformers lead to marginally better short-term results. Layering in other filters such as Heading and Distance continues to give incrementally better results.

We would not recommend that Relative Strength be used in complete isolation. Coupled with other strategies such as Trend, Momentum, and Valuation, Relative Strength and RRGs can provide the analyst with significant advantages.

Appendix 1 – Optuma Signal Testing

Editor’s note: this section is included to explain the details of signal testing.

At Optuma we are focused on the testing of quantifiable signals to determine if a particular signal has a higher probability of leading to positive results.

Many technicians rely on backtesting a model portfolio to test their ideas. We believe that there are some flaws with that approach. Backtesting is very important, but it should not be used to benchmark a quantifiable signal.

The primary issue is that a backtest is a multi-dimensional test: it tests a Buy Signal, a Sell Signal, Ranking, position size, and Capitalization. If signal optimization is included, then it gets even more complex. We realized from early backtests of new quantitative signals that we were getting wildly different results when we made minor changes to the portfolio in the test. We now call that Portfolio Bias: the way the portfolio is set up has a significant effect upon the results of the test. For instance, some of the signals are ignored because the capital ran out. Those signals may have been critical to our understanding.

When we test a signal, we don’t want to think about the best portfolio rules. At the early stage we just want to know how good the signal is. The only way we can do that is to get our tools to find every instance of the signal and then measure the returns from the signal for a set period of time. These results are then averaged and analyzed.

Once we have proven to our satisfaction that a Signal is statistically valid, then we can start to think about the optimum portfolio rules that suit the signal’s frequency, duration and our risk tolerance.

The first image below is the Signal Testing workbench that we use, shown after running a Signal Test. Some things to note: the tool allows us to test any data over any time frame. For consistency, we always do our initial tests on the S&P 500 stocks using the SPX as our comparison index. We also limit our tests to the period from October 1, 2001 to October 30, 2009, because the SPX started and ended at the same level, with two bull and two bear markets covered. This means that we have eliminated the general upward trend of markets from our results.

The main sections of the Signal Tester are as follows:

  • Signal Distribution

The Signal Distribution allows us to see the spread of the signals throughout the history of the test period. If the signals were clustered and only happened infrequently, then it would not be viable. We want to see signals that occur in all market conditions.

The Signal Distribution also allows us to adjust the period that we are looking at instantly. E.g., if we want to see how our signal fared in 2008, we just click on that year and all the graphs and statistics update instantly.

  • Signal Performance

The main display shows the average performance of the securities before and after the signal, measuring the unmanaged forward returns for each day over this holding period. The green line is our signal and the yellow line is the average return of the comparison index over the exact same signals. The bands, which can be hidden, show the range of the highs and lows of the data bars. It is important to note how many signals are being averaged; this is written in the subtitle on the chart. The number of results is a function of how many securities are being tested.

  • Profit Analysis

It is easy to forget that an average is made up of many values, and the spread of those values determines how valid the average is. E.g., the two number sets (49, 50, 51) and (0, 50, 100) have the same average, but the first series gives us much more confidence that the next value will be close to the average. The issue with viewing statistics alone has been well documented, and the reader should study Anscombe’s quartet.

In the Profit Analysis chart, which counts how many signals resulted in the gain displayed on the x-axis, we want to see a “High and Tight” distribution. This tells us that a significant number of signals support the average. If the distribution were “low and wide”, then we know that the likelihood of future signals landing near the average is very low. The darker area on the distribution marks out the 20th and 80th percentiles of the signal returns. That is explained further in the Statistics section below.

  • Monte Carlo Simulation

Monte Carlo allows us to run thousands of simulations to make sure that if we used our strategy multiple times in a row, the compounded returns would be better than the individual signal returns.

We tell the tools to run at least 20,000 tests, where each test selects 10 random signals from our results. The distribution has the same 20/80 zone as the Profit Analysis.
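
As a sketch of the procedure (the resampling details inside Optuma are not spelled out here), each run draws ten signal returns at random and compounds them; repeating that 20,000 times yields the distribution and its probability of gain:

    import numpy as np

    def monte_carlo(signal_returns, picks=10, runs=20_000, seed=0):
        """Compound `picks` randomly drawn signal returns, `runs` times."""
        rng = np.random.default_rng(seed)
        draws = rng.choice(np.asarray(signal_returns), size=(runs, picks))
        compounded = (1.0 + draws).prod(axis=1) - 1.0
        return {"prob_gain": float((compounded > 0).mean()),
                "median": float(np.median(compounded))}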

  • Statistics

The Statistics table gives us the information that we need to review the signal compared to the comparison index.

Probability of Gain/Loss tells us how many of the signals resulted in a profit/loss.

Mean Return and Median Return are the average returns our signal produced. We report both, as the Median is less susceptible to outlier returns skewing our results. For a trader, though, outliers are some of the most profitable trades. We like to see a strategy where the two are in agreement; it means that our signal is not reliant on the outliers.

80th/20th Percentiles are the returns at each of these points. They are the boundaries of the darker shaded area on the Profit and Monte Carlo distributions. We use these as a measure of Risk to Reward, e.g., if the 80th percentile is at 10% and the 20th at -5%, we have a 1:2 risk:reward ratio: for every unit of risk, the signal gives us two units of reward.
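
These boundaries are plain percentiles of the signal returns; a short sketch of how the table’s measure can be reproduced:

    import numpy as np

    def risk_reward(signal_returns):
        """80th/20th percentile boundaries and reward per unit of risk,
        e.g. +10% and -5% give 2.0 (a 1:2 risk:reward ratio)."""
        p20, p80 = np.percentile(signal_returns, [20, 80])
        ratio = abs(p80 / p20) if p20 < 0 else float("inf")
        return p20, p80, ratio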

Skewness and Kurtosis are mathematical measures used to describe the distribution plots. Skewness measures whether the peak of the distribution is offset from the breakeven point; our goal is to have positive skewness and to see it rise in the Monte Carlo. Kurtosis describes the shape of the distribution; higher values are better.

Standard Deviation measures the spread of the results around the average. The higher the value, the more spread out the returns are, and the less predictable they become.

ENDNOTES

  1. H. M. Gartley, “Relative Velocity Statistics: Their Application in Portfolio Analysis”, Financial Analysts Journal, January/February 1995, Volume 51, Issue 1. DOI: http://dx.doi.org/10.2469/faj.v51.n1.1853
  2. Levy, R. A. (1967), “Relative Strength as a Criterion for Investment Selection”, The Journal of Finance, 22: 595–610. doi:10.1111/j.1540-6261.1967.tb00295.x
  3. Jegadeesh, N. and Titman, S. (1993), “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency”, The Journal of Finance, 48: 65–91. doi:10.1111/j.1540-6261.1993.tb04702.x
  4. NBER business cycle dates: http://www.nber.org/cycles/cyclesmain.html

Contributor(s)

Mathew Verdouw, CMT, CFTe

Mathew Verdouw is the founder and CEO of Optuma, which he founded in 1996 as Market Analyst, and has been working in the field of Technical Analysis ever since. His inquisitive nature, engineering background, and passion for helping...

“MAYDAY!” CALL FOR EUR/USD?

Editor’s note: this analysis was originally prepared on May 1, 2017.

Students of market price action, commonly known as market ‘technicians’ or sometimes more confusingly ‘technical analysts’, often find themselves, by the very nature of their discipline, poring over historical price charts in a detective-like manner, looking for clues as to future market direction. Technical theory has it that financial markets tend to move in trends and that those trends can often persist long enough for the technically-based trader to profit from accurate market timing.

Among the most important tools a technician has at his or her disposal are so-called ‘support’ and ‘resistance’ levels. These are inflection points on the price charts: after an advance, the price starts to fall, leaving behind a high-water mark or ‘resistance’ level, while a support level is the same thing in reverse with regard to a market decline. A rising trend can be defined as a series of one or more breaches (breakouts) of such resistance levels. A common technical trading strategy is to buy just as the price is breaking above a resistance level, in the anticipation that a move higher will present an opportunity to profit from the long side of the market as either a new uptrend starts or an existing one extends.

This would all be well and good if every breakout saw sufficient follow-through buying, but here in the real world the problem is that there are many false breakouts or market ‘traps’, which occur when the breakout does not go as planned for the buyers. What to do?

The question for the technician who is monitoring the upside breakout is, of course, whether the breakout is genuine, and there are many ‘filters’ one can use to qualify the breakout, including the time spent above the broken resistance level or the percentage penetration above it. If the breakout does not qualify as a valid one, it may well be a ‘trap’, and wrong-footed traders will need to know what action to take to limit risk (exit with a small loss) and perhaps attempt to profit by ‘reversing’ their position to short.

The strength of a resistance level is said to be determined by the length of time it has remained intact, so it follows that if the price breaks above a resistance level which has held for over six months, the break should be much more significant than if it had held for two weeks. One way to measure the strength of a resistance level, or ‘swing high’ as I will define it, is to count the number of bars (days, weeks, months, etc.) to the left and to the right of it: the more bars counted on either side, the more significant the breakout should be.
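
A bar-counting swing detector is simple to code. Here is a minimal sketch (not the TradeStation routine used for the tests below), assuming `highs` is a plain list of weekly highs; a swing-low detector is the same with min in place of max. Note that a (7/7) swing is only confirmed seven bars after the high prints, so a live system can act only on the last confirmed level.

    def swing_highs(highs, strength=7):
        """Indexes of bars whose high exceeds the highs of `strength` bars
        on each side: a (7/7) swing high with the default setting."""
        found = []
        for i in range(strength, len(highs) - strength):
            window = highs[i - strength: i + strength + 1]
            if highs[i] == max(window) and window.count(highs[i]) == 1:
                found.append(i)
        return found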

The reason I am writing this article now is that EUR/USD, one of the world’s most important and actively traded markets, has recently made what I would call a significant upside breakout, which has been matched by a similar downside breakout in the U.S. Dollar Index, of which EUR/USD is by far the largest component. Looking at the strength of the swing high at the price of 1.0829, I note that there were seven weekly bars to the left and seven to the right of the swing high, for a swing high of (7/7) strength.

The question here is: what usually happens when a swing high (and, for the purposes of symmetry, a swing low) of (7/7) strength is broken on a weekly chart of EUR/USD, and does such a breakout have any implications for market direction, either up (a valid break) or down (a trap)? Fortunately, with the aid of a computer and a sophisticated charting application with backtesting capabilities such as TradeStation, one can ask these kinds of questions and get some very quick and revealing answers.

Let’s run a simulation in which we automatically buy or sell EUR/USD as a (7/7) swing high or swing low is broken, the objective being to determine whether there is a high probability of profiting by trading in the direction of the breakout, as technical theory suggests one should. We’ll start with some basic buy and sell rules which represent an ‘always in the market’ system, where each new signal results in flipping one’s position from long to short and vice versa. As this is a very low frequency system, slippage and commissions are almost irrelevant and have therefore been omitted.

ENTRY RULES

Long entry: Buy 1 unit if price trades above the last (7/7) swing high
Short entry: Sell 1 unit if price trades below the last (7/7) swing low
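
The simulations below were run in TradeStation, but the mechanics are easy to sketch. Here is a minimal, hypothetical Python version of the flip-position system: `last_swing_high` and `last_swing_low` are assumed precomputed per-bar lists holding the most recent confirmed (7/7) swing levels (None before one exists), and profit factor is gross profit divided by gross loss.

    def run_system(closes, last_swing_high, last_swing_low):
        """Always-in-the-market breakout system: long above the last (7/7)
        swing high, short below the last (7/7) swing low."""
        pos, entry, trades = 0, 0.0, []
        for price, hi, lo in zip(closes, last_swing_high, last_swing_low):
            if hi is not None and price > hi and pos <= 0:
                if pos < 0:
                    trades.append(entry - price)      # close the short
                pos, entry = 1, price                 # flip long
            elif lo is not None and price < lo and pos >= 0:
                if pos > 0:
                    trades.append(price - entry)      # close the long
                pos, entry = -1, price                # flip short
        wins = [t for t in trades if t > 0]
        losses = [-t for t in trades if t < 0]
        pct_profitable = 100.0 * len(wins) / len(trades) if trades else 0.0
        profit_factor = sum(wins) / sum(losses) if losses else float("inf")
        return trades, pct_profitable, profit_factor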

Here are the results for EUR/USD:

EUR/USD spot weekly. Swing high/low (7/7) breakout. (01/01/1999 to 31/03/2017)

  • Number of trades: 14
  • Percent profitable: 64.29%
  • Profit factor: 2.36

Short USD trades only:

  • Number of trades: 7
  • Percent profitable: 42.86%
  • Profit factor: 2.10

Analysis: On balance, the results point in favor of going with the breakout rather than fading it, as 64.29% of the time buying strength or selling weakness trumped the more intuitive ‘buy low, sell high’ mantra we humans have been conditioned to follow. On the other side of the coin, the sample size of only 14 trades was quite small, so the percentage profitable numbers should be taken with a pinch of salt, for now at least. The profit factor of 2.10, although smaller than for Euro sales/USD buys, does hint at some directional bias for USD sellers on this entry trigger.

We need to dig a little deeper and expand the sample size to get more of a feel for how (7/7) has performed as an entry signal. We can do this by running the same tests on historical price data for the U.S. Dollar Index (1985 to date) and for the German Deutsche Mark (1976–1998), which are as good proxies as any for EUR/USD. Here are the results:

U.S. Dollar Index future weekly. Swing high/low (7/7) breakout. (22/11/1985 to 31/03/2017).

  • Number of trades: 24
  • Percent profitable: 66.67%
  • Profit factor: 2.99

Short USD trades only:

  • Number of trades: 12
  • Percent profitable: 66.67%
  • Profit factor: 3.28

Analysis: Expanding our sample size to 24 by switching to the U.S. Dollar Index, we can see that the results of the smaller EUR/USD test have been borne out: the percent profitable remains in the mid-60s, but the profit factor has risen from 2.36 to just shy of 3.0, providing us with the oft-heard 3-to-1 risk/reward ratio should we decide to apply the system to trading (which I would not advise at this stage). Looking at how one could have fared shorting the U.S. dollar, as the current EUR/USD buy signal suggests we do, one can see that the percentage profitable number jumps from 42.86% to 66.67% and the profit factor from 2.10 to 3.28. To be fair, the period in question started in the same year as the Plaza Accord, an agreement among the G5 group of nations to attempt to devalue the dollar after a particularly steep advance in the early 1980s, so one would expect results to be skewed in favor of the short USD side.

As U.S. Dollar Index futures only started trading in 1985, we will need to switch to Deutsche Mark futures to see how the USD moved vis-à-vis the most important currency in Europe (Germany’s) prior to 1985.

Here are the results:

Deutsche Mark future weekly. Swing high/low (7/7) breakout. (19/03/1976 to 29/01/1999)

  • Number of trades: 17
  • Percent profitable: 82.35%
  • Profit factor: 9.44

Short USD trades only:

  • Number of trades: 8
  • Percent profitable: 100.00%
  • Profit factor: N/A (no losing trades)

Analysis: Extending the simulation back to the mid-1970s, we can see that although the sample set, as with the EUR/USD test, was small, the results provide further evidence of successful trend following on the (7/7) entry trigger. The percent profitable was over 80% for all trades, and the profit factor was an almost unheard-of 9.44. Looking at the USD sell side again, all eight trades were winners (no losers, so no profit factor calculation!), but admittedly there was a large degree of overlap with the U.S. Dollar Index results, which were skewed in the bears’ favor.

CONCLUSION

While we don’t yet have a fully operational system, which should always include risk control features such as stop losses and more efficient exit strategies such as profit targets, time-based exits, and other market timing filters, what we do have is a good indicator of directional bias on a multi-month, if not multi-year, timeframe for the exchange rate between two of the world’s key trading partners.

When viewed in the context of my recent article entitled ‘The Grand Old Party and the U.S. dollar’ (https://www.hf-systems.com/blank), which calls for a major bout of dollar weakness over the term of the current Republican Administration, the break of resistance at 1.0829 on EUR/USD may prove in hindsight to have been a “MAYDAY!” call for those who will come to the same conclusion at a much later stage in the cycle.

Contributor(s)

Howard Friend, CMT

Howard Friend, CMT, is a Swiss-based multi-asset class trader with a specialization in the development and trading of systematic chart-based methodologies (‘HF Systems’). Howard is Chief Investment Officer, Easy Neu Alpha Partners SA. He has developed his own trading methodology, ‘Break and...

EASY AND SUCCESSFUL MACROECONOMIC TIMING

Editor’s note: This paper was originally published at MathInvest.com and is reprinted with permission.

Abstract: When the economy takes a turn for the worse, employment declines, right? Well, not all employment. Certainly, full-time employment declines during recessions, but part-time employment concurrently rises strongly during economic downturns. An adept fiduciary can contrast the two types of employment, as well as a variety of other data, and make good broad-brush investment timing decisions before an official determination of a recession. Using the described method enables an investor to seriously reduce drawdowns and greatly improve the reward-to-risk ratio. This works well as a stand-alone improvement to Buy & Hold or as an initial screen prior to other fundamental or technical analysis. Decisions can be made monthly, weekly, or even daily for those willing to do a bit more work.

What are the implications of recessions on investments?

A good rule for investment success is to avoid buying equities at the beginning of a recession. Since 1920 the National Bureau of Economic Research has identified 17 recessions, during the worst of which equities declined over 88 percent. Despite that, the Dow Jones Industrial Average has grown at a compound annual rate of 5.85 percent. If an investor could have known about the recessions and avoided them, his investment return would have been 10.12 percent p.a. with a max drawdown of 38.23 percent.

Of course, that foreknowledge is not possible, but an alternative plan is even more effective. Moving on to the current period, we see the recent recessions highlighted on the market:

If you could have had the recession dates in advance and navigated between SPY and T-bills since SPY existed, you would have increased your return to 11.6% per annum and reduced the maximum drawdown to 39%. Without that prior knowledge, the benchmark return was 9.2% p.a. with a max drawdown of 55%. All results are tabulated in the spreadsheet in the Appendix. Pay particular attention to the column “Reward to Risk Ratio” on the right side.

The official identifications of recessions frequently come long after they begin, so they are difficult to avoid. Agreement on their dates is not universal, and may be in some part political. However, one can easily identify the dangerous recessionary periods by following the trail left by various economic datasets. Timing is then quite easy.

In a recession, the standard measures of economic growth decline: Full Time Employment, Auto Sales, Commercial & Industrial Loans, Housing Starts, Electricity Usage and Payroll Tax Receipts growth, to name a few. Naturally, at the end of a recession they all rebound.

On the other hand, recessions see increases in the Unemployment Rate and various measures of unemployment, Part-time Employment and Institutional Money Funds. The following monthly Federal Reserve charts illustrate:

The above data of employed workers illustrate the monthly decline of full-time workers and the rise of part-time workers, with the recessionary periods shaded. Of course, their orders of magnitude are different, but by taking their rates of growth you make them comparable, a process referred to as normalization. It should be no surprise that employment data has a strong relationship to good and poor economic activity; the United States is predominantly a service economy.

The methodology is straightforward. Identify data that mimics the economy on a growth basis, as well as data that does the reverse. The data does not have to lead the economy – coincident indicators are in many cases better, because they are less prone to give false signals than the leading indicators. Ideally, the data chosen should exhibit robust changes during recessionary periods. Then take the difference of their moving annual rates of change, or use an even better method of comparison: their slopes. Essentially this becomes a two-factor model for predicting recessionary activity, and by extension for successful investing (a sketch follows below). If you are a Buy & Hold investor, this would be “Buy & Hold with holidays”, a first step towards active investment. If you are already a more active investor, this should be your first screener.
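
As a concrete sketch of that two-factor model (the formulas are inferred from the description, so treat the details as assumptions), both monthly employment series are normalized by a 12-month rate of change, or alternatively by the slope of a 12-month linear fit on rescaled data, and the signal is simply which growth rate is higher:

    import numpy as np
    import pandas as pd

    def recessionary(full_time: pd.Series, part_time: pd.Series,
                     months: int = 12, use_slope: bool = False) -> pd.Series:
        """True = recessionary activity (part-time growth exceeds full-time)."""
        if use_slope:
            def rolling_slope(s: pd.Series) -> pd.Series:
                scaled = s / s.iloc[0]                  # put series on one scale
                return scaled.rolling(months).apply(
                    lambda w: np.polyfit(np.arange(months), w, 1)[0])
            ft, pt = rolling_slope(full_time), rolling_slope(part_time)
        else:
            ft = full_time.pct_change(months)           # 12-month rate of change
            pt = part_time.pct_change(months)
        return pt > ft

Own equities while the result is False, and switch to T-bills (or IEF, as discussed below) while it is True, acting in the month after the data is known.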

BUT CAN YOU FORECAST RECESSIONARY ACTIVITY?

The first two candidates mentioned (full-time vs. part-time employees) can be researched back to 1969. During those 48 years there have been seven named recessions. Buying and holding the S&P 500 index over that period would have resulted in a 6.65 percent p.a. gain with a maximum drawdown of 49.3 percent (looking at month-ends). Avoiding the seven recessions netted 8.21 percent p.a. with a 32.43 percent drawdown. Note, however, that is with perfect knowledge.

Using the simplest of comparison tools, a 12-month rate-of-change of each of the two types of employment, the profitability over the period is 7.27 percent p.a., and the max drawdown is reduced to just 31.15 percent. The drawdown is frequently more important, because it is that number that determines whether or not the client will stay in the program. A client who quits during a drawdown will never get to experience the long-term growth of the asset.

If you take the simple step of smoothing the data (reducing its “noise”) by comparing slopes, results rise to 8.3 percent p.a. with the same 31.15 percent drawdown. That is, by using a simple two-factor model (without foreknowledge) we are able to improve upon a program that had perfect foreknowledge. And it is completely tradeable; the results here were acted upon in the month after they were known.

Those results are over the last half-century, and we all know that performance is frequently dependent upon the starting date. So what has happened recently – is it still tradeable?

Most certainly. Below is the chart of SPY prices shaded to reflect the recessionary conditions indicated by the normalized difference of full-time and part-time employment. Using employment characteristics to identify recessionary periods is what most economists would expect. Payrolls are important; the bulk of the tax revenue taken in by the U.S. government is in the form of Payroll Tax Receipts.

This is not a forecast of recessions, but a forecast of recessionary activity. The former is hard to nail down in real time, whereas the latter is easily identified.

As long as the growth rate of Full-time employment exceeds that of Part-time employment, own equities. Should the growth rate of the part-timers exceed that of the full-timers, assume a coming recessionary period and exit equities. The purpose here is to enable the investor or his fiduciary to “first, do no harm.”

Trading SPY on the monthly differences looked thus:

The results from comparing full-time vs. part-time employment using SPY and T-bills are achievable and perfectly realistic. Spreadsheet programs as well as market software can calculate both rates of change and slopes. Slopes have the advantages of greater smoothness, significantly less lag, and relative immunity to subsequent data revisions. However, there is one item which would prevent such a program from being adopted by most fiduciaries, and it is easily illustrated in the equity curve:

Those relatively flat lines on the blue curve represent the periods when the assets are in Treasury bills. Despite the fact that the fiduciary might have saved the client’s retirement funds from a significant drawdown, clients have been known to get frustrated with a portfolio consisting of 100 percent Treasury bills. Investors react negatively to paying for the privilege of owning T-bills for an extended period of time, despite any obvious benefit. Philosophically speaking, the client is correct: because T-bills are so liquid, they do not really fit the definition of “investment” (i.e., investment and liquidity are mutually exclusive). However, there is a solution, and it turns out to be a win-win.

Instead of holding T-bills for extended periods, the logical substitute asset is “IEF”, the 7–10-year Treasury bond ETF. Although this ETF has not existed for as long as SPY, the historical rates of return of IEF are symmetrical with those of the Dow Jones Chicago Board of Trade Treasury Index[3], and they are included here.

The result is that the investor has a larger return and we have removed the problems of owning Treasury bills. The equity curve chart makes that point:

What are the implications of getting more granular? Very good macroeconomic data exists on a weekly basis. Of course the releases lag the actual data by a few weeks, but using it still gives an edge over monthly data with regard to identifying recessionary periods.

For example, Initial Unemployment Claims (Fed codes “ICSA” and “IC4WSA”) are released weekly and correspond well with monthly Part Time Employment (“LNS12032197”). The Civilian Workforce (“COVEMP”) is a good weekly surrogate for the monthly Full Time Employees (“LNS12500000”). Additionally, Commercial & Industrial Loans by large banks (“CIBOARD”) mimics the overall economy very well. The purpose is to measure the good side of the economy as well as the poor side, and CIBOARD (loans) measures the good side very effectively. All of the mentioned datasets are available (for free) in seasonally adjusted variations which make the practitioner’s job considerably easier.
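
All of these series can be pulled programmatically. A sketch using pandas_datareader (the tool is my assumption; the author names only the Fed codes) with the codes given above:

    from pandas_datareader import data as pdr

    START = "1969-01-01"
    claims = pdr.DataReader("ICSA", "fred", START)             # weekly initial claims
    loans = pdr.DataReader("CIBOARD", "fred", START)           # weekly C&I loans, large banks
    part_time = pdr.DataReader("LNS12032197", "fred", START)   # monthly part-time employees
    full_time = pdr.DataReader("LNS12500000", "fred", START)   # monthly full-time employees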

Consequently the decision-maker gets to pick the frequency with which he wants to decide. Note that the changeover from a growth period to one of recession is a robust move. Thus, using weekly data does not mean the investor will be trading weekly, just that weekly data will theoretically give a decision two weeks before monthly data. Note also that Payroll Tax Receipts (not seasonally adjusted) are available daily, providing even more granularity.

When your focus switches from month-end data to week-end data, you will notice that the maximum drawdown increases. That is purely a function of the measuring period: the larger drawdowns were always there, but the intra-month volatility was hidden by taking month-end closes. The gains in the rates of return, however, are real, and a function of more timely identification of economic activity.
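Measuring the drawdown itself is mechanical, and resampling one price history at different frequencies shows the effect described above. A minimal sketch, assuming a pandas Series of daily closes named daily_closes:

    import pandas as pd

    def max_drawdown(closes: pd.Series) -> float:
        # Largest peak-to-trough decline, expressed as a negative fraction.
        running_peak = closes.cummax()
        return (closes / running_peak - 1.0).min()

    # Week-end closes reveal drawdowns that month-end sampling hides:
    # max_drawdown(daily_closes.resample("W").last())
    # max_drawdown(daily_closes.resample("M").last())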

The accompanying log-scale chart illustrates the use of Initial Claims vs. Commercial & Industrial Loans, both of which are available weekly. The default asset is SPY, with IEF used during recessionary activity.
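A hedged sketch of that weekly pairing follows, using the series codes exactly as the article gives them (their availability on FRED is assumed, not verified) and a 52-week growth comparison as one plausible reading of the rule.

    import pandas_datareader.data as web

    START = "1988-01-01"  # illustrative window
    claims = web.DataReader("IC4WSA", "fred", START).iloc[:, 0]  # the poor side
    loans = web.DataReader("CIBOARD", "fred", START).iloc[:, 0]  # the good side

    # Year-over-year growth normalizes both weekly series.
    claims_yoy = claims.pct_change(52)
    loans_yoy = loans.pct_change(52)

    # Hold SPY while the good side outgrows the poor side; else rotate to IEF.
    hold_spy = loans_yoy > claims_yoy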

IF TWO INDICATORS WORK WELL, WILL MORE INDICATORS WORK BETTER?

Illustrated is a two-factor model, with the choice of the two input variables being somewhat open-ended. This could be expanded to include more indicators. Note, however, that with enough variables you can model the past behavior of anything to a high degree, but that does not give the model any predictive value. For example, several regional Federal Reserve branches have created indicators using over a hundred inputs; their indications are largely ignored by the investment community. Likewise, it has occasionally been stated anecdotally that the accuracy of a model is inversely proportional to the number of input variables. Occam’s razor works. That is, the simple approach of a difference between two somewhat opposing variables is better than multiple conditions of multiple variables.

WHICH DATA ARE BETTER INDICATORS?

In considering all the possible variables, there tends to be safety in large numbers. That is, results are more reliable when large numbers of individuals (or loans) are counted, rather than a (possibly politically-determined) single number like the Unemployment Rate. The goal is to measure actual activity, not politics.

Below you will find a table listing a large number of tested datasets. Their starting dates reflect the longest possible period, given the data; that is, if the first data is available in 1955, a 12-month rate of change is not available until 1956. In all cases the classical macroeconomic comparison period was chosen, i.e., a year. It is always possible that a different period would outperform. The results all support the basic points: identifying the periods of recessionary activity is critically important, highly beneficial, and operationally possible for investors or their fiduciaries.

None of the mentioned data are “non-standard”; all are circulated by the Federal Reserve, although much of the data comes originally from other sources, such as the Department of Labor or the Treasury. The imprimatur of the Fed is an important fact for a fiduciary.

WHAT ARE THE CRITICAL FACTORS?

Why has this not been shown before? Researchers have analyzed employment data for decades, knowing it has an obvious connection to the economy and the equity markets, but the results of that research have not been significant. The first critical factor here is the normalization of the data (putting the datasets on the same scale); the two datasets chosen for comparison have to be viewed as separate (independent) events, and this was not done before. The second key factor is the use of a more sophisticated measurement tool, such as the rate of change of the slope rather than the rate of change of the data itself. Details tend to make significant differences.
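The author’s exact settings are not published, so the sketch below is illustrative only: a rolling z-score stands in for the normalization, and the slope is a least-squares fit over a rolling window whose rate of change is then taken. The window lengths are assumptions.

    import numpy as np
    import pandas as pd

    def normalize(series: pd.Series, window: int = 60) -> pd.Series:
        # Rolling z-score: puts any dataset on a common scale
        # without using future data (no look-ahead).
        m = series.rolling(window).mean()
        s = series.rolling(window).std()
        return (series - m) / s

    def moving_slope(series: pd.Series, window: int = 12) -> pd.Series:
        # Slope of a least-squares line over a rolling window;
        # smoother than a rate of change over the same span.
        x = np.arange(window)
        return series.rolling(window).apply(
            lambda y: np.polyfit(x, y, 1)[0], raw=True
        )

    # Rate of change of the slope, rather than of the data itself
    # (full_time is the employment series from the earlier sketch):
    slope_roc = moving_slope(normalize(full_time)).diff(12)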

WHY NOT JUST USE A MOVING AVERAGE OF THE PRICE?

Equity prices can fluctuate without any economic reason, whereas macroeconomic variables are both logical and effective. Because the data studied here are macroeconomic, they cannot be traded or arbitraged away, as could happen with a market-derived or technical indicator. These data measure economic truths rather than trading positions.

HOW STATISTICALLY RELIABLE IS THIS FORECASTING OF RECESSIONARY ACTIVITY?

Despite the fact that the method described here (which has no look-ahead bias) has duplicated or outperformed all of the officially defined recessions (which were defined with the benefit of hindsight), the sample size is small. What do statisticians do when confronted with a perfect record and a small sample size? They add “pseudodata”: two additional observations, one with a positive and one with a negative outcome. For example, if you start with 8 successes in 8 observations, adding the pseudodata gives 10 observations and a record of 9 out of 10. This is known as Laplace’s Rule of Succession, in which the estimate of success is the number of observed successes plus 1, divided by the number of observations plus 2.

Using the largest number of observations (from 1956) there have been 9 successful forecasts of recessionary activity. The Rule of Succession would calculate the subsequent estimate of success at 10/11 or 91 percent.
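The arithmetic is easy to verify. A one-line check under the article’s counts (9 successes in 9 observations):

    def rule_of_succession(successes: int, observations: int) -> float:
        # Laplace: estimated probability that the next forecast succeeds.
        return (successes + 1) / (observations + 2)

    print(rule_of_succession(9, 9))  # 10/11, about 91 percent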

Perhaps the better position to take is that the changes in labor participation described here operationally define a recession, although not officially. By using this method, a fiduciary would identify recessionary behavior before the (possibly political) definition of “Recession” is made.

CONCLUSION

It has been shown that the timing of recessionary periods has a major impact on investment performance. Avoiding the recessions produces a major improvement over a buy-and-hold strategy. The problem is that the client or his fiduciary never has the “Recession” defined when it is needed. However, with the simple use of only two inputs, recessionary activity can easily be identified and substantial investment losses avoided.

Employment differences should model the economy, with which the stock market rarely agrees. In this case, however, the employment differences seem to do a good job with the market. The approach also works well with other macroeconomic data, such as banking data as opposed to labor data.

How well did this work in the long, long run? Going back as far as the data permit (1956), every one of the recessions named by the National Bureau of Economic Research was identified. And the outcome was that anyone using this method outperformed the perfect-knowledge benchmark.

There has always been a desire among market gurus to discover a program that works in both good and bad markets. Most have failed. Experts have long counseled, “Don’t fight the tape,” and history shows that the tape is bearish in recessionary periods. One implication of this research is that there will be less fighting the tape if identification of recessionary activity is the first screening performed. This is not difficult and should be an arrow in the quiver of every investor or fiduciary.

Note: For more on Laplace’s Rule of Succession, see https://en.wikipedia.org/wiki/Rule_of_succession. This is also referred to as the “Sunrise Problem,” the calculation of the probability that the sun will rise tomorrow.

Both a rate of change and a moving-slope rate of change are calculable in most spreadsheet programs for any period. The latter is smoother than the former without introducing lag or erratic results. For a further discussion, see: Rafter, William, “Two Moving Function Hybrids,” Stocks & Commodities, V. 23:9, September 2005.

ENDNOTES

  1. NBER Business cycle dates: http://www.nber.org/cycles/cyclesmain.html
  2. Full time employed: https://fred.stlouisfed.org/series/LNS12500000 Part time employed: https://fred.stlouisfed.org/series/LNS12032194
  3. I have to illustrate the past as it was, and no tradable bond ETF existed for the time prior to the start of IEF or TLT (2002); hence the use of T-bills. However, since I am recommending future investment in a longer-maturity debt ETF, I must also illustrate how a surrogate for IEF would have behaved. Fortunately, an index was created which measures the price performance of 7- to 10-year Treasury bonds. That price series (the Dow Jones Chicago Board of Trade Treasury Index) has data going back to 1988. The moving correlation between the 12-month rates of return of IEF and DJCBTI varies from .9795 to .9975, much closer than that of IEF and TLT, for example, making it an excellent surrogate for IEF.

Contributor(s)

William Rafter

William Rafter is president of MathInvest. He has managed hedge funds for over 30 years. Mr. Rafter was educated at the Wharton School of the University of Pennsylvania, and the Haas Graduate School of Business Administration of the University of California, Berkeley....

THE DIFFERENCE BETWEEN STATISTICS AND STRATEGY

Editor’s note: this article was originally published in the Pension Partners blog on June 1, 2017 and is reprinted with permission.

Statement 1: the average year has between 3 and 4 corrections greater than 5%.

Statement 2: every year has between 3 and 4 corrections greater than 5%.

Statement 1 is a true statistic regarding the S&P 500 going back to 1928. It can serve as a helpful reminder that there is no reward without risk and that equity securities are inherently volatile (the current period notwithstanding). Corrections happen, even in good markets.
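For readers who want to verify such statistics themselves, here is a minimal counting sketch; the episode convention (a correction is a closing-price decline of at least 5% from a running high, with a new high ending the episode) is an assumption, since the post does not state its methodology.

    def count_corrections(closes, threshold=0.05):
        # Count distinct declines of at least `threshold` from a running high.
        peak = closes[0]
        in_correction = False
        count = 0
        for price in closes[1:]:
            if price > peak:
                peak = price
                in_correction = False  # a new high ends the episode
            elif not in_correction and price <= peak * (1 - threshold):
                count += 1             # the decline crossed the threshold
                in_correction = True
        return count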

Problems start to arise when investors assume that because Statement 1 is true, Statement 2 must be true as well. Markets, unfortunately, don’t operate that way. The actual environment rarely looks like the average environment.

Since March 2009, there have been 21 corrections greater than 5%, but they were not evenly distributed. In 2009 and 2010, we saw 4. In 2013 we only saw 1. Thus far in 2017 we have yet to see any.

This behavior is rare, but not unprecedented. In 1995 the S&P 500 went the entire year without a 5% drawdown. In fact, the largest drawdown on a closing basis was only 2.5%, and did not occur until December.

You can bet that many investors fought this non-stop rally higher and many others sold out in the early going, “waiting for a correction” to “get back in” at a lower price.

But the correction never came in 1995 and the corrections in 1996 were not deep enough to provide an opportunity for investors who sold out in early 1995 to get back in at lower prices. The S&P 500 would gain more than 20% in 1996, 1997, 1998, and 1999 before ultimately peaking in March 2000.

To be sure, this was perhaps the strongest period in history for U.S. equities, and no one could have foreseen what was to come at the end of 1995. But that’s part of the point here. We could not reasonably predict what was to occur from 1996-2000 because the behavior of 1995 was not by itself a signal of anything. It was merely a deviation from the average outcome, which is not the exception in markets but the rule.

As I write, the S&P 500 is hitting yet another all-time high. Thus far in 2017 the largest drawdown on a closing basis has only been 2.8%, which would be the lowest since 1995.

The Volatility index (VIX) ended May at its lowest monthly close in history.

For those waiting for the typical correction or for the average level of volatility to resume, this has been an immensely frustrating year. For everyone else, it’s been quite good, as I outlined last week.

That’s not to say that we shouldn’t expect to see corrections with more frequency. We should. And it’s not to say that we shouldn’t be prepared to see higher volatility. We should as well. The statistics (average outcomes) say that both higher volatility and an increased frequency of corrections are likely. But there’s a big difference between statistics and strategy, and this is where many investors go awry. There were four corrections in 1996. Volatility did indeed rise from the extreme low levels of 1995. But that “reversion to the mean” was not a strategy; it did not help investors who sold in early 1995 in hopes of buying in at lower prices.

Which is why basing your investment strategy on interesting statistics can be a dangerous game.

Pension Partners, LLC, was founded by Edward M. Dempsey, CFP® in 1999. The firm offers its buy and rotate approach worldwide through mutual funds, separate accounts, and model portfolios. Pension Partners offers a unique proprietary investment process that rotates offensively or defensively based on historically proven leading indicators of volatility. The firm’s core beliefs are (1) long-term wealth generation begins with wealth preservation; (2) risk should be managed before, not after, higher volatility in markets; (3) volatility is predictable and can be positioned for in advance; (4) minimizing drawdowns is critical to long-term outperformance; and (5) alpha can be generated through active risk management.

Contributor(s)

Charlie Bilello, CMT

Charlie Bilello, who holds the Chartered Market Technician (CMT) designation, is the Director of Research at Pension Partners, LLC, where he is responsible for strategy development, investment research and communicating the firm’s investment themes and portfolio positioning to clients. Prior to joining...

CHART OF THE MONTH

For additional perspective on the average drawdown, the following chart is reposted from Pension Partners’ June Market Webinar.

To sign up for Pension Partners’ free newsletter, click here.

Contributor(s)