JOURNAL OF
TECHNICAL ANALYSIS
Issue 69, Spring 2016
Editorial Board
David Aronson, CMT
President, Hood River Research
Richard J. Bauer, Jr. Ph.D., CFA, CMT
Professor, Finance, St. Mary’s University
Stanley Dash, CMT
CMT Program Director
Jeremy du Plessis, CMT, FSTA
Head of Technical Analysis and Product Development, Updata Ltd.
Kristin Hetzer, CMT, CIMA, CFP
Principal and Owner, Royal Palms Capital LLC
Cynthia A Kase, CMT, MFTA
Expert Consultant
Fred Meissner, CMT
Founder & President, The Fred Report
Saeid Mokhtari, CMT
Market Research Analyst, CIBC World Markets

CMT Association, Inc.
25 Broadway, Suite 10-036, New York, New York 10004
www.cmtassociation.org
Published by Chartered Market Technician Association, LLC
ISSN 2378-7295 (Print)
ISSN 2378-7341 (Online)
The Journal of Technical Analysis is published by the Chartered Market Technicians Association, LLC, 25 Broadway, Suite 10-036, New York, NY 10004. Its purpose is to promote the investigation and analysis of the price and volume activities of the world’s financial markets. The Journal of Technical Analysis is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, and several other countries in Europe and Asia. The Journal of Technical Analysis is copyrighted by the CMT Association and registered with the Library of Congress. All rights are reserved.
Letter from the Editor
by Julie Dahlquist, Ph.D., CMT

Welcome to the 69th issue of the Journal of Technical Analysis! In addition to including the 2015 Charles H. Dow Award winning paper, “Fixing the VIX—An Indicator to Beat Fear,” written by Amber Hestla-Barnhart, this edition contains nine original research papers. Covering a wide range of technical analysis topics, these papers add to the body of knowledge of technical analysis, promoting a greater understanding of the discipline within the MTA membership and beyond.
The Journal of Technical Analysis continues to be a leading source for both practitioners and academics wishing to explore the expanding field of technical analysis in greater depth. Whether fine-tuning existing tools of the trade or melding ideas from disciplines such as physics, statistics, and psychology, the authors provide fresh insights for understanding our ever-changing financial marketplace. As a profession, we are grateful to them for their willingness to share their ideas and findings.
Each of these papers began as an idea, sometimes much different from the end product. It was the energy, creativity, and perseverance of the authors that turned those initial ideas into finished papers. If you are interested in sharing your ideas with the Journal of Technical Analysis readers, including over 4,500 MTA members in 85 countries, feel free to contact me so that we can discuss the process of turning your idea into a journal submission.
Fixing the VIX
by Amber Hestla-Barnhart
About the Author | Amber Hestla-Barnhart
Bio coming soon.
Abstract
Volatility is widely considered to be a category of technical indicators with a simple interpretation: no matter how it is measured, volatility is believed to rise in a market downturn. This interpretation is applied to indicators such as the Average True Range (ATR), Bollinger Bands® BandWidth, and the most widely followed volatility indicator, VIX, which is formally known as the CBOE Volatility Index®.
VIX is widely known as the “Fear Index” because it often increases when the stock market drops and the fear of further price declines rises. While this concept sounds useful, there are significant limitations to executing trading strategies based on VIX, and these limitations make VIX virtually useless for the average investor. Although it is not widely followed, there is a simple volatility indicator available in the public domain that can be used to implement trading strategies based on the concept behind VIX. This indicator, the VIX Fix developed by Larry Williams, overcomes all of the limitations of VIX. This paper will explain that indicator and introduce a quantitative trading strategy to profit from rising fear.
In the rest of this paper, I will briefly review what VIX is, highlight some of the limitations of VIX, describe an alternative to VIX and then provide test results demonstrating how well the VIX Fix works. The main focus of the paper is on the test results.
AN OVERVIEW OF VIX
VIX is intended to quantify the market expectations of near-term volatility included in S&P 500 stock index option prices. The idea of a tradable volatility index dates back to at least 1986[1] and the VIX was developed in 1993.[2] The Chicago Board Options Exchange notes that “Since its introduction in 1993, VIX has been considered by many to be the world’s premier barometer of investor sentiment and market volatility. For investors who wish to trade an instrument related to the market’s expectation of future volatility, VIX futures were introduced in 2004, and VIX options were introduced in 2006.”[3]
The calculation of VIX is complex and requires using market prices of S&P 500 options contracts to derive the size of the expected price move in the S&P 500 index over the next 30 days.[4] From a trading perspective, it is more important to focus on how VIX can be applied to market analysis than it is to understand the calculation of VIX.
Most analysts consider high levels of VIX to be associated with market bottoms. A chart showing both the S&P 500 index and the VIX index demonstrates that this is true. Figure 1 shows that VIX spikes are seen at significant bottoms. Vertical lines have been added to the figure to highlight times when there was both a VIX spike and an important market low.
Visually, it does appear that VIX consistently tops near important market bottoms. Although Figure 1 uses monthly data, the same behavior can be seen in weekly, daily and even intraday charts.
Limitations of VIX
Figure 1 seems to demonstrate that the relationship between a high VIX and market bottoms is reliable, but there is no tradable information in that chart. Tops in the VIX index are confirmed only in hindsight, and they are only loosely associated with bottoms in the S&P 500, with the actual low in price occurring days, weeks or months after the peak in VIX. The time between VIX peaks and market lows is variable and unpredictable in real time.
As one example, VIX set a multi-year high in October 2008, more than four months before the S&P 500 bottomed in March 2009. That top in VIX is apparent only with the benefit of hindsight: traders could reasonably have believed that the multi-year high VIX reached in September 2008 was the peak at the time, and there was no way to predict from the information then available that VIX would continue rising.
In fact, it will never be possible to know in real time when VIX is peaking. Because VIX is unbounded by its calculation, it is always possible for VIX to go higher. To derive trading signals from VIX, it might be possible to apply traditional tools of technical analysis, such as moving averages (MAs) or Bollinger Bands, to place the current level of VIX in context. This is a testable concept but would be of limited value due to an additional limitation of VIX: VIX applies directly only to the S&P 500.
In practice, many analysts ignore this limitation since there is generally a high, positive correlation between the price moves of broad market indexes and individual stocks.
However, when applying VIX as part of a trading strategy, it is important to remember that VIX is calculated from the premiums traders pay on S&P 500 index options contracts, and its value is specific to the S&P 500 index. Recognizing this limitation, the CBOE provides calculations of volatility indexes for the NASDAQ 100, the Dow Jones Industrial Average, the Russell 2000, and gold, oil, and euro futures. The CBOE also publishes volatility indexes for several individual stocks, including Amazon (AMZN), Google (GOOG), Goldman Sachs (GS), IBM (IBM) and Apple (AAPL).[5] These newer indexes acknowledge the limitation of VIX and highlight the value of calculating volatility for different securities.
The VIX Fix
The fact that VIX applies only to the S&P 500 led market analyst Larry Williams to develop an indicator he calls the “VIX Fix” that can be applied to any stock, ETF, tradable security or market-based index.[6] Figure 2 shows SPY with the traditional VIX index in the second pane of the chart and the VIX Fix in the pane below that. The general direction of the trend in both indicators is the same although there are differences in the magnitude and slight differences in the timing of turning points in the two indicators. At the bottom of the chart, both indicators are shown in a single pane to highlight the similarity in price movements between the two. This chart demonstrates that the direction of the trend in the VIX Fix is highly correlated with the trend in VIX.
The VIX Fix applies the same general formula that is used to calculate the stochastics indicator and is fairly simple to calculate. The difference between the highest close in the past 20 days and today’s low is divided by the highest close in the past 20 days. That ratio is multiplied by 100 to scale the indicator from 0 to 100. The formula for the VIX Fix is:
(Highest (Close, 20) – Low) / (Highest (Close, 20)) * 100
where “Highest (Close, 20)” means the highest closing value in the past 20 periods and the low refers to the current period’s low. The formula can be applied to any timeframe.
In the calculation, Williams used 20 days to include approximately one month of trading history. With weekly or monthly data, 20 is used as the default parameter.
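For readers who want to experiment, the formula translates directly into a few lines of code. The sketch below is illustrative rather than taken from Williams’ article; it assumes daily closes and lows held in pandas Series, and the function name is our own.

```python
import pandas as pd

def vix_fix(close: pd.Series, low: pd.Series, lookback: int = 20) -> pd.Series:
    """Williams VIX Fix: how far the current low sits below the highest
    close of the past `lookback` periods, scaled to a 0-100 range."""
    highest_close = close.rolling(lookback).max()
    return (highest_close - low) / highest_close * 100
```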
This indicator extends the powerful concept behind the VIX to any stock or ETF. With a simple calculation method and broad applicability, VIX Fix addresses the two limitations of VIX identified above.
The VIX Fix can be applied in a number of ways. For example, it can be used to identify volatile stocks during periods of relative calm in the broad market. Figure 3 shows the traditional VIX index based on data for the S&P 500 index along with the VIX Fix indicator for Monster Beverage (MNST). VIX Fix fluctuates wildly for MNST during most of the time period shown in the chart, indicating the stock was highly volatile even when the broad market was experiencing low volatility as indicated by the relatively low and flat values of VIX. By identifying the volatility of individual stocks in any market environment, traders can identify stocks with current volatility profiles that are best suited to their trading style.
There are other tools available to determine the volatility of an individual stock, but none is as simple to apply as the VIX Fix. One alternative is implied volatility, which can be calculated for many individual stocks. The VIX Fix, however, does not require access to options price data as implied volatility does, and it is more broadly applicable since it can be calculated for any stock, security or index, even those without actively traded options.
Testing VIX Fix as a Mechanical Trading Strategy
A mechanical trading strategy is based on rules that dictate the buy and sell decisions. One advantage of a mechanical strategy is that it is objective: all traders applying the strategy exactly as written will obtain similar results, and any difference in results should be attributable to slippage and commissions, which are unavoidable and vary from trader to trader. Another advantage of mechanical trading strategies is that they are useful for evaluating the effectiveness of technical indicators. In a back test, slippage and commission costs can be controlled, allowing the results of the back test to be used to assess how well the selected indicators work.
To develop a test of the effectiveness of the VIX Fix, an MA can be added to the indicator. For an initial test, a 20-day MA will be applied to daily data. All stocks in the S&P 500 and several other indexes will be tested using data from November 1, 1999 through October 31, 2014, a 15-year test period. If the total history available for any stock covers less than fifteen years as of October 31, 2014, all available history is used. This test does not correct for survivorship bias because it uses the stocks that were in the index on October 31, 2014. To partially offset the impact of survivorship bias, tests are run using several indexes that include nearly 3,000 separate stocks and a long timeframe is used. The period used in the test includes two bear markets and three bull markets to capture the performance of the VIX Fix under a variety of market conditions.
A buy will be triggered when the VIX Fix falls below the MA and a sell will be generated when the VIX Fix rises above the MA. Trading costs of $5 per trade were deducted from each trade to simulate the impact of slippage and commissions.
The buy rule is designed to identify periods when volatility has become unusually high (greater than the MA) and to take action (buy) when volatility is returning to normal (falling below the MA). These rules assume that volatility is mean reverting, a widely accepted assumption among market analysts.[7] After a stock is bought, the position is held while volatility (the VIX Fix) remains low (below its MA) and sold when volatility becomes higher than average (crosses above the MA). When a sell signal is given, the strategy moves to cash and assumes a 0% return on cash. The results of this test are shown in Table 1.[8]
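As a rough sketch of how these rules might be coded (reusing the illustrative vix_fix function above), the logic below goes long when the VIX Fix is below its MA and holds cash otherwise; the $5 trading cost is omitted for brevity, so this is not the exact harness behind Table 1:

```python
def vix_fix_position(close: pd.Series, low: pd.Series,
                     ma_len: int = 20) -> pd.Series:
    """1 = long, 0 = cash. Buy when the VIX Fix falls below its moving
    average (volatility normalizing); sell when it rises back above."""
    vf = vix_fix(close, low)
    ma = vf.rolling(ma_len).mean()
    position = (vf < ma).astype(int)
    return position.shift(1).fillna(0)  # act on the next bar: no look-ahead

def strategy_returns(close: pd.Series, position: pd.Series) -> pd.Series:
    """Daily strategy returns; cash earns 0% and costs are ignored here."""
    return close.pct_change().fillna(0) * position
```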
The same tests were also completed using weekly data and the results are presented in Table 2. The same 20-period MA was used so the strategy rules were exactly the same.
The results show that buying stocks after a peak in volatility can be rewarding. The highly liquid stocks in the major market averages all provide double-digit average annual returns that are 2.8 to 3.6 times higher than the gains of the S&P 500 index. Of the stocks included in the test on the S&P 500 index, 391 (78.2%) were profitable in standalone testing.
All except the NASDAQ 100 delivered higher gains with less risk, defining risk as the maximum drawdown. Although the NASDAQ 100 test showed those stocks outperformed a buy-and-hold strategy, the risk was higher. The VIX Fix strategy is invested a relatively small percentage of the time, which would allow investors to pursue alternative investment strategies instead of holding cash most of the time.
One question that must be addressed is whether traders could obtain superior results using the VIX itself for signals rather than the VIX Fix. The answer is that the VIX will be on a buy or sell signal for all stocks at the same time. With the VIX Fix, signals are staggered through time: at some times there will be few positions, while at other times there will be many. The VIX Fix varies the degree of exposure to the market based on the current volatility levels of individual stocks. Using VIX would lead to an “all or nothing” investment position and would be a completely different approach to trading compared to the VIX Fix.
Short trades are theoretically possible by flipping the rules: short trades would be entered when volatility increases (the VIX Fix crosses above its MA) and would be covered (buying to close the position) when the VIX Fix falls below its MA. Back tests of this strategy were not profitable, which indicates that rising volatility is not always associated with falling prices.
The 20-period MA was selected solely as a convenient parameter. It was not optimized. Testing of nearby values shows that the parameter is stable. The results of those tests are summarized in Table 3.
Based on this test, the VIX Fix strategy is robust as defined by Connors and Radtke.[9] They demonstrated that “a sound trading strategy should provide results that vary slightly when the strategy parameters are varied by a small amount.” The results presented in Table 3 show that relatively small changes in the period of the MA result in relatively small changes in performance. There is also a linear trend in the percentage returns and drawdowns; based on the data, it appears that an MA longer than 20 periods would improve the performance of the strategy. These results increase confidence that the parameter has not been curve-fit to past data, and therefore that the strategy is likely to perform in a similar manner in the future.
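A stability sweep of the kind summarized in Table 3 can be sketched by re-running the backtest across nearby MA lengths. The helper below reuses the illustrative functions from the previous sketches; `close` and `low` stand for the reader’s own price series:

```python
def run_backtest(close, low, ma_len):
    """Annualized return and maximum drawdown for one MA length."""
    rets = strategy_returns(close, vix_fix_position(close, low, ma_len))
    equity = (1 + rets).cumprod()
    max_dd = (equity / equity.cummax() - 1).min()
    annual = equity.iloc[-1] ** (252 / len(equity)) - 1
    return annual, max_dd

# Small parameter changes should produce small performance changes.
for ma_len in (10, 15, 20, 25, 30):
    print(ma_len, run_backtest(close, low, ma_len))
```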
A Real Time Test
To supplement the back testing presented in the previous section, this section presents the results of a real time test. The test is modeled on the one presented in Stock Selection: A Test of Relative Stock Values Reported over 17 1⁄2 Years, the 2001 Dow Award winning paper by Charles D. Kirkpatrick II, CMT.[10] As Kirkpatrick noted, “the best and most convincing test of any theory is to see if it works by itself using completely unknown data.” That is the type of test that was conducted with the VIX Fix.
Traders can also apply the VIX Fix to benefit from price moves in the options market. Options prices incorporate a number of factors, volatility being one of the most important. If volatility is higher than average, traders selling options should be able to generate significant gains as volatility returns to a normal level and the option’s price declines.
From September 20, 2013 through September 26, 2014, real time trade recommendations were published in a weekly newsletter.[11] Each week, three to five put option selling recommendations were provided to subscribers.
The trades were based primarily on the VIX Fix indicator. If a stock chart showed that the VIX Fix had fallen below its 20-week MA in the previous week, a put option meeting a minimum income requirement was identified. To be recommended, the put sale needed to generate a return on investment of at least 3% of the required margin deposit. All of the options recommended expired in less than 90 days. Assuming profits are reinvested, this strategy could produce an annual return on investment of more than 10% a year if the win rate is high; for example, compounding four consecutive 3% trades yields (1.03)^4 − 1 ≈ 12.6%.
The results of the test indicate how the trades would have performed if each one was acted on and are summarized in Table 4.
Given the realities of the options market, the strategy relying on the VIX Fix performed significantly better than random entries, with 92.6% wins compared to the 5.5% win rate expected from randomly selling puts and allowing them to expire.
Many investors erroneously believe that selling puts is a high probability trading strategy with an expected win rate of 75% or more. This perception appears to be based on a study[12] which found that three out of four options held to expiration, on average, expire worthless. That study reviewed options on various futures contracts over a three-year period (1997-1999) and found that 76.5% of options contracts held to expiration on the Chicago Mercantile Exchange (CME) expired worthless. The study is misleading because it does not include contracts closed prior to expiration. According to the widely quoted study, 6.3 million option contracts expired worthless in 1999; according to CME data, 115 million options contracts were traded that year.[13] Most options contracts are closed prior to expiration, and because there is a buyer and a seller for each contract, half of those positions were closed with a gain and half with a loss. Of all contracts traded, just 5.5% (6.3 million of 115 million) expired worthless that year.
Another reality of the options markets is that a high win rate is not always an indicator of profitability for an options selling strategy. It is possible to win small amounts on a high percentage of trades and suffer extraordinary losses on the remaining trades. This means the risk of ruin is high for options sellers.
This test did not apply any risk management rules to decrease the risk of ruin. In practice, traders could apply a stop-loss rule or some other strategy to close out options positions when they show large losses.
To determine whether this test would have resulted in profits over the test period, a simple calculation was made assuming the losing trades were exited at the closing market price on the Friday before the option expired. Under that assumption, the profits from winning trades exceeded the cost of closing losing trades, both cumulatively and in each of 2013 and 2014.
Based on the real time test with a provision for including the impact of losses, the VIX Fix strategy can be combined with a minimum income requirement rule to implement a profitable put selling strategy.
Conclusion
Back testing demonstrates the VIX Fix can be used effectively as part of a trading strategy. This is an indicator that was fully disclosed by its developer, Larry Williams, and is available in the public domain. The tests described in this paper use the VIX Fix exactly as it was described in 2007.
This paper adds an MA to the VIX Fix to develop a complete trading strategy. Applying the VIX Fix with its MA to weekly data seems to be more profitable than using daily data. As demonstrated in this paper, VIX Fix can also be used to trade options. Results of a real time test of the put selling strategy reveal that traders can find high probability trading opportunities with the VIX Fix.
Further research on the VIX Fix could be done to determine how effective the indicator is when combined with other indicators such as Bollinger Bands. The length of the MA used for trading could also be optimized through additional testing. Research could also be conducted on international markets and other asset classes.
References
[1] Brenner, M., & Galai, D. (1989). New Financial Instruments for Hedging Changes in Volatility. Financial Analysts Journal, 45(4), 61-65. Retrieved November 20, 2014, from http://people.stern.nyu.edu/mbrenner/research/FAJ_articleon_Volatility_Der.pdf
[2] Whaley, R. (1993). Derivatives on Market Volatility: Hedging Tools Long Overdue. The Journal of Derivatives, 71- 84. Retrieved December 1, 2014, from http://rewconsulting.files.wordpress.com/2012/09/jd93.pdf
[3] VIX® Index. (n.d.). Retrieved November 20, 2014, from http://www.cboe.com/micro/VIX/vixintro.aspx
[4] The CBOE Volatility Index – VIX®. (2014, August 31). Retrieved December 1, 2014, from http://www.cboe.com/micro/vix/vixwhite.pdf
[5] http://www.cboe.com/micro/equityvix/introduction.aspx
[6] Williams, L. (2007, December). The VIX Fix. Active Trader Magazine, 24-32.
[7] Poterba, J. and Summers, L. (1987). Mean Reversion in Stock Prices: Evidence and Implications. NBER Working Paper Series, Working paper No. 2343. Retrieved December 23, 2014, from http://www.nber.org/papers/w2343.pdf
[8] Test was conducted using Trade Navigator and data provided by Genesis Financial.
[9] Connors, L., & Radtke, M. (2014). Parameter-Results Stability: A New Test of Trading Strategy Effectiveness. Journal of Technical Analysis, (57), 30-34. Retrieved December 20, 2014, from http://www.mta.org/eweb/docs/Issues/57 – 2002Winter.pdf
[10] Kirkpatrick II, C. (2002). Stock Selection: A Test of Relative Stock Values Reported over 17 1/2 Years. Journal of Technical Analysis, (68), 61-68. Retrieved December 31, 2014, from https://docs.cmtassociation.org/journal-ta/jota68_2014.pdf
[11] Hestla-Barnhart, A. (2014, January 1). Income Trader Pro. Retrieved from http://www.profitabletrading.com/
[12] Summa, J. (n.d.). Do Option Sellers Have a Trading Edge? Retrieved December 5, 2014, from http://www.investopedia.com/articles/optioninvestor/03/100103.asp
[13] Simon, M. (2014). Options on Futures: A Market Primed for Further Expansion. TABB Group, 12(10), 2-13. Retrieved December 6, 2014, from http://www.cmegroup.com/education/files/options-on-futures-a-marketprimed-for-further-expansion.pdf
Sell in May with a Lindsay Overlay
by Ed Carlson
About the Author | Ed Carlson
Ed Carlson, who holds the Chartered Market Technician (CMT) designation, is the Chief Market Technician at Seattle Technical Advisors. He has close to 30 years of experience in the markets and is a leading expert on George Lindsay’s market timing methods. Seattle Technical Advisors provides technical analysis of the equity, fixed-income, commodity, and currency markets for RIAs, hedge funds, money managers, and financial advisors.
Ed is the author of George Lindsay and the Art of Technical Analysis and the annotated edition of An Aid To Timing. In George Lindsay and the Art of Technical Analysis, Ed Carlson demonstrates the immense power of Lindsay’s methods in today’s markets. Using visual models, Carlson explains Lindsay’s models clearly, simply, and intuitively.
Abstract
The Sell in May strategy is generally thought of as a way to avoid market volatility during the May-October period each year, when little to no return is expected. Applying an overlay based on the methods of George Lindsay to the Sell in May strategy produces consistently positive results, giving investors both a decision tree for deciding whether or not to exit the market and a significant reduction in the time period during which the low of any drawdown is expected.
INTRODUCTION
“Sell in May and go away” is taken from the old British catchphrase “Sell in May and go away and come on back on St Leger’s Day.” Established in 1776, the St Leger Stakes is a horse race run in September each year. In his paper Wayne’s Sell in May Essay (April 2013), Wayne Whaley found that the optimal period for Sell in May (since 1950) has been May 5 through October 27. He concluded that “an investor could have accrued every penny of the historical return of the S&P with 47.4% less market exposure, by only being in the market those 192 calendar days of the year.” While Whaley’s 63-year sample showed a return of near zero percent during May-October, it also showed that investors would have avoided an average peak-to-trough drawdown of 14.12% by being out of equities during this period.
The 12 year interval
The six-month duration of the Sell in May archetype corresponds to the 12 year interval discovered by the technician George Lindsay (1906-1987). Lindsay’s 12 year interval was the first step in his process for identifying important lows in the Dow industrials index. Counting forward in time from a significant high, he found that significant lows arrive during a six-month window 12 years, 2 months to 12 years, 8 months after that high. While Lindsay’s preserved written work shows no method for quantifying what a significant high is, he wrote that these highs are those that “stand out” on a chart; a good reminder from the days before personal computers that we “don’t have to be a chicken to know what an egg is”.
In Figure 1 we can see how counting a 12 year interval from the high in January 2000 surrounds the low in June 2012. Lindsay’s long term interval called for a low in the six-month period from March to September 2012.
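The interval arithmetic is simple to automate. The sketch below, which assumes the third-party dateutil package and uses an illustrative mid-January date for the 2000 high, returns the six-month window in which a low is expected:

```python
from datetime import date
from dateutil.relativedelta import relativedelta

def twelve_year_interval(significant_high: date):
    """Lindsay's 12 year interval: a significant low is expected
    12 years 2 months to 12 years 8 months after a significant high."""
    return (significant_high + relativedelta(years=12, months=2),
            significant_high + relativedelta(years=12, months=8))

# The January 2000 high targets a low between March and September 2012.
print(twelve_year_interval(date(2000, 1, 14)))
```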
LONG CYCLES
This article reviews the Dow industrials index during the time period from 1921 through 2002. This period was chosen as it contains four complete periods which Lindsay called the Long Cycle: 1921-1942, 1942-1962, 1962-1982, and 1982-2002. Lindsay identified Long Cycles from 1798 until 1949 in his paper An Aid to Timing (pub. 1950). Long Cycles are roughly twenty years in duration. Lindsay had a specific method for identifying them which is beyond the scope of this article. Figure 2 shows the long cycles during the twentieth century. Note the symmetry in duration that resulted from the use of Lindsay’s methods to identify these cycles.
The drawdowns referenced in Whaley’s paper notwithstanding, an examination of the entire time span from 1921 until 2002 finds that remaining in the market during the May-October period was net positive and a simple Sell in May strategy would have yielded an average annual loss of 0.70%.
Looking only at those 12 year intervals that were coincident or overlapping with the Sell in May time period, the average return during the six-month period identified by Lindsay’s 12 year interval was also net positive, and exiting the market during this period each year would have resulted in an average loss of 1.83%. Using this interval as a stand-alone model for exiting the market (similar to Sell in May) would not have been a good approach. It should be noted that Lindsay never intended the interval to be used in this fashion.
Overlapping intervals
The results become much more interesting when we narrow our focus solely to the period of time during which the two models overlap each other. If the 12 year interval doesn’t begin until June or later, that later date is used as the beginning of the overlapping period and October 27 is the end of the overlapping period.
If the 12 year interval begins before May, then May 5 is the beginning of the overlapping period and the end of the 12 year interval marks the end of the overlapping period.
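Both rules amount to taking the later of the two start dates and the earlier of the two end dates. A minimal sketch, assuming the Sell in May window belongs to the year in which the 12 year interval begins:

```python
from datetime import date

def overlap_window(interval_start: date, interval_end: date):
    """Overlap of a 12 year interval with the Sell in May window
    (May 5 - October 27); returns None if the two do not overlap."""
    year = interval_start.year
    start = max(interval_start, date(year, 5, 5))
    end = min(interval_end, date(year, 10, 27))
    return (start, end) if start <= end else None
```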
In Figure 3, the 12 year interval counted from April 1990 targets a low in the period from June to December, 2002. The 12 year interval from July 1990 targets a low in the period from September 2002 until March 2003. These two Lindsay intervals narrow the Sell in May interval to the one-month period from September to October 2002 and capture the bear market low on October 10, 2002.
With this approach the results from being out of the market swing from negative to positive with an average return of 0.49%. We also find that the time period during which we expect to find the bottom of the expected drawdown narrows from the six-month period of each separate approach to an average period of less than four months. In seven of those instances (1958, 1967, 1987, 1998, 2000, 2001, and 2002) two 12 year intervals overlapped with the May-October period helping to narrow the overlapping period even further.
A separate approach is to assume the overlapping period always begins on May 5 and terminates at the end of the 12 year interval or October 27, whichever comes first. In this case, the average return from being out of the market increases to 0.54% and the time period again contracts from six months to less than four months.
The four complete long cycles can be thought of as secular bear and bull cycles. The 1921-1942 and 1962-1982 long cycles are bear cycles, as both ended close to the price level at which they began. The 1942-1962 and 1982-2002 long cycles are bull cycles (see Figure 2). During the first two long cycles of the twentieth century, breaking out the results by long cycle provides no advantage for the methods described above.
However, during 1963-1982 the practice of Sell in May provided an average return of 1.69%, and during 1983-2002 an average return of 0.75%. Using only the 12 year interval to disengage from the Dow resulted in negative returns of 0.68% and 1.69%, respectively.
The use of overlapping intervals resulted in a positive average return of 0.97% in 1963-1982 and 1.98% in 1983-2002. Always using May 5 as the starting date of the overlapping period resulted in average gains of 1.94% and 1.33%, respectively.
BASIC CYCLES
Lindsay’s concept of basic cycles is composed of basic advances and basic declines and usually conforms to what most market participants think of as cyclical bull and bear markets albeit without the restriction of the arbitrary 20% minimum decline to define bear markets. Sometimes the declining portion of the cycle (basic decline) is less than 20% and serves as a multi-month consolidation between two separate cyclical bull markets (basic advances). Similar to long cycles, Lindsay had a specific method for identifying basic cycles which is beyond the scope of this article. Figure 4 depicts the basic cycles during the 1962-1982 long cycle.
Being aware of the basic cycles and taking the time to determine whether the Dow is in a basic advance or basic decline during the time in question yields some staggering results.
1921-1942 Bear long cycle
During 1921-1942 there were seven years in which the Sell in May period occurred during a basic decline. Exiting the market in those years yielded a positive average return of 15.29%. This long cycle contains twelve years in which Sell in May occurred during a basic advance. Exiting the market in those years yielded a negative average return of 16.73%. Additionally, there were three years in which the basic cycle changed from up to down (or down to up) during the period in question; these years are not included in the calculations.
Using the 12 year interval to exit the market during basic declines produced a positive average return during those seven years of 11.15%. Exiting the market during the 12 year interval in those years which contained a basic advance resulted in a negative average return of 12.43%.
Using the approach of overlapping intervals during those seven basic declines produced a positive average return of 9.24% to those who exited the market. Exiting the market during the twelve years which contained a basic advance produced a negative average return of 10.00%.
Employing the approach of always exiting the market on May 5 and re-entering at the end of the 12 year interval or October 27 (whichever came first) produced a 14.27% average return during the seven years of basic declines and a negative average return of 12.43% during the twelve years of basic advances.
1942-1962 BULL LONG CYCLE
During 1943-1962 there were only four years in which the Sell in May period occurred during a basic decline. Exiting the market in those years yielded a positive average return of 10.00%. This long cycle also contains twelve years in which Sell in May occurred during a basic advance. Exiting the market in those years yielded a negative average return of 6.20%. Additionally, there were four years during which the basic cycle changed from up to down (or down to up); these years are not included in the calculations.
Using the 12 year interval to exit the market during basic declines produced a positive average return during those four years of 5.43%. Exiting the market during the 12 year interval in those years which contained a basic advance resulted in a negative average return of 7.40%.
Using the approach of overlapping intervals during those four basic declines produced a positive average return of 9.74% to those who exited the market. Exiting the market during the twelve years which contained a basic advance produced a negative average return of 3.30%.
Employing the approach of always exiting the market on May 5 and re-entering at the end of the 12 year interval or October 27 (whichever came first) produced an 8.62% average return during the four years of basic declines and a negative average return of 5.26% during the twelve years of basic advances.
1962-1982 BEAR LONG CYCLE
During 1963-1982 there were six years in which the Sell in May period occurred during a basic decline. Exiting the market in those years yielded a positive average return of 11.45%. This long cycle also contains thirteen years in which Sell in May occurred during a basic advance. Exiting the market in those years yielded a negative average return of 1.32%. Additionally, there was one year during which the basic cycle changed from down to up; that year is not included in the calculations.
Using the 12 year interval to exit the market during basic declines produced a positive average return during those six years of 10.18%. Exiting the market during the 12 year interval in those years which contained a basic advance resulted in a negative average return of 3.50%.
Using the approach of overlapping intervals during those six basic declines produced a positive average return of 8.90% to those who exited the market. Exiting the market during the thirteen years which contained a basic advance produced a negative average return of 0.58%.
Employing the approach of always exiting the market on May 5 and re-entering at the end of the 12 year interval or October 27 (whichever came first) produced an 11.85% average return during the six years of basic declines and a negative average return of 1.12% during the thirteen years of basic advances.
1982-2002 BULL LONG CYCLE
During 1983-2002 there were only three years in which the Sell in May period occurred during a basic decline. Exiting the market in those years yielded a positive average return of 9.93%. This long cycle also contains thirteen years in which Sell in May occurred during a basic advance. Exiting the market in those years yielded a negative average return of 3.63%. Additionally, there were four years during which the basic cycle changed from down to up (or up to down); these years are not included in the calculations.
Using the 12 year interval to exit the market during basic declines produced a positive average return during those three years of 3.11%. Exiting the market during the 12 year interval in those years which contained a basic advance resulted in a negative average return of 4.76%.
Using the approach of overlapping intervals during those three basic declines produced a positive average return of 4.77% to those who exited the market. Exiting the market during the thirteen years which contained a basic advance produced a negative average return of 0.84%.
Employing the approach of always exiting the market on May 5 and re-entering at the end of the 12 year interval or October 27 (whichever came first) produced a 10.54% average return during the three years of basic declines and a negative average return of 2.97% during the thirteen years of basic advances.
As shown in Figure 5, when exiting the market during basic declines the Sell in May strategy produced an average return of 11.67%, the 12 year interval produced an average return of 7.47%, simple overlapping intervals produced an average return of 8.16%, and overlapping intervals always beginning on May 5 produced an average return of 11.32%.
When exiting the market during basic advances the Sell in May strategy produced a negative average return of 6.97%, the 12 year interval produced a negative average return of 7.02%, simple overlapping intervals produced a negative average return of 3.68%, and overlapping intervals always beginning on May 5 produced a negative average return of 5.45%.
CONCLUSION
During 1921-2002, the simple approach of exiting the market during the period in which the Sell in May window and the 12 year interval overlap changes each model’s stand-alone results from negative to positive, with an average return of 0.49%.
We also find that the time period during which we expect to find the bottom of the expected drawdown narrows from the six-month period of each separate approach to an average period of less than four months.
A separate approach of the overlapping period always beginning on May 5 and terminating at the end of the 12 year interval or October 27, whichever comes first, increases the average return from being out of the market to 0.54% and the time period for the low of the drawdown again contracts from six months to less than four months. Regardless of which long cycle is examined, the results of each of the four approaches, when taking the basic cycles into consideration, are consistent. Each of the four approaches produced a positive average return when exiting the market during basic declines and negative average returns during basic advances.
EMD-Candlestick
by Raymond H. Chan
About the Author | Raymond H. Chan
Raymond H. Chan received his B.Sc. degree in Mathematics from the Chinese University of Hong Kong and his M.Sc. and Ph.D. degrees in Applied Mathematics from New York University. He is now a Choh-Ming Li Chair Professor and Chairman of the Department of Mathematics at The Chinese University of Hong Kong. He is a Fellow and Council Member of the US-based Society for Industrial and Applied Mathematics (SIAM). His research interests include numerical linear algebra, financial mathematics, and image processing.
by Alfred Ka Chun Ma
About the Author | Alfred Ka Chun Ma
Alfred Ka Chun Ma received his B.Sc. and M.Phil. degrees in Mathematics from the Chinese University of Hong Kong and his Ph.D. degree in Operations Research from Columbia University. He is now a Managing Director at CASH Axiom Capital Limited. He is a CFA charterholder, a Professional Risk Manager, and an Associate of the Society of Actuaries.
by Hao Pan
About the Author | Hao Pan
Hao Pan received his B.Sc. degree in Mathematics from the Chinese University of Hong Kong. He is now a Quant Analyst at CASH Axiom Capital Limited. His research interests include financial mathematics and technical analysis.
Abstract
This paper proposes an application of Empirical Mode Decomposition (EMD) in technical analysis. The EMD-candlestick is designed to replace the traditional candlestick as the signal generator in technical trading strategies in order to improve their profitability. We investigate a representative set of technical trading strategies, including moving average, trading range break-out, relative strength index, and intraday and interday trading rules, using the securities included in the Dow Jones Industrial Average from 1993 to 2012. Empirical results show that variable length moving average rules, relative strength index rules, and intraday and interday trading rules are more profitable when the EMD-candlestick is used than when the traditional candlestick is used.
Editor’s Note: The research was partially supported by the CUHK DAG grant 4053007 and by HKRGC grants CUHK40041, CUHK2/CRF/11G, and AoE/M-05/12. The authors would like to thank Bankee Kwan for his continuous support of Mathematics education and its applications in finance.
INTRODUCTION
Since Huang et al. (1998) introduced the empirical mode decomposition (EMD), there have been many successful applications of this methodology in various areas. Echeverria et al. (2001) suggest the use of EMD and the associated Hilbert spectral representation as time-frequency analysis tools for heart rate variability data. Nunes et al. (2003) extend the one-dimensional decomposition to two-dimensional data and apply it to texture extraction and image filtering. Coughlin and Tung (2004) extract a clear 11-year solar cycle signal from stratospheric data using EMD. Liang et al. (2005) apply EMD to the analysis of esophageal manometric time series in gastroesophageal reflux disease. Liu et al. (2006) apply EMD to analyze vibration signals for localised gearbox fault diagnosis and find that EMD is more effective than the often-used wavelet transform in detecting vibration signatures.
Since EMD is a powerful adaptive data analysis tool, especially for non-stationary, non-linear data, it has been applied in various areas of finance. Zhu (2006) studies a technique for suspicious transaction detection that first decomposes a complex financial time series into intrinsic mode functions (IMFs) and a residue representing different time scales, such as daily, monthly, seasonal or annual scales. Zhang et al. (2008) apply EMD to crude oil price analysis and explain the oil price as the composite of a long-term trend, the effect of shocks from significant events, and short-term fluctuations caused by normal supply-demand disequilibrium. Guhathakurta et al. (2008) use EMD to analyse two financial time series and compare the probability distributions of their IMF phases and amplitudes. Drakakis (2008) applies the EMD technique to Dow Jones volume and makes some inferences on its frequency content. An introduction to EMD is given in Chan et al. (2014), where the authors also design a trading strategy using EMD and investigate its profitability using the daily Hang Seng Index and China Shanghai Composite Index. In this context, we focus more on the technical analysis aspect and consider the EMD as a frequency-pass filter.
In finance, technical analysis is a security analysis methodology for forecasting future price trends using historical financial data, mainly price and volume. Many technical analysis tools are based in particular on daily close prices to derive trading strategies. Empirical results on the profitability of these technical trading strategies are mixed. Brock et al. (1992) show the forecasting ability of the moving average and trading range break rules on the Dow Jones Industrial Average Index over a period of 90 years. Bessembinder and Chan (1995) extend Brock’s study and find that the trading rules are successful in the emerging markets of Malaysia, Thailand and Taiwan. Some other empirical studies, including Sweeney (1988), Taylor and Allen (1992), and Neely et al. (1997), show the usefulness of technical trading rules based on the daily close price. However, Mills (1997), Ito (1999) and Marshall et al. (2008) find that the returns of technical trading strategies are not significantly different from the return of a buy-and-hold strategy. Park and Irwin (2007) review and summarize 95 modern studies on the profitability of technical analysis and observe that 56 of them find positive results, 20 obtain negative results, and the rest indicate mixed results.
In addition to the aforementioned rules that are based only on past daily closing prices, there are also rules that include the daily high, low and open prices. A good representation of these four daily prices is the candlestick graph, a technical analysis tool credited to Munehisa Homma (see, for instance, Marshall et al. (2006)). The candlestick is essentially a summary of the daily performance of the underlying stock and is completely determined by the open price, close price, daily high, and daily low. Fiess and MacDonald (2002) investigate the informational content of these four prices and their value in forecasting volatility and future levels of daily exchange rates. Lam and Chong (2006) study the profitability of the directional indicator, which takes daily high, low and close prices to generate signals. Lam et al. (2007) use all the contents of the candlestick to examine whether a day’s surge or plummet in stock price can serve as a market entry or exit signal and find that the trading rules perform well on the Asian indices.
While the profitability of technical trading strategies remains controversial, we attempt to improve their performance using EMD. One of the reasons most technical analysis tools use the candlestick as a building block is the belief that the candlestick is a genuine summary of the daily performance of the stock. However, noise at the intraday level may affect the use of the four prices as a summary. To this end, we hypothesize that constructing the candlestick from a less noisy level of data can generate more profitable technical trading strategies. We employ the EMD methodology to separate the information from the noise in intraday data for two reasons. First, the EMD allows for local extraction of information and makes no assumption of linearity or stationarity, as classic Fourier analysis does. Second, intrinsic time scales of the data are used in the decomposition: each resulting component has a different average frequency and is defined by the amplitude variations in the original time series. Therefore, the EMD is an ideal data-adaptive tool to decompose real-time data and retrieve frequency components that have actual physical significance. In this paper, the EMD is used as a low-pass filter on stock prices to filter out the noise, and we define the candlestick constructed using EMD as the EMD-candlestick.
The usefulness of the EMD-candlestick can be empirically tested with technical analysis. We follow the literature to identify a set of technical trading strategies that are based on the contents in candlestick. In this paper, we conduct empirical tests to determine whether the use of EMD-candlestick instead of the traditional candlestick can improve the profitability of these strategies.
The rest of the paper is organized as follows. Section 2 describes the EMD methodology and how it is applied to intraday financial data. Section 3 describes the data, the technical trading strategies we identify in the literature for the empirical tests and the hypothesis tests. Section 4 concludes our findings.
Empirical Mode Decomposition
The empirical mode decomposition (EMD), proposed by Huang et al. (1998), is a data-adaptive algorithm which decomposes a real-time signal into a finite, and often small, number of intrinsic mode functions (IMFs) and a residue. A summary of EMD is given in Chan et al. (2014). An IMF is defined as a function satisfying the following conditions:
- The number of extrema and the number of zero-crossings are either equal or differ at most by one for any time interval.
- The mean of the upper envelope (connecting the local maxima) and the lower envelope (connecting the local minima) is zero at every point.
Given a discrete signal x = (x1, x2, …, xn), the steps of the EMD algorithm are:
1. Identify all the maxima and minima of x.
2. Connect all maxima and all minima by cubic splines to form the upper envelope, envelope_max, and the lower envelope, envelope_min, respectively.
3. Calculate the local mean, denoted by a = (a1, a2, …, an), of the upper and lower envelopes.
4. Obtain the detail d = x − a.
5. Check whether d meets the conditions of an IMF. If not, repeat Steps 1 to 4 using d as the new x until a stopping criterion is satisfied; this iteration is often known as the sifting process. Different stopping criteria are proposed by Huang et al. (2003) and Rilling et al. (2003). In our research, we adopt the criterion proposed by Rilling et al. (2003): with the mode amplitude e = (envelope_max − envelope_min)/2 and the evaluation function σ = |a/e|, sifting stops when σ < θ1 on at least the fraction (1 − α) of the data and σ < θ2 everywhere, with typical values θ1 = 0.05, θ2 = 10θ1, and α = 0.05.
6. The detail obtained after the sifting process is called the first IMF, c1 = (c1,1, c1,2, …, c1,n). Repeat the sifting process on the residue r1 = x − c1 to obtain the second IMF c2. This procedure is repeated on all subsequent residues, so that rj = rj−1 − cj for j = 2, 3, …, m.
7. End the operation when the residue rm becomes smaller than a predetermined value or becomes a monotonic function that cannot be further decomposed. After the above EMD process, we have decomposed the data x into m IMFs and a residue rm; mathematically,
x = c1 + c2 + … + cm + rm.
Since the local mean with low frequency has been iteratively removed from the original data, the first IMF c1 should represent the highest-frequency component and subsequent IMFs have lower and lower frequency ranges. The residue can either be the mean trend or a constant. For data with a trend, the final residue rm should be that trend (Huang et al., 1998).
Since the IMFs and the residue, as the output of EMD, have different frequency ranges, the summation of the low-frequency IMFs with the residue can be considered the output of passing the original data x through a low-pass filter, for instance,
x_low = ci0 + ci0+1 + … + cm + rm,
where i0 is close to m; this sum is a low-frequency version of x. In particular, the residue term rm itself represents the output of x after filtering out all high-frequency components.
Let the given data x be the vector of the stock prices in a day, and let rm be the residual term of x from EMD. Then the EMD-candlestick is defined by taking the first, maximum, minimum, and last values of rm as the open, high, low and close prices, respectively. Under this definition, the EMD-candlestick can be considered a summary of the daily performance of the stock after filtering out the high-frequency components, in other words, the intraday noise. The main purpose of this study is to investigate whether this EMD-candlestick can improve the performance of technical analysis trading rules, compared with the performance of these rules using the traditional candlestick generated from the same data.
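The construction can be prototyped compactly. The sketch below is a deliberately simplified EMD, assuming numpy and scipy: it pins the spline envelopes at the series endpoints, ignores plateau extrema (common in forward-filled tick data), and uses a crude amplitude test in place of the Rilling et al. stopping criterion, so a vetted EMD library should be preferred for serious work. It then builds the EMD-candlestick from the residue exactly as defined above:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def _extrema(x):
    """Indices of interior local maxima and minima of a 1-D array."""
    d = np.diff(x)
    fwd, bwd = np.hstack([d, 0.0]), np.hstack([0.0, d])
    maxima = np.where((bwd > 0) & (fwd < 0))[0]
    minima = np.where((bwd < 0) & (fwd > 0))[0]
    return maxima, minima

def _sift(x, max_iter=50):
    """Extract one IMF by repeatedly subtracting the envelope mean.
    Returns None when too few extrema remain (x is then the residue)."""
    t, h = np.arange(len(x)), x.copy()
    for _ in range(max_iter):
        maxima, minima = _extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            return None
        # Pin both envelopes at the endpoints so the splines span the data.
        upper = CubicSpline(np.r_[0, maxima, len(h) - 1],
                            np.r_[h[0], h[maxima], h[-1]])(t)
        lower = CubicSpline(np.r_[0, minima, len(h) - 1],
                            np.r_[h[0], h[minima], h[-1]])(t)
        mean = 0.5 * (upper + lower)
        h = h - mean
        if np.mean(np.abs(mean)) < 0.05 * np.mean(np.abs(h)):
            break  # crude stand-in for the Rilling et al. criterion
    return h

def emd_residue(prices, max_imfs=20):
    """Peel off IMFs until none can be extracted; return the residue."""
    r = np.asarray(prices, dtype=float)
    for _ in range(max_imfs):
        imf = _sift(r)
        if imf is None:
            break
        r = r - imf
    return r

def emd_candlestick(prices):
    """Open/high/low/close taken from the EMD residue, not the raw ticks."""
    r = emd_residue(prices)
    return {"open": r[0], "high": r.max(), "low": r.min(), "close": r[-1]}
```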
Methodology
Data
In this study, we select the data following Scalas et al. (2004), who use securities in the Dow Jones Industrial Average Index (DJIA) in their analysis of waiting times in high-frequency financial data. The data used in this study are retrieved from Wharton Research Data Services (WRDS) through the Trade and Quote (TAQ) dataset. We collect TAQ data for the period 1993 to 2012 and consider all DJIA securities with complete data during this period. There are 14 securities satisfying this criterion; their codes are: AXP, BA, CAT, DD, DIS, GE, IBM, JPM, KO, MCD, MMM, MRK, PG and UTX. We follow Liu and Maheu (2008) and filter out invalid trades using the correction indicator: trade data are kept only if the correction indicator equals 0 or 1, which refer to regular trades and later-corrected trades, respectively. The traditional candlestick is easily obtained from the original data by taking the price of the first trade as the open price, the maximum price of all trades as the high, the minimum price of all trades as the low, and the price of the last trade or the official closing price as the close.
Since the EMD algorithm needs a regularly spaced time series as input, 1-second data within each day are used to generate the EMD-candlestick. For each day, the time range of the dataset runs from the time of the first trade to the time of the last trade of the day. For every second in this range, the price of the last trade in that second is used to form the input to the EMD. If there is no trade in a given second, the price is carried over from the previous second. The sequence of prices in one day is then processed by EMD to obtain the residue.
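With tick data in a pandas DataFrame, the forward-filled 1-second series described above can be built in one line of resampling. A small sketch, assuming columns named 'time' (trade timestamps) and 'price':

```python
import pandas as pd

def one_second_prices(trades: pd.DataFrame) -> pd.Series:
    """Regular 1-second series: the last trade price in each second,
    carried forward through seconds with no trades."""
    prices = trades.set_index("time")["price"].sort_index()
    return prices.resample("1s").last().ffill()
```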
Take the security GE on 22 January 2012 as an example. This security was traded in the first second and the last second of the trading session that day. Since the trading session of the New York Stock Exchange runs from 9:30 AM to 4:00 PM, a duration of 23,400 seconds, a list of prices x = (x1, x2, …, x23400) can be formed according to the above rules. Then x is processed by EMD to obtain the IMFs c1, c2, …, c13, and the final residue r, as illustrated in Figure 1.
As shown in Figure 1, the IMFs c1, c2, …, c11 represent components of x at different average frequencies, and the residue r represents the trend of the original data after filtering out the high-frequency components. The EMD-candlestick of one day can then be formed by taking the first, maximum, minimum and last values of r as the daily open, high, low and close prices. Table 1 compares the traditional candlestick and the EMD-candlestick for this example. It can be observed that the values of the EMD-candlestick are numerically close to those of the traditional candlestick. However, the trends of the original data x and of the residue r differ: the open price is larger than the close price for the original data, while the open price is less than the close price for the EMD-candlestick. This difference may lead to different results for trading strategies that use these two kinds of candlestick as input.
Figure 2 shows the series of daily traditional candlesticks and EMD-candlesticks of GE in the year of 2012. It can be observed that the two series have similar trends, but the EMD series shows a clearer price path and has less noise in the intraday level.
Trading Rules
To evaluate the quality of the EMD-candlestick methodology, we investigate its impact on the profitability of trading rules available in technical analysis. We select several classes of trading rules that capture different characteristics of the candlestick as the input to their signals.
First, we follow Brock et al. (1992) and evaluate the same set of twenty-six technical trading rules, including ten Variable Length Moving Average (VMA) rules, ten Fixed Length Moving Average (FMA) rules and six Trading Range Break (TRB) rules, with the official closing price and the EMD closing price acting as signal generators. Second, we study the relative strength index (RSI), a popular indicator proposed by Wilder (1978) that measures the strength or weakness of price movements. Wong et al. (2003) propose a trading strategy on RSI using the ‘50 crossover’ rule and investigate its profitability in the Singapore stock market. The same set of four RSI rules is evaluated in this paper.
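For reference, a compact version of the indicator and the ‘50 crossover’ rule is sketched below, assuming pandas and Wilder’s smoothing; the exact parameters used by Wong et al. (2003) may differ:

```python
import pandas as pd

def wilder_rsi(close: pd.Series, n: int = 14) -> pd.Series:
    """Relative Strength Index with Wilder's smoothing (alpha = 1/n)."""
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / n, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / n, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)

# '50 crossover' rule on official or EMD closing prices (`close`):
rsi = wilder_rsi(close)
buy = (rsi > 50) & (rsi.shift(1) <= 50)    # RSI crosses above 50
sell = (rsi < 50) & (rsi.shift(1) >= 50)   # RSI crosses back below 50
```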
Third, we study trading rules that use more than the close price. Lam et al. (2007) propose Intraday and Interday Momentum (IIM) rules, using the daily open, high, low and close prices as signal generators. The same set of 45 IIM rules is evaluated. The details and the parameters examined for each of the aforementioned strategies are provided in Appendix A.
Hypothesis testing
For each rule of a given strategy, we calculate the average daily return (or average 10-day return for the FMA and TRB trading rules) of each of the 14 securities, using the traditional candlestick and the EMD-candlestick as signal generators respectively. We thus obtain a sample of average returns of the 14 securities using the traditional candlestick as signal generator, S = (r1, r2, …, r14), and another sample using the EMD-candlestick, S′ = (r′1, r′2, …, r′14). Let μ1 and μ2 denote the average returns of the two samples S and S′, respectively. In order to investigate whether the EMD-denoised prices can improve the performance of the technical trading strategies, we test the null hypothesis that generating signals using the traditional candlestick produces higher returns than using the EMD-candlestick, i.e.,
H0 : μ1 ≥ μ2 against the one-sided alternative H1 : μ1 < μ2.
Since the returns in the two samples are generated from the same securities, we use a paired t-test to compare the means of the two samples. The test statistic is calculated as

t = d̄ / √(s²/n),

where d̄ is the mean of the pairwise differences between the two samples, s² is the sample variance of those differences, and n is the sample size; under the null hypothesis, t follows a t-distribution with n − 1 degrees of freedom.
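As a concrete illustration, the one-sided paired test can be run either by hand or with SciPy; the sketch below shows both, assuming two equal-length arrays of average returns. The variable names and the placeholder return values are ours, used purely for illustration.

import numpy as np
from scipy import stats

# Placeholder average returns per security (sample S and sample S').
rng = np.random.default_rng(0)
r_trad = rng.normal(0.0005, 0.002, 14)             # traditional candlestick
r_emd = r_trad + rng.normal(0.00003, 0.0005, 14)   # EMD-candlestick

d = r_emd - r_trad
n = len(d)
t_manual = d.mean() / np.sqrt(d.var(ddof=1) / n)   # paired t statistic

# Equivalent one-sided test with SciPy: H0: mu1 >= mu2 vs H1: mu1 < mu2,
# i.e., the mean of (r_trad - r_emd) is less than zero.
t_stat, p_value = stats.ttest_rel(r_trad, r_emd, alternative='less')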
Empirical results
Table 2 summarizes the results of all strategies in the empirical test. The second column gives the number of rules tested for each strategy. Let μ1 and μ2 denote the average daily returns (for the VMA, RSI, and IIM strategies) or the average 10-day returns (for FMA and TRB) using the traditional candlestick and the EMD-candlestick as signal generator respectively. Table 2 also shows the average difference (μ2 − μ1) between the two returns, the percentage of rules for which μ2 is higher, and the percentage of rules for which the null hypothesis is rejected at the five percent level. All three statistics are reported for the buy side, the sell side, and the overall result. The table shows that the VMA, RSI, and IIM strategies are more profitable, on the buy side, the sell side, and overall, when the EMD-candlestick is used than when the traditional candlestick is used.
In Table 3 (see Appendix B), results for trading strategies generating signals from the original close prices are presented in columns 2, 4, 6, 8, and 10: the number of buy signals, the number of sell signals, the mean return from buy signals, the mean return from sell signals, and the mean overall return from the VMA rules. The corresponding results for signals generated with the EMD closing prices are presented in columns 3, 5, 7, 9, and 11. The buy return using EMD prices as signal generator is higher, with an average difference of about 0.0034 percent. All ten tests show better returns using EMD prices, and seven are significant at the five percent level in a one-tailed test, rejecting the null hypothesis that signals generated with the traditional candlestick produce higher returns than those generated with EMD prices. The sell returns are similar: nine tests show better returns using EMD prices as signal generator, the average difference is −0.0040 percent, and four test results are significant at the five percent level. For the overall returns presented in columns 10 and 11, the average difference is 0.0036 percent and six test results are significant at the five percent level.
Tables 4 and 5 show the empirical results for FMA and TRB respectively. For these two strategies, generating signals with the EMD-candlestick does not in general produce higher returns than generating them with the traditional candlestick. For the overall return in the FMA case, only half of the test results show better returns using EMD prices, and only one of them is significant at the five percent level. For the overall return in the TRB case, only half of the test results show better returns using EMD prices and none is significant at the five percent level. For the cases in which the return from the traditional candlestick exceeds that from the EMD-candlestick, we also test the reverse null hypothesis, that signals generated with the EMD-candlestick produce higher returns than those generated with the traditional candlestick, and no result is significant at the five percent level.
Table 6 shows the results for the RSI trading strategy. Generating signals with the EMD-candlestick produces higher returns than the traditional candlestick for every rule tested. The average differences for buy, sell, and overall returns are about 0.1058, −0.0889, and 0.0972 percent respectively, and all results are significant at the five percent level.
Tables 7 to 9 show the results for the IIM trading strategy. Generating signals with the EMD-candlestick generally produces higher returns than the traditional candlestick. The average differences for buy, sell, and overall returns are about 0.0356, −0.0235, and 0.0298 percent respectively. For the overall return, 20 of the 45 test results are significant at the five percent level and 31 are significant at the ten percent level.
Conclusion
In this paper, we attempt to improve the profitability of technical trading strategies by using the EMD-candlestick instead of the traditional candlestick as signal generator. We empirically test the usefulness of the EMD-candlestick on the VMA, FMA, TRB, and RSI strategies, which use the close price as signal generator, and on the IIM strategy, which uses all the information in the daily candlestick. The EMD-candlestick significantly improves the performance of the VMA, RSI, and IIM strategies, while the effect is not significant for FMA and TRB. In view of the positive empirical results, we propose broader use of the EMD-candlestick in financial data analysis as a replacement for the traditional candlestick, which can be affected by noise. The EMD-candlestick methodology can also be applied readily to other areas of finance. Since we use only the residue term r in this paper, the effect of using the other IMFs {ci} will be investigated in future work.
Appendix A
Variable Length Moving Average (VMA)
The VMA rules generate a buy (sell) signal when the short moving average is above (below) the long moving average by a margin called the ‘band’. For a zero band, every day is classified as either buy or sell. For a non-zero band, days are classified as buy, sell, or neutral (when no signal is generated). Following Brock et al. (1992), the most popular rules are investigated: 1-50 (a 1-day short moving average against a 50-day long moving average), 1-150, 5-150, 1-200, and 2-200. Each rule is tested with and without a one percent band, for a total of ten rules.
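For concreteness, a minimal sketch of the VMA classification follows; it assumes a pandas Series of (official or EMD) closing prices, and the helper name vma_signals is ours rather than anything from the cited papers.

import pandas as pd

def vma_signals(close: pd.Series, short: int, long: int, band: float = 0.0) -> pd.Series:
    """Classify each day as buy (+1), sell (-1), or neutral (0).

    A buy requires the short MA to exceed the long MA by the band;
    a sell requires it to fall below the long MA by the band.
    """
    s = close.rolling(short).mean()
    l = close.rolling(long).mean()
    sig = pd.Series(0, index=close.index)
    sig[s > l * (1 + band)] = 1
    sig[s < l * (1 - band)] = -1
    return sig

# Example: the 1-50 rule with a one percent band.
# signals = vma_signals(close, short=1, long=50, band=0.01)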
Fixed Length Moving Average (FMA)
Similarly, the FMA rules generate a buy (sell) signal when the short-term moving average cuts the long-term moving average from below (above). The holding period is a fixed 10 days, as suggested by Brock et al. (1992): returns are recorded at the end of each holding period, and any signals occurring during the 10-day window are ignored. The same rules as for the VMA strategy, 1-50, 1-150, 5-150, 1-200, and 2-200, are tested, again with and without a one percent band, for a total of ten rules.
Trading Range Break (TRB)
The TRB rules generate a buy (sell) signal when the price rises above (falls below) the resistance (support) level, defined as a local maximum (minimum) over the last n trading days. TRB rules reflect the idea that many investors are willing to buy near a previous trough, making it difficult for the price to penetrate that level. When the price nevertheless falls below the support level, it is expected to drop further, and a sell signal is generated. The rationale for the buy signal and the resistance level is analogous.
As with the FMA rules, the holding period is set to 10 trading days and any signals during this holding period are ignored. Following Brock et al. (1992), the values of n tested are 50, 150, and 200, each with and without a one percent band, for a total of six rules.
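The TRB signal can be sketched in a few lines, again assuming a pandas Series of closes and using our own helper name trb_signals:

import pandas as pd

def trb_signals(close: pd.Series, n: int, band: float = 0.0) -> pd.Series:
    """+1 when price breaks above the n-day resistance, -1 below support."""
    resistance = close.shift(1).rolling(n).max()  # local max over prior n days
    support = close.shift(1).rolling(n).min()     # local min over prior n days
    sig = pd.Series(0, index=close.index)
    sig[close > resistance * (1 + band)] = 1
    sig[close < support * (1 - band)] = -1
    return sig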
Relative Strength Index (RSI)
The relative strength index strategy using the ‘50 crossover’ trading rule is proposed by Wong et al. (2003). Let Ct be the daily close price at time t. The N-day RSI at time t can then be defined in the standard form as

RSIt(N) = 100 × Ut(N) / (Ut(N) + Dt(N)),

where Ut(N) is the sum of the daily close-to-close gains and Dt(N) is the sum of the absolute daily close-to-close losses over the last N days.
The ‘50 crossover’ method generates a buy signal when the RSI rises above 50 and a sell signal when it falls below 50. The same four rules tested by Wong et al. (2003), with N equal to 5, 10, 20, and 30, are tested in this paper.
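Under the definition above, the RSI and the 50-crossover signal can be sketched as follows (the function names are ours):

import pandas as pd

def rsi(close: pd.Series, n: int) -> pd.Series:
    """N-day RSI as 100 * gains / (gains + losses) over the last n days."""
    delta = close.diff()
    up = delta.clip(lower=0).rolling(n).sum()
    down = (-delta).clip(lower=0).rolling(n).sum()
    return 100 * up / (up + down)

def rsi_50_crossover(close: pd.Series, n: int) -> pd.Series:
    """+1 when the RSI crosses above 50, -1 when it crosses below."""
    r = rsi(close, n)
    sig = pd.Series(0, index=close.index)
    sig[(r > 50) & (r.shift(1) <= 50)] = 1
    sig[(r < 50) & (r.shift(1) >= 50)] = -1
    return sig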
Intraday and Interday Momentum (IIM)
The IIM strategies, based on Japanese candlestick concepts, are proposed by Lam et al. (2007). Let Ot, Ht, Lt, and Ct be the daily open, high, low, and close prices at time t. We first define the N-day Average Intraday Momentum (AIM) at time t from these prices.
For the parameter k, the values 1, 1.5, and 2 are examined. For the parameter N, which is the number of days used to calculate the AIM and AOM, the values 10, 20, and 50 are examined. Treating the rule number as a third parameter gives parameter sets of the form (rule number, k, N); for example, (1, 1.5, 20) denotes the strategy that generates signals with rule 1, k = 1.5, and N = 20. In total, 5 × 3 × 3 = 45 parameter sets are tested in this paper.
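The exact AIM and AOM formulas from Lam et al. (2007) are not reproduced above, so the sketch below is a hypothetical reconstruction: it assumes intraday momentum is the open-to-close move and overnight (interday) momentum the close-to-open move, each averaged over N days. The helper names and the final rule illustration are ours.

import pandas as pd

def aim(open_: pd.Series, close: pd.Series, n: int) -> pd.Series:
    """Assumed N-day Average Intraday Momentum: mean of (C_t - O_t)."""
    return (close - open_).rolling(n).mean()

def aom(open_: pd.Series, close: pd.Series, n: int) -> pd.Series:
    """Assumed N-day Average Overnight Momentum: mean of (O_t - C_{t-1})."""
    return (open_ - close.shift(1)).rolling(n).mean()

# A rule might then buy when today's intraday move exceeds k times the AIM,
# e.g. for the hypothetical parameter set (rule, k, N) = (1, 1.5, 20):
# signal = (close - open_) > 1.5 * aim(open_, close, 20)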
Appendix B
Notes: The sample period is from January 1993 to December 2012. Columns with (without) * report results for the VMA rules with the EMD-candlestick close price (the original close price) as the signal generator. N(Buy) and N(Sell) are the numbers of buy and sell signals in the sample. All returns are logarithmic and reported in percent. The numbers in parentheses are t-ratios testing the null hypothesis that μ1 ≥ μ2; t-ratios significant at the five percent level are highlighted in bold face.
Notes: The sample period is from January 1993 to December 2012. Columns with (without) * report results for the FMA rules with the EMD-candlestick close price (the original close price) as the signal generator. N(Buy) and N(Sell) are the numbers of buy and sell signals in the sample. All returns are average 10-day returns, calculated logarithmically and reported in percent. The numbers in parentheses are t-ratios testing the null hypothesis that μ1 ≥ μ2; t-ratios significant at the five percent level are highlighted in bold face.
Notes: The sample period is from January 1993 to December 2012. Columns with (without) * report results for the TRB rules with the EMD-candlestick close price (the original close price) as the signal generator. N(Buy) and N(Sell) are the numbers of buy and sell signals in the sample. All returns are average 10-day returns, calculated logarithmically and reported in percent. The numbers in parentheses are t-ratios testing the null hypothesis that μ1 ≥ μ2; t-ratios significant at the five percent level are highlighted in bold face.
Notes: The sample period is from January 1993 to December 2012. Columns with (without) * report results for the RSI rules with the EMD-candlestick close price (the original close price) as the signal generator. N(Buy) and N(Sell) are the numbers of buy and sell signals in the sample. All returns are average daily returns, calculated logarithmically and reported in percent. The numbers in parentheses are t-ratios testing the null hypothesis that μ1 ≥ μ2; t-ratios significant at the five percent level are highlighted in bold face.
Notes: The sample period is from January 1993 to December 2012. Columns with (without) * report results for the IIM rules with the EMD-candlestick (the traditional candlestick) as the signal generator. N(Buy) and N(Sell) are the numbers of buy and sell signals in the sample. All returns are logarithmic and reported in percent. The numbers in parentheses are t-ratios testing the null hypothesis that μ1 ≥ μ2; t-ratios significant at the five percent level are highlighted in bold face.
References
Bessembinder, H. and Chan, K. (1995). The profitability of technical trading rules in the Asian stock markets. Pacific-Basin Finance Journal, 3(2):257–284.
Brock, W., Lakonishok, J., and LeBaron, B. (1992). Simple technical trading rules and the stochastic properties of stock returns. The Journal of Finance, 47(5):1731–1764.
Chan, R., Lee, S. T. H., and Wong, W.-K. (2013). Technical Analysis and Financial Asset Forecasting: From Simple Tools to Advanced Techniques. World Scientific Publishing Company.
Coughlin, K. and Tung, K.-K. (2004). 11-year solar cycle in the stratosphere extracted by the empirical mode decomposition method. Advances in Space Research, 34(2):323–329.
Drakakis, K. (2008). Empirical mode decomposition of financial data. In International Mathematical Forum, volume 3, pages 1191–1202.
Echeverria, J., Crowe, J., Woolfson, M., and Hayes-Gill, B. (2001). Application of empirical mode decomposition to heart rate variability analysis. Medical and Biological Engineering and Computing, 39(4):471–479.
Fiess, N. M. and MacDonald, R. (2002). Towards the fundamentals of technical analysis: analysing the information content of high, low and close prices. Economic Modelling, 19(3):353–374.
Guhathakurta, K., Mukherjee, I., and Chowdhury, A. R. (2008). Empirical mode decomposition analysis of two different financial time series and their comparison. Chaos, Solitons & Fractals, 37(4):1214–1227.
Huang, N. E., Shen, Z., Long, S. R., Wu, M. C., Shih, H. H., Zheng, Q., Yen, N.-C., Tung, C. C., and Liu, H. H. (1998). The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 454(1971):903–995.
Huang, N. E., Wu, M.-L. C., Long, S. R., Shen, S. S., Qu, W., Gloersen, P., and Fan, K. L. (2003). A confidence limit for the empirical mode decomposition and Hilbert spectral analysis. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 459(2037):2317–2345.
Ito, A. (1999). Profits on technical trading rules and time-varying expected returns: evidence from Pacific-Basin equity markets. Pacific-Basin Finance Journal, 7(3):283–330.
Lam, W.-S. and Chong, T. T.-L. (2006). Profitability of the directional indicators. Applied Financial Economics Letters, 2(6):401–406.
Lam, W.-S. V., Chong, T. T.-L., and Wong, W.-K. (2007). Profitability of intraday and interday momentum strategies. Applied Economics Letters, 14(15):1103–1108.
Liang, H., Lin, Q.-H., and Chen, J. (2005). Application of the empirical mode decomposition to the analysis of esophageal manometric data in gastroesophageal reflux disease. IEEE Transactions on Biomedical Engineering, 52(10):1692–1701.
Liu, B., Riemenschneider, S., and Xu, Y. (2006). Gearbox fault diagnosis using empirical mode decomposition and Hilbert spectrum. Mechanical Systems and Signal Processing, 20(3):718–734.
Liu, C. and Maheu, J. M. (2008). Are there structural breaks in realized volatility? Journal of Financial Econometrics, 6(3):326–360.
Marshall, B. R., Cahan, R. H., and Cahan, J. M. (2008). Does intraday technical analysis in the US equity market have value? Journal of Empirical Finance, 15(2):199–210.
Marshall, B. R., Young, M. R., and Rose, L. C. (2006). Candlestick technical trading strategies: Can they create value for investors? Journal of Banking & Finance, 30(8):2303–2323.
Mills, T. C. (1997). Technical analysis and the London Stock Exchange: Testing trading rules using the FT30. International Journal of Finance & Economics, 2(4):319–331.
Neely, C., Weller, P., and Dittmar, R. (1997). Is technical analysis in the foreign exchange market profitable? A genetic programming approach. Journal of Financial and Quantitative Analysis, 32(04):405–426.
Nunes, J. C., Bouaoune, Y., Delechelle, E., Niang, O., and Bunel, P. (2003). Image analysis by bidimensional empirical mode decomposition. Image and Vision Computing, 21(12):1019–1026.
Park, C.-H. and Irwin, S. H. (2007). What do we know about the profitability of technical analysis? Journal of Economic Surveys, 21(4):786–826.
Rilling, G., Flandrin, P., Goncalves, P., et al. (2003). On empirical mode decomposition and its algorithms. In IEEE-EURASIP workshop on nonlinear signal and image processing, volume 3, pages 8–11. NSIP-03, Grado (I).
Scalas, E., Gorenflo, R., Luckock, H., Mainardi, F., Mantelli, M., and Raberto, M. (2004). Anomalous waiting times in high-frequency financial data. Quantitative Finance, 4(6):695–702.
Sweeney, R. J. (1988). Some new filter rule tests: Methods and results. Journal of Financial and Quantitative Analysis, 23(03):285–300.
Taylor, M. P. and Allen, H. (1992). The use of technical analysis in the foreign exchange market. Journal of International Money and Finance, 11(3):304–314.
Wilder, J. W. (1978). New concepts in technical trading systems. Trend Research.
Wong, W.-K., Manzur, M., and Chew, B.-K. (2003). How rewarding is technical analysis? Evidence from the Singapore stock market. Applied Financial Economics, 13(7):543–551.
Zhang, X., Lai, K. K., and Wang, S.-Y. (2008). A new approach for crude oil price analysis based on empirical mode decomposition. Energy Economics, 30(3):905–918.
Zhu, T. (2006). Suspicious financial transaction detection based on empirical mode decomposition method. In Services Computing, 2006. APSCC’06. IEEE Asia-Pacific Conference on, pages 300–304. IEEE.
Filters for Fitter Turtles
by Tom Cohen, CFTe
About the Author | Tom Cohen, CFTe
Tom Cohen has been interested in financial markets since 2007, the year in which his savings account was entirely invested in BEL20 equities. Following investment setbacks, he started reading books and blogs on various investment philosophies. Having recently completed a Master in Business Finance at the Solvay Brussels School, Tom leaned towards a more technical approach after reading Street Smarts by Raschke and Connors. He has been trading his savings account based on his swing trading research. Being passionate about the markets and their behaviour, Tom is looking for an opportunity related to Technical Analysis. He encourages feedback on the paper and is open to any queries; he can be reached through LinkedIn or at tom.cohen@outlook.com
ABSTRACT
The paper uncovers price, volume and time features of stocks that increase the probability of successfully trading 20-day Donchian channel breakouts. By looking at time-exit returns and the maximum excursions of filtered breakouts, the analysis identifies the best filters as the time since the last breakout, the position relative to the 50-day moving average, and the 60-day normalized linear regression slope of the typical price. Robustness tests emphasize the ability of the time since the last low breakout and the 50-day moving average to distinguish the quality of high and low breakouts respectively.
ABBREVIATIONS
BM: Big Moves (excursions of at least 20% in the 20 days that follow a breakout)
HB: High Breakouts
LB: Low Breakouts
MUE: Maximum Upwards Excursion
MDE: Maximum Downwards Excursion
T15: 15-Day Normalized Linear Regression Slope of Typical Price
T60: 60-Day Normalized Linear Regression Slope of Typical Price
INTRODUCTION
Few are the technicians not familiar with the Turtles’ story and approach to the markets. Their simple momentum strategy, which focuses on the 20-day highest high and lowest low, was a great success in the 1980s futures markets. However, the system’s main pitfalls are its low winning-trade percentage and, as a consequence, the requirement of a fat-tailed profit distribution to achieve exceptional results. Some technicians realized that the weakness could become a strength: Raschke and Connors published their Turtle Soup strategy in the 1990s, with the idea of fading breakouts and exiting markets quickly.
The purpose of this paper is to identify price, volume and time features of a stock that could favor either strategy. The approach taken to define the best filters differs from other papers in that it looks not only at fixed-time-window returns but also at the maximum excursions in both directions. Moreover, time and sector tests verify the consistency of the results obtained for the best filters.
The next section details the methodology used to obtain the results. Section 3 comments on the results for the three filter categories: results for the whole sample are analyzed first, followed by the price, volume and time filter tables and figures. Section 4 shows how robust the best signals are, and section 5 concludes the paper.
METHODOLOGY
The Turtle systems enter intraday on breakouts regardless of the preceding bar. For the purposes of this paper, entries on 20-day Donchian channel breakouts also occur intraday. For high breakouts (HB), the entry price is thus the highest high of the last 20 days, except when the open gaps beyond the channel, in which case the open is used. This study, however, focuses on signals that are not preceded by another 20-day extreme in the same direction. For modeling purposes, the breakout occurs on “day 0” and time exits start as of the following day; for example, 20-day trades exit at the closing price of “day 20”. The trade therefore lasts slightly more than 20 days when the breakout comes late in the “day 0” session, whereas an opening gap beyond the channel results in a full 21-day trade.
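The entry rule can be made concrete with a short sketch; this is our own illustration rather than the paper's code, and it assumes daily open and high arrays with t at least 20. The helper name hb_entry_price is ours.

import numpy as np
from typing import Optional

def hb_entry_price(open_, high, t: int) -> Optional[float]:
    """Entry price for a high breakout on day t (None if no breakout).

    The channel is the highest high of the 20 days before t. Entry is at
    the channel boundary, or at the open if it gaps beyond the channel.
    """
    channel = np.max(high[t - 20:t])
    if open_[t] > channel:       # opening gap beyond the channel
        return open_[t]
    if high[t] > channel:        # intraday penetration of the boundary
        return channel
    return None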
The considered stocks are the S&P 500 constituents as of November 2014, and the simulation runs from February 1985 until October 2014. Keeping in mind the hindsight bias this introduces (the constituent list is known only after the fact), results must be read with care: upward (downward) performance is better (worse) than could be expected in real time. Nevertheless, the aim of the study is first to find consistent filters for breakouts and then to give an idea of absolute and relative performance. Of the 294,986 signals, only 48,055 (16.29%) occur before 1995, 108,078 (36.64%) take place between the start of 1995 and the end of 2004, and 138,853 (47.07%) happen in the last 10 years of the sample. The period gives more HB than low breakouts (LB): 172,574 (58.50%).
Filters are based on three “day -1” characteristics: price, volume and time. First, price action preceding the trade is analyzed in detail. Filters include: the candlestick color of “day -1”, the logarithmic distance between the “day -1” close and the channel boundary, the overnight action as reflected in the relationship between the “day 0” open and the previous close, the typical price normalized linear regression slopes over 15 and 60 days (T15 and T60), the 50- and 200-day simple moving averages (MA50 and MA200), and the maximum excursion direction of the last channel penetration. Next, volume filters include short- and middle-term ratios. Because results from a longer-term ratio are similar to those of the middle-term one, only the middle-term ratio appears in the body of the paper. Finally, time filters look at the number of bars since the last high and low breakout, for both penetration directions.
Reported statistics are twofold. On the one hand, tables show the expected signal returns for multiple time horizons: intraday, 1 day, 5 days and 20 days. The S&P 500 expected return and the market-adjusted return give context to the absolute breakout results. The lack of sub-daily data does not permit an exact comparison (except for breakouts occurring at the open): the calculation of S&P 500 returns starts at the “day 0” open. Therefore, if HB occur in an uptrend and a significant part of the market movement takes place before the signal, the intraday market-adjusted return loses relative performance. Despite the timing difference, if the filtered HB are stronger afterwards, the adjusted returns should increase with the time horizon. The statistics can also be interpreted differently: deducting the intraday return from the 1-day, 5-day and 20-day returns gives the potential for entry at the “day 0” close.
On the other hand, cumulative distribution functions (CDF) of the 20-day maximum upwards (MUE) and downwards (MDE) excursions are not limited to a specific point in time and thus complement the expected returns. The excursion in the breakout direction is calculated from “day 0” onwards, while the maximum excursion in the other direction starts only at the close of the breakout day (except for gaps beyond the channel boundaries, which use “day 0” extremes for both the MUE and MDE)1. The calculation therefore assumes that if a breakout reverses intraday, the maximum excursion in the opposite direction occurs at the close of “day 0” or in the following days. This assumption avoids using the start of “day 0”, which often exaggerates the maximum excursion opposite the breakout, but it understates any counter-move between the breakout and the “day 0” close. Only the most contrasting filter environments have their CDF displayed.
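Under these conventions, the excursion computation for a high breakout might look like the following sketch (the helper name and array handling are ours; the entry price is assumed known, e.g. from the previous sketch, and the gap case is ignored for brevity):

import numpy as np

def excursions_hb(entry: float, high, low, close, t: int):
    """20-day maximum excursions for a high breakout on day t.

    MUE runs from day 0 onward; MDE starts at the day-0 close, per the
    convention described above (gaps beyond the channel excepted).
    """
    window_high = high[t:t + 21]                              # day 0 .. day 20
    window_low = np.concatenate(([close[t]], low[t + 1:t + 21]))
    mue = (np.max(window_high) - entry) / entry
    mde = (entry - np.min(window_low)) / entry
    return mue, mde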
Robustness tests verify that promising results appear regularly through time. The 4-year presidential cycle splits the data starting on January 1, 1985, yielding eight samples. Further tests also consider the different sectors. The 20-day (adjusted) returns of the most contrasting environments are used, and the difference between the returns of the two environments gauges the stability of the filter’s ability to distinguish strong and weak signals.
RESULTS
General
As the S&P 500 grew more than tenfold between February 1985 and October 2014, a positive 20-day return for downside breaks is not surprising. More unexpected is that LB outperform high penetrations for 5- and 20-day holding periods and that the performance comes at a different time: HB are particularly strong on the breakout day, whereas LB are resilient between “day 2” and “day 5”. In both cases, results improve with the holding period. The data indicate that buying weakness is good, and even better relative to the market. Nevertheless, this finding should be taken with a pinch of salt, as the “day 0” timing plays an important role in the adjusted return. If the position is initiated at the close of “day 0”, the 20-day adjusted returns are much closer to each other: 0.294% and 0.653% for HB and LB respectively.
Figure 2 shows the CDF of the 20-day MUE and MDE of all signals. The CDF above (below) the horizontal axis represents positive (negative) excursions. The illustration demonstrates that higher volatility generally follows LB. First, median excursions, 5% for down moves and 6.3% for up moves, are 0.9% and 1.2% greater than for HB respectively. Second, with 8.2% increasing by more than 20%, LB have twice as many bull big moves (BM); the same behavior is observed for bear BM. Moreover, 67% (60.8%) of LB (HB) fall by at least 3% and 74.4% (69.8%) increase by the same amount. Finally, one LB (HB) out of five returns more than 13% (10.2%) if bought and 11.4% (9.1%) if shorted.
Price Filters
A simple strategy could look exclusively at “day -1” price action. On the one hand, the candle color shows the ability of participants to push prices in the direction of the subsequent breakout. The question is then: is a breakout stronger when market participants have already pushed prices in the breakout direction, or is a short-term mood change more beneficial? On the other hand, the distance between the close and the channel limit, expressed as a percentage, illustrates the energy required to break the Donchian channel. The question “Do greater distance requirements for the breakout harm post-signal performance?” is answered in tables 3 and 4.
Red candles on “day -1” lead to better absolute returns for both the market and the signal. However, the relative performance of HB preceded by a red candle is not as good as after a green candle; the breakout-day movement explains most of the difference. LB preceded by a green candle form the only category in which expected returns worsen during “day 1”.
The quantile boundaries for the distance separating the penetration level and the previous day’s close are 0.59%, 1.04%, 1.60% and 2.60%2. Absolute performance tends to improve with greater movement before a breakout. Not surprisingly, the market intraday return also improves significantly, and the outperformance measured from the “day 0” close rises as a consequence. The LB quantile limits involve larger distances: -0.72%, -1.28%, -1.97% and -3.15%. The observations are similar to those for HB. Noteworthy is the positive intraday return for quantile 5: buyers are more willing to enter the market, and shorts are more easily closed, after a larger “day 0” fall.
As shown in figure 3, the potential from “day 0” to “day 20” differs significantly with the distance required for the breakout. A greater distance results in more BM and in median excursions at least 2.4% greater in both directions. As such, quantile 5 LB (HB) have an 18.5% (10.1%) probability of achieving an upward BM and 15.7% (7.7%) of a downward BM. Quantile 2 distances, which end with the worst average 20-day returns, compare poorly, with an average probability of 3.1% of achieving a BM. Moreover, 36% of quantile 2 signals stagnate at 3% excursions in one of the two directions, whereas signals that move more to pass the channel limit exceed 3% excursions in 83.1% of all cases.
Overnight price action between “day -1” and “day 0” can also serve for classification. The next table therefore distinguishes four types of breakout. The first three categories refer to gaps, which may or may not be closed intraday. First, “outside” refers to gaps that exceed the channel limits. Second, “breakout direction” includes all Donchian channel crossings preceded by an open outside the previous day’s range but inside the 20-day Donchian channel. Third, gaps in the opposite direction of the subsequent breakout, which are the most infrequent and obviously closed on “day 0”, fall in the “opposite” category. Finally, “continuous” denotes a session opening within the previous day’s range.
Entry for outside gaps occurs at the “day 0” open, giving a perfect comparison between signal and market returns. Remarkably for HB, the strength shown by an open beyond the 20-day highest high does not continue in the following days: market-adjusted returns are the worst for all four time horizons. The phenomenon reverses for LB, with outside penetrations reacting the strongest in absolute terms. Opposite-direction gaps are rare and do not lead to particularly remarkable results.
Figure 4 shows that the open pattern does not influence excursions much: BM percentages and median excursions are close to those of the unfiltered sample. Nevertheless, the stronger performance of HB with a “day 0” open above yesterday’s range but inside the channel is explained by generally higher MUE and lower MDE (the green curve is lower in both directions). The 20-day return difference for the two LB cases also originates from generally greater MUE, not from consistently greater down moves. As shown by the crossing of the red and orange MDE curves, LB opening below the channel tend to reverse more often (70%) as long as the MDE does not exceed 9%; once past the 9% mark, gaps below the channel on average fall further than continuous breakouts.
Going beyond one-day lookback periods, trends, and the corollary that they are more likely to continue than reverse, should influence Donchian channel breakout results. Two important choices arise: the definition of the trend and its lookback period. The study uses linear regression slopes of the typical price, normalized by the average typical price and multiplied by 100 (the multiplication is solely for precision). The time horizons approximate the maximum lengths of the minor (15 days) and secondary (60 days) trends as defined in Dow Theory.
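The normalized slope as just defined can be computed as below; this is a minimal sketch assuming daily high, low, and close arrays and an ordinary least squares fit over day indices. The helper name norm_slope is ours.

import numpy as np

def norm_slope(high, low, close, t: int, n: int) -> float:
    """n-day normalized linear regression slope of the typical price.

    Slope of an OLS fit over the last n typical prices, divided by their
    mean and multiplied by 100, as described in the text.
    """
    tp = (np.asarray(high) + np.asarray(low) + np.asarray(close)) / 3.0
    window = tp[t - n + 1:t + 1]
    slope = np.polyfit(np.arange(n), window, 1)[0]
    return 100.0 * slope / window.mean()

# T15 and T60 for day t:
# t15 = norm_slope(high, low, close, t, 15)
# t60 = norm_slope(high, low, close, t, 60)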
Five strength groups, from weakest to strongest, divide the signals for both normalized regression slopes. The quantile boundaries vary strongly with the regression period and the breakout direction. For HB, the quantile boundaries for the 60-day regression are -0.085%, 0.041%, 0.142% and 0.265%; for the 15-day regression they are 0.05%, 0.196%, 0.354% and 0.592%. Similarly, the LB quantile limits for the ‘secondary’ regression are -0.178%, -0.033%, 0.071% and 0.192%, and for the ‘minor’ regression -0.608%, -0.341%, -0.176% and -0.027%. As an example, a HB in the 5th quantile of the ‘secondary’ regression requires the typical price to have risen at an average rate of at least 0.265% per day over the last 60 days. The boundaries show the bullish bias of the testing period and the greater velocity of short-term moves.
Table 6 shows that the strongest HB returns happen in neutral markets (quantile 2). HB in more powerful uptrends start better, but have weaker 5-day and 20-day absolute returns and come with weaker markets too: the need for consolidation after a stronger rise could explain this performance difference. The strongest absolute and relative performances of LB are found in extreme trends. Both quantiles move strongly between “day 1” and “day 5” and are able to continue in the 15 ensuing days.
Figure 5 illustrates the influence of the T60 on excursions. Half of quantile 2 HB fall at least 3.3%; for quantile 4 HB the median drawdown is 0.6% greater. Both quantiles obtain more positive than negative excursions at every percentage level, but the magnitude of the difference is greater for HB in neutral circumstances. For instance, 48.2% of quantile 2 HB have a MUE of at least 5% while only 34.2% decrease as much; quantile 4 HB show 46% and 39.4% for the same moves. LB in strong downtrends are far more volatile: the median MUE and MDE are 9.2% and 7.3% respectively, and bear BM happen in 15.3% and bull BM in 18.7% of LB in the weakest T60. The positive 20-day returns come not only from bigger upside moves but also from the timing of the maximum excursions: 37.64% (22.87%) of MDE (MUE) occur before “day 4” and 16.68% (26.93%) take place during the last three days.
Tables 8 and 9 display the returns of breakouts filtered with the ‘minor’ regression slope. The best HB arise in quantiles 1 and 5, contrasting with the T60 findings. The path to their 20-day returns differs, though: while HB in the 1st quantile increase continuously, crossings in the strongest trends need to breathe during the days that follow the penetration. HB in weaker uptrends are not as effective when adjusted for market returns. LB also perform best in short-term extremes: the absolute and market-adjusted returns for entry at the close both weaken with an increasing T15, and extreme-quantile LB have extremely strong intraday adjusted returns.
The difference between the HB T15 quantiles is again less pronounced than for LB. HB with a negative T15 have relatively more MDE smaller than 7.5% and do not differ much on the upside. The MUE of LB in a strong downtrend are slightly stronger than for the T60.
Moving averages (MA) are another way of looking at the trend. As the 50-day and 200-day MA are widely followed, both serve to evaluate where breakouts take place in relation to past prices. The quantile limits for the distance between the HB and the “day -1” 50 (200)-day MA are 2.75% (0.98%), 5.01% (7.64%), 7.34% (13.48%) and 10.93% (21.58%). Similarly, the LB quantile limits are -9.94% (-13.14%), -5.92% (-4.71%), -3.48% (1.31%) and -1.32% (7.57%) for the 50 (200)-day MA. The first quantile contains the relatively lowest breakouts.
Table 10 highlights that HB far above the average give the best absolute and relative results; HB in the 1st quantile are also powerful and accompanied by the strongest market. Table 11 shows that LB in the 5th quantile have the highest returns, that LB far below the MA200 manage fairly well, and that LB are weakest close to the MA200. A question that could be raised is therefore: “Do low channel penetrations close to the widely followed MA200 invite hesitation from investors?”
The most interesting characteristic of figure 7 is the MDE of HB more than 21.5% above the MA200: half of these signals fall at least 5.6% in the 20 ensuing days. By contrast, HB close above the MA200 (quantile 2) fall only 3.3% in half the cases. Nevertheless, the MUE are even stronger, enabling signals further above the MA200 to rise more after 20 days and to post a high share of BM (7% bull BM is high compared with the whole sample). For quantile 2 breakouts, only one trade out of five manages to increase (decrease) more than 8.5% (7.4%). The LB performance difference comes mainly on the upside: 5% moves are 11.5% more probable for LB that are more than 7.57% above the MA200.
Tables 12 and 13 show that the relationship with the MA50 is not the same as with the MA200. The HB closest to the MA50 (quantile 1) give the best average returns at the 1-day and 20-day horizons. HB more than 10.5% above the MA50 (quantile 5) do not have the best overall performance but are the strongest between “day 5” and “day 20”. The upward reaction following LB more than 9.94% below the MA50 is stronger by far. The position relative to the MA50 differentiates the strongest signals better than the two linear regressions.
Figure 8 shows, on the one hand, that LB far below the MA50 are particularly volatile: 23% of moves achieve bull BM and 18.4% bear BM, the median MUE and MDE are 10.8% and 8.3% respectively, and 8% excursions are 11% more probable on the upside. On the other hand, the stronger HB (quantile 1) have slightly more upward potential and fewer MDE smaller than 5.5%.
The last price filter looks at previous signals and classifies them by breakout direction and success. For a signal to be successful, the subsequent price action has to move further in the breakout direction than against it; e.g., a HB needs a greater MUE than MDE to be considered a success. The period for the excursion calculation varies with the time between two signals and is capped at 20 days. The idea behind the previous-signal filter is twofold. First, the direction of the previous trade informs about the trend’s current strength: strong trends should accumulate signals in the same direction. Second, the success of the previous trade could influence the psychology and confidence of market participants. For instance, demand could increase when traders cover their losing shorts, which in turn could accelerate the price movement and draw in more bullish investors who use a momentum approach.
Table 14 shows that HB following a LB with a greater MUE are the strongest in absolute and relative terms over 5 and 20 days. When preceded by a HB, the success of the preceding upper penetration only slightly influences absolute returns; only the 5-day return differs by more than 0.1%. Table 15 differs from the HB case: the strongest LB and market recovery come after a successful LB, and the success of a preceding HB has more influence. The weakest relative returns follow a losing penetration of the last 20 days’ highest high.
Two notable facts emerge from figure 9. First, although their average 20-day returns differ by almost 0.3%, the downside excursions of HB preceded by losing LB and by winning HB do not differ much; the variation again comes mainly from the upside, as the median MUE of HB preceded by a winning HB is 4.9%, 0.7% shy of its counterpart. Second, LB preceded by winning LB, despite being the strongest performers after 20 days, generally have more potential in both directions than signals following losing HB. Nevertheless, the last trade does not discriminate as well as the trend; e.g., the percentage of BM is only 2% higher than for the entire sample.
Volume Filters
How do volume and price interact, and can volume increase the probability of success? Ratios between the “day -1” volume and the last week’s average volume (short term), and between the last week’s and the last four weeks’ average volume (middle term), try to answer these questions. Judging from the tables below, volume filters are less effective than price characteristics at filtering breakouts. The quantile boundaries for the short-term and middle-term ratios are 0.71, 0.87, 1.02, 1.23 and 0.79, 0.91, 1.03, 1.19 respectively.
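Under our reading of these definitions, the two ratios can be sketched as follows; the helper name is ours, a pandas Series of daily volume is assumed, and the 5- and 20-day windows are assumptions standing in for one week and four weeks of trading days.

import pandas as pd

def volume_ratios(volume: pd.Series):
    """Short- and middle-term volume ratios as read from the text.

    short: the latest day's volume over the last week's (5-day) average.
    middle: last week's average over the last four weeks' (20-day) average.
    """
    week = volume.rolling(5).mean()
    month = volume.rolling(20).mean()
    short = volume / week
    middle = week / month
    return short, middle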
Extremes in the short-term volume ratio favor stronger HB performance, and decreasing the ratio further to 0.5 yields better returns still. A volume between 71% and 102% of the week’s average leads to the weakest performance. Table 15 indicates a different behavior for LB: extreme short-term volume ratios underperform mild volume variations, do not discriminate signals as they do for HB, and have negative intraday and 1-day returns. Very high or low volume on “day -1” thus encourages the breakout direction relative to milder volume variations.
Figure 10 illustrates that although 20-day returns vary by more than 0.25% between the best and worst quantiles, the short-term volume ratio cannot discern important MDE differences. A bigger variation is visible on the upside: high-volume breakouts are 4.2% more likely to reach a 5% MUE.
For HB, the middle-term volume ratio differs from the short-term ratio in the first quantile: stocks do not remain as strong and post a 0.2% lower 20-day return. High-volume signals beat other HB both absolutely and relatively. For LB, table 17 confirms that the best performance comes after mild volume changes, but with a preference for an increase rather than the decrease observed with the short-term ratio. The middle-term ratio also indicates that relatively low volume during the last week decreases returns, while high volume is better for the five days after entry.
No MUE and MDE figure is given for the middle-term volume ratio because the signals show little excursion variation and the results are similar to those of the short-term ratio.
Time Filters
The following tables divide the sample into five groups based on the number of days since the last breakout; e.g., a 2-day period in the table means the breakout occurred on “day -3”. Importantly, the time in the table corresponds to the last signal only if no consecutive breakouts take place. The quantile boundaries vary starkly across the four tables. For HB, the time limits since the last HB (LB) are 1 (12), 3 (20), 8 (32) and 22 (51) days. Similarly for LB, the limits on days since the last HB (LB) are 9 (2), 16 (4), 23 (12) and 37 (35).
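A sketch of the time-since-last-breakout measure follows; the helper name is ours, and it assumes a boolean array marking the days on which the relevant breakout occurred.

import numpy as np

def days_since_last(event: np.ndarray) -> np.ndarray:
    """For each day, the number of days since the last True in `event`.

    Days before the first event get a large sentinel value.
    """
    out = np.full(len(event), 10**6)
    last = None
    for i, e in enumerate(event):
        if last is not None:
            out[i] = i - last
        if e:
            last = i
    return out

# e.g. time since the last low breakout, evaluated on a high-breakout day t:
# days_since_last(lb_days)[t]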
Table 20 highlights that, on the one hand, a long wait between two successive HB benefits returns: signals without a HB in the preceding 22 days increase on average 0.36% more than penetrations with less time since the previous HB. On the other hand, signals with only one day between two HB are generally worse on “day 1” but catch up with the second- and third-quantile breakouts by “day 20”. Table 21 illustrates that the time since the last HB does a good job of filtering LB: again, the longer the time since the last HB, the stronger the performance, especially between “day 1” and “day 5”. Intraday returns are positive only when a HB takes place in the 10 preceding days (quantile 1).
The time elapsed since the last HB best discriminates the upward potential of LB. LB not preceded by a HB in the last 22 days have a 14.8% probability of a bull BM and 12% of a bear BM. The HB again differ mainly in MUE.
Tables 22 and 23 confirm that the time since the last opposite breakout discriminates better than the time elapsed since a similar Donchian channel crossing. HB preceded by a LB in the last 12 days outperform other HB by far, even though their intraday return is the worst, and accompany the strongest markets. LB are less influenced by the time since the last low channel crossing; nevertheless, LB in quick succession favor stronger returns. Finally, LB with no prior LB in the last 36 days give strong 5-day returns.
Figure 12 is quite similar to figure 11, but shows a greater difference between quantile 1 and quantile 5 HB and a higher similarity among LB.
ROBUSTNESS
Time
The never-ending evolution of technology has heavily influenced markets over the last 30 years. To name just a few developments: stock exchanges have accommodated the increased transaction flows created by shorter-term trading strategies, lower latency, and end-investor proliferation; the volume of derivatives has surged; and the advent of ETFs has expanded diversification possibilities as never before. But has market behavior, a mere reflection of human psychology, also changed heavily, and, in this paper’s interest, have the Donchian channel filters stood the test of time?
To answer this question, the data is first split into eight 4-year samples. The first and last samples are shorter than the others because January 1 of election years defines the boundaries while the simulation starts in February 1985 and ends in October 2014. Then, the three filters whose maximum and minimum 20-day returns differ most are compared. The assessment uses the 20-day absolute and relative returns of the strongest and weakest filter environments, and the differentials of both are given.
Figure 13 details the time robustness results for HB. First, HB with a LB in the last 12 days have not only the best overall 20-day return (1.351%) but also the least variation through time: the mean absolute deviation (MAD)3 is 287 basis points, 0.149% less than for HB without an upper penetration in the last 22 days. Moreover, the worst 20-day returns for the time filters do not appear during the subprime mortgage crisis, a period less favorable for the T60. Second, HB without a LB in the last 55 days (quantile 5) are the only signals that manage negative returns in the 2009-2012 period. The T60 has the highest variation (0.337%) in weak environments because the 3rd quantile of time since the last HB is not as volatile as its 5th. Third, time since the last LB has a generally stronger differential that varies more than those of its two peers; nevertheless, it is the only filter whose best-minus-worst 20-day return difference has stood the test of time. Fourth, the adjusted returns are not as appealing, though the different entry timing of the benchmark and the breakout should be kept in mind; relative results are significantly better between 2001 and 2004, and the T60 filter more often than not leads to the best adjusted returns. Fifth, the 20-day adjusted returns in the weakest environments are never the worst for time since the last HB. Finally, differentials on an adjusted-performance basis are stronger for the T60, with lower variation than its peers; time since the last LB is the only one with a sub-zero differential and the most volatile.
Figure 14 illustrates the consistency over time of the best LB filters. The first observation concerns 20-day returns: LB without a HB in the preceding 37 days have the largest MAD (0.767%) and the lowest expected return over the sample period (2.372%). LB far below the MA50 perform best in five periods but do not have the lowest dispersion. The worst performance of the three filters happens, surprisingly, during the 1988 presidential cycle. Further, the 20-day returns of LB shortly preceded by a HB outperform and underperform the other filters’ weak quantiles half the time each, and vary more. Next, the relationship between the strongest and weakest filter environments is never maintained between 1988 and 1991. The differential of the time-based filter peaks at 4.54% between 2005 and 2008. The adjusted 20-day returns tell a slightly different story. On the one hand, the MA50 filter again has the strongest and most consistent differentials and the best quantile results. On the other hand, time since the last HB outperforms the other filters by 0.260% in their weak quantiles and has similar adjusted returns in the good environment; as a consequence, its 20-day adjusted return differentials fluctuate greatly.
Sector
Sector rotation aficionados will probably ask whether the (adjusted) returns of 20-day Donchian channel breakouts with the best filters depend on the business type of the firm. Following the same methodology as for time robustness, this section tries to demonstrate that sectors do play a role in the quality of the filters. The analysis uses the ten traditional sectors.
Figure 15 shows the sector robustness of the best HB filters. First, with a MAD of 278 basis points, time since the last LB is again the most constant of the three filters. The 60-day linear regression slope yields particularly good 20-day returns for the energy and utilities sectors; time since the last LB is better for telecommunication services and materials. The 20-day returns for utilities are particularly poor when contrasted with the other sectors. Further, weak-environment signals are mostly consistent for the T60 (238 basis point MAD) and erratic for time since the last LB. Moreover, the 20-day return differentials hold up well, with the exception of utilities, for which time since the last breakout is not useful at all. A higher level of discrimination is achieved by the T60 (time since the last LB) for IT and utilities (consumer staples, energy, materials and telecommunication services). Besides, time since the last LB is the most volatile when adjusted for market returns: not only does the filter post more negative 20-day adjusted returns in strong environments, it is also the only filter with negative adjusted return differentials, so performance relative to the market in financials and telecommunication services is better when a LB is recent. Time since the last HB impresses with the consistency of its differentials: the MAD is only 95 basis points. Finally, the T60 has the best relative performance differential: whereas the other filters lose performance when adjusted for market returns, the T60 relative differentials stay close to the absolute differences.
Figure 19 highlights the characteristics of LB 20-day (adjusted) returns across sectors. First, the 20-day returns of a strong downward T60 and of the relative position to the MA50 resemble each other. Nonetheless, while the T60 has relatively weak results for the utilities and energy sectors, LB far below the MA50 are more robust; as a consequence, the MAD is more than 100 basis points lower for the MA50 filter. The 20-day returns for time since the last HB are more volatile, with a MAD close to 0.9%. Second, the MA50 and T60 filters are generally consistent in the weaker environments, with the exception of financials, utilities and telecommunication services. A long time since the last HB seems to provide particularly attractive price levels in the healthcare sector. Third, the differentials again show the strength of the position relative to the MA50, as the other filters do not maintain a stable relationship across sectors: energy and utilities prefer LB in a mild uptrend to a downtrend, and time since the last HB does not play an important role. Fourth, utilities again behave differently for adjusted 20-day returns, being the only sector with negative adjusted returns for all three filters. Fifth, the relative performance of LB that closely follow a HB is better than for the other filters; as the absolute 20-day returns are not too dissimilar for the three filters, this indicates that the market is usually weaker in this particular configuration. Finally, the adjusted return differentials underline three characteristics: the strength of the relative position to the MA50, the importance of the sector for the time-based approach, and the special character of the utilities sector.
CONCLUSION
The original Turtles entered on 20-day Donchian channel breakouts in the futures markets and achieved excellent results in the 1980s. This paper analyzes almost 30 years of similar equity market breakouts in detail and searches for price, volume and time filters that can distinguish the more promising signals. Results suggest that high breakouts perform especially well in certain circumstances: when preceded by a low breakout that does not decrease as much as it rises in the days following the signal, when in a weak 60-day uptrend, when preceded by a low breakout in the last 12 days, and when there has been no high breakout in the last 23 days.
However, low breakouts perform even better and are best filtered by: a long time elapsed since the last high breakout, a price level at least 10% below the 50-day moving average, a strong downward trend, and a large distance between the close of the day preceding the breakout and the lower channel.
The filters that best distinguish promising from inauspicious breakouts are then tested across sectors and presidential cycles. The robustness tests suggest using the time since the last low breakout, especially in the energy and materials sectors, to maximize the returns of high breakouts. Low breakouts, for their part, favor the position relative to the 50-day moving average in every sector but utilities, for which it does not discriminate as well. Volume, in the form of ratios, does not distinguish signals as well as the aforementioned price and time filters; nevertheless, results suggest that extremely high or low readings help price move in the direction of the trend.
Further studies could take different paths. One is to combine the different filters to find stronger patterns. Another way of deepening the knowledge would be to focus on subsequent price action, as this study restricts itself to breakouts preceded by price action inside the 20-day Donchian channel.
BIBLIOGRAPHY
Connors, L. A., & Bradford Raschke, L. (1996). Street Smarts: High Probability Short-Term Trading Strategies. Gordon Publishing Group.
Faith, C. M. (2007). Way of the Turtle: The Secret Methods that Turned Ordinary People into Legendary Traders. McGraw – Hill.
Simple and Effective Market Timing with Tactical Asset Allocation
by Lewis A. Glenn
About the Author | Lewis A. Glenn
Lewis A. Glenn is a Founding Partner and Chief Scientific Officer at Creative Solutions Associates LLC, a private investment and wealth management group with offices in the San Francisco Bay Area and Lausanne, Switzerland. In a previous life Dr. Glenn was the Computational Physics Group Leader in the Earth Sciences Division at the Lawrence Livermore National Laboratory in Livermore, California.
ABSTRACT
A simple market timing algorithm is examined that switches every month, on the last day, between an exchange traded fund representing U. S. equities and one holding treasury long bonds, the switch being made to whichever ETF has the greater ratio of current adjusted closing price to the adjusted closing price µ months earlier. The parameter µ is determined so as to maximize total return and minimize the total number of trades; however, the results are relatively insensitive to µ over a fairly wide range. The performance of this scheme is compared to that of an Ivy 5 portfolio consisting initially of equal dollar amounts of ETFs in U. S. equities, foreign large blend, 7-10 year treasuries, real estate, and commodities. As with the paired switching approach, each ETF is purchased only once a month, on the last day, and in this case only if its adjusted closing price exceeds its 10-month simple moving average (SMA); otherwise that portion of the portfolio is invested in a cash surrogate.
The comparison covers the 10-year period ending 12/31/13. It is shown that the average annual return of the paired switching algorithm exceeds 30% in this period, three times greater than that of the Ivy 5. Moreover, only 45 trades were required for the paired switching approach, whereas the Ivy 5 required 70 in the same period. The maximum drawdown was 14.6% for the Ivy 5 and 18.8% for paired switching.
INTRODUCTION
The so-called weak form of the efficient-market theory (EMT) holds that future stock prices cannot be predicted on the basis of past stock prices[1]. Many advocates of this theory believe that the best strategy for the long-term investor is to “buy the market”, by which they generally mean investing in an index fund that represents all, or a significant segment of, the equity market. The claim is that past performance has shown that, in the long run, buying and holding this class of asset will outperform any active management scheme.
One problem with this approach is the definition of the term long run. To paraphrase the famous British economist John Maynard Keynes, “… in the long run we’re all dead”. In fact, many investors would consider a 5- to 15-year investment long-term, whereas some would choose a period as long as 30 years and others as short as 6 months. A brief look at the large cap growth ETF QQQ, which tracks the NASDAQ 100 (the largest non-financial securities listed on the Nasdaq Stock Market), shows what can happen over a long run. An investor purchasing this fund at its inception in March 1999 would have seen her investment more than double in the first year. Two and a half years later, by September 2002, it would have dropped by more than a factor of 5, leaving her holdings below half the initial investment. It would take 5 more years for this investor to recoup the original investment, not counting the interest she would have earned had she remained in cash. And then her willpower would be tested once again as she watched the collapse of the financial markets in 2008, by the end of which the value of the initial investment would once again be halved.
It is this scenario that tactical asset allocation seeks to mitigate. The main idea is simply to diversify portfolio assets and employ a market timing solution. But what kind of solution? The literature is replete with different approaches. Our goal here is not to review the many ideas that have been suggested but rather to focus on 2 very simple schemes.
An especially popular method was described by Faber[2] and then further elaborated by Faber and Richardson[3] as the Ivy 5 portfolio. The basic idea is to create a portfolio of 5 sectors consisting of the S&P 500 index, the MSCI EAFE index, U. S. 10-year government bonds, a real estate index, and a commodity index. The initial study involved non-tradeable indices over a 35-year period. The trading rule was simply to buy each index when the monthly price exceeded the 10-month simple moving average (SMA) and to sell and move to cash otherwise. All entry and exit prices were at the market close on the day of the signal and the model was only updated once a month on the last day. Price fluctuations during the rest of the month were ignored.
Moving average methods are very useful for discerning trends and indeed this author has suggested several market timing algorithms based on them[4,5,6]. All moving average algorithms, however, exhibit latency, i.e., they introduce some lag in the calculation so that the MA lags the original signal. This means that the user will implicitly be late in leaving a declining market and, perhaps even worse, late in entering a rising one. And the lag time increases directly with the length of the moving average. The paired switching method to be described here does not depend on moving averages. In general, paired switching refers to investing in one of a pair of negatively correlated assets and periodic switching of the position on the basis of the relative performance of the two. Maewal and Bock[7] proposed an especially simple version that looked at the performance over a prior 13-week period and purchased the asset that had the higher return over that period. That position was then held for 13 weeks at which point the cycle was repeated. They looked at several different ETFs that served as surrogates for equities and bonds, including SPY (for the S&P 500) and TLT (for treasuries with maturities of 20 or more years). Cohn[8] devised a similar scheme but employed it on a daily basis. Instead of a look back period of 13 weeks (65 trading days), he found that 84 trading days produced the best results.
In what follows we first generalize the Maewal-Bock procedure by switching between SPY and TLT in the same way as with the Ivy 5, namely once a month on the last day, the switch being made to whichever ETF has the greater ratio of current adjusted closing price to adjusted closing price µ months earlier. The parameter µ is determined so as to maximize total return and minimize the total number of trades. We then compare the results to those obtained with the Ivy 5 portfolio over a recent 10-year period and exhibit the results in a novel way that allows an investor to immediately visualize end-of-period performance based on the date of first entry (purchase) with use of the algorithm.
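To make the rule concrete, a minimal pandas sketch is given below. The DataFrame of month-end adjusted closes, its column names, and the default µ = 3 are illustrative assumptions, not outputs of the study.

import pandas as pd

def paired_switch_returns(prices: pd.DataFrame, mu: int = 3) -> pd.Series:
    '''Month-end paired switching between two ETFs, e.g. columns ["SPY", "TLT"].'''
    ratio = (prices / prices.shift(mu)).dropna()   # close / close mu months earlier
    pick = ratio.idxmax(axis=1)                    # ETF with the greater ratio
    rets = prices.pct_change()                     # simple monthly returns
    held = pick.shift(1).dropna()                  # last month's pick earns this month's return
    return pd.Series({d: rets.at[d, etf] for d, etf in held.items()})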
THE PAIRED SWITCHING STRATEGY
First, a few definitions are in order. Let the value of the paired switching portfolio on day n be v(n), 1 ≤ n ≤N and the normalized value be V(n)=v(n)/C , where C is the cost basis. For the 10-year period ending 12/31/13, N = 2517 trading days. Also, let τ(n) be the number of trades, where either a buy or a sell is considered a trade.
Nominal trading costs are included on all transactions but these costs are generally negligible with electronic trading and the high volumes extant with these ETFs. However, slippage is not included and this might be significant with increasing trading frequency.
The compound annual growth rate after n days is then

CAGR(n) = 100 [V(n)^(250/n) – 1]     (2)

(expressed as a percentage), where we have taken a trading year to consist of 250 trading days. Note that, since N = 2517 here, equation (2) evaluated at n = N will slightly underestimate the true CAGR in the period of interest.
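As a quick numerical check on equation (2), a short Python helper might look as follows (the tripling example is purely illustrative):

def cagr(V_n: float, n: int, days_per_year: int = 250) -> float:
    '''Equation (2): CAGR(n) = 100 * (V(n) ** (250 / n) - 1), in percent.'''
    return 100.0 * (V_n ** (days_per_year / n) - 1.0)

# A portfolio that tripled over N = 2517 trading days:
# cagr(3.0, 2517) -> about 11.5 (% per year)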
THE IVY 5 STRATEGY
In applying the Ivy 5 strategy we utilize the ETFs suggested by Faber and Richardson (FR)[3] with one minor exception. There were no broad sector commodity ETFs available on 1/1/2004, the beginning of our 10-year study period. So, we have substituted the PIMCO Commodity Real Ret Strat Instl mutual fund (PCRIX) for the commodity ETF recommended by FR (DBC). The modified Ivy 5 trading rule is then:
Purchase equal amounts of the following ETFs/funds on day 1 if and only if the adjusted closing price exceeds the 10-month simple moving average (SMA): SPY (S&P 500), EFA (MSCI EAFE index), IEF (7-10 year treasuries), IYR (real estate), PCRIX (commodities). Otherwise the amounts meant for purchase are instead left in cash. Then, at the last day of this and each following month, repeat the procedure. Price fluctuations during the rest of the month(s) are ignored. Cash positions earn interest using IRX (CBOE index that measures the discount rate of the most recently auctioned 13-week U. S. Treasury Bill) as a surrogate. And, again nominal trading costs are included on all transactions.
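A minimal pandas sketch of this rule follows; the DataFrame of month-end adjusted closes and its column names are assumptions for illustration, and cash interest via IRX is omitted for brevity.

import pandas as pd

def ivy5_positions(prices: pd.DataFrame, months: int = 10) -> pd.DataFrame:
    '''Month-end Ivy 5 rule for e.g. columns ["SPY", "EFA", "IEF", "IYR", "PCRIX"].'''
    sma = prices.rolling(months).mean()   # 10-month simple moving average
    return (prices > sma).astype(int)     # 1 = hold the fund, 0 = hold the cash surrogate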
COMPARISON OF PAIRED SWITCHING WITH THE IVY 5
The open and closed circles represent end-of-month entry dates, corresponding to the allowable entry dates with the paired switching and Ivy 5 rules discussed above; purchases made at other dates are assumed to be invested in cash, without interest, until the next end-of-month, at which point the appropriate purchase is made. It can be seen that the paired switching scheme produces a higher total return at the end of the windowed period than the Ivy 5, independent of the entry date. And, although an investor purchasing SPY at the end of February 2009 and holding through 12/31/13 would have done almost as well as with paired switching, it is highly unlikely that he would have been able to identify the bottom of the market that produced this result. Figure 4 shows the CAGR in the same context, and with similar modified nomenclature, as in figure 3. Over the entire set of entry dates the (minimum, mean, maximum) values for paired switching are (14.4, 20.4, and 31.9)% respectively; note that the CAGR values after 12/31/12 are suppressed for clarity because the results are highly volatile and not meaningful for n/250 << 1 in equation (2). The comparable CAGR (minimum, mean, maximum) values for the Ivy 5 are (1.4, 6.6, and 9.8)%. Figure 5 shows the maximum draw down for each of the cases in figures 2 through 4 and exhibits the advantage of both paired switching and Ivy 5 schemes over buy-and-hold approaches.
With neither timing scheme did the maximum draw down exceed 20%, independent of entry date, but the Ivy 5 had smaller draw downs for most entry dates. Finally, figure 6 depicts the number of trades employed (each switch counts as 2 trades with paired switching) for the Ivy 5 and paired switching schemes and it is clear that the latter has the advantage regardless of entry date. As mentioned earlier, even though the explicit cost of electronic trading is now generally negligible, the cost of slippage (the difference between the adjusted closing price and the price actually obtained in a transaction) is not easily accounted for. Slippage is not a deterministic process and, while bounds may be established using Monte Carlo methods, this is beyond the scope of the present study.
DISCUSSION
The results presented in the previous section would appear to show that the paired switching scheme has the advantage over the Ivy 5 unless draw down is the main concern, in which case the significantly increased return of the former would need to be balanced against its modest increase in risk. One question that arises, however, is the significance of these results in predicting future performance. For one thing, most of the ETFs involved in both schemes have been in existence only a short time, so only a rather narrow 10-year period was considered. As mentioned earlier, this problem was dealt with by Faber[2] for the Ivy 5 by testing the algorithm with non-tradeable indices over a 35-year period. In their analysis of paired switching, Maewal and Bock[7] employed the statistical procedure devised by Henriksson and Merton[9] to evaluate the significance of their results. They also suggested a procedure similar to that employed by Faber for the Ivy 5, in this case the use of surrogate mutual funds – VFINX for the S&P 500 and VUSTX for the treasury long bond. The former has been in existence since 1980 and the latter since 1986. Figures 7-11 below show the results of our applying the paired switching algorithm described in Section 2.0 with these two mutual funds over a 25-year period beginning on 1/4/1988 and terminating on 12/31/13. In this case, the number of trading days, N = 6966.
Finally, we should comment on the evaluation period, k. In this study we chose to evaluate the paired switching portfolio at the end of each month, in accordance with the rule used for the Ivy 5. However, as mentioned earlier, Maewal and Bock[7] only evaluated the possibility of a switch every 3 months, corresponding to the 3-month look back period. Moreover, Cohn[8] employed an evaluation period of 1 day, together with a look back period of 84 days (it can be shown that, with a 1-day evaluation period, the optimal performance has a steep maximum about the 84-day look back period). Table 1 summarizes the performance for these 3 cases over the 10-year period ending on 12/31/13.
It can be seen that with k = 3 months, the number of trades is only slightly less than with k = 1 month, but the performance suffers significantly. Conversely, if paired switching is evaluated on a daily basis, with a look back period of 84 days, a roughly 1% increase in CAGR is obtained, together with a small decrease in maximum draw down. However, in this case, the number of trades increases by a factor of 4, so that slippage becomes a significant factor.
CONCLUSION
Paired switching refers to investing in one of a pair of negatively correlated assets and periodically switching the position on the basis of the relative performance of the two. The specific pair examined in this study is the S&P 500 index and the index for U. S. treasury bonds with maturities of 20 or more years. It was shown that, using mutual fund surrogates VFINX for the former and VUSTX for the latter, and switching between the two at the end of each month based on whichever had the higher return over the past three months, a total return of almost 2400% could have been achieved over a period of 25 years ending on 12/31/13, double that obtained by buying and holding VFINX alone in this period. Moreover, this result was obtained with significantly less risk; draw down was only a third of that with buy-and-hold. Unfortunately, short-term trading restrictions on these mutual funds, still in place, would have made it difficult, if not impossible, for the average investor to use this method. With the advent of exchange traded funds, however, the game has changed. ETFs are readily available that track the S&P 500 index (SPY) and the long bond (TLT), and the low electronic trading cost coupled with the huge daily volumes provides the liquidity with which slippage can be minimized. Using the same paired switching rule just described, it was shown that the total return over the 10-year period ending on 12/31/13 was almost 300%, three times higher than that obtained by buying and holding SPY alone, and also three times higher than that obtained with the popular Ivy 5 method described in section 3.0 above. And, as with the mutual funds over the 25-year period, the draw down with paired switching was only a third of that with buy-and-hold of SPY. The number of trades with paired switching over the 10-year period was 45, compared with 70 for the Ivy 5, although the latter had slightly lower maximum draw down (14.6% versus 18.8% for paired switching).
REFERENCES
- Malkiel, B. G., “A Random Walk Down Wall Street” (2007), W. W. Norton & Company
- Faber, Mebane T., A Quantitative Approach to Tactical Asset Allocation (February 1, 2013), The Journal of Wealth Management, Spring 2007. Available at SSRN: http://ssrn.com/abstract=962461
- Faber, M. T. and Richardson, E. W., The Ivy Portfolio – How to Invest Like the Top Endowments and Avoid Bear Markets, John Wiley & Sons, 2009.
- Glenn, L. A., Market Timing with Volatility, Active Trader, 10, No. 7, July 2009, pp. 28-31; see also Beat the Market – A Strategy for Conservative Investors (December 12, 2008). Available at SSRN: http://ssrn.com/abstract=1315533
- Glenn, L. A., Market Timing using Exchange Traded Funds, The Technical Analyst, July-Sept. 2010, pp.16-24. Available at SSRN: http://ssrn.com/abstract=1591969; see also Playing Both Sides, Active Trader, 11, No. 9, September 2010, pp. 44-49
- Glenn, L. A., Adaptive Market Timing with ETFs (December 28, 2010). Available at SSRN: http://ssrn.com/abstract=1732010
- Maewal, Akhilesh and Bock, Joel B., Paired-Switching for Tactical Portfolio Allocation (August 22, 2011). Available at SSRN: http://ssrn.com/abstract=1917044
- Cohn, Marc, Return Like a Stock, Risk Like a Bond: 15.5% CAGR with 17% Drawdown (February 23, 2014). Available at http://seekingalpha.com/article/2041703-return-like-a-stock-risk-like-a-bond-15_5-percent-cagr-with17-percent-drawdown?ifp=0
- Henriksson, R. D. and Merton, R. C., On Market Timing and Investment Performance. II. Statistical Procedures for Evaluating Forecasting Skills, J. Business, 54, No. 4, October 1981, pp. 513-533.
Fibonacci Bands
by David Linton, MFTA

About the Author | David Linton, MFTA
David Linton is the founder and Chief Executive Officer of Updata, an analytical service that aims to deliver the best technical analysis software running on as many data sources as possible. Professional traders and analysts now use Updata’s services in over forty countries around the world.
David is a well-known commentator on financial markets in the UK. He has appeared on BBC television, ITN News, Bloomberg and CNBC finance channels and has written for The Mail on Sunday, Shares Magazine and the Investors Chronicle. He has taught Technical Analysis to thousands of traders and investors in Europe over the last two decades with numerous financial institutions employing him to teach and train their trading teams.
David is a member of the UK Society of Technical Analysis (STA) where he teaches the Ichimoku technique as part of the STA Diploma Course and is a holder of the MSTA designation. He is a member of the Association of American Professional Technical Analysts (AAPTA) and was awarded the Master Financial Technical Analyst (MFTA) qualification by the International Federation of Technical Analysts (IFTA) for his paper on the Optimisation of Trailing Stop-losses in 2008. David is the author of Cloud Charts, Trading Success with the Ichimoku Technique, published by Updata. David lives in London and his interests include skiing and yachting.
ABSTRACT
Fibonacci Retracements are normally applied to charts manually by the technician. Most computerized technical analysis systems require the end user to specify a measured move in price between a low and a high, or vice versa, to generate the key Fibonacci retracement levels between those points. This paper explores the idea of utilizing a key concept of Ichimoku charts to project retracements automatically into the future.
INTRODUCTION
Fibonacci Retracements are based on the idea that, after a significant move in price, a market or instrument will retrace a portion of that move. Technical analysts sometimes refer to a rule of thumb where prices pull back by half the original move up to find support. Similarly, a retracement to the upside may occur after a significant fall in price.
Fibonacci ratios are derived from the Fibonacci number sequence developed by the twelfth century mathematician Leonardo Pisano Bogollo, known as Leonardo of Pisa or, more commonly, Fibonacci. He showed in his work that many patterns in nature followed these numbers and ratios. If the foundation of technical analysis is based on crowd behaviour, it is reasonable to apply this branch of mathematics, which has been shown to map many natural phenomena, to financial markets.
FIBONACCI RETRACEMENT LEVELS
The most referred to Fibonacci retracement level of 61.8% is based on the golden ratio 0.618. This value comes from the Fibonacci number sequence whereby each subsequent number is generated by adding the two previous numbers in the sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144….
The golden ratio is derived by dividing adjacent numbers in the sequence, e.g. 55/89, and as the sequence progresses this quotient approaches 0.618. The 0.5 ratio is the starting ratio 1/2 before the approximation to 0.618. The 0.382 ratio, which also happens to be 1 – 0.618, is derived from dividing a number in the sequence by the number two places ahead, e.g. 55/144. There are other levels such as 23.6%, which comes from dividing a number in the sequence by the number three places ahead, but 38.2, 50 and 61.8% are generally considered the main Fibonacci retracement levels. The 38.2% retracement might be considered an ‘undershoot’ and the 61.8% level an ‘overshoot.’ Markets often overshoot, and the 61.8% retracement level has become so revered among market participants that it cannot be easily ignored.
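The convergence is easy to verify numerically; a small Python snippet (illustrative only):

fib = [0, 1]
while len(fib) < 14:
    fib.append(fib[-1] + fib[-2])   # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233

print(fib[10] / fib[11])   # 55/89  ~ 0.618, the golden ratio
print(fib[10] / fib[12])   # 55/144 ~ 0.382 (= 1 - 0.618)
print(fib[10] / fib[13])   # 55/233 ~ 0.236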
Prices will often range between, or become bound by, Fibonacci retracement levels. We see this in Figure 1 where prices repeatedly find support on or around the 61.8% level and resistance at 23.6%.
THE ICHIMOKU TECHNIQUE
The application of Fibonacci Retracements to a chart is somewhat subjective. It is necessary to identify measured moves between significant lows and highs. These often only become clear in retrospect, by which time much of the retracement may already have occurred. One technique that could lend itself to running rolling Fibonacci retracement levels on a chart is that of the increasingly popular Japanese Ichimoku analysis.
Ichimoku charts were invented by Goichi Hosoda (1898 – 1982) in the 1930s. These charts, often referred to as Cloud Charts, are constructed with five key lines derived from price as follows:
- The Turning Line – plot the mid point of the lowest low and highest high in the previous 9 bars
- The Standard Line – the mid point of the lowest low and highest high in the last 26 bars
- Cloud Span A – the mid point of the Turning and Standard lines shifted forward 26 bars
- Cloud Span B – mid point of lowest low and highest high for last 52 bars shifted forward 26 bars
- The Lagging Line – plot of the closing price shifted back 26 bars
The cloud area between the cloud spans is normally shaded such that the cloud is red when Cloud Span A is below Cloud Span B at the top of the cloud (the shorter or faster average is below the slower one) and blue shading is used when Cloud Span B forms the base of the cloud. Ichimoku in Japanese roughly translates to ‘one look’, or at a glance, and it is this cloud visualization projecting support and resistance into the future that makes cloud charts compelling.
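A minimal pandas sketch of the five-line construction is given below; the OHLC column names are assumptions. Note that shifting forward here only projects as far as the available index; a full implementation would extend the date axis 26 bars beyond the last price.

import pandas as pd

def ichimoku(df: pd.DataFrame) -> pd.DataFrame:
    '''Five Ichimoku lines from a DataFrame with "high", "low", "close" columns.'''
    mid = lambda n: (df["high"].rolling(n).max() + df["low"].rolling(n).min()) / 2
    out = pd.DataFrame(index=df.index)
    out["turning"] = mid(9)                                   # mid point, last 9 bars
    out["standard"] = mid(26)                                 # mid point, last 26 bars
    out["span_a"] = ((out["turning"] + out["standard"]) / 2).shift(26)
    out["span_b"] = mid(52).shift(26)                         # shifted forward 26 bars
    out["lagging"] = df["close"].shift(-26)                   # close shifted back 26 bars
    return out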
Figure 2 shows these five plotted lines and the shaded cloud labelled on the chart.
When developing the Ichimoku technique in the 1930s, Hosoda arrived at the construction values 9, 26 and 52 by hand, through a process of trial and error assisted by a group of several students. There were 6 trading days in a week, and on average 26 trading days in a month, in Japan at that time. It is quite likely that he found one month and two months to be good averaging periods and possibly arrived at 9 days after finding one week too sensitive and two weeks too slow an average. It is useful that, on weekly charts, the look back period is a rolling 12 months and the cloud extends forward to give a visualization 6 months into the future.
Cloud charts are unique due to the construction of the averages and the forward projection of the cloud. The turning and standard lines and the cloud spans are all based on the mid points of the extreme highs and lows of the averaging period. The cloud is interpreted as an area of support when it is below the price and an area of resistance when above. A simple interpretation is that a chart is bullish when the price is above the cloud and bearish when below. If price is in the cloud, it is bullish if it entered from above and bearish if it entered from below.
A trend change occurs when prices cross from one side of the cloud to the other and the lagging line crossing after this is the true confirming signal. These transitions are powerful signals for change in trend. Figure 3 shows how clearly the trend changes have shown up for the US stock market over the last twenty years. On the basis of the weekly cloud chart, one would have sold the market in the year 2000, bought in 2003, sold at the end of 2007 and bought in 2009 and held since.
Ichimoku analysis is a trend following technique, so the signals come after major tops and bottoms in the market. It is this aspect of Ichimoku charting that lends itself to projecting retracement levels for significant tops and bottoms into the future.
Cloud Span B
One of the most important aspects of Ichimoku charts is the way price interacts with the cloud as support and resistance. The base of the cloud is the critical support level in an uptrend and the top of the cloud is the resistance level in a downtrend. This is normally Cloud Span B. Figure 4 shows four touches marked where prices find support on or around Cloud Span B.
Because Cloud Span B is the midpoint of the highest high and the lowest low for the last 52 bars projected forward by 26 bars, it is effectively a rolling 50% retracement running ahead of the price. What is most interesting is when Cloud Span B ‘flat lines’ by moving horizontally because no new extreme high or low values are entering the calculation for the last 52 bars. This in turn implies that the extreme high and low are significant, because subsequent values lie between them and therefore the chart is probably undergoing a retracement. If Cloud Span B is ‘flat lining’, then a Fibonacci Retracement should be present.
When Cloud Span B runs sideways for an extended period it can run for up to 52 bars, starting 26 bars from the latter extreme point in price and ending 78 bars (52+26) after the first extreme point, after which the first extreme point drops out of the calculation. Traditional moving averages are often used to identify levels of support or resistance for the price, but they continue to change because the calculation is based on every closing price in the averaging period. The averages used in Ichimoku charts are better suited as a retracement measure because they are based on the midpoint of the absolute trading range in the averaging period and can run sideways, only changing with new high or low information.
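Flat-lining is straightforward to flag programmatically. A small sketch, reusing the span_b series from the Ichimoku sketch above (the 5-bar threshold is an illustrative choice, not from the text):

import pandas as pd

def span_b_is_flat(span_b: pd.Series, bars: int = 5) -> pd.Series:
    '''True where Cloud Span B has not moved for `bars` consecutive bars,
    i.e. no new 52-bar extreme has entered the calculation.'''
    return span_b.diff().abs().rolling(bars).sum() == 0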
CONSTRUCTION OF FIBONACCI BANDS
Working on the basis that Cloud Span B is a rolling 50% retracement, retracement levels for 38.2% and 61.8% can be added. These bands, 11.8% either side of the mid band, give a channel 23.6% wide. Fibonacci Bands are quite different from Bollinger Bands or Donchian Channels, which are generally boundaries to the price. They should be read the same way as Fibonacci retracements would be, with prices potentially finding support or resistance at these levels.
The algorithm for the bands is:
Middle Band 50% = (highest high in last 52 bars + lowest low in last 52 bars) X 0.5, shifted forward 26 bars
Upper Band 38.2% = lowest low in last 52 bars + ((highest high in last 52 bars – lowest low in last 52 bars) X 0.618), shifted forward 26 bars
Lower Band 61.8% = lowest low in last 52 bars + ((highest high in last 52 bars – lowest low in last 52 bars) X 0.382), shifted forward 26 bars
In the case of a move up in prices from a low to high, the upper band occurs at 0.618 of that up move. For a move down from a high to a low, the upper band now becomes the 61.8% retracement for any resulting counter move. The code for the bands in the Updata system, which can easily be adapted for other systems, is given in Appendix 1.
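For readers outside the Updata environment, a pandas rendering of the same construction might look as follows (OHLC column names are assumptions; as with the Ichimoku sketch, the forward shift projects only as far as the existing index):

import pandas as pd

def fibonacci_bands(df: pd.DataFrame, lookback: int = 52, fwd: int = 26) -> pd.DataFrame:
    '''38.2/50/61.8% levels of the rolling high-low range, shifted fwd bars forward.'''
    hh = df["high"].rolling(lookback).max()
    ll = df["low"].rolling(lookback).min()
    bands = pd.DataFrame(index=df.index)
    bands["middle"] = ((hh + ll) / 2).shift(fwd)            # 50% level
    bands["upper"] = (ll + (hh - ll) * 0.618).shift(fwd)    # 38.2% retracement of an up move
    bands["lower"] = (ll + (hh - ll) * 0.382).shift(fwd)    # 61.8% retracement of an up move
    return bands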
Figure 5 shows a chart with Fibonacci Bands marked on. The bands can be interpreted in a similar fashion as the cloud is read on an Ichimoku chart. When prices are above the bands, the bands become levels of support or retracement. When prices are below the bands, the bands become levels of resistance for the future. On this chart the down move marked ‘1’ corresponds to the flat bands also marked ‘1’, which start 26 days after the low. The bands start to rise when the September high is surpassed in November (dotted line). The flat line of the bands marked ‘2’, similarly corresponds to the up move marked ‘2’.
The Fibonacci Band and up move marked ‘3’ in Figure 5 do not in fact relate to each other; this part of the bands is generated from the preceding move down. The mechanisms of a ‘look back’ period for the mid point of trading and a ‘project forward’ period have been borrowed from the Ichimoku technique for the purpose of generating Fibonacci Bands.
The chart in Figure 6 shows the same stock chart (Intel) as in Figure 5. In this instance no forward projection has been used. This is done by setting the Forward Shift variable to zero (See Appendix 1). While price appears to have interacted well with the Fibonacci bands both from above and below, this is easy to see in retrospect. Not having the forward projection of the retracement levels into the future and their ongoing adjustment for new price information would reduce their practical application.
Using a ‘look back’ and ‘project forward’ to construct Fibonacci Bands poses two central questions in attempting to automate the subjective technique of applying Fibonacci Retracements:
- How far does one look back to establish the significant highs and lows for retracements?
- What is the length of time for the influence of a significant move between highs and lows?
The technique of Fibonacci time extensions seeks to arrive at the relationship between these two questions, but this is again subjective. One way to answer the questions is to attempt to optimize the ‘look back’ and ‘project forward’ variables by back testing price histories. Backtesting touches or bounces on the Fibonacci Bands is difficult. To address this, some basic assumptions are needed for the development of a trading strategy.
FIBONACCI BANDS AS A TRADING STRATEGY
If the assumption is made that the 61.8% level is an acceptable retracement but a move through this level is not, a trading strategy based on this can be developed. Any move through the Fibonacci Bands is not a retracement and may now be considered a trend change.
The algorithm for a ‘long only’ trading strategy based on the bands is:
SELL when price crosses below lower band
BUY when price crosses back above the upper band
This strategy is effectively making the assumption that a move below the 61.8% retracement (lower band) after an up move in prices is more than an acceptable retracement and the instrument should be sold. A move back above the upper band, which is in fact the 61.8% retracement from a down move, is also more than an acceptable reactive move and therefore the instrument should be bought back. The code for the trading strategy based on Fibonacci Bands for the Updata system is given in Appendix 2 and this could be easily adapted for other systems.
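A minimal Python sketch of the long-only rule, built on the fibonacci_bands function above, is given below; it omits the signal delay discussed later and assumes the position is initially long.

import pandas as pd

def long_only_positions(close: pd.Series, bands: pd.DataFrame) -> pd.Series:
    '''1 = long, 0 = flat, per the rule: sell below the lower band, buy above the upper.'''
    position = pd.Series(0.0, index=close.index)
    holding = True                                      # assumed initially long
    for t in close.index:
        lower, upper = bands.at[t, "lower"], bands.at[t, "upper"]
        if pd.notna(lower) and pd.notna(upper):
            if holding and close.loc[t] < lower:        # breach beyond the 61.8% retracement
                holding = False                         # SELL: treat as a trend change
            elif not holding and close.loc[t] > upper:  # reclaim 61.8% of the down move
                holding = True                          # BUY back
        position[t] = 1.0 if holding else 0.0
    return position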
Figure 7 shows the result of this strategy for the Nasdaq 100 Index on weekly prices over the past twenty years. The market is bought in 1994 after crossing up through the Fibonacci Bands and sold in late 2000 when the lower band is crossed. The bottom window of this chart shows the market and the Equity Line (in green) for the trading strategy overlaid but resized on different scales. The Nasdaq continues to fall while the Equity Line moves sideways having sold. In 2003, the market moves above the upper band, the first retracement in years, so this is a buy. There is a subsequent sell and buy back in 2006 through some market chop. The market is sold again in early 2008 and bought back with a move back above the bands in 2009. This trade has been open ever since. While the market is still below the all time high set in the year 2000, the Equity Line of the trading strategy has well passed this level (marked blue dotted line) and is making significantly higher highs.
The short-lived exit and entry in 2006 for the test on the Nasdaq 100 is the sort of whipsaw trade that could be eliminated by finessing the system with a signal delay mechanism and optimisation of the variables. The idea of the signal delay is to wait a few bars in order to eliminate temporary breaches. This delay can be optimized between one and three bars, and tests show it often pays to wait. Results may also be improved further by selecting the appropriate amount of back history to test. Figure 8 shows the chart for such a back test of the Fibonacci Band system for the US stock market from the start of 2009. The signal delay was 3 days and the ‘look back’ and ‘project forward’ periods were optimized for maximum profit produced by the trading system by testing a range of values for these between 1 and 255 (the approximate number of trading days in a year). The start of 2009 was chosen in order to test the characteristics of the current bull trend in the market (as confirmed by the weekly Ichimoku chart in Figure 3). In this case the optimum look back was 138 days instead of 52 and the optimum look forward was only 14 days instead of 26.
This is a long only trading system which has therefore been optimised only for retracements of up moves. What is most interesting is seeing how prices often bounced off the upper and lower bands in the period. There were periods of uncertainty in 2010 and 2011 and the sharp sell off in late 2014 gave a shorter temporary breach of the bands.
Figure 9 shows a shorter term trading system for the S&P 500 Index on 60 minute data from 1st March 2015 until mid June 2015 when the market traded mostly sideways. This is a long-short trading system whereby, when a long trade is exited, a short trade is entered with a move below the lower band. Then the short trade is exited on a move back above the upper band and another long trade is initiated. Here the optimization produces very short look back and project forward periods of 5 hours each. The system pulls the bands as close to the price as possible optimizing for maximum trading profit. The blue line is the market which is lower after a few months and the black line is the equity line which is moving to new highs.
CONCLUSIONS
The test for the S&P 500 Index from the start of 2009 gave optimum values of 138 and 14. This suggests an average look back of around six months is currently ideal for trapping significant up moves in the US market, and that the resulting sell-offs occur in only a couple of weeks for the current market trend. More importantly, it suggests that the values borrowed from the Ichimoku technique, of 52 days and 26 days, may well have been too short and too long respectively. It implies that, for this market, measured moves take longer to build up and retracements play out in a relatively short time frame.
When Fibonacci Bands ‘flat line’ for an extended period, it implies that a retracement level is more likely to be valid due to the presence of a significant high and low that are not being surpassed. More importance can be placed on the bands during these times.
Fibonacci Bands do not serve as a mathematical proof that Fibonacci retracements consistently work, but technicians should find them useful as a rolling visual method of knowing where the retracement levels are in relation to the price action that has generated them.
REFERENCES
Software, back testing and charts: Updata Analytics
Data: Bloomberg
Sigler, Laurence, 2003, Fibonacci’s Liber Abaci: A Translation into Modern English of Leonardo Pisano’s Book of Calculation (Sources and Studies in the History of Mathematics and Physical Sciences), Springer.
Murphy, John J, 1999, Technical Analysis of the Financial Markets: A Comprehensive Guide to Trading Methods and Applications, New York Institute of Finance.
Kirkpatrick II, Charles D. and Dahlquist, Julie, 2010, Technical Analysis: The Complete Resource for Financial Market Technicians (2nd Edition), Pearson Education.
Sasaki, Hidenobu, 1996, Ichimoku Kinko Studies (Japanese language edition), Toshi Raider Publishing
Linton, David, 2010, Cloud Charts: Trading Success with the Ichimoku Technique, Updata Publishing
APPENDIX 1
Code for Fibonacci Bands, Updata Programming Language, Updata Analytics
NAME "FIBONACCI BANDS"
PARAMETER "Forward Shift Period" #FWD=26
PARAMETER "Lookback Period" #LB=52
DISPLAYSTYLE 7LINES
INDICATORTYPE TOOL
INDICATORTYPE2 TOOL
PLOTSTYLE DOT RGB(255,0,128)
PLOTSTYLE2 THICK2 RGB(255,0,128)
PLOTSTYLE3 THICK2 RGB(255,0,128)
@UPPER=0
@LOWER=0
FOR #CURDATE=#LB TO #LASTDATE+#FWD
@UPPER=PHIGH(HIGH(#FWD-1),#LB)
@LOWER=PLOW(LOW(#FWD-1),#LB)
'@PLOT2=@UPPER
'@PLOT3=@LOWER
'DRAW 50%,38.2%,61.8% RETRACEMENT LEVELS
@PLOT=(@UPPER+@LOWER)/2
@PLOT2=@LOWER+(@UPPER-@LOWER)*0.382
@PLOT3=@LOWER+(@UPPER-@LOWER)*0.618
NEXT
APPENDIX 2
Code for Fibonacci Bands Trading System, Updata Programming Language, Updata Analytics
NAME "FIBONACCI BANDS TRADING SYSTEM"
PARAMETER "Forward Shift Period" #FWD=26
PARAMETER "Lookback Period" #LB=52
PARAMETER "Buy Signal Delay" #X=3
PARAMETER "Sell Signal Delay" #Y=3
DISPLAYSTYLE 7LINES
INDICATORTYPE TOOL
INDICATORTYPE2 TOOL
PLOTSTYLE DOT RGB(255,0,128)
PLOTSTYLE2 THICK2 RGB(255,0,128)
PLOTSTYLE3 THICK2 RGB(255,0,128)
@UPPER=0
@LOWER=0
#AllowLongExits=0
#AllowShortExits=0
FOR #CURDATE=#LB TO #LASTDATE+#FWD
@UPPER=PHIGH(HIGH(#FWD-1),#LB)
@LOWER=PLOW(LOW(#FWD-1),#LB)
'@PLOT2=@UPPER
'@PLOT3=@LOWER
'DRAW 50%,38.2%,61.8% RETRACEMENT LEVELS
@PLOT=(@UPPER+@LOWER)/2
@PLOT2=@LOWER+(@UPPER-@LOWER)*0.382
@PLOT3=@LOWER+(@UPPER-@LOWER)*0.618
IF SGNL(CLOSE>@PLOT3,#X,M)=1 AND CLOSE(#X+1)<HIST(@PLOT3,#X+1)
'COVER – remove the ' here to make system Long Exit-Short Entry
BUY CLOSE
ENDIF
IF SGNL(CLOSE<@PLOT2,#Y,M)=1 AND CLOSE(#Y+1)>HIST(@PLOT2,#Y+1)
SELL CLOSE
'SHORT – remove the ' here to make system Long Exit-Short Entry
ENDIF
NEXT
Technical Analysis and the Carry Trade
by Larissa J. Miller
About the Author | Larissa J. Miller
Larissa J. Miller is an Associate Professor at Benedictine University. Her extensive experience in the financial markets includes many accomplishments, ranging from founding Stuart Investments, an equity fund, to teaching at the University of Chicago Booth School of Business. Larissa has spent most of her time since earning her Master’s degree in Financial Engineering from the Stuart Graduate School of Business working with different types of investment products. She has worked at many well-respected financial institutions, such as the Federal Home Loan Bank of Chicago, where she developed over-the-counter fixed income products; FactSet, where she modeled mortgage-backed securities; and PEAK6, where she created educational courses in ETFs and options. Larissa’s vast training and knowledge in equity research has formed the foundation for creating and understanding these more complex products.
Currently, Larissa is pursuing her PhD in Finance at the Illinois Institute of Technology and is ABD. Her research concentrations include currency pricing, ETFs, futures & options, VIX, economics, equities, asset pricing and financial education, specifically experiential learning. Additionally, Larissa continues to build bridges between the financial industry and academia by developing projects, such as the Implied Volatility Competition as sponsored by TD Ameritrade, as well as by managing the Americas championship CFA Research Team from Benedictine. She also sits on the Board of Stuart Investments.
by Deborah Cernauskas
About the Author | Deborah Cernauskas
Deborah Cernauskas is an Associate Professor of Finance and Business Analytics and Chair for Undergraduate Business at Benedictine University in Lisle, IL. She has extensive experience in academia and industry applying her quantitative skills in commodities trading research, corporate development, market research, and financial management. She has taught graduate and undergraduate courses in derivatives, financial management, time series analysis, statistics, agent-based modeling, data and text mining and social network analysis.
While at Benedictine University she co-founded the Institute for Business Analytics and Visualization which provides experiential learning opportunities for students in all areas of business. She is the author and co-author of several books and journal articles on risk management and quantitative methods.
ABSTRACT
Technical analysis and technical trading rules have proven both extremely profitable and popular with many traders in many different asset classes. The carry trade is extremely popular with foreign exchange traders due to two sources of potential alpha: the interest rate differential and currency appreciation. Applying technical analysis allows traders to develop sophisticated trading rules that reduce the overall number of trades executed while improving profitability. A hypothetical portfolio composed of foreign currencies of both mature and emerging markets was formed to test the trading rule. The currencies in the portfolio included: the Australian dollar (AUD), the Canadian dollar (CAD), the Swiss franc (CHF), the British pound (GBP), the Japanese yen (JPY), the New Zealand dollar (NZD), the Danish krone (DKK), the euro (EUR), the Norwegian krone (NOK), the Swedish krona (SEK), the Czech koruna (CZK), the Hungarian forint (HUF), the Mexican peso (MXN), the Polish zloty (PLN), the Singapore dollar (SGD), the Russian ruble (RUB) and the South African rand (ZAR). The carry trade trading rule was evaluated using both moving averages and Bollinger Bands on a daily basis from August 14, 2003 through May 2, 2014.
Key Words:
Carry Trade, Technical Analysis, Moving Average, Bollinger Bands, Foreign Exchange, Purchasing Power Parity, Covered Interest Rate Parity, Uncovered Interest Rate Parity
PRICING THEORY
The carry trade becomes profitable when there is a disruption in purchasing power parity, covered interest rate parity or uncovered interest rate parity. These three theories drive foreign exchange prices. Parity keeps prices in line, thus eliminating any arbitrage opportunities. When there is a disruption in prices, traders quickly begin trading, eliminating any excess profits.
Purchasing Power Parity
Supply and demand dictate the equilibrium price in all financial markets, including the market for foreign exchange. Here the equilibrium price is the exchange rate between two countries, i.e., the cost of one country’s currency in terms of the other country’s currency. The price is determined via price discovery as buyers and sellers trade on the FX market. Under perfect conditions, the price equilibrium will be driven by the law of one price (LOOP), which states that the returns of currencies around the world will be consistent across international boundaries.
The parity conditions of the law of one price generate an arbitrage-free environment where trade occurs freely. Occasionally, the law of one price may be violated when the price for an asset is lower in one country than another (accounting for exchange rates). Once this happens, investors or consumers will begin buying the asset, creating more demand and eventually pushing the price higher until finally the equilibrium is once again established. This process works for the currencies of a given country as well as for consumer goods. If the law of one price is violated with regard to currency price, investors will rush in and purchase the ‘cheap’ currency (the currency which is under-priced or under-valued) until the law of one price is re-established with respect to the price of the currency.
There are many ways in which a discrepancy in the law of one price may be generated. Specific countries may impose quotas or tariffs, or have disagreements concerning the fundamental value of the currency. A global financial crisis may alter the fundamentals of trading in the foreign exchange market. Foreign exchange traders need volume, liquidity and stable markets to properly hedge and speculate on the direction of currency price. Times of crisis weaken the stability of the markets and prices will deviate from the law of one price.
Measuring whether the law of one price holds true can be a challenge for traders, investors, and corporations looking to hedge their profits. One common technique for evaluating the true cost of one currency compared to another is purchasing power parity (PPP). Measuring the price of specific goods and commodities across different countries allows the investigator to better understand the power of the respective currencies. For example, if a given sum of money purchases a quantity x of goods and services in one country and only x/2 in another country, then the second country’s currency is said to have half the purchasing power. Purchasing power parity can be interpreted as the foreign exchange rate of one country’s currency against another country’s currency. The law of one price dictates that the basket of goods and services should cost the same across all currencies, so if there is a price misalignment, investors will demand the cheaper basket of goods and services – that is, the one from the country whose currency has the lesser purchasing power. Demand for that country’s basket of goods and services will push prices up until the natural law of one price equilibrium is restored.
According to a literature review by Isard et al., several studies indicate that over time periods greater than one year, PPP does hold true (Isard, Faruqee, Kincaid, & Fetherston 2001). During shorter time periods the market will deviate from the PPP state of equilibrium; eventually the market will correct itself by returning to it. Investors and traders don’t rely on PPP to make short term hedging or trading decisions (Cheung, Chinn & Marsh 2005). Usually, only those traders who have longer time horizons, such as fund managers, are interested in PPP trends.
Covered Interest Rate Parity
Covered interest rate parity (CIRP) is an even stronger tool for understanding the fundamental price of a currency. Its underlying assumption is that two different assets with the same risk and the same initial investment have the same expected returns. The objective of CIRP is to allow investors, traders or corporations to ‘cover’ themselves against changes in currency prices.
Covered interest rate parity is expressed as:

1 + i = (F_{t+1} / E_t)(1 + i*)
where i is the interest rate of the domestic currency and i* is the interest rate of the foreign currency. E_t is the spot price of the exchange rate, where a spot contract is a contract for immediate execution, meaning the exchange of currencies takes place at the moment of the transaction. F_{t+1} is the forward price of the exchange rate, an agreed-upon price to be executed at an agreed-upon future date. A trader who would like to have the foreign currency in the future has a choice of either entering a forward contract today, with the settlement date being the desired date in the future, or entering a spot contract at that future date. Entering into the forward contract allows the trader to know today what the cost of the foreign currency will be on the day the contract expires. On the other hand, with a spot contract the trader will not know the price of the foreign currency until the day the contract is entered into.
The covered yields on the two investments are then the same after adjusting for the exchange rate. For CIRP to hold, the following two equivalent equations must be true:

1 + i = (F_{t+1} / E_t)(1 + i*)
F_{t+1} = E_t (1 + i) / (1 + i*)
The benefit of using CIRP over PPP is that it allows investors to build forward products synthetically. Traders are able to create synthetic forward contracts in three simple steps. The first step is to borrow an amount equal to today’s spot price of the currency, at a known interest rate and with an agreed-upon expiration date. The trader then takes the borrowed proceeds and buys the currency at today’s spot price. Finally, at the expiration date of the loan, the trader pays back the loan plus the interest. The trader will have paid nothing at the onset of the contract and pays the current spot price plus interest at maturity, just like a forward contract. Being able to create and trade a synthetic forward product forces parity to hold because both the real product and the synthetic product generate the same pay-off. If the forward becomes expensive, traders will sell the forward and purchase the synthetic. Since the pay-off on the two is the same, the trader will collect the difference in price between the forward and the synthetic. On the other hand, if the forward becomes cheap, the trader will purchase the forward and sell the synthetic product. Again, the pay-offs will cancel each other out and leave the trader with the difference in the prices of the two products. Trading these two products when there is a difference in price and the pay-off is the same generates a riskless profit for the trader. As a result, traders will continue to exploit this disruption in pricing until the prices equalize. Demand for the cheap product will cause its price to rise; selling the expensive product will cause its price to fall. Both of these pressures will force the two products to the same price.
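A small Python sketch of the parity check behind this arbitrage; the spot rate, interest rates and forward quote are illustrative numbers, not market data.

def synthetic_forward_price(spot: float, i_dom: float, i_for: float) -> float:
    '''Fair forward price implied by CIRP: borrow domestically, buy spot,
    earn the foreign rate (rates per period, e.g. 0.02 = 2%).'''
    return spot * (1 + i_dom) / (1 + i_for)

# If the quoted forward deviates from this, sell the rich leg and buy the
# cheap one; identical pay-offs lock in the gap as riskless profit.
quoted_fwd = 1.1050                                            # illustrative quote
fair_fwd = synthetic_forward_price(spot=1.1000, i_dom=0.03, i_for=0.01)
print(quoted_fwd - fair_fwd)                                   # mispricing per unit, if any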
Typically, covered interest rate parity will hold within the developed market economies of OECD[1] countries. Any arbitrage opportunities which arise from a deviation in CIRP will quickly be exploited by the market. Typically these opportunities are exhausted immediately, causing covered interest rate parity to generally hold true (Frenkel, Levich 1975). Currencies in OECD countries are traded with large daily volumes, implying many buyers and sellers. Any deviations from CIRP are quickly exploited due to the high volume and liquidity of the market.
Uncovered Interest Parity
Uncovered interest parity (UIP) takes covered interest rate parity one step further by taking into account the expected change in the exchange rate. This can be expressed mathematically as:

1 + i = (E^e_{t+1} / E_t)(1 + i*)

where E^e_{t+1} denotes the expected spot rate in the next period.
The key difference between CIRP and UIP is the incorporation of forward contracts: CIRP relies on them while UIP does not. The benefit of a forward contract in the CIRP model is that the trader knows with certainty the future price of the asset. The lack of forward products in UIP requires investors to estimate or speculate upon the future value of the assets. Typically, the expected future price of the currency is reflected in the forward price. This is not always the case, however, and so there is an inherent risk of price uncertainty. Investors need to be properly compensated for holding risky assets. When a currency is perceived to be risky, the investor demands a high interest rate for compensation. On the other hand, when the currency is perceived to be less risky, the investor will demand a lower interest rate. The higher interest rate compensates the investor for the larger probability of potential loss from the riskier currency.
UIP describes the future demand for currency: lower rates will entice investors to purchase goods, which will cause the cheap currency to appreciate against the rest of the world. Bilson (1981) demonstrates that UIP does not hold true. To demonstrate this he developed a regression model:

(E_t − E_{t-1}) / E_{t-1} = α + β (i_{t-1} − i*_{t-1}) + ε_t
where E_{t-1} is the spot price at the previous time period, i_{t-1} is the domestic interest rate from the previous time period, i*_{t-1} is the foreign interest rate from the previous time period, and ε_t is the error term.
The regression tests whether the interest rate differential is an unbiased predictor of exchange rate appreciation. The uncovered interest rate parity model states that alpha should be zero and beta should be one. However, according to Bilson, this is not the case in the short run. Most studies demonstrate that alpha is actually greater than zero and beta is less than one, indicating that the currency of the country with the higher interest rate will appreciate, not depreciate, against that of the country with the lower interest rate. Many researchers have subsequently studied this same problem; the average beta across these studies is -0.88 (Froot and Thaler 1990). Additional surveys of academic research have demonstrated that beta approaches zero (Hodrick, 1987 and Engel, 1996).
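A sketch of this test regression, assuming aligned pandas Series of spot prices and per-period domestic and foreign interest rates (variable names are illustrative):

import pandas as pd
import statsmodels.api as sm

def uip_regression(fx: pd.Series, i_dom: pd.Series, i_for: pd.Series):
    '''Regress currency appreciation on the lagged interest rate differential;
    UIP predicts alpha = 0 and beta = 1.'''
    dE = fx.pct_change()                     # (E_t - E_{t-1}) / E_{t-1}
    ird = (i_dom - i_for).shift(1)           # i_{t-1} - i*_{t-1}
    data = pd.concat([dE, ird], axis=1, keys=["dE", "ird"]).dropna()
    return sm.OLS(data["dE"], sm.add_constant(data["ird"])).fit()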
In the long-run, uncovered interest rate parity does indeed hold true, for a number of economic reasons: (a) overall macroeconomic risk stemming from high minus low portfolios; (b) covariance with risk factors; (c) higher currencies being more exposed to slope risk; and (d) the tendency, during times of increasing volatility, for the currencies of lower rate countries to become more exposed to common innovations and offer insurance.
CARRY TRADE
The carry trade allows traders to exploit deviations from uncovered interest rate parity by investing in the cheap currency and selling the expensive currency, allowing the trader to make the difference, or spread, between the two currencies. As stated above, the uncovered interest rate parity model states that the difference between the interest rates of two countries is the expected change in the exchange rates of the countries’ currencies. Any time this parity does not hold true, there is the potential for profit through the exploitation of this condition. Research conducted to demonstrate deviations from UIP extends to the usage of the carry trade. If UIP were to hold true in the short term, the carry trade shouldn’t make a profit. However, both academics and practitioners have demonstrated that the carry trade can make a profit.
In the carry trade, the forward rate reflects the market risk-free rates in the two different countries. The expected spot rate of the next period tends to be the same as the forward price, although this does not necessarily have to hold true. This tendency indicates that uncovered interest rate parity holds, i.e., the forward rate is the market’s expectation of the future spot rate. This is true for most futures and spot pricing relationships. The interest rate differential (IRD) then drives the forward rate, which also serves as the approximation for whether the currency will appreciate or depreciate. The carry trade strategy borrows the cheap currency and sells the expensive currency, thus making the spread between the two different currencies. The carry trade trader can make an additional profit from the appreciation of the long currency.
In the long run, carry trade returns exhibit significant skewness as compared with returns from other asset classes, which are somewhat normally distributed with a mean of zero and a standard deviation of one. The carry trader may generate many small gains, producing consistent returns during normal times. However, the trade also involves significant risk, with the possibility of substantial losses stemming both from the interest rate differential and from currency depreciation. When this happens, the bottom of the trade drops very quickly. Returns on the carry trade therefore exhibit significant loss through negative skew (Brunnermeier, Nagel, & Pederson 2009).
DATA
Data Source
Finding appropriate data is challenging since there is no single exchange for the FX market. As a result, no public record is kept of daily price and volume data, let alone bid and ask prices. Prices from published data sources such as Bloomberg are considered to be indicative, implying that actual prices may differ from the posted prices on such retail platforms. These challenges help market makers continue to make a profit in the FX space (Yao 1997 and Lyons 2001).
Market makers make a living on the bid-ask spread. The bid price is the price at which buyers are willing to buy the security and the ask price is the price at which sellers are willing to sell it. The market maker takes both sides of the transaction by buying at the bid price and selling at the ask price. The larger the difference, the more money the market maker makes. In many markets the advancement of technology has eliminated the need for market makers, as technology can directly connect buyers and sellers. Since the FX market doesn’t have a centralized exchange, the need for market makers still exists. Additionally, without an exchange, prices become more opaque, allowing market makers to take advantage of the situation (Yao 1997 and Lyons 2001).
The data used in this study was provided by Bloomberg. Daily FX spot prices were used. The overnight interest rates were calculated based on the FX swap points.
FX swap points correspond to the general cost of carry model (Pojarliev, Levich 2012). Traders don’t buy and sell the products directly. Rather, investors go through investment banks and either receive or pay the forward points depending on whether they are buying or selling the product. These points reflect the cost of carry model as it would be executed in an actual trading system. The overnight forward points are used since we are using daily FX exchange rates.
Currency Data
Data availability for the different countries depends not only on data collection by firms in the financial markets but also on whether the country had a floating currency. A floating currency allows the currency’s value to fluctuate with supply and demand. Some currencies are pegged to another currency; a pegged currency derives its value, and the fluctuation of that value, from the currency to which it is pegged. Many currencies have been pegged to the US dollar, so that every time the US dollar increased in value so would the pegged currency. The challenge in working with pegged currencies is that they move exactly with the currency to which they are pegged. When currencies are pegged, there is no spread, which would cause the carry trade to have a profit of zero.
For a long period of history, many countries adhered to the gold standard, meaning their currency was valued based on the underlying price of gold. Having a set standard allowed for ease of trading among those countries on the gold standard, as converting one currency into another was quick and simple. During this time period in world history, economies had the opportunity to expand and grow, leading to an openness of markets (Karczmarczyk 2010). However, the gold standard is not without its faults. With any precious metal or commodity there is always the chance of a world-wide shortage; that is, the world could run out of gold. Another aspect of gold shortages pertains to specific countries. As trade expanded, gold left the country to make purchases of goods and services. As gold left the country, fear crept into the markets as merchants became concerned about their ability to convert their goods to currency. Eventually, countries abandoned the gold standard in favor of floating their currencies.
The currencies used for the portfolio are: the Australian dollar (AUD), the Canadian dollar (CAD), the Swiss franc (CHF), the British pound (GBP), the Japanese yen (JPY), the New Zealand dollar (NZD), the Danish krone (DKK), the euro (EUR), the Norwegian krone (NOK), the Swedish krona (SEK), the Czech koruna (CZK), the Hungarian forint (HUF), the Mexican peso (MXN), the Polish zloty (PLN), the Singapore dollar (SGD), the Russian ruble (RUB) and the South African rand (ZAR).
The un-weighted excess returns are calculated through several steps due to the nature of the data. We have daily FX prices and daily overnight forward rates. These overnight forward rates are the interest rate differentials and are used as such. Additionally, we use the daily federal funds rate as a proxy for the overnight interest rate.
To calculate excess returns:
The dummy variables, a and out, are used solely for the purpose of breaking the equation into smaller pieces to ensure accuracy.
FX i,t is the FX price for currency i at time t. ON i,t-1 is the overnight forward price of currency i at time t-1; the investor earns the interest rate overnight, and thus we use the previous day's overnight price.
Out i,t is the outright price for every asset at every time step.
We are changing the price into an annual price; we will convert it back into a daily price in the following step. We do this to mimic other methodologies for calculating excess returns of the carry trade.
Where R i,t is the excess return for each asset at each time step.
The first portion of the equation accounts for the percent change in the FX price from one time step to the next. The second part accounts for the interest rate differential earned by the investor as well as the appreciation or depreciation of the home currency.
Before calculating the optimal portfolio weights, a matrix of returns for each of the asset positions needs to be developed. This matrix is then fed into each of the portfolio optimization techniques which are described in the portfolio optimization section. The returns are:
Where S t is today's spot price or foreign exchange price, S t-1 is yesterday's spot price or foreign exchange price, i* t is today's interest rate in the foreign country and i t is today's interest rate in the domestic country. The interest rate differential is found in the overnight forward points. The value is denominated in US dollars, the domestic currency. The overnight forward points need to be manipulated into the form of an interest rate as described in detail in the data section. The manipulation is as follows:
The interest rate for the domestic overnight is the federal funds rate. Using the FX price as the spot price, the returns equation can be re-written as:
The returns are calculated on a daily basis for each of the fifteen currency assets. The returns consist of over ten years' worth of data, from August 14, 2003 through May 2, 2014.
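To make the computation concrete, the following Python sketch reproduces the daily excess-return calculation described above for a single currency. It assumes, purely for illustration, that spot is quoted as US dollars per unit of foreign currency, that the overnight outright forward equals spot plus the overnight forward points, and a 360-day annualization convention; these details are assumptions rather than specifics taken from the study.

import pandas as pd

def excess_returns(spot: pd.Series, outright: pd.Series) -> pd.Series:
    # Annualized interest rate differential (foreign minus domestic)
    # implied by the overnight outright forward under covered
    # interest parity, scaled up from one day (tau = 1/360).
    diff_annual = (spot / outright - 1.0) * 360.0
    # Yesterday's differential is what accrues overnight, hence the
    # one-day lag; dividing by 360 converts it back to a daily rate.
    carry = diff_annual.shift(1) / 360.0
    # Excess return: spot appreciation plus carry. The domestic
    # overnight rate is already netted out inside the
    # forward-implied differential.
    return spot.pct_change() + carry

Stacking one such series per currency column-wise yields the matrix of returns that is fed into the portfolio optimization techniques.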
MOVING AVERAGES
Incorporating technical analysis allows a trader to combine the underlying principles of the carry trade with the discipline of rules-based trading. The base model is an equally weighted portfolio of the 15 currencies and trades every single day in every single currency. This creates excessive trading, which results in excessive trading costs and generates a drag on the portfolio. Using a moving average of 10, 15, 20, 25 or 30 days reduces the number of trades by over half: the average number of trades per day is 6.9, as opposed to trading 15 currencies a day.
The moving average is determined by taking the average of the past returns of each currency based on a sliding window. For example:
The currency will be traded if the preceding day's return is higher than the moving average. The position will be:
Where C t-1 is the previous day’s returns and C t is today’s returns. This rule is applied to each of the five different moving averages (10-day, 15-day, 20-day, 25-day and 30-day). Table 1 below provides the mean, standard deviations, skew, kurtosis, minimum and maximum values of the portfolio as well as the Sharpe ratios for each of the five different moving averages.
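A minimal sketch of this rule in Python, assuming the daily carry returns of the currencies are held in a pandas DataFrame with one column per currency (the names and the stand-aside treatment when the signal is off are illustrative assumptions):

import pandas as pd

def ma_signal(returns: pd.DataFrame, window: int = 10) -> pd.DataFrame:
    # Sliding-window average of each currency's past returns.
    ma = returns.rolling(window).mean()
    # Trade a currency today (signal 1) only if the preceding day's
    # return exceeded the preceding day's moving average; otherwise
    # stand aside (signal 0). Both inputs are lagged so the rule
    # uses no same-day information.
    return (returns.shift(1) > ma.shift(1)).astype(int)

Applying this filter is what reduces the average number of currencies traded per day from 15 to 6.9.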
BOLLINGER BANDS
Bollinger Bands are a commonly used technical analysis tool for many different asset classes, including foreign exchange. The bands measure whether the current market price is high or low by comparing the price to recent prices, typically a 20-day moving average. The width of the bands is determined by the level of volatility as measured by the changing market prices of the asset (Williams 2006). Bollinger Bands are popular because traders often use them to help identify various patterns in the price chart.
Bollinger Bands can also be used to generate trading signals. Using a Bollinger Band of 10, 15, 20, 25 or 30 days reduces the number of trades by 94%-95%: the average number of trades per day is less than one, as opposed to trading 15 currencies a day.
The upper Bollinger Band using a 10-day window is found by:
The lower Bollinger Band using a 10-day window is found by:
The bollinger band determines when the investment should be long or short as defined by:
Where C t-1 is the previous day's return and C t is today's return. This rule is applied to each of the five different Bollinger Bands (10-day, 15-day, 20-day, 25-day and 30-day). Below is a table of the mean, standard deviation, skew, kurtosis, minimum and maximum values of the portfolio as well as the Sharpe ratios for each of the five different Bollinger Bands.
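The band construction and position rule can be sketched as follows. The conventional two-standard-deviation band width and the breakout-style long/short rule below are assumptions for illustration, not details taken from the study:

import pandas as pd

def bollinger_signal(returns: pd.DataFrame, window: int = 10,
                     width: float = 2.0) -> pd.DataFrame:
    ma = returns.rolling(window).mean()
    sd = returns.rolling(window).std()
    upper = ma + width * sd            # upper Bollinger Band
    lower = ma - width * sd            # lower Bollinger Band
    prev = returns.shift(1)
    # Long (+1) when the preceding day's return breaks above the
    # upper band, short (-1) when it breaks below the lower band,
    # flat (0) in between. Requiring a band breach is what cuts
    # trading activity so sharply relative to the base model.
    return ((prev > upper.shift(1)).astype(int)
            - (prev < lower.shift(1)).astype(int))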
CONCLUSION
The carry trade generates small but consistent excess returns over the federal funds rate on a daily basis during the time period between August 14, 2003 and May 2, 2014. These consistent excess returns are popular with traders who are willing to take the potential risk of a currency crisis. Incorporating either moving averages or Bollinger Bands into a trading system allows traders to be more disciplined. An additional benefit of these trading rules is a lower volume of executed trades, resulting in lower fees paid to brokerage firms. These savings reduce drag on the foreign exchange portfolio.
FOOTNOTES
- Organisation for Economic Co-operation and Development
REFERENCES
Isard, Peter, Hamid Faruqee, G. Russell Kincaid, and Martin Fetherston. 2001. “Methodology for Current Account and Exchange Rate Assessment.” International Monetary Fund Occasional Paper No. 209 (December).
Cheung, Yin-Wong, Menzie D. Chinn, and Antonio Garcia Pascual. “Empirical exchange rate models of the nineties: Are any fit to survive?” Journal of International Money and Finance 24.7 (2005): 1150-1175.
Frenkel, Jacob A., and Richard M. Levich. “Covered interest arbitrage: unexploited profits?” The Journal of Political Economy (1975): 325-338.
Bilson, John F O, 1981. “The “Speculative Efficiency” Hypothesis,” The Journal of Business, University of Chicago Press, vol. 54(3) (July): 435-451.
Froot, Kenneth A. and Richard H. Thaler. 1990. “Anomalies: Foreign Exchange.” Journal of Economic Perspectives, vol. 4, no. 3 (Summer): 179-192.
Hodrick, Robert J. 1987. The Empirical Evidence on the Efficiency of Forward and Futures Foreign Exchange Markets. Reading, U.K.: Hardwood Academic Publishers.
Engel, Charles. 1996. “The Forward Discount Anomaly and the Risk Premium: A Survey of Recent Evidence.” Journal of Empirical Finance, vol. 3, no. 2 (June): 123 – 192.
Brunnermeier, Markus K., Stefan Nagel, and Lasse H. Pederson. 2009. “Carry Trades and Currency Crashes.” NBER Macroeconomics Annual, vol. 23, no.1 (April): 313-348.
Jeanne, Olivier, and Paul Masson. 2000. “Currency Crises, Sunspots and Markov-Switching Regimes.” Journal of International Economics 50: 327-350.
Breeden, Douglas T. “An intertemporal asset pricing model with stochastic consumption and investment opportunities.” Journal of Financial Economics 7.3 (1979): 265-296.
Grinold, Richard C., and Ronald N. Kahn. “Active portfolio management.” (2000).
Karczmarczyk, Catherine. “History of International Monetary System and its Potential Reformulation.” University of Tennessee Honors Thesis Project. May 2010 Unpublished.
Yao, Jian. 1997. “Essays on Market Making in the Interbank Foreign Exchange Market.” Unpublished PhD Thesis, New York University.
Lyons, Richard K. 2001. The Microstructure Approach to Exchange Rates. Cambridge, MA: MIT Press.
Pojarliev, Momtchil, and Richard M. Levich. “A new look at currency investing.” (2012): 1-94.
Asset Allocation Using the Fama-French Value Factor
by Kevin Oversby
About the Author | Kevin Oversby
Kevin Oversby manages a private family fund from Vancouver, Canada using similar methods to those described in this paper. He holds a Master of Engineering from the University of Cambridge and may be reached via either his ‘rrspstrategy’ wordpress blog or gmail handle.
ABSTRACT
The Fama-French three factor model is ubiquitous in modern finance. Returns are modeled as a linear combination of a market factor, a size factor and a book-to-market equity ratio (or “value”) factor. The success of this approach, since its introduction in 1992, has resulted in widespread adoption and a large body of related academic literature.
The risk factors exhibit serial correlation at a monthly timeframe. This property is strongest in the value factor, perhaps due to its association with global funding liquidity risk.
Using thirty years of Fama-French portfolio data, I show that autocorrelation of the value factor may be exploited to efficiently allocate capital into segments of the US stock market. The strategy outperforms the underlying portfolios on an absolute and risk adjusted basis. Annual returns are 5% greater than the components and Sharpe Ratio is increased by 86%.
The results are robust to different time periods and varying composition of underlying portfolios. Finally, I show that implementation costs are much smaller than the excess return and that the strategy is accessible to the individual investor.
INTRODUCTION
The Fama French Three Factor Model describes the expected return (r) on an asset as a result of its relationship to three factors: market risk (Mkt), size risk (SMB), and “value” risk (HML):
r - rf = α + β∙Mkt + s∙SMB + h∙HML (1)
where rf is the risk-free return rate and the coefficients β, s and h measure the exposure to each risk.
SMB, which stands for Small Minus Big, measures the excess return received by investing in stocks of companies with relatively small market capitalization. This additional return is often referred to as the “size premium.” SMB is computed as the average return for the smallest 30% of stocks minus the average return of the largest 30% of stocks in that month.
HML, which is short for High Minus Low, measures the “value premium” provided to investors for investing in companies with high book-to-market (B/M) values. HML is computed as the average return for the 50% of stocks with the highest B/M ratio minus the average return of the 50% of stocks with the lowest B/M ratio each month. Since its introduction by Fama and French (1992), a vast literature has been published on all facets of the model from correlation with global economic factors (Asness, Moskowitz and Pedersen 2013) to practical applications to fund management (Doskov, Pekkala and Ribeiro 2013).
All three factors are time varying. This raises the following questions:
- Is it possible to reliably allocate capital to the factor(s) likely to outperform in the next period?
- Do the costs of implementing such a strategy exceed the benefit?
- Is this type of strategy accessible to the individual investor?
The focus of this paper will be on the application of one specific property of the factors. That property is the monthly serial correlation, or autocorrelation, of the factors as illustrated in table 1:
Autocorrelation can be exploited to predict which segment of the market is likely to outperform in the next month. Historically, the HML factor exhibits the most stable autocorrelation and is therefore the primary focus of this paper. Kothari and Shanken (1998) note that if the B/M effect is related to risk and liquidity it can reasonably be expected to persist to some degree in the future. Asness, Moskowitz and Pedersen (2013) document liquidity effects in value and momentum returns. They find a positive relationship between value and liquidity risk.
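Both the lag-1 autocorrelation in table 1 and the sign persistence that the strategy below exploits are easy to verify; a brief sketch, assuming the monthly HML factor has been loaded from the Kenneth French library into a pandas Series (the loading step is omitted):

import pandas as pd

def hml_persistence(hml: pd.Series):
    # Lag-1 autocorrelation of the monthly factor, and the fraction
    # of months whose sign repeats the prior month's sign.
    lag1 = hml.autocorr(lag=1)
    same_sign = ((hml > 0) == (hml.shift(1) > 0))[1:]
    return lag1, same_sign.mean()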
Kothari and Shanken find that the book-to-market effect is mainly concentrated in small firms. A paper by Loughran (1997) mirrors this finding for large firms. For the largest size quintile, which accounts for about three-quarters of all market value, he concludes that B/M has no reliable predictive power for returns in the 1963-94 period.
For these reasons, this paper studies return data from portfolios with firm size below the NYSE median. The rest of this paper is organized as follows. Section 2 describes data and methodology. Section 3 presents the empirical findings. Section 4 conducts the robustness checks and Section 5 concludes.
DATA AND METHODOLOGY
All raw data in this paper are taken from the on-line library provided by Kenneth French1. The portfolios are selected from the 6 portfolios formed on size and book-to-market. The precise definition, taken from the website, is as follows:
“The portfolios, which are constructed at the end of each June, are the intersections of 2 portfolios formed on size (market equity, ME) and 3 portfolios formed on the ratio of book equity to market equity (BE/ME). The size breakpoint for year t is the median NYSE market equity at the end of June of year t. BE/ME for June of year t is the book equity for the last fiscal year end in t-1 divided by ME for December of t-1. The BE/ME breakpoints are the 30th and 70th NYSE percentiles. The portfolios for July of year t to June of t+1 include all NYSE, AMEX, and NASDAQ stocks for which we have market equity data for December of t-1 and June of t, and (positive) book equity data for t-1.”
I use 30 years of monthly total-return data in my analyses. Portfolios are value weighted. For the robustness checks in Section 4, equal weighted portfolios and 60 year histories are also used.
METHODOLOGY
A realistic strategy must use only information that was available to the investor in real time. The strategy described below uses the autocorrelation property of the value factor, HML. This property has been strongly evident since at least 1975 (table 1), therefore this criterion is satisfied.
One discussion point, which applies to many studies of this type, is trading costs. For more than a decade, suitable low-cost factor-based funds have been available. However, in the 1980s and earlier, trading costs were higher and appropriate funds may not have existed, leaving only individual stocks. The effect may be to reduce the level of realism as the test length increases.
The HML factor is time varying and autocorrelated, therefore the sign of the factor is a good predictor of the following month’s sign. A positive (negative) HML factor in the current month predicts value will outperform (underperform) growth next month. A strategy is proposed which switches 100% of capital monthly from value (high B/M) to growth (low B/M) based on the sign of the HML factor. This switching portfolio is itself autocorrelated therefore the strategy switches to risk-free if the previous monthly return is negative:
Let R be the monthly return with subscripts: p portfolio, v value, g growth, f risk-free and t time:
Rp,t = Rv,t if HMLt-1 > 0 and Rp,t-1 > 0
Rp,t = Rg,t if HMLt-1 < 0 and Rp,t-1 > 0
Rp,t = Rf,t if Rp,t-1 < 0
The value portfolio is the small high book-to-market Fama-French portfolio and the growth portfolio is the small low book-to-market Fama-French portfolio from the 2×3 series.
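The three rules translate directly into code; a sketch, assuming aligned monthly return Series for the two portfolio legs, the risk-free rate and the HML factor (variable names are illustrative, and the pre-sample month is treated as positive):

import pandas as pd

def switching_returns(value: pd.Series, growth: pd.Series,
                      riskfree: pd.Series, hml: pd.Series) -> pd.Series:
    rets = []
    prev_ret, prev_hml = 1.0, 1.0  # pre-sample month assumed positive
    for t in value.index:
        if prev_ret < 0:
            r = riskfree[t]        # Rp,t = Rf,t if Rp,t-1 < 0
        elif prev_hml > 0:
            r = value[t]           # Rp,t = Rv,t if HMLt-1 > 0
        else:
            r = growth[t]          # Rp,t = Rg,t if HMLt-1 < 0
        rets.append(r)
        prev_ret, prev_hml = r, hml[t]
    return pd.Series(rets, index=value.index)

The enhanced variant in the Enhanced Results section simply passes the small-cap momentum portfolio in place of the growth leg.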
Many tactical portfolios use bonds as a non-correlated asset to reduce return volatility (e.g. Faber 2007). I choose to switch to risk-free rather than bonds to avoid biasing the results. Bonds have been in a bull market over the duration of the tests and have typically been negatively correlated with stocks. This return enhancer may not be available in the intermediate future, given that interest rates are currently close to zero.
RESULTS
Table 2 lists the results and statistics of the strategy compared with the individual components. Sharpe Ratios are calculated relative to zero and annualized. I report the statistical significance of the null hypothesis of zero mean return (t-stat). Results are frictionless; costs are discussed later.
The strategy annual return is 3.4% greater than the average of the base Fama-French portfolios. Sharpe Ratio increases from 0.7 to 1.1, signifying improved returns on a risk-adjusted basis.
ENHANCED RESULTS
Can this strategy be improved without adding complexity? Kothari and Shanken (1998) show that the growth portfolio has relatively low returns. Berger, Israel and Moskowitz (2009) describe an alternative and improved way to access growth: momentum. This anomaly is extremely robust and persists over most asset classes and time periods (Asness, Moskowitz and Pedersen 2013). They note that momentum is closely correlated with growth but with 3.3% greater annual returns from 1980-2009. The momentum portfolio has better performance, both in absolute terms and relative to a core index.
Israel and Moskowitz (2013) show that the momentum premium is present and stable across all size groups; there is little evidence that momentum is substantially stronger among small-cap stocks over the entire 86-year U.S. sample period. The value premium, on the other hand, is largely concentrated only among small stocks. For consistency with the value portfolio, the small-cap momentum portfolio is used.
Replacing the growth portfolio with a momentum portfolio leads to the result in table 3.
The annual return of 19.7% is 5.0% greater than the average of the component portfolios. Risk adjusted returns (measured by the Sharpe Ratio) increase by 86% over the value portfolio with a t-statistic of 7.1 and R2 of 99.7%.
The maximum drawdown (DD) is dramatically reduced from 62.8% to 18.3%.
Figure 1 shows compounded growth of $1 starting in 1984, for the value and momentum portfolios, and the switching strategy. The final worth of the strategy portfolio is about $200.
The lower part of the plot depicts the state of the strategy: up = value, down = momentum and center = risk-free. A moving average is overlaid for visualization of the market regime. The strategy typically switches 7 times per year.
Notice how the strategy moves aggressively into value after the recessions of 2002 and 2008. Daniel and Moskowitz (2013) find that momentum portfolios strongly underperform during this type of recovery or ‘rebound’ period.
FACTOR ANALYSIS
The portfolio returns were regressed on the risk factors using equation (1). Results are shown in table 4.
All portfolios load heavily on size, as expected due to the small portfolios used. Loading on the market factor, also known as beta, is slightly above one for the base portfolios. Beta is much lower for the switching strategy but the regression fit (R2) is poor. Adding the momentum factor (UMD) does not improve R2 therefore those results are omitted.
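For reference, the regression behind table 4 can be reproduced along the following lines, assuming monthly strategy excess returns and a factor matrix with columns for the market, size and value factors (a sketch, not the code used for the paper):

import statsmodels.api as sm

def factor_loadings(excess_returns, factors):
    # OLS regression of portfolio excess returns on the three
    # Fama-French factors, i.e. equation (1); the fitted intercept
    # is the alpha.
    X = sm.add_constant(factors)
    fit = sm.OLS(excess_returns, X).fit()
    return fit.params, fit.rsquared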
IMPLEMENTATION COSTS
Annual fees
Small value funds such as VBR1 are available for 10 basis points (bp) annually. Small momentum funds (for example DWAS²) are a more recent innovation and cost in the range of 60 bp. Assuming equal time in each fund, annual fees could average 35 bp.
Commissions and slippage
With today's fixed brokerage costs, use of liquid funds and a generous allowance for slippage, trades could be completed for 5 bp. Annual trading costs for 7 round-trip trades (14 transactions at 5 bp each) total 70 bp.
Therefore the total implementation costs, excluding taxes, are 105 bp annually. This is a fifth of the strategy excess return over the underlying component average.
ROBUSTNESS CHECKS
To check robustness, I repeated the calculations for the equally-weighted small value and momentum portfolios. The results are shown in table 5.
Performance is higher across the board for equal-weight portfolios (but may not be practically realizable due to large positions in illiquid stocks). However, for the purpose of comparison, the results show that the strategy generates a 78% improvement in Sharpe Ratio and an annual return increase of 5.6% over the base portfolio average.
Finally, the calculations were repeated for the time period from 1954 to 1984. Tests are frictionless which, as alluded to in the methodology section, would be less realistic for this epoch. The results are shown in table 6.
The strategy underperforms momentum by 1.3% annually on an absolute basis but the Sharpe Ratio of the strategy is 50% greater than the value portfolio and 36% superior to the momentum portfolio. Thus the risk adjusted returns are substantially larger. The strategy t-statistic is higher than the base portfolios and R2 is similar throughout.
CONCLUDING REMARKS
The analyses in this paper show that the monthly autocorrelation property of the HML (or “value”) factor can be reliably exploited to form an investment strategy. A strategy which switches capital from value to momentum portfolios based on the sign of the factor is demonstrated to have superior absolute and risk adjusted performance to the underlying instruments.
The questions posed in the introduction have been answered as follows:
- It is possible to reliably allocate capital using factor autocorrelation.
- The costs of implementing such a strategy are a fifth of the excess returns.
- The strategy uses liquid, low-cost funds and trades 7 times per year, therefore is readily implementable by the individual investor.
BIBLIOGRAPHY
Antonacci, G., 2012, Risk Premia Harvesting Through Dual Momentum
Asness, C. S., T. J. Moskowitz, and L. H. Pedersen, 2013, Value and momentum everywhere, The Journal of Finance 68, 929-985.
Berger A.L., Israel R. and Moskowitz T.J., 2009, The Case for Momentum Investing, AQR White Paper
Daniel K.D. and Moskowitz T.J., 2013, Momentum Crashes, Working Paper
Doskov, N., Pekkala, T., Ribeiro, R., 2013, Tradable Aggregate Risk Factors and the Cross-Section of Stock Returns.
Faber, M., 2007, A Quantitative Approach to Tactical Asset Allocation, Journal of Wealth Management, Spring 2007
Fama, E. F., French, K. R., 1992, The Cross-Section of Expected Stock Returns, Journal of Finance 47, pp. 427–465.
Fama, E. F., French, K. R., 1993, Common risk factors in the returns on stocks and bonds, Journal of Financial Economics 33, 3–56.
Fama, E. F., French, K. R., 2012, Size, Value, and Momentum in International Stock Returns
Israel R. and T.J. Moskowitz, 2013, The role of shorting, firm size, and time on market anomalies, Journal of Financial Economics 108, 275-301.
Kothari and Shanken, 1998, Beta and Book-to-Market: Is the Glass Half Full or Half Empty?
Loughran, T., 1997. Book-to-market across firm size, exchange, and seasonality: Is there an effect? Journal of Financial and Quantitative Analysis 32, 249–268.
Sharpe, W. F., 1964, Capital asset prices: A theory of market equilibrium under conditions of risk, Journal of Finance 19, pp. 425–442.
Volume Spikes During Swift Stock Trends
by Giorgos Siligardos, Ph.D.

About the Author | Giorgos Siligardos, Ph.D.
Giorgos Siligardos holds a PhD in Mathematics and a Market Maker certificate on derivatives from the Athens Exchange. He is a financial software developer and coauthor of academic books on finance. Giorgos has also been a research and teaching fellow to the University of Crete as well as a teaching fellow to the Department of Finance and Insurance at the Technological Educational Institute of Crete for many years teaching math and financial courses and supervising Masters dissertations. You may contact Giorgos at:
giorgos.siligardos@intalus.com
ABSTRACT
We study whether volume spikes during swift stock trends provide extra information about ensuing price thrusts. The article differs from other similar research work on the subject of trading volume in many ways and its results are of much practical interest to swing traders and trading system developers.
INTRODUCTION
Academic research suggests that substantial positive volume shocks in stocks (a.k.a. “volume spikes” in the technical analysis jargon) are generally followed by excess price returns. The two most prominent explanations proposed for this phenomenon are the visibility hypothesis and the liquidity premium. The first asserts that volume spikes increase the visibility of the stock and attract analysts’ and investors’ attention towards it (see for example Gervais, Kaniel and Mingelgrin (2001)). This extra attention doesn’t generally come from those who own the stock but from new investors who mostly serve as potential buyers, so the result is a higher possibility of an increase in bids for the stock rather than an increase in offers. Moreover, as new analysts and traders arrive, the evaluation risk of the fair price of the stock becomes lower, which in turn makes the stock more inviting. The second explanation is related to the fact that stocks with low trading volume are almost always illiquid. Illiquidity is undesirable for traders and investors, who are therefore afraid to buy such stocks, so these stocks are generally traded at a discount. On the other hand, higher trading volume is almost always related to higher liquidity. Higher liquidity means higher regard by market participants for the stock, which in turn must be traded at a premium known as the Liquidity Premium (see Datar, Naik and Radcliffe (1998) and Chordia, Subrahmanyam and Anshuman (2001)). As a result, a volume spike for a stock increases its liquidity premium, which in turn puts upward pressure on its price.
Although controverted by some studies (Lee and Swaminathan (2000)), the liquidity premium argument, along with the more regarded visibility hypothesis, seems to offer a plausible explanation for a positive effect of volume spikes (and high volume in general) on the price of stocks. Technical analysis, on the other hand, doesn’t consider volume in and of itself as clearly positive or negative for the future performance of a stock. A price gap down from a support level of a narrow trading range accompanied by high volume, for example, is considered significantly bearish by technical analysts, whereas a price gap up with high volume from a resistance level is considered bullish. Regarding price trends, as John Murphy (1999) points out, technical analysis considers volume as a gauge of “intensity or urgency” behind the price move. This means that advancing (declining) volume during uptrends is considered bullish (bearish) and advancing (declining) volume during downtrends is considered bearish (bullish). Technical indicators (like On Balance Volume) are usually employed to blend price and volume together, but almost all such indicators are shocked when tall volume spikes emerge, and it is not uncommon after volume spikes to see half of the indicators giving strong bullish signals and half of them giving strong bearish ones.
The present article contributes to the technical analysis literature by examining whether a tall volume spike during a swift trend (lasting no more than 2 calendar months) provided extra information about the short-term prospects of the price, in relation to the magnitude and the direction of the preceding trend, in a number of US stocks from the beginning of 1999 until the end of 2004. Statistical tests are used to evaluate the significance of the findings. The methodology employed is quite different from what is commonly used in the literature. First, we don’t define a volume spike via its simple moving average or percentiles but rather as a volume reading which is significantly higher than all its past values during the trend. Second, we don’t evaluate performances by calculating the average percentage distance between two time instances but use fictional trading systems for this purpose. Furthermore, we relate the performances to the height of the prior trend and introduce the Standardized Profit as a way to transform them onto a standardized scale, thus allowing direct comparison and equal treatment of the various cases. Third, trends are not identified via classic indicators (such as moving averages) or regression methods but via retracements. Fourth, we introduce a gauge to measure the “energy” potential of a situation to produce directional movement, and fifth, we study long-term stock market bullish and bearish periods separately.
DATA AND METHODOLOGY
Daily price and volume data of 4,352 US stocks from the beginning of 1999 until the end of 2004 formed a database (hereafter called The Database) which was used for this study. The time period covered includes three years of a tantalizing bearish stock market (2000, 2001 and 2002), which will be called Bearish Years, and three years of stock market euphoria (1999, 2003 and 2004), which will be called Bullish Years, so it offers a highly representative sample of various conditions. An algorithm (namely, the Identification Algorithm) was run across all stocks in The Database to find cases where volume spikes emerged during swift trends (hereafter referred to as Spike Cases). Nested cases are allowed, but the shorter-duration case must last no more than half the number of candles that form the longer-duration case. The algorithm identified a total of 1,706 Spike Cases (1,036 uptrends and 670 downtrends). Figures 1 and 2 illustrate the typical Spike Case for this algorithm in a chart window for swift uptrends and downtrends respectively. Here are the details of the algorithm:
- A Spike Candle is a candle which has such a high volume (V) that at least 50 candles before it have volume less than 20% of V (a code sketch of this test follows the list).
- For uptrends, the Start of Trend Value (STV) is the lowest low in the window whereas for downtrends it is the highest high. The candle which bears the STV is called Start of Trend Candle (STC).
- For uptrends, the End of Trend Value (ETV) is the highest high obtained by the price between the STC and the Spike Candle (including the Spike Candle) whereas for downtrends it is the lowest low obtained by the price between the STC and the Spike Candle (including the Spike Candle). The candle which bears the ETV is called End of Trend Candle (ETC).
- The duration of the trend (that is, the time distance from STC to ETC) must be at most 50 candles (approximately 2 calendar months) and at least 6 candles (approximately a week).
- Vertical distances measured in logarithmic scale are called Log-Distances. More precisely, if A and B are two price values, then the Log-Distance between A and B is defined as |ln(A)-ln(B)| (where | | denotes the absolute value and ln() is the natural logarithm function).
- The Log-Distance between the STV and the ETV is called the “Trend Log-Height”.
- The Log-Distance between the high (low) of the Spike Candle and the ETV must be less than 5% of the Trend Log-Height in the case of uptrend (downtrend).
- All retracements during the trend must not retrace more than 30% of the Trend Log-Height in Log-Distance terms and their duration must not exceed 50% of the duration of the trend.
- The retracement (if exists) between the ETC and the Spike Candle must not exceed 15% of the Trend Log-Height in Log-Distance terms.
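As referenced in the first bullet, the Spike Candle test translates directly into code. The sketch below reads “at least 50 candles before it” as the 50 candles immediately preceding the spike, which is one reasonable interpretation:

import numpy as np

def spike_candles(volume: np.ndarray, lookback: int = 50,
                  ratio: float = 0.20) -> list:
    spikes = []
    for t in range(lookback, len(volume)):
        # Every candle in the preceding window must have volume
        # below ratio * V; equivalently, V must exceed five times
        # the largest volume in that window.
        if volume[t - lookback:t].max() < ratio * volume[t]:
            spikes.append(t)
    return spikes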
USE OF THEORETICAL TRADING SYSTEMS
In order to check whether the price thrusts in the direction of the trend or against it, two theoretical trading systems are used. For every Spike Case there are two opposite trades to evaluate: one trade which can be long or short and goes with the trend (With-Trade) and one opposite trade that goes against the trend (Against-Trade). The trades are thus distributed into two groups and form two systems (namely: With-System and Against-System) according to whether they are With-Trades or Against-Trades respectively. Opposite trades which refer to the same Spike Case will hereafter be called spouse trades.
Regarding exits, all systems are assumed to enter the market at the closing price of the Spike Candle and they exit the market either when a trailing stop (TS) is triggered or when the trade is still active but there is no more data in the database for the stock, in which case the last closing price is considered the exit price. Furthermore, when stops are triggered, both systems are assumed to exit the market at exactly the TS, even when the price gaps past it. The TS is first calculated using the closing price of the Spike Candle and is applied to the next candle. It is then updated with every new subsequent candle. The calculation of the TS is based on a percentage π% of the Trend Log-Height, which will be called the TS-Parameter. More precisely, suppose a position is active for the time-ordered subsequent candles A1, A2, … At (A1 being the Spike Candle) and we want to calculate the TS to be used in the next candle. Suppose further that M is the maximum of the closing price of A1 and the highest high of all A2, … At and that m is the minimum of the closing price of A1 and the lowest low of all A2, … At. The TS to be used in the next candle At+1 is given by exp(ln(M)-π%∙T) for long positions and exp(ln(m)+π%∙T) for short positions, where T is the Trend Log-Height and exp() is the natural exponential function. To capture price thrusts of various magnitudes, four values for the TS-Parameter were used: 10%, 20%, 30% and 40%.
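A sketch of the trailing-stop update in Python, with the stop held below the running maximum M for long positions and above the running minimum m for short positions, as the definitions of M and m require (names are illustrative):

import math

def next_trailing_stop(entry_close: float, highs: list, lows: list,
                       pi: float, trend_log_height: float,
                       is_long: bool) -> float:
    # TS for the next candle. highs/lows hold the extremes of
    # candles A2 ... At since entry; pi is the TS-Parameter as a
    # decimal (0.10, 0.20, 0.30 or 0.40).
    if is_long:
        M = max([entry_close] + highs)   # running maximum so far
        return math.exp(math.log(M) - pi * trend_log_height)
    m = min([entry_close] + lows)        # running minimum so far
    return math.exp(math.log(m) + pi * trend_log_height)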
The performance (positive or negative) of trades is expressed in a profit-per-unit-of-initial-risk way and will be called Standardized Profit. Neither transaction costs nor any kind of market frictions are assumed. If a long position is opened at P, the initial stop is at S and the trade is exited at price E, then the profit is (E-P) and the initial risk is (P-S), so the Standardized Profit equals (E-P)/(P-S). In the case of a short position the profit is (P-E) and the risk is (S-P), so the Standardized Profit equals (P-E)/(S-P), which again can be expressed as (E-P)/(P-S). The idea behind the use of the Standardized Profit is to serve as a normalizer that allows treating all Spike Cases equally with respect to their performance in relation to the magnitude of their trend. This is attainable because the stop is expressed in terms of the Trend Log-Height. Also, the Standardized Profit serves as a profit factor for the trades (because, when the Standardized Profit is f and the trade was taken by risking $1 based on the initial stop, the profit of the trade is $f), which means that the mean of all Standardized Profits for Against-Trades across all Spike Cases is exactly the expectation (Edge) of the Against-System for Spike Cases. Similarly, the mean of all Standardized Profits for With-Trades across all Spike Cases is exactly the expectation (Edge) of the With-System for Spike Cases. Note that the design of the systems doesn’t allow for simultaneously profitable spouse trades. However, it allows for simultaneously unprofitable spouse trades.
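Because the short-position expression reduces to the same ratio, a single helper covers both directions:

def standardized_profit(entry: float, stop: float, exit_price: float) -> float:
    # (E - P) / (P - S): profit per unit of initial risk. For short
    # trades both numerator and denominator change sign, so the same
    # expression applies.
    return (exit_price - entry) / (entry - stop)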
COMPARISON METHODS
In principle, we are interested in two checks. The first check regards the direction preference of price thrusts in relation to the direction of the trend, and the second check regards the difference between the expectations (Edges) of the two systems as rendered by the difference of their mean Standardized Profits. Both checks will be presented from the point of view of the Against-System. More precisely, let Aprof and Wprof be the number of profitable trades of the Against-System and With-System respectively (note again that the design of the systems forbids the existence of simultaneously profitable spouse trades). For the first check we calculate the percentage Aprof/(Aprof+Wprof)∙100%, which shows whether there is more tendency for the price to thrust against the trend rather than with the trend. This percentage will hereafter be called the Against the Trend Preference (ATP). If the ATP is higher than 50% then there is more tendency for the price to thrust in the against-the-trend direction, and if it is lower than 50% then there is more tendency for the price to thrust in the direction of the trend. For the second check, let AEdge and WEdge be the average Standardized Profits (Edges) of the Against-System and With-System respectively over all trades. The difference AEdge-WEdge will be called the Against the Trend Edge Premium (ATEP) and it quantifies how much better the overall Edge of the Against-System is than that of the With-System.
Since we want to study what happens when volume spikes emerge we would have to reapply the systems and calculate the ATP and ATEP numbers in what we shall call Control Cases and contrast them with the ATP and ATEP numbers of the Spike Cases. The Control Cases are exactly the same as the Spike Cases but without the necessity of the presence of volume spikes and serve a similar benchmark purpose (although not the same) to our study as the control groups in biostatistics. Contrasting the ATP and ATEP numbers of the Spike Cases with the ATP and ATEP numbers of Control Cases will isolate the relationship and the significance between the appearance of volume spikes and the subsequent thrusting behavior of price. For this purpose, a modification of the Identification Algorithm (eliminating the necessity for volume spikes) was run in The Database which collected a total of 52,304 Control Cases (29,161 uptrends and 23,143 downtrends). The ATP and ATEP numbers were then calculated for these cases to create benchmarks for evaluating the ATP and ATEP numbers of the Spike Cases.
The Against and With systems are of course directional ones, and it will be shown later in the article that statistical significance with respect to differences in their performances between Spike Cases and Control Cases is strongly present only in the category of downtrends during bearish years for the whole stock market. The fact, however, that the Spike Cases do not seem to differ (in a statistically significant way) from the Control Cases in other categories regarding a stable against-the-trend or with-the-trend directional preference doesn’t necessarily mean that Spike Cases and Control Cases are the same regarding their potential energy to produce price thrusts. To test whether there is indeed a difference in this kind of potential energy we introduce the Potential Directional Energy (PDE) number. The PDE mixes the number of positive trades of the systems with the number of cases where spouse trades were simultaneously unprofitable and yields a percentage. Its formula is:
PDE = (Aprof + Wprof) / (Aprof + Wprof + Unprof) ∙ 100%
where, as previously, Aprof and Wprof are the number of profitable trades of the Against-System and With-System respectively and Unprof is the number of cases where spouse trades were simultaneously unprofitable. The higher the PDE, the higher the potential directional energy for the cases considered in its calculation. For example, a PDE of 0 shows that there is no potential directional energy because not even one trade was profitable (all trades in both systems were unprofitable). In contrast, a PDE of 100 shows that there was a winning trade in every pair of spouse trades, so there was potential directional energy every time. The difference between the PDE of the Spike Cases and the PDE of the Control Cases will uncover whether the Spike Cases have a higher degree of potential directional energy than the Control Cases.
All comparisons are made separately for Bullish Years and Bearish Years, and statistical tests (chi-squared and t-test) are employed to check statistical significance. Simply put, the statistical tests show how unlikely it is that any observed differences found when comparing the Spike Cases with the Control Cases in The Database are due to pure luck (and, in effect, how seriously they should be taken). This unlikeliness is expressed via what is called a p-value. The p-value is basically a probability, and so it takes values from 0 to 1. The lower the p-value, the more unlikely it is that any differences found between the Spike Cases and Control Cases in The Database are due to luck, and the more significant (from a statistics point of view) the differences between the Spike and Control cases are. The typical, broadly accepted threshold of the p-value to signify statistical significance (and the one used in this article) is 0.05.
RESULTS
TABLE I summarizes the results of all the systems and tests and presents them in four categories (columns): Uptrends in Bullish Years, Downtrends in Bullish Years, Uptrends in Bearish Years and Downtrends in Bearish Years. Positive values for the differences (values from Spike Cases minus values from Control Cases) are colored blue whereas negative values for these differences are colored red. When p-values are significant (that is, when they are lower than 0.05) they are colored white on a black background.
It is obvious that the most significant and stable results are those of the Downtrends in Bearish Years (category IV). The Against-System was highly superior to the With-System in the Spike Cases versus the Control Cases in this category, and the statistical significance of this superiority was unequivocal, as indicated by p-values which are almost zero. Not only did volume spikes in this category increase the default odds (as represented by those of the Control Cases) for against-the-trend price thrusts, but the odds were clearly in favor of against-the-trend thrusts (rather than with-the-trend ones); moreover, betting on these thrusts was highly rewarded on average whereas betting on with-the-trend ones was a losing game. Tables II and III present more details about this category to support this last statement. In the Spike Cases and for a TS-Parameter equal to 30%, for example, one can note that out of the 293 Spike Cases in this category, 128 produced profits for the Against-System and losses for the With-System, 87 produced losses for the Against-System and profits for the With-System, and 78 produced losses for both systems. This means that the Against-System was overall profitable in 43.69% of all cases whereas the With-System was overall profitable in only 29.69% of all cases. The Against-System had an overall positive Edge of 0.426269184 (or 42.63%) whereas the With-System had an overall negative Edge of -0.324931894 (or -32.49%). Put simply, on average the trades of the Against-System made $42.63 for every $100 of risk and the trades of the With-System lost $32.49 for every $100 of risk.
The next interesting category is the Downtrends in Bullish Years (category II). Although the differences are positive (indicating that the Against-System was better than the With-System in The Database), no statistically significant superiority of the Against-System was found there. There is, however, statistical significance on the PDE side in three out of four TS-Parameters, meaning that volume spikes seem to “build” directional energy. In the other two categories (I and III) the differences between Spike Cases and Control Cases, although mostly positive, are not at all statistically significant, so no strong statement can be made for them at this point. Even the comparisons of PDEs were statistically significant in only half of the four TS-Parameters in these categories. It must be noted, however, that the lack of statistical significance in categories I, II and III doesn’t necessarily mean that there is no connection between volume spikes and ensuing price thrusts there. Maybe there is such a connection but it is quite lax (or subtle), and as a result much more data for Spike Cases in these categories are needed to produce statistical significance. We experimented using candle highs where lows were initially used for the TS and vice versa. We also changed exits to account for opening price gaps past the stops so that the Edges come closer to those of real trading. The results regarding the relation between volume spikes and ensuing price thrusts in uptrends and downtrends did not practically differ when any of these changes were made. For comparison purposes, TABLE IV shows the Edges in downtrends during Bearish Years when exits are modified to account for opening price gaps past the stops. Such a modification is expected to only worsen the Edges of both systems and, as can be seen (comparing TABLE III and TABLE IV), it had a very small effect on the difference between Spike Cases and Control Cases.
CONCLUSION AND SUGGESTIONS FOR FURTHER RESEARCH
Regarding swift uptrends, we haven’t found any statistically significant association between the appearance of volume spikes and ensuing price thrusts relative to the magnitude of the trends. Regarding swift downtrends, on the other hand, we found an overall very strong association. The results suggest that when volume spikes appear during downtrends in the context of a prolonged bearish stock market environment, an ensuing price upthrust (which may be a temporary retracement or a trend reversal) is quite probable, and swing traders will find profitable opportunities in it. When volume spikes appear during downtrends in the context of a prolonged bullish stock market environment, we found a statistically significant increment in the potential energy for the price to thrust in either direction, but no statistical significance regarding a specific direction.
The lack of statistical significance regarding a specific direction of thrusts in three of the four categories studied might be due to the fact that many more Spike Cases need to be taken into account, so further research should be conducted using much more historical data (from US or other stock markets) in order to obtain a very large number of Spike Cases. Until such research, which will provide a clearer answer, is done, however, the only substantial results are those regarding downtrends in US stocks during broadly bearish periods.
In this article we dealt with the association between volume spikes during swift trends and ensuing price thrusts without considering anything else. Perhaps the inclusion of additional information (such as the behavior of volume or price before the spike appears) provides more clues regarding what to expect after the spikes. Another idea for research is to check whether a bit of information (on price or volume) after the volume spike gives clues about the longer-term prospects of the price. In other words, when the behavior of price and volume before the spike does not provide any significant information, it may be because spikes change things in a way that builds energy whose effects might be traced only by looking at how the price behaves a couple of candles past the candle bearing the spike.
REFERENCES
Chordia, T., A. Subrahmanyam, and V. R. Anshuman (2001), Trading activity and expected stock returns, Journal of Financial Economics, v59, Issue 1, 3-32.
Datar, V., N. Naik, and R. Radcliffe (1998), Liquidity and asset returns: an alternative test, Journal of Financial Markets, v1, Issue 2, 203-220.
Gervais Simon, Ron Kaniel, and Daniel Mingelgrin, 2001, The high-volume return premium, Journal of Finance, v56, 877-919.
Lee, C. and B. Swaminathan (2000), Price momentum and trading volume, Journal of Finance, v55 (Issue 5), 2017-2069.
Murphy John, 1999, Technical Analysis of the Financial Markets: A Comprehensive Guide to Trading Methods and Applications, (New York Institute of Finance)
Signal Correlation Applied to Charting Techniques
by Michael C. Thomsett
About the Author | Michael C. Thomsett
Michael C. Thomsett is a full-time investor and author. He has been trading options and stocks since 1978 and has written many books on the topic, including the best-selling Getting Started in Options (John Wiley & Sons), which has sold over 300,000 copies and currently is in its 9th edition. He has authored 11 other options books as well as a book on the topic of signal correlation, Profiting from Technical Analysis and Candlestick Indicators (FT Press), published in 2015.
Thomsett has taught classes for Moody’s Investor Services on options topics and is a frequent speaker at trade shows for investors and traders. He also posts blogs daily on options topics at www.TheStreet.com, Seeking Alpha, Chicago Board Options Exchange (CBOE) and social media outlets.
ABSTRACT
Debate among market analysts concerning the ability to forecast price direction is focused on two primary theories, the random walk hypothesis (RWH) and the efficient market hypothesis (EMH). This paper presents a theory challenging the long-standing criticism of technical analysis and provides proof in the form of outcomes from 578 trades, of which 91.5% were profitable over two years. The test of this theory over a two-year period demonstrated net returns 2.8 times better than the average in the market (based on comparison to the Dow Jones Industrial Average), with test results yielding annual returns of 37.8% from the 578 trades versus a 13.5% annual yield based on movement of the DJIA.
This theory is termed signal correlation, and is based on a set of conditions identified as appropriate to identify actionable trades with high confidence. In the study of signal correlation, the process has been designed to identify conditions of high confidence and to then observe these conditions using actual stock charts to test outcomes.
INTRODUCTION
The question of whether or not it is possible to predict a stock’s price movement and, more specifically, its direction, is unsettled within the segment of the market focused on chart analysis and technical analysis.
This paper sets forth the theory of signal correlation, which claims that specific attributes of price behavior can be applied to significantly outperform the average returns from investing (in comparison to benchmark index returns from the DJIA). It further demonstrates how specific price patterns provide exceptional opportunities to exploit inefficiencies in price movement, expanding on the concept of informational efficiency to offer an alternative: that price forecasting may be efficient when specific attributes are present, so that an Efficient Forecasting Hypothesis (EFH) may be applied to redefine technical market efficiency. This is based not on past price behavior or the traditional theory of efficiency, but on the theory that forecasting based on a highly reliable system is a reasonable alternative.
The purpose of this paper is to demonstrate that prediction of price movement can be accomplished with a high degree of success if based on strict adherence to a list of specific attributes, collectively referred to as signal correlation. This consists of a series of observations, development of cases through analysis of stock charts, and analysis of the results.
LITERATURE REVIEW
At the core of this study is the topic of Japanese candlesticks, formations of price direction in use in Japan since the 18th century as a means for tracking rice futures. Candlesticks were introduced in the United States by Steve Nison, who published a description of candlestick charting signals and techniques for the Western market. (Nison, 2001)
Controversy about any purely technical system is based on two concepts, the random walk hypothesis (RWH) and the efficient market hypothesis (EMH). RWH was first introduced in 1900 with the concept that prediction of price direction is not possible because all price movement is entirely random. (Bachelier, 1900) EMH was introduced in 1970 and states that all stock prices have been discounted for publicly known information; as a result, all pricing of stocks is informationally efficient. The introduction of EMH further concluded that consistently achieving higher than average returns is impossible when based on the study of historical prices and volume. (Fama, 1970)
Among the popular theories of price behavior, the two most widely accepted and best known are the random walk hypothesis (RWH) and the efficient market hypothesis (EMH). The random walk hypothesis (RWH) was best described by its primary modern supporter, who argued that
if the random walk theory is an accurate description of reality, then the various “technical” or “chartist” procedures for predicting stock prices are completely without value. (Fama, 1995)
Chartist techniques are not always based on past behavior of a price series, but more often on very recent price pattern recognition, as examination reveals in coming sections of this paper.
Fama continued his work on RWH to further argue that chartist techniques cannot be reliably used to predict prices. Fama referred to the related concept of the efficient market hypothesis and concluded:
In an efficient market, competition among the many intelligent participants leads to a situation where, at any point in time, actual prices of individual securities already reflect the effects of information based both on events that have already occurred and on events which as of now the market expects to take place in the future. (Fama, 1995)
The cases for EMH and RWH are made strongly if the examination ends there. In other words, the assumption that efficiency and randomness rule the markets is taken to be true because chartists may not be able to provide proof to the contrary. Fama contended that
In fact, the analyst will do better than the investor who follows a simple buy-and- hold policy as long as he can more quickly identify situations where there are non-negligible discrepancies between actual prices and intrinsic value than other analysts and investors … That is, in a random-walk-efficient market, on the average, a security chosen by a mediocre analyst will produce a return no better than that obtained from a randomly selected security of the same general riskiness. (Fama, 1995)
Studies involving candlestick signals and confirmation have provided a convincing case for the ability to predict price movement. Chartists are inclined to seek very recent past price patterns with demonstrated reliability in order to forecast price responses. Reliance on very recent patterns is quite different from reliance on past price behavior.
Three specific sub-topics that follow challenge these assumptions. These are: pattern recognition, investor psychology, and swing trading.
Pattern recognition – Contradicting the widely held belief that charting techniques rely on a likely repetition of past price patterns, chartists tend to rely not on the past price behavior but on the nature of very recent price patterns. Recognizing that specific patterns forecast reversal or continuation of the current trend, chartists apply this theory most effectively by combining these patterns with proximity, the relationship between price levels and support or resistance. Pattern recognition is effective in empowering chartists to combine subjective interpretations of patterns with carefully observed rules for accepting a forecast for coming price reversal or continuation. For this reason
technical analysis has survived through the years, perhaps because its visual mode of analysis is more conducive to human cognition, and because pattern recognition is one of the few repetitive activities for which computers do not have an absolute advantage (yet). (Lo, Mamaysky, Wang, 2000)
It is not only true that chartists are able to recognize price patterns and forecasts of coming price direction; this also means chartists are able to recognize actionable changes in current trends, and to act on them:
Using charts, technical analysts seek to identify price patterns and market trends in financial markets and attempt to exploit these patterns. (Hoon, Hong, Wu, 2014)
The exploitation of price patterns refers to traders’ ability to generate profitable trades. Effective application of price pattern recognition has demonstrated that this is possible. In fact
Significant return from technical analysis, even in conjunction with valuation methods, tends to argue against the efficient market hypothesis. Consequently, there is a close link between the validity of technical analysis and the inefficiency of the market. (Caginalp, Balenovich, 2003)
As a profit-generating methodology, technical analysis has to be properly defined. The distinction between past price behavior and recent price patterns is substantial, and while some observers continue to assume that the practice of technical forecasting is based on past price behavior, this is not universally true. If a technical system is based on the assumption that past prices are likely to repeat in the future, the argument presented by Fama (1995) and others against technical analysis is valid. However, modern chartist techniques are not necessarily based on past price behavior, but on pattern recognition.
Investor psychology – The second important distinction to offer between EMH and technical analysis involves investor psychology. EMH is based on the belief that prices are informationally efficient; that is, all known information has been factored into the prices of stocks. This belief does not give weight to the importance of investor psychology. Market behavior is far from efficient, as studies have observed. Traders tend to react to stock market news in the most inefficient manner, overreacting to price action after news, especially earnings surprises or announcements of coming acquisitions and other unexpected events.
Thus, markets may be informationally efficient but realistically inefficient. If it is true that the market tends to overreact to news and events, then EMH is drawn into doubt. (Barberis, Shleifer, Vishny, 1998)
Combined with price pattern analysis, the human element of investor psychology makes short-term markets highly inefficient, and this inefficiency creates opportunities for chartists to exploit exaggerated price movement. Chartists are most likely to study trends and patterns, recognizing that current prices not only are far from dependent on past price patterns but also reflect the influence of the human element:
Chartism is based on the assumption that trends and patterns in charts reflect not only all available information but the psychology of the investor as well. (Acar and Satchell, 1997)
The distinction is a crucial one. Under Fama’s (1995) assumption that chartist techniques rely on past behavior, it would be true that price prediction is flawed if, in reality, recent prices are not dependent on more distant past price performance. However, if chartist techniques are more closely aligned with the assumption of Acar and Satchell (1997) and involve a combination of information and investor psychology, the criticism loses its force; market psychology has been an observed fact of life over many years. Rather than being ruled by efficiency, prices tend to move in the short term based on emotions:
Since the technical approach is based on the theory that the price is a reflection of mass psychology (“the crowd”) in action, it attempts to forecast future price movements on the assumption that crowd psychology moves between panic, fear, and pessimism on one hand and confidence, excessive optimism, and greed on the other. (Pring, 1991)
Accepting the reality of crowd mentality and the emotional response to price movements, a chartist is compelled to understand both price patterns and the human element:
Technical analysts focus mainly on future stock prices given past patterns in stock price movements, but they also take into account psychological aspects in the demand for a company’s stock. (Hoon et al, 2014)
This leads to an understanding of the mechanism used in technical analysis – the chart – as a means for quantifying the price pattern and investor psychology, so that “charts are used as landmarks for human perception.” (Phillips, Todd, 2003) However, even with the understanding that chartist techniques combine pattern recognition with investor psychology, the debate about the validity of technical analysis is not settled. A 1992 study applied 26 trading rules to the 30 stocks in the Dow Jones Industrial Average and concluded that these rules led to significant outperformance compared to the benchmark of simply holding cash. This conclusion strongly supports the view that random price movement is not a valid hypothesis. (Brock, Lakonishok, LeBaron, 1992) This study was supported by further testing of Brock’s theory, which concluded that “several technical indicators do provide some incremental information and may have some practical value.” (Lo et al, 2000) However, these two studies are contradicted by a 1999 study, which used a larger sample and a longer testing period and concluded that Brock’s trading rules failed to generate superior performance out-of-sample. (Sullivan, Timmermann, White, 1999)
The contradictory studies pose problems for chartists who rely on a general theory of prediction, but they do not mean that specific attributes of price and trend patterns cannot be employed to consistently outperform average market returns. A focus on short-term trends and swing trading favors technical analysis over fundamental analysis.
Swing trading – Short-term trading, also known as swing trading, is a system for moving in and out of trading positions rapidly, often with a turnaround time as short as three days:
The swing trader’s actions are goal driven rather than time driven – unlike the day trader, he has no set time at which he expects to exit his positions, but will wait for a specific price objective to be met. Fortunately, that wait usually isn’t too long – typically as little as three to five days, though sometimes as long as two to three weeks. (Spears, 2003)
Swing trading is acknowledged as a desirable medium for the application of technical analysis, based on the tendency for short-term prices to be predictable in the three- to five-day range. “Technical analysis can accurately forecast short-term stock price movement and lead to profitable trading decisions.” (Jegadeesh, Titman, 1993) The balance between technical and fundamental analysis is understood as one aspect of swing trading. In a balanced approach such as signal correlation, the first step is selection of stocks based on strong fundamentals; subsequent steps employ technical analysis to determine trend reversal or continuation. This combination makes sense for traders, and also takes into account the reliability of each form of analysis:
Combining both technical and fundamental analysis can better explain stock price movements, compared to the case where technical or fundamental analysis is employed independently. (Hoon et al, 2014)
In swing trading, emphasis is on the short-term price direction, but both technical and fundamental approaches contribute to the overall quality of analysis:
Technical analysis tends to support superior outcomes in the short term, and fundamental analysis is believed to lead to superior outcomes in the long term. (Taylor, Allen, 1992)
The application of technical analysis, based on first establishing superior fundamental attributes of a company and then focusing on price patterns, is a starting point for signal correlation. The analysis of chart patterns is then based on a study of candlestick signals with confirmation of several types (other candlesticks, traditional Western technical price signals, volume, moving averages, and momentum oscillators). Candlesticks are at the core of generating forecasts based on the high reliability and predictive qualities of many signals. One study involved three-session candlestick reversal signals for all of the stocks in the S&P 500:
All 3-day candlesticks were tested for 265,000 days on each stock in the S&P 500, and the conclusion was that these reversals were significantly predictive. (Morris, 1992)
The Morris (1992) study supports the belief that price patterns, when organized into observable signals for reversal or continuation, can accurately forecast price movement yet to occur. This is confirmed by the most definitive study of candlestick reliability, which was performed by Thomas Bulkowski. This study analyzed 4.7 million candlesticks and ranked their performance over 15 years. Each was then assigned a numerical ranking and percentage of statistical outcomes based on expectations versus actual results. (Bulkowski, 2008) Bulkowski concluded that certain candlestick signals are highly predictive and may provide high confidence in forecasting of immediate price movement. If these signals are also confirmed, as Bulkowski’s study revealed, the use of candlestick signals makes it possible to predict movement of price based on analysis of particularly strong candlesticks, confirmation, and proximity.
METHODOLOGY
The starting point in a swing trading strategy should be stock selection. In the two-year study of 578 trades, criteria were applied to select stocks on a strictly fundamental rationale. However, these criteria were often overridden by more immediate considerations, notably strong earnings surprises accompanied by proximity of price to strong signals and confirmation.
Setting aside technical aspects, the quantified fundamental criteria were rated based on the summary in Table A. The outcomes of these applied values may be thought of as confidence levels in the decision to trade a specific stock based on fundamental attributes.
A company may earn up to 27 points based on these criteria, with a possible low rating of 3 points (1 point for the first three classifications, minus 6 points for the last 6).
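The mechanics of this fundamental rating can be illustrated with a minimal Python sketch. Because Table A is not reproduced here, the criterion names and point values below are hypothetical stand-ins; only the 27-point maximum and the conversion to a percentage are taken from the text.

# A minimal sketch of the fundamental rating mechanics described above.
# Criterion names and point values are illustrative stand-ins for Table A,
# which assigns up to 27 points across the fundamental tests.

def fundamental_confidence(points_awarded, max_points=27):
    """Sum the points earned across the fundamental criteria and
    express the result as a confidence percentage."""
    score = sum(points_awarded.values())
    return 100.0 * score / max_points

# Hypothetical breakdown: a stock earning 16 of 27 points scores ~59%,
# the same total reported for Visa (V) in the Results section.
example = {"dividend_history": 4, "pe_range": 3, "revenue_growth": 3,
           "earnings_growth": 4, "debt_ratio": 2}
print(f"{fundamental_confidence(example):.0f}%")  # -> 59%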
On a technical level, a similar rating system is applied to four specific attributes: (a) the independent variable of proximity and (b) the dependent variables of signal strength, confirmation strength, and the prior trend’s strength. This is a method to arrive at a confidence level for trading based on technical attributes. The rating system is summarized in Table B.
In the rating of proximity and the other attributes, the possible range is from zero to 9 (2 points for each of the 4 attributes and a maximum of 1 additional point for multiple confirmations). These may be equated with percentages of confidence, with maximum confidence of 9 points treated as 90%. This is the maximum confidence in signal correlation, recognizing that 100% confidence is not possible even in the best of circumstances.
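The 9-point scale can be expressed in a brief sketch. The attribute names follow the text; the conversion to a percentage follows the worked ratings reported in the Results section (5 of 9 points is reported as 56%, 8 of 9 as 89%).

# A minimal sketch of the Table B technical rating: 2 points for each of
# the four attributes plus 1 point when multiple confirmations are found,
# for a 0-9 scale. Percentages follow the worked ratings in the Results
# section (5/9 -> 56%, 8/9 -> 89%, 3/9 -> 33%).

def technical_confidence(proximity, signal_strength,
                         confirmation_strength, prior_trend,
                         multiple_confirmations=False):
    """Each attribute scores 0, 1, or 2; returns confidence as a percentage."""
    for v in (proximity, signal_strength, confirmation_strength, prior_trend):
        if not 0 <= v <= 2:
            raise ValueError("attribute scores range from 0 to 2")
    score = (proximity + signal_strength + confirmation_strength
             + prior_trend + (1 if multiple_confirmations else 0))
    return 100.0 * score / 9

# Hypothetical attribute scores that reproduce the Wal-Mart (WMT) 8/9 rating:
print(round(technical_confidence(2, 2, 2, 1, True)))  # -> 89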
The selection of stocks and the decision to enter a trade were based on quantified values, with the technical values given priority over the fundamental. The reasons for this include:
- Earnings surprises were of greater importance than other signals, due to the high frequency of correction after exaggerated price movement. Thus, the earnings surprise is a fundamental indicator with a technical symptom.
- A swing trading strategy is a purely technical system, based on price movement and confirmation rather than fundamental indicators.
- News of mergers, product development, litigation, and other influences also affect selection of a company’s stock for swing trading.
Of dozens of candlestick signals, many produce accurate forecasts only at or close to 50% of the time. In these instances, candlesticks provide no useful data, since a 50% likely reversal is not a strong indicator. In the signal correlation analysis, only 10 candlesticks were used to predict price movements, selected for the high predictive values found in the Bulkowski study. (Bulkowski, 2008) These are summarized in Table C.
In the signal correlation testing study described in coming sections of this paper, these 10 signals were employed in order to demonstrate the likely outcome for exceptionally strong signals and confirmation. However, an additional restriction was placed on this testing. Most candlestick analysts adopt the position that a reversal signal located in the wrong proximity acts as a continuation signal; and that a continuation signal in the wrong proximity provides a reversal signal. For example, if a bearish reversal appears during a downtrend, this point of view recognizes this as a bearish continuation. Contrary to this belief, the theory of signal correlation assumes that reversals work only when in the proper proximity (bearish reversal during an uptrend, bullish reversal during a downtrend, bearish continuation during a downtrend, and bullish continuation during an uptrend).
This study assumes that such convertibility of signals does not apply in chart analysis; when a signal appears in the wrong proximity, it is treated as providing no actionable indication whatsoever.
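This proximity rule can be expressed as a simple validity test. The function below is a minimal sketch of the four actionable cases described above; it is illustrative and not a reproduction of the study’s trading code.

# A minimal sketch of the proximity rule stated above: a signal is
# actionable only when its type matches the prevailing trend, and a
# signal in the wrong proximity is ignored rather than converted.

def actionable(signal_type, signal_bias, trend):
    """signal_type: 'reversal' or 'continuation';
    signal_bias: 'bullish' or 'bearish'; trend: 'uptrend' or 'downtrend'."""
    if signal_type == "reversal":
        # Bearish reversal needs an uptrend; bullish reversal needs a downtrend.
        return (signal_bias == "bearish") == (trend == "uptrend")
    if signal_type == "continuation":
        # Bullish continuation needs an uptrend; bearish needs a downtrend.
        return (signal_bias == "bullish") == (trend == "uptrend")
    return False

print(actionable("reversal", "bearish", "uptrend"))    # True: proper proximity
print(actionable("reversal", "bearish", "downtrend"))  # False: no indication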
In addition to seeking these 10 exceptionally strong reversal and continuation signals, numerous Western signals were used in conjunction with candlesticks. These included wedges and triangles, head and shoulders, double tops and bottoms, and gaps, among others. However, the strong candlestick signals were the first choice for signal location and strength of predictive attributes.
The theory of signal correlation: To support the value of technical analysis and its predictive properties, a theory is offered based on the observation of recent pattern trends (versus past price pattern behavior). This theory is stated as:
Theory of signal correlation: Candlestick signals are reliable only when correlated.
A candlestick indicator serves as an initial forecast of coming price reversal or continuation. However, it is not dependable, even with confirmation, unless and until it is analyzed in the context of the independent variable of proximity (to resistance or support). It is further quantified based on several dependent variables, including the strength of the preceding trend, strength of the initial signal, and strength of the confirmation signal.
Three additional hypotheses, all of which were tested in the two-year study consisting of 578 trades, may be applied to specific chart-based analysis. These are:
H-I: Strong reversal is likely to be followed by equally strong confirmation.
When a reversal signal is strong in its pattern, confirmation signals are also likely to exhibit strength. This places reversal confidence at its highest possible level, meaning a retreat from the previous trend and price movement in the opposite direction. A strong reversal is also most likely to lead to a strong reversal trend.
H-II: Strong continuation is likely to be followed by equally strong confirmation.
Strong continuation signals are likely to be accompanied by equally strong confirmation signals. Under these conditions, continuation confidence is at its highest possible level, meaning likely breakout above resistance or below support. A strong continuation is also most likely to lead to further movement with strength at least equal to the preceding trend.
H-III: Weak signals are most likely to be followed by weak confirmation.
This applies at all levels of proximity and to all levels of trend strength (and most of all when the preceding trend was weak). The weakness of signals and confirmation is most likely to be located at mid-range and not in close proximity to resistance or support.
Each of these three tested hypotheses is examined in detail in the next section of this paper. These further explorations include stock charts demonstrating how each statement may be observed in the forecasting of price movement, and how various levels of strength and weakness affect outcomes.
RESULTS
The trades executed in this two-year study produced consistently profitable outcomes based on observation of the attributes of signal correlation. Examples of applying the trading rules associated with signal correlation are provided in several stock charts. These charts cover the period from December 1, 2013 through May 31, 2014, and the significant signals and trends are highlighted on each.
Following are detailed explanations of charts for the three hypotheses:
H-I: Strong reversal is likely to be followed by equally strong confirmation.
The first hypothesis concerns the strength of a reversal as measured by confirmation. A reversal signal can be treated as reliable only when proximity to support or resistance exists, and only when the reversal or continuation is strongly confirmed.
Visa (V) was quantified first on a fundamental basis, where it scored 59% or 16 points of a possible 27. These results are summarized in Table D. (Standard & Poor’s, 2015)
On the chart of Visa, a rising wedge appeared. This was a bearish reversal signal, and it was quickly confirmed by a bearish candlestick reversal signal in the form of three black crows. The proximity of these signals conformed to the standards of signal correlation; both signals appeared at the top of a prior uptrend that, even with the temporary decline to the beginning of the rising wedge, identified the point of reversal. As expected, the signal and confirmation led to a downtrend.
The technical quantification of Visa resulted in a rating of 5 out of a possible 9 points, or 56%. This is summarized in Table E.
H-II: Strong continuation is likely to be followed by equally strong confirmation.
The second hypothesis states that continuation signals often appear at a point where price breaks through support or resistance; if strongly confirmed, these continuation signals forecast a successful breakout and the establishment of a new, revised trading range.
Wal-Mart (WMT) was quantified first on a fundamental basis, where it scored 85% or 23 points of a possible 27. These results are summarized in Table F. (Standard & Poor’s, 2015)
The chart of Wal-Mart (WMT) contained several examples of this form of signal correlation. It began with the continuation signal of the downtrend at the beginning of the chart. The three-line strike forecasted continued price decline. This was confirmed by the descending triangle. This combined set of signals moved confidence up to a near certainty of continuation, although the duration of that trend could not be known. Chartists who are tracking continuation signals can only wait for new signals to emerge before taking action.
A bullish reversal appeared in the form of three white soldiers in very close proximity to the bottom of the downtrend. This was confirmed by a concurrent ascending triangle, a form of bullish continuation. As price broke through the resistance line set by the top of the ascending triangle, two strong bullish continuation signals appeared, three-line strikes. Finding two of these in such close proximity is rare and provides a strong signal. As price broke through, resistance flipped to set up a new level of support.
The technical quantification of Wal-Mart resulted in a rating of 8 out of a possible 9 points, or 89%. This is summarized in Table G.
H-III: Weak signals are most likely to be followed by weak confirmation.
Just as strong reversal and continuation signals raise confidence in the forecast price direction, weak signals create confusion about the next price direction. An example is shown on the chart of Caterpillar (CAT).
CAT was quantified first on a fundamental basis, where it scored 70% or 19 points of a possible 27. These results are summarized in Table H. (Standard & Poor’s, 2015).
Resistance rose gradually through most of the period shown. However, two bearish engulfing signals forecast downtrends that did not materialize. Two aspects of these signals explain why they failed. First, both examples of the bearish engulfing can be described as weak. A strong reversal signal is characterized by strong differences between two consecutive sessions, such as a small session followed by a much larger session. In these instances, both sessions barely met the criteria for the bearish engulfing signal; although the second day did engulf the first, the difference in size was minimal. The second flaw was that one weak signal was confirmed by another weak signal. This explains why the signals were not convincing in their forecast of a downtrend. To the contrary, as the chart concluded, a new line of support was set in a gradually rising formation, further contradicting the forecast of a reversal.
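The weakness test described here can be made explicit in a short sketch. The engulfing condition follows the standard two-session definition; the 1.5x body-size ratio separating strong from weak signals is an illustrative assumption, since the text does not specify a numeric threshold.

# A minimal sketch of the strength test above: a bearish engulfing pattern
# qualifies when an up session is engulfed by a down session, but it is
# treated as strong only when the engulfing body is substantially larger.
# The 1.5x threshold is an illustrative assumption.

def bearish_engulfing_strength(open1, close1, open2, close2,
                               strong_ratio=1.5):
    """Return 'none', 'weak', or 'strong' for two consecutive sessions."""
    up_day = close1 > open1                        # first session closes higher
    down_day = close2 < open2                      # second session closes lower
    engulfs = open2 >= close1 and close2 <= open1  # second body engulfs first
    if not (up_day and down_day and engulfs):
        return "none"
    body1, body2 = close1 - open1, open2 - close2
    return "strong" if body2 >= strong_ratio * body1 else "weak"

print(bearish_engulfing_strength(100, 101, 101.1, 99.9))  # barely -> 'weak'
print(bearish_engulfing_strength(100, 101, 103, 98))      # -> 'strong'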
The technical quantification of Caterpillar resulted in a rating of 3 out of a possible 9 points, or 33%. This is summarized in Table I.
DISCUSSION
This paper includes the analysis of 578 virtual trades over two years, applying the principles of signal correlation. The study used option contracts. Options, as derivatives of stocks, present both problems and opportunities. In this study, the intended short-term holding period was designed to test the effectiveness of signal correlation with a highly leveraged instrument. Options were generally selected to expire within one to two months, based on cost as well as time value. Time value is one of three parts of an option’s total premium; it is predictable in the sense that its decline accelerates as expiration approaches. The second form of value is intrinsic value, equal to the number of points an option is in the money. The third is extrinsic value, also called implied volatility; this value is based on underlying stock volatility and market perceptions about the likelihood of price movement between a specific date and the option’s expiration.
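This decomposition can be expressed in a brief sketch. Intrinsic value follows the standard definition given above; the remainder of the premium, which the paper further divides into time value and extrinsic (volatility) value, is combined here into a single non-intrinsic figure. The numbers in the example are hypothetical.

# A minimal sketch of the premium decomposition described above: intrinsic
# value is the in-the-money amount, and the remainder of the premium is
# the time/extrinsic portion attributed to time decay and implied volatility.

def decompose_premium(premium, stock, strike, option_type="call"):
    """Return (intrinsic, non_intrinsic) for a given option premium."""
    if option_type == "call":
        intrinsic = max(0.0, stock - strike)
    else:  # put
        intrinsic = max(0.0, strike - stock)
    return intrinsic, premium - intrinsic

# Hypothetical example: a 50-strike call priced at 3.50 with the stock at
# 52 carries 2.00 of intrinsic value and 1.50 of time/extrinsic value.
print(decompose_premium(3.50, 52.0, 50.0, "call"))  # -> (2.0, 1.5)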
The use of options is advantageous given the tendency of stock prices to overreact to news, notably the unexpected, such as earnings surprises. When price moves in an exaggerated fashion, it also tends to correct within a few sessions. In these rapid changes in a stock’s price, option values tend to move as much as, and often more than, the underlying stock. This tendency reveals that price movement tends to consist of a series of overreactions to immediate news followed by corrective reversals.
When this reality is combined with a program of signal correlation, the potential for consistent outcomes based on accurate forecasting is significant. The two-year study performed to test the signal correlation hypothesis resulted in an average net return of 37.8% per year. In comparison, during the same period (September 2012 through August 2014), the Dow Jones Industrial Average advanced from approximately 13,000 to 16,500, a return of roughly 27% over two years, or a compound average return of roughly 12.5% per year.
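The benchmark arithmetic can be verified directly; the paper’s 12.5% annual figure is closest to a compound (geometric) average of the two-year advance.

# Worked arithmetic for the benchmark comparison above.
start, end, years = 13_000, 16_500, 2
total_return = end / start - 1                       # ~0.269, i.e. ~27%
compound_annual = (1 + total_return) ** (1 / years) - 1
print(f"total: {total_return:.1%}, annualized: {compound_annual:.1%}")
# -> total: 26.9%, annualized: 12.7%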
The observation of trades also indicated that shorter holding periods for options were likely to yield higher numbers of profitable outcomes, while longer holding periods yielded a declining proportion of profitable trades. The breakdown by holding period of the trades during the two-year period is:
1 – 5 days 95.3% profitable
6 – 10 days 89.5% profitable
11 – 20 days 78.8% profitable
Over 20 days 79.3% profitable
To some extent, the short-term trades represent a distorted view of outcomes. Positions open for five days or less are likely to be closed only when profitable; if not profitable, they are kept open for a longer period of time. The holding-period breakdown therefore contains an element of validity, but it cannot be used to conclude that shorter holding periods are always advantageous.
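For readers wishing to replicate the breakdown, a minimal sketch of the bucketing follows, applied to a hypothetical trade log; the sample records below are illustrative only and are not drawn from the study’s trades.

# A minimal sketch of the holding-period breakdown above, computed from a
# hypothetical trade log of (days_held, profitable) records.

def win_rate_by_holding_period(trades):
    """Group trades into the paper's four buckets and report % profitable."""
    buckets = {"1-5 days": [], "6-10 days": [],
               "11-20 days": [], "Over 20 days": []}
    for days, profitable in trades:
        if days <= 5:
            key = "1-5 days"
        elif days <= 10:
            key = "6-10 days"
        elif days <= 20:
            key = "11-20 days"
        else:
            key = "Over 20 days"
        buckets[key].append(profitable)
    return {k: 100.0 * sum(v) / len(v) for k, v in buckets.items() if v}

sample = [(3, True), (4, True), (8, False), (15, True), (25, True), (30, False)]
print(win_rate_by_holding_period(sample))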
All 578 of these trades are available for public view on the author’s website, www.thomsettpublishing.weebly.com. Unusually high losses (100%) occurred on 25 trades, and unusually high gains (above 100%) occurred in 14 trades. These are summarized in the Appendix.
Even before trades were entered, a specific course of analysis was undertaken. This began with the selection of high-quality companies whose stocks had listed options. The methodology for stock selection was as follows, with analysis based on five fundamental attributes over the most recent 10 years (a schematic screening sketch follows the list):
- Dividend yield and history – higher-yielding stocks were evaluated as favorable, and raising dividends over the past 10 years was further evidence of a company’s fundamental quality. A company raising dividends for all 10 of the past 10 years was most desirable.
- P/E ratio range – the annual range from high to low price/earnings ratio was also evaluated, with the most desirable range identified between 25 and 10.
- Revenue growth – annual increases in revenues were also studied, and the more years of increased dollar value of revenues, the more favorable the opinion.
- Earnings growth – increases in net earnings were also evaluated in three separate ways: number of years reporting increases in dollar value of earnings, the same analysis for S&P core earnings, and a study of the net return (earnings divided by revenues).
- Debt ratio – this ratio compares debt capitalization to total capitalization. The lower the debt ratio, the more desirable; a flat or falling debt ratio was treated as a positive result, and a rising debt ratio was treated as a negative condition.
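A schematic version of this screen is sketched below. The field names and threshold values are illustrative assumptions only; the paper’s actual weighting of these attributes is given in Table A.

# A schematic screen over the five fundamental attributes listed above.
# Field names and thresholds are illustrative assumptions.

def passes_fundamental_screen(c):
    """c is a dict of 10-year fundamental attributes for one company."""
    return (
        c["years_of_dividend_increases"] >= 8             # favor 10-for-10 raisers
        and 10 <= c["pe_low"] and c["pe_high"] <= 25      # desirable P/E range
        and c["years_of_revenue_growth"] >= 7             # consistent revenue gains
        and c["years_of_earnings_growth"] >= 7            # consistent earnings gains
        and c["debt_ratio_trend"] in ("flat", "falling")  # favorable leverage trend
    )

candidate = {"years_of_dividend_increases": 10, "pe_low": 12, "pe_high": 22,
             "years_of_revenue_growth": 9, "years_of_earnings_growth": 8,
             "debt_ratio_trend": "falling"}
print(passes_fundamental_screen(candidate))  # -> True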
Additional factors were also considered in the selection of a stock for analysis. These included news concerning mergers or acquisitions, purchases by the company of its own stock, and earnings reports. An earnings surprise (positive or negative) was likely to result in an exaggerated move in the stock price, and in most instances, the price move retreated within two to three trading sessions. A study of 82,705 earnings surprises between 1984 and 2009 revealed an average surprise of -7.49% versus what the market expected. (Shon and Zhou, 2011) The occurrence of earnings surprises presents additional opportunities to enter short-term trades.
Combinations of the five key fundamentals with other factors (especially earnings surprises) served as the basis for companies selected for analysis. Trades were entered during trading hours and based on study of the price chart provided by StockCharts.com – with short positions valued at the net of bid prices minus trading costs assessed by Charles Schwab & Co. ($8.75 per trade for the first option plus $0.75 for each additional option traded), and with long positions at the sum of ask prices plus trading costs. The principles of signal correlation were applied to price charts, with the requirement for an initial signal plus confirmation, with proximity to support or resistance. Unusual situations were of special interest; these include large price gaps with volume spikes, price movement through support or resistance representing overreaction to earnings, rumors and other news, and exceptionally strong signals and confirmation.
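The cost accounting can be stated as a small function. The commission schedule ($8.75 plus $0.75 per additional option) is taken from the text; the 100-share contract multiplier is a standard convention assumed here.

# A minimal sketch of the trade-cost accounting described above, using the
# Charles Schwab schedule cited in the paper.

def commission(contracts):
    """$8.75 for the first option plus $0.75 for each additional contract."""
    return 8.75 + 0.75 * (contracts - 1)

def entry_cash(side, price_per_contract, contracts):
    """Long entries pay the ask plus costs; short entries net the bid
    minus costs. Option prices are quoted per share, 100 shares/contract."""
    gross = price_per_contract * 100 * contracts
    if side == "long":
        return gross + commission(contracts)   # cash paid out
    return gross - commission(contracts)       # cash received

print(entry_cash("short", 2.50, 2))  # bid 2.50 x 2 contracts -> 490.5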
The price charts, used as tests of the hypothesis elements, reveal that many repetitive attributes of charts are easily recognized, further supporting the primary claims of signal correlation:
- Independent variable: Proximity of signals to support or resistance is a key attribute to either reversal or continuation. The combination of proximity and confirmation moves confidence higher.
- Dependent variables: Technical confidence levels are developed through analysis and ratings of preceding trends, initial signals, and confirmation.
The analysis of signal correlation provides convincing arguments favoring the application of charting techniques to create reliable timing for trades. When all of the expected attributes are present (strong signal, confirmation, and proximity to support or resistance), signal correlation tends to produce the expected results. In the two-year study, profits resulted in 91.5% of trades, and the overall yield was 37.8% per year over the test period. The strength of the signal correlation analysis is that repetitive expected outcomes were found in the analysis of dozens of charts.
Theories may only offer one concept of how price movement works, and how success or failure should be interpreted. In this study, failed signals were traceable to weak trends, signals and confirmation; however, this does not guarantee that weakness will always lead to failure, or that strength will always lead to success. Short-term price trends tend to be highly chaotic and volatile. One interpretation of this problem is that price movement truly is random. However, that conclusion may be challenged by observation of longer-term price trends.
No theory about price movement can be final or conclusive. The many influences on price cannot be fully captured in isolated signals and confirmation, nor can they completely account for the psychology of investors. Even so, the track record of testing the signal correlation statements makes a convincing case favoring this charting technique.
CONCLUSION
Signal correlation is a set of beliefs concerning the strength or weakness of reversal or continuation signals and confirmation of those signals. It addresses both strength and weakness in signals and likely outcomes resulting from those attributes.
The rules observed in signal correlation were tested over a two-year period in a virtual portfolio. The standards were applied consistently. Trades were identified during active trading hours. Short trades were the net dollar value of the bid price minus trading costs; and long trades were the sum of ask prices plus trading costs. Options were used in a majority of these trades; some trades combined option positions with the underlying stock, involving strategies such as covered calls. The results included 578 trades, with 529 (91.5%) resulting in profitable outcomes and 49 (8.5%) resulting in net losses. Highlights of these trades are listed in the Appendix. Because options were used in these trades, some trades exceeded 100% profits and some represented 100% net losses. Options are highly leveraged, which explains these extremes, often in a very small window of time.
Several positions appear multiple times on a single purchase date. This is due to the type of strategy employed. For example, iron butterfly strategies of three separate expiration periods included 12 specific options, half long and half short, all opened on the same date; and numerous iron butterflies were included in the two-year test of signal correlation. In more basic strategies, simple long or short option strategies were applied.
The selection of long or short positions relied on the premium value of each option, the proximity between the option’s strike price and the value of the underlying stock, and the time remaining to expiration. An attempt was made to create uniform dollar values in trades to spread risk and avoid high dollar losses on single trades. The typical trade value was between $300 and $600. This self-imposed limitation also guided the selection of expiration date and the proximity of strike to the underlying stock price. Finally, the dollar range also determined whether long or short positions should be entered. The closer the strike to underlying value, the greater the long advantage, and the greater the short risk.
With these concerns in mind, a bullish trade would consist of a long call or a short put, and a bearish trade would consist of a long put or a short call. Alternatives beyond long or short options included synthetic long or short stock positions, collars, butterflies, covered calls, spreads, and straddles.
Once a stock was selected for analysis (based either on fundamental criteria or other methods such as earnings surprises), a technical review was applied based on the chart. Ratings above 50% were desirable, meaning that out of a possible 9 points, a technical rating of 5 or more was required. In the three examples, strong technical ratings were found in Visa (V) and Wal-Mart (WMT), and a weak technical rating was found in Caterpillar, the chart used to demonstrate weakness. Under the conditions of the signal, confirmation, and trend, the technical rating was only 33%, so Caterpillar did not represent a viable candidate for a swing trade. This assessment was based on technical criteria at the time of the chart and did not apply throughout the two-year period studied. Every company is likely to evolve through strong and weak periods, defining swing trading as a timing strategy and not a universal standard for timing trades.
Options were selected in this study due to their superior leverage and potential to control and minimize market risks. The advantage to long options is limited cost and capped maximum loss. The disadvantage is that time decay reduces the value of the option, so even when positive movement occurs, time value offsets increases in intrinsic value. The advantage to short options is that time decay reduces the premium value; and since short options are sold, reduction of value increases profitability. Those trades identified as 100% profitable are invariably due to worthless expiration of short options. The disadvantage to short options is that if they move in the money, they will be exercised. This is avoided by closing the positions at a small profit or loss, or by rolling forward to later-expiring short positions.
A bullish trade consists of a long call or short put. A bearish trade consists of a long put or short call. The decision to select one over the other is based on (a) the premium value of each option; (b) profit potential versus collateral requirements (uncovered short options require collateral on deposit equal to 100% of the exercise value); (c) time to expiration; (d) proximity of the option strike to the value of the underlying stock; and (e) volatility of the underlying stock in the days and weeks preceding the trade.
Closing of positions was based on a desire to achieve double-digit returns. Once this was accomplished, open positions were closed and profits taken. If profits did not materialize, positions were closed to reduce losses, allowed to expire, or (in the case of short options) rolled forward to avoid impending exercise.
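This closing discipline can be summarized in a short sketch. The double-digit profit target follows the text; the stop-loss level and the days-to-expiration trigger for rolling short options are illustrative assumptions, since the paper does not specify exact thresholds.

# A minimal sketch of the exit discipline described above. The -50% stop
# and the 5-day rolling trigger are illustrative assumptions.

def exit_action(return_pct, days_to_expiry, is_short, in_the_money):
    if return_pct >= 10.0:
        return "close and take profit"     # double-digit target met
    if is_short and in_the_money and days_to_expiry <= 5:
        return "roll forward"              # avoid impending exercise
    if return_pct <= -50.0:
        return "close to reduce loss"      # illustrative stop level
    return "hold / allow to expire"

print(exit_action(12.4, 20, False, False))  # -> close and take profit
print(exit_action(-8.0, 3, True, True))     # -> roll forward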
The specific technique employed began with an analysis of the stock chart for a number of selected companies. These were chosen based on fundamental strength as a first criterion. The principles of signal correlation were applied to identify high-confidence trading opportunities. Upon discovery of conditions favoring trades, an options trade was next selected and entered. This is a departure from the more common practice of timing options trades based on movement in implied volatility. The study was based entirely on signal correlation and analysis of the stock chart, and then selection of options based on timing of forecast price movement.
REFERENCES
Acar, E. and S. Satchell. “A theoretical analysis of trading rules: an application to the moving average case with Markovian returns,” Applied Mathematical Finance 4 (1997), 165-180
Bachelier, L. “Théorie de la Spéculation,” Doctoral dissertation in mathematics, University of Paris (1900). English translation by Cootner, P.H. (ed.), 1964
Barberis, N., A. Shleifer and R. Vishny. “A Model of Investor Sentiment.” Journal of Financial Economics 49 (1998), 307-343
Brock, W., J. Lakonishok, and B. LeBaron. “Simple technical trading rules and the stochastic properties of stock returns.” Journal of Finance 47 (5) (1992), 1731-1764
Bulkowski, T. Encyclopedia of Candlestick Charts. Hoboken, NJ: John Wiley & Sons, 2008
Caginalp, G. and D. Balenovich. “A Theoretical Foundation for Technical Analysis.” Journal of Technical Analysis, Winter-Spring 2003
Fama, E. “Efficient Capital Markets: A Review of Theory and Empirical Work.” Journal of Finance 25 (1970), 383-417
Fama, E. “Random Walks in Stock Market Prices.” Financial Analysts Journal, January-February 1995, 75-80
Hoon, K., J. Hong and E. Wu. “Can Technical Analysis be used to Enhance Accounting Information Based Fundamental Analysis in Explaining Expected Stock Price Movements?” University of Technology. (2014) Sydney, Australia
Jegadeesh, N. and S. Titman. “Returns to buying winners and selling losers.” Journal of Finance 48 (1993), 65-91
Lo, A., H. Mamaysky, and J. Wang. “Foundations of technical analysis: Computational algorithms, statistical inference, and empirical implementation.” Journal of Finance 55 (4) (2000), 1705-1765
Morris, G. Candlepower. Chicago: Probus, 1992
Nison, S. Japanese Candlestick Charting Techniques, 2nd ed. New York: Prentice Hall Press, 2001
Phillips, F. and J. Todd. “Perceptual Representation of Visible Surfaces.” Perception and Psychophysics 65 (2003), 747-762
Pring, M. Technical Analysis Explained. New York: McGraw-Hill, 1991
Spears, L. Swing Trading Simplified. Columbia MD: Marketplace Books, 2003
Standard & Poor’s Stock Reports, retrieved March 2015
StockCharts.com, price charts, downloaded at StockCharts.com
Sullivan, R., A. Timmermann, and H. White. “Data-snooping, technical trading rule performance, and the bootstrap.” Journal of Finance 54 (5) (1999), 1647-1691
Taylor, M. and H. Allen. “The use of technical analysis in the foreign exchange market.” Journal of International Money and Finance 11 (1992), 304-314
Shon, J. and P. Zhou. Trading on Corporate Earnings News. Upper Saddle River, NJ: FT Press, 2011
APPENDIX – SELECTED PROFITS AND LOSSES IN THE TEST PERIOD
The overall outcomes of the 578 trades are summarized in Table J.
Losses equal to 100% were caused by long options that expired worthless. There were 25 of these. Long option purchases were normally limited to $500 or less, with some exceptions. The 25 positions experiencing 100% losses ranged between $94 and $1,124 and averaged $391 in losses, for a total of $9,784.
The concentration of trades resulting in profits of 1-25% demonstrates the swing trading goal of achieving profits rapidly and in double digits. Once double digits are reached, the standard is to close the position and take profits.
Profits exceeding 100% occurred in 14 trades, with holding periods ranging from one to 79 days. These are shown in Table K.
METHODOLOGY
All trades were entered during trading hours and based on analysis of the most recent price chart as provided by StockCharts.com – with short positions valued at the net of bid prices minus trading costs assessed by Charles Schwab & Co. ($8.75 per trade for the first option plus $0.75 for each additional option traded), and with long positions at the sum of ask prices plus trading costs.
The principles of signal correlation were applied to price charts, with the requirement for an initial signal plus confirmation, with proximity to support or resistance. Unusual situations were of special interest; these include large price gaps with volume spikes, price movement through support or resistance representing overreaction to earnings, rumors and other news, and exceptionally strong signals and confirmation.
Outcome: 578 total trades: 529 profitable outcomes (91.5%) and 49 losses (8.5%)