JOURNAL OF
TECHNICAL ANALYSIS

Issue 65, Summer/Fall 2008

Editorial Board

Julie Dahlquist, Ph.D., CMT

J. Ronald Davis

Golum Investors, Inc.

Cynthia A. Kase, CMT, MFTA

Expert Consultant

Philip J. McDonnell

Michael J. Moody, CMT

Dorsey, Wright & Associates

Ken Tower, CMT

Chief Executive Officer, Quantitative Analysis Service

Timothy Licitra

Marketing Services Coordinator, Market Technicians Association, Inc.

CMT Association, Inc.
25 Broadway, Suite 10-036, New York, New York 10004
www.cmtassociation.org

Published by Chartered Market Technician Association, LLC

ISSN (Print)

ISSN (Online)

The Journal of Technical Analysis is published by the Chartered Market Technicians Association, LLC, 25 Broadway, Suite 10-036, New York, NY 10004. Its purpose is to promote the investigation and analysis of the price and volume activities of the world’s financial markets. The Journal of Technical Analysis is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, and several other countries in Europe and Asia. The Journal of Technical Analysis is copyrighted by the CMT Association and registered with the Library of Congress. All rights are reserved.

Letter from the Editor

by Connie Brown, CFTe, MFTA

While the goal of the Journal of Technical Analysis is to present you with a publication of the highest standard, containing the best academic work in our industry, it is also our goal to push a few buttons, so that readers and contributors alike may reflect on the direction of our industry and find food for thought for our further development.

In this issue you will find six eloquently written papers with opinions backed by, and derived from, testing. Some readers may have read a few of these papers before, as several are award winners. But as your editor, let me ask you to revisit all of them, because as a collected body of work they shed light on our industry in a meaningful way. One question we need to ask ourselves is: where do we want technical analysis to be in five, ten, or perhaps twenty years from now? Another question I challenge our entire industry to reflect upon is this: what responsibility does a technical analyst have to minimize risk to principal and minimize capital drawdown? Are these responsibilities entirely in the hands of the trader? As an example, Parker Evans, in An Empirical Study of Rotational Trading Using the %b Oscillator, offers in his own conclusion, “Admittedly, we have presented back test results that fly in the face of the well-worn trader’s axiom, Cut your losses short; let your profits run. Table 2 confirms that the %b BW system offers no protection against ruinous losses at the asset level.” In Ichimoku Kinko Hyo, by Véronique Lashinski, Table I shows total results in which the percentage of winners is less than 40%. In Buff Dormeier’s paper, Price and Volume, Digging Deeper, early versions had a bullish bias that was identified by the judges for the Charles Dow Award. When a longer look-back period within the charts was requested, it was discovered that the equity curve in Chart 7 experienced a sharp drawdown in 1998, though the summarized results would not be significantly impacted.

If any logic being examined experiences a 40 – 50% drawdown at any point, yet still ends with a strong finish that yields a statistically backed positive conclusion, is this something we can view as reality and genuine growth in our Body of Knowledge about our tools and methods? Would a professional analyst still be employed for the full duration of a test interval if a sharp equity curve “blip” occurred? We each have different views about acceptable risk exposure. While there is no easy answer, the question must be considered.

Statistics is essential to proving the validity of our methods, but are our methods being tested in ways that mirror how they are used by the most skilled technicians? If only small integral parts of a skilled technician’s logic tree are tested, does it help or hurt our industry and the method being examined? I do not know how to bring the best technicians and the most accomplished academic statisticians together, but I am confident our industry as a whole would benefit if we could find a solution to this pickle we currently find ourselves in. These papers deserve the highest recognition, but it is my hope they prompt you to give much deeper thought to the goals of our industry and how best to move our craft forward.

The final article is a reprinted excerpt from the book Benner’s Prophecies of Future Ups and Downs in Prices, written by Samuel Benner in 1884. Benner touches upon Fibonacci cycles and considers cycles of prosperity and contraction in several markets. We will begin to include in each Journal issue a reprint from hard-to-find works that mark historical milestones in our industry. Benner’s Prophecies is a most appropriate selection to begin this new addition to the Journal, because the book is recognized as the first financial book written in North America to contain technical forecasts. Considering current global equity market trends, you will find this work on market panics, written over a century ago, a most intriguing read.

Respectfully,

Connie Brown, CMT


The Boundaries of Technical Analysis

by Milton Berg, CFA

About the Author | Milton Berg, CFA

Milton Berg, CFA, is the CEO and Chief Investment Strategist of MB Advisors, LLC. He has worked in the financial services industry since 1978, with an extensive background in various roles on the buy side. Milton founded MB Advisors in 2012 to address a need for high-quality independent research with a macro, technical and historical focus.

Milton began his career as a Commodities Analyst and Trader at Swiss-based Erlanger and Company. In 1980, he was a Fund Manager at First Investors Corp. and managed a natural resource fund as well as an option writing fund. In 1984, he moved to Oppenheimer and managed three mutual funds which were each ranked as the top performer over a five-year period by Lipper. Milton then became a Partner at Steinhardt, one of the earliest hedge funds on Wall Street. More recently, he has worked with well-known titans of the hedge fund world including Michael Steinhardt, George Soros, and Stanley Druckenmiller (Duquesne).

Milton’s work has been featured in the Wall Street Journal, New York Times, Barron’s, and Institutional Investor, in addition to other media outlets. His groundbreaking report The Boundaries of Technical Analysis was published in the summer of 2008 in CMT Association’s Journal of Technical Analysis. His 2015 research report Approach to the Markets outlines his method for analyzing the stock market.

Milton has held a Chartered Financial Analyst designation since 1979. The Institute for Economic Research named Milton as the Mutual Fund Manager of the Year in 1987 given his performance during the crash. That same year, Milton was jointly named with Stanley Druckenmiller as Mutual Fund Manager of the Year by Sylvia Porter’s Personal Finance Magazine.

Market Prognostication

In his treatise on stock market patterns, the late Professor Harry V. Roberts[1] observed that “of all economic time series, the history of stock prices, both individual and aggregate, has probably been most widely and intensively studied,” and “patterns of technical analysis may be little if nothing more than a statistical artifact.”[2] Ibbotson and Sinquefield maintain that historical stock price data cannot be used to predict daily, weekly or monthly percent changes in the market averages. However, they do claim the ability to predict in advance the probability that the market will move between +X% and -Y% over a specific period.[3] Only to this very limited extent – forecasting the probabilities of return – can historical stock price movements be considered indicative of future price movements.

In Chart 1, we present a histogram of the five-day rate of change (ROC) in the S&P 500 since 1928. The five-day ROC of stock prices has ranged from -27% to +24%. This normal distribution[4] is strong evidence that five-day changes in stock prices are effectively random. Out of 21,165 observations of five-day ROCs, there have been 138 declines exceeding -8% (0.65% of the total) and 150 gains greater than +8% (0.71% of the total). Accordingly, Ibbotson and Sinquefield would maintain that over any given five-day period, the probability of the S&P 500 gaining or losing 8% or more is 1.36%. Stated differently, the probability of the S&P 500 returning between -7.99% and +7.99% is 98.64%.
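
As a rough illustration (not the authors’ code), the tail tally described above can be reproduced with pandas; the 8% threshold comes from the text, while the function itself and the Series name `close` are assumptions for the example:

```python
import pandas as pd

def five_day_roc_tails(close: pd.Series, threshold: float = 8.0) -> dict:
    """Tally five-day rate-of-change observations beyond +/- `threshold` percent.

    Assumes `close` is a pandas Series of daily closing prices indexed by date.
    """
    roc = (close.pct_change(periods=5) * 100).dropna()  # five-day ROC in percent
    gains = int((roc >= threshold).sum())
    losses = int((roc <= -threshold).sum())
    return {
        "observations": len(roc),
        "gains_beyond_threshold": gains,
        "losses_beyond_threshold": losses,
        "tail_probability_pct": 100.0 * (gains + losses) / len(roc),
    }
```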

Professor Jeremy Siegel adopts this idea. Siegel states, “The total return on equities dominates all other assets.”[5] Based on probabilities, we can be nearly certain that over the long-term, stocks will outperform bonds, gold, commodities, inflation, real estate, and other tradable investments.

Are these ideas true? Are stock price movements effectively random? Do historical stock market returns indicate probabilities of future returns? Can statistical analysis tell us that the equities market will continue to outperform all other assets? Can stock market data never indicate that over a given period of time the market will increase at a rate greater than its historical gain? Can stock market data never point toward the probability of a decline overwhelming the probability of a rally?

Chart 1

Roll of the Dice

Let us compare the capital markets to a pair of dice, and the shooting of double sixes to an investment in the equity market. Let us assume that beginning in the year 1900 only one pair of dice existed, and all gamblers played with those dice. Let us assume that the dice were weighted and biased towards the shooting of double sixes. Rather than the honest odds of 2.78% for throwing double sixes, let us assume the odds were 5.00%. It is logical to assume that all those who bet on or against the double six would seek compensation commensurate with the perceived (but inaccurately assessed) risk. After a few years, however, some gamblers might begin to notice a statistical anomaly: it would seem as if the double six were favored. Those gamblers would adjust to the perceived new reality, and as more and more gamblers took notice and accepted the fact that the dice were inherently biased, they would adjust their betting odds accordingly.

Academics are, in fact, comparing the capital markets to those loaded dice. By studying historical market data, they have discovered the true nature of those dice. Ibbotson, Sinquefield, and Siegel can now state with certainty that stocks will outperform bonds and will probabilistically return between +X% and -Y% over the next day, week, month or decade.

It is not just members of academia who have discovered the positive bias of the stock market. Investors in general seem to compare the market to those inadvertently loaded dice as well. Historically, investors wrongly assumed that buying stocks was a risky endeavor. As compensation for taking that risk, investors in equities

  • Required a cash yield higher than that of long-term corporate bonds[6]
  • Sought high absolute dividend yields[7]
  • Invested only a small portion of their assets in stock[8]
  • Limited their margin exposure[9]

Not yet realizing that the capital markets (dice) were positively biased towards the equity market (double sixes), investors liquidated en masse when dividend yields declined or economic slowdowns materialized. Experiencing decade after decade of stocks outperforming bonds, investors have come to realize that the market compensates for the risks assumed. They no longer require stock yields to be greater than bond yields.[10] They no longer require a high absolute dividend yield.[11] High long-term exposure to the equity market is common.[12] Investing on margin is an accepted norm.[13] Further confirming the market’s positive bias, the 1987 crash passed with nary an effect, and the 2000-2003 Internet-stock implosion did not destroy well-diversified portfolios. The Dow, small-cap, mid-cap, and emerging markets worldwide continue making new all-time highs. The wealth-creating machine continues running as expected. Investors know that over the long term (measured in decades), stocks create wealth. Over the short term (measured in days, months, and years), stock market direction is unpredictable!

Statistics vs. Markets

We disagree with the view of the academics, and deem the application of conventional statistical analysis to stock market prices misguided. Stock market returns and risks cannot be compared to the probable outcomes of the throw of a pair of dice.[14] Nor can a bell-shaped curve generated by historical stock price movements be compared to the bell-shaped curve generated by a Quincunx board.[15] This is because an economic system is not the same as a physical system. In a physical system, the predicted outcomes of dice rolls and Quincunx ball drops are true by definition. Trials or historical tests are not required to determine future outcomes; the probabilities of the outcomes are inherent in the nature of the object or system.

In economic systems such as the Capital Asset Price Structure of the United States markets, there are no physical objects or material systems to analyze. Historical returns and risks may never be replicable. The structure is in a constant state of unrest. Economies based on capitalism can turn to socialism. Heavily regulated or protected industries can be liberalized. Thriving industries can virtually vanish due to foreign competition. Industries prosperous in a free environment may encounter excessive regulation or nationalization by a socialistically inclined Congress. Tax rates may be raised or lowered. The unit of account itself (the currency) may be recalibrated. The Federal Reserve may mismanage the supply of money and credit, transform mild recessions into deep depressions, or turn normal cyclical recoveries into credit based booms. In short, when measuring the capital markets, particularly the stock market, one is measuring the results of a myriad of factors that may or may not repeat. Unique factors that may affect the markets in the future are not necessarily part of the historic system being measured.

Most importantly, statistical analysis of stock prices does not measure any of the various financial statistics of the companies that make up the market. Nor does statistical analysis measure any of the economic and political factors that contribute to the wealth of the nation. All that is actually being measured are the prices that investors are paying for those economic entities. Prices paid for marketable securities are far removed from a physical or natural system suitable to the rigors of statistical dissection.

We therefore believe that based on statistical analysis one can only affirm that the stock market may or may not outperform bonds in the future, or that stocks may or may not exhibit a long-term rising price trend in the future. We can only know with certainty that stocks may or may not compensate investors for risk assumed, and we can have no idea where the market will trade one day, one week, one month, one year, or one decade from the present.

We plainly disagree with Ibbotson, Sinquefield and Siegel, and do not recognize the ability to predict probabilities of stock market fluctuations. We take note that Nobel Prize winning economists portray the movement of stock prices as a random or drunkard’s walk.[16] Does this understanding of stock price movements mark the futility of technical market analysis?

Paradox of Prediction

In fact, even if the movements of stock market prices were of a random nature, the ultimate price trend might still be known and predictable in advance. This apparent paradox, that direction can be predicted even when individual price movements are random, is based on a unique exception to the drunkard’s walk rule.

The famed zoologist and writer Stephen Jay Gould gives the following example. “A man staggers out of a bar dead drunk. He stands on the sidewalk in front of the bar, with the wall of the bar on one side and the gutter on the other. If he reaches the gutter he falls down into a stupor and the sequence ends. For simplicity’s sake, [and this example fits with the linear direction of stock price movement, either up or down] we will say that the drunk staggers in a single line only, either toward the wall or toward the gutter. He does not move at right angles along the sidewalk parallel to the wall and gutter.

“Where will the drunkard end up if we let him stagger long enough and entirely at random? He will finish in the gutter absolutely every time and for the following reason: Each stagger goes in either direction with 50% probability. The bar wall at one side is a ‘reflecting boundary.’ If the drunkard hits the wall, he just stays there until a subsequent random stagger propels him in the other direction. In other words, only one direction of movement remains open for continuous advance – toward the gutter.

“In a system of linear motion structurally constrained by a wall at one end, random movement, with no preferred directionality whatsoever, will inevitably propel the average position away from a starting point at the wall. The drunkard falls into the gutter every time, but his motion includes no trend whatever toward this form of perdition.”[17]

We posit that rigorous technical analysis can identify areas of “reflecting boundaries” in the capital markets. The direction of stock price movements can therefore be predicted in advance despite the perceived random nature of their daily and weekly moves.
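
Gould’s argument is easy to check by simulation. The sketch below (an illustration added here, not part of the original paper) staggers a walker at random between a reflecting wall and an absorbing gutter; every trial ends in the gutter despite the unbiased steps:

```python
import random

def staggers_to_gutter(width: int = 10, max_steps: int = 1_000_000) -> int:
    """Position 0 is the bar wall (reflecting); position `width` is the
    gutter (absorbing). Returns the number of staggers taken to reach it."""
    position, steps = 0, 0
    while position < width and steps < max_steps:
        position = max(0, position + random.choice([-1, 1]))  # the wall reflects
        steps += 1
    return steps

trials = [staggers_to_gutter() for _ in range(1_000)]
print(f"average staggers to the gutter: {sum(trials) / len(trials):.0f}")
```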

Graham & Dodd Meet Technical Analysis

Value investors admit that stock prices do not always reflect the many financial statistics of the companies they value. The only certainties that stock prices do reveal are the levels at which buyers and sellers have agreed to transact.[18] The discipline of value investing depends on this fact, that stock price fluctuations are not always value driven. Stock price movements must be radically independent of fluctuations in the value of the underlying entity in order for value investing to be effective. If stock prices always reflected the underlying value of a company, how could a company whose intrinsic value was $50 ever trade at $20? How could a company worth $50 ever trade at $100? How could a stock, or for that matter the market, ever be overpriced or undervalued?

A more philosophical complexity is the following: If a stock appraised at $50 can be found to trade at $20, why can it not forever remain at $20? How can we be confident that this stock will return to intrinsic value? Why should a market that evaluates securities incorrectly be assumed to correctly price those very same securities in the future?

Benjamin Graham was asked this very question. In testifying before Congress, Graham stated, “That is one of the mysteries of our business, and it is a mystery to me as well as to everybody else. We know from experience that eventually the market catches up with value.”[19]

Graham, the father of fundamental security analysis, considered the philosophy behind his discipline to be a “mystery.”[20] By our understanding, value investing works because an excessively low or high stock price relative to intrinsic valuation serves as a technical indicator of the proximity of a reflecting boundary. That reflecting boundary exists at a price level, and during a time period, when many diverse fundamental and technical factors converge. Low valuation is one of the factors that can contribute to that reflecting boundary. Low valuation itself is not that boundary, for if it were, then the levels of undervaluation that determine a bottom would remain consistent over time. In fact, while a stock or market may bottom at 40% of intrinsic value at one time, at other times it may do so at 50% or 30% of intrinsic value. There must be other factors that combine to contribute to that reflecting boundary. We do not attempt to discover those factors. We use technical data to discover when, and at what level, these reflecting boundaries exist. In our view, the primary causes of stock price movements are too diverse, complex, and hidden to be analyzable. What we as technicians attempt to do is recognize the symptoms that lead and accompany directional movement of stock market prices.

We posit that “reflecting boundaries” exist in the stock market. We do not know the nature of these reflecting boundaries. They are clearly not a predetermined boundary that can be measured and calculated. Nor are they fixed at a specific price level or calendar date. Their existence can at times be temporary, or very long lasting. There can be a single boundary or a series of boundaries at successively higher or lower prices. For reasons not knowable through direct analysis, these boundaries can cause stock prices to find support against further decline, or conversely they can cause stock prices to find resistance against further rally.

Discovering the Boundaries

Having theorized that stock price movements are generally random but are affected by boundaries of support and resistance, let us now reveal methods of discovering those boundaries. Let us return to Chart 1, the five-day rate of change.

This is a simple indicator, one based solely on price and time. Note that the curve generated by five-day rates of change is a standard curve. This five-day data should proffer no predictive edge, and a statistician would conclude that these five-day rates of change are random. They are random in the sense that they cannot be predicted in advance. But where others perceive randomness, we take notice. Why would buyers be willing to pay 8-24% more for a diversified portfolio of stocks than they were willing to pay five days prior? Why would sellers be willing to accept 8-24% less than they were willing to receive five days prior? We do not care to know the answer. We care that it is a good question. We care that the actions of those buyers and sellers are effectively aberrant.

Our notion is that the only information that can be gleaned from stock prices is the willingness of investors to pay those prices. We therefore study the tails of standard statistical curves and take note when they reflect anomalous behavior on the part of those who determine market prices. The specific times at which this action takes place cannot be predicted in advance, and their occurrence is effectively random. But those uncommon actions, when they do take place, signal the proximity of that “reflecting boundary.” When an apparent reflecting boundary has been hit by a myriad of buyers and sellers, the market inevitably propels away from that boundary.

Chart 1

Chart 2 displays an arrow each time the S&P 500 has rallied 8%[21] or more over a five-day period. See Appendix 1 for all signal dates.

Chart 2

Five-Day ROC +8%

Note that the periods during which those extraordinary events occur are often proximate to significant turning points.[22]

Technical indicators do not reveal the causes of market movement. They simply indicate the proximity of a reflecting boundary. We therefore use technical indicators only in the context of a potential reflecting boundary. When creating models, we utilize data only when they are proximate to a measured high or low, a potentially precise turning point.

Using the five-day rate of change, we eliminate all signals that are not proximate to potential and significant short-term lows. Each signal date that is more than six days after the market’s lowest low over the previous 90 days is therefore ignored. Additionally, we void any thrust-type signal that occurs just one to three days after a market low, so we also eliminate any signal that flashes only one to three days after a 90-day low. This five-day ROC indicator then signals whenever the market has gained 8% or more over five days while having made a new 90-day low within the previous four, five, or six days. See Appendix 2.[23] Table 1 presents all of the final five-day +8% ROC signals.
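
A minimal pandas sketch of this filter, under one reading of the rules above (the Series name `close` and the exact windowing details are assumptions):

```python
import pandas as pd

def roc_thrust_signals(close: pd.Series) -> pd.Series:
    """Boolean signal: five-day ROC of +8% or more occurring four to six
    days after the market's lowest low of the previous 90 sessions.
    Thrusts only one to three days after the low are voided, per the text."""
    roc = close.pct_change(5) * 100
    is_90d_low = close <= close.rolling(90).min()
    # sessions elapsed since the most recent 90-day low was set
    days_since_low = is_90d_low.groupby(is_90d_low.cumsum()).cumcount()
    valid = is_90d_low.cumsum() > 0          # ignore the warm-up period
    return (roc >= 8.0) & days_since_low.between(4, 6) & valid
```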

Table 1

Five-Day ROC 8% or greater 4-6 days after 90-day low

Recognizing the existence of reflecting boundaries and using price and time data alone, we have created an indicator in the S&P 500 Index that signaled within four to six days of the historic lows of:

  • November 13, 1929
  • June 1, 1932
  • February 27, 1933
  • June 26, 1962
  • May 26, 1970
  • October 3, 1974
  • August 12, 1982

And that signaled within four days of the triple bottom that began the latest bull market:

  • July 23, 2002
  • October 9, 2002
  • March 11, 2003

We have displayed the right tails of the five-day ROC curve. We have established that random movements of stock prices in conjunction with boundary analysis can be used to pinpoint proximate turning points. We now turn to the left tails of the same standard curve. Chart 3 displays an arrow each time the S&P 500 has declined 8% or more over a five-day period. See Appendix 3 for all signal dates.

Chart 3

Five-Day ROC -8%

Note that these signals, which use a negative 8% parameter, often occur directly proximate to a significant turning point.

Using these -8% five-day ROC signal dates, we eliminate all signals that are not proximate to potential lows. We therefore include only those signals that take place when the market is trading at a maximum of one day[24] after a six-month low. All signal dates that are two days or more after a six-month low are eliminated.

Having utilized the two main legs of technical analysis, price and time, we now introduce the third leg, volume. Five-day market volume can be represented by a standard curve, yet significant increases in market volume are not randomly distributed. Chart 4 indicates each time the five-day average of daily volume was the highest in 250 days. Out of 20,876 observations since 1929, there have been 425 instances (2.04% of the total) of the five-day average daily volume standing at a 250-day high. See Appendix 4 for all dates on which this occurred.

Chart 4

Five-Day Volume Highest in 250 Days

We wonder why sellers would accept 8-24% less than they were willing to obtain five days prior. More importantly, we note that their urgency to sell (as reflected in the 250-day volume figures) increases dramatically as prices decline. In Table 2 we combine price, time and volume. Table 2 lists all periods during which both the five-day rate of decline was -8% or greater (price and time) and the five-day average of volume was the highest within 250 days (time and volume). Additionally, in seeking indications of a technical reflecting boundary, we consider only those dates on which the price the sellers received for their index of stocks was within one day of the lowest price they could have received during the previous six months (price and time). The results in Table 2 are compelling. By observing aberrations in price, time, and volume, we have created a viable capitulation-defining indicator.

This method can be refined further. We wait until a series of five-day 8% declines ends. Since we cannot know that the final day of a series has occurred until a day after the series ends, we set our signal dates one day after a -8% ROC extreme. To accommodate this adjustment, we allow our buy signal to lag the 250-day volume boundary and the six-month low boundary by a maximum of seven days. (see Table 3)
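
For concreteness, here is one hedged reading of the Table 2 conditions in Python (the 126-session stand-in for “six months,” the Series names, and the exact window alignments are assumptions; the one-day signal lag of Table 3 is omitted):

```python
import pandas as pd

def capitulation_signals(close: pd.Series, volume: pd.Series) -> pd.Series:
    """Boolean signal combining price, time, and volume: a five-day decline
    of 8% or more, five-day average volume at a 250-day high, and price
    within one day of a six-month (126-session) low."""
    roc = close.pct_change(5) * 100
    vol5 = volume.rolling(5).mean()
    vol_extreme = vol5 >= vol5.rolling(250).max()      # highest in 250 days
    is_6mo_low = close <= close.rolling(126).min()
    near_low = is_6mo_low | is_6mo_low.shift(1, fill_value=False)
    return (roc <= -8.0) & vol_extreme & near_low
```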

Recognizing the existence of reflecting boundaries, and using price, volume and time alone, we have created an indicator in the S&P 500 Index that signaled within four days of the historic lows of November 13, 1929; October 19, 1987; and July 23, 2002; and near the final low of June 26, 1962.

TRIN + Five-Day Volume

This concept that markets turn at reflecting boundaries permits the same indicators to call both tops and bottoms. It depends on whether those indicators are signaling at a potential top boundary or at a potential bottom boundary. An excellent example is the S&P 500 TRIN indicator.

We consider a reading on the S&P 500 TRIN at or below .50 as representing extreme urgency to buy. Since 1957 there have been 530 days (4.13% of the total) on which TRIN was at .50 or below. Looking at the five-day volume figures, we find that since 1957 there have been 240 instances (1.87% of the total) in which the five-day average volume was the highest in 375 days. (see Appendix 5) When TRIN trades at or below .50 on a given day or on the previous day, and the five-day average volume is the highest in 375 days on that day or on the previous day, we have an indicator suggestive of a potential market turn.

If the market has traded at a new one-year low within the previous ten days (supportive boundary), we get a buy signal. See Table 4, and note that all seven signals resulted in long-term bull markets. (see charts in Appendix 6)

If, however, the market is trading at a new three-year high (potential top boundary), and during the previous five days TRIN traded at or below .50, and the five-day average volume was the highest in 375 days within one day of the TRIN extreme, we get a sell signal. (see Table 4a) Note that all signals led to bear markets. (see charts in Appendix 7)
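
A sketch of these TRIN rules follows; the trading-day windows (252, 375, 756) and the “nearby extreme” approximations are assumptions, not the author’s exact specification:

```python
import pandas as pd

def trin_volume_signals(close: pd.Series, volume: pd.Series,
                        trin: pd.Series) -> pd.DataFrame:
    """Buy and sell flags per the TRIN + five-day volume rules in the text."""
    def within(flags: pd.Series, n: int) -> pd.Series:
        # True if `flags` fired on any of the last n sessions (incl. today)
        return flags.astype(int).rolling(n, min_periods=1).max().astype(bool)

    vol5 = volume.rolling(5).mean()
    vol_extreme = vol5 >= vol5.rolling(375).max()        # 5-day volume, 375-day high
    trin_extreme = trin <= 0.50                          # extreme urgency to buy

    buy = (within(trin_extreme, 2)                       # TRIN today or yesterday
           & within(vol_extreme, 2)                      # volume today or yesterday
           & within(close <= close.rolling(252).min(), 10))  # 1-year low, last 10 days

    sell = ((close >= close.rolling(756).max())          # new three-year high
            & within(trin_extreme, 5)                    # TRIN extreme, last 5 days
            & within(vol_extreme, 6))                    # volume extreme nearby

    return pd.DataFrame({"buy": buy, "sell": sell})
```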

Technical Analysis Redefined

Combining extremes in TRIN and volume with proximity to potential reflecting boundaries creates an indicator that correctly identified seven major bull markets and four major bear markets. (see charts in Appendices 6 and 7)

We have demonstrated that at significant turning points, the ultimate trend of the market can be predicted. We have introduced a new idea in technical analysis, the idea of “reflecting boundaries.” While this paper has demonstrated longer-term boundaries, the idea can be applied to the shorter term as well. This concept, used in conjunction with existing technical indicators, can greatly assist the analyst in pinpointing market turning points. We hope this paper opens new possibilities for those who work at this mystifying discipline.

Footnotes

  1. Graduate School of Business, University of Chicago 1949-1992
  2. The Journal of Finance, Vol. 14, No. 1 (Mar. 1959). Roberts does admit that “phenomena that can be only described as chance today,” such as the behavior of stock prices and the emission of alpha particles in radioactive decay, “may ultimately be understood in a deeper sense.”
  3. Stocks, Bonds, etc: 1989 edition. Ibbotson & Sinquefield Ch. 10
  4. The true normal distribution is a mathematical abstraction, never perfectly observed in nature
  5. Stocks for the Long Run, J. Siegel
  6. From 1871-1938, dividend yields averaged 1.25 times bond yields. From 1938-1955 they averaged 2 times the bond yield. Security Analysis, Graham and Dodd, 1962 edition, page 420
  7. At the eight market peaks from 1901 to 1929, yields averaged 3.55%. At the 10 market peaks from 1930 to 1956, yields averaged 4.74%. At the 10 market peaks from 1960 to 1984, yields averaged 3.11%. At the five market peaks since 1987, yields averaged 1.97% (Ned Davis Research reports 405 and 400)
  8. NDR charts # S485 and S486.
  9. Investors have increased margined investments as % of GDP from .43% in 1950 to 2.00% currently. NDR charts 20420
  10. Bond yields are currently 2.4 times stock yields
  11. At the five market peaks since 1987, yields averaged 1.97% (Ned Davis Research reports 405 and 400)
  12. NDR charts # S485 and S486
  13. Investors have increased margined investments as % of GDP from .43% in 1950 to 2.00% currently. NDR charts 20420
  14. Paul M. Montgomery Universal Economics Jan 2, 2007 (757-597-9528)
  15. See http://www.jcu.edu/math/isep/Quincunx/Quincunx.html
  16. William Sharpe, et al. Investments, (6th Ed.)
  17. Full House by Stephen Jay Gould, pages 149-151
  18. See The Essays of Warren Buffet. Cunningham, Pg. 65
  19. 84th Congress, 1st session, “Factors Affecting the Buying and Selling of Securities,” March 11, 1955
  20. Technical disciplines are indeed a mystery. We do know from experience though, that these disciplines work
  21. We are not the first to notice the predictive ability of this raw Five-day ROC data
  22. Readers should note that prior to March, 1957, the S&P 500 consisted of only 90 stocks and was therefore less suitable to general market analysis
  23. William J. O’Neil has elaborated on this concept in his market studies
  24. Oversold action may signal within one day of a low. Only thrust action within three days of a low is suspect

Appendices


Price & Volume, Digging Deeper

by Buff Dormeier, CMT

About the Author | Buff Dormeier, CMT

Buff Dormeier serves as the Chief Technical Analyst at Kingsview Wealth Management.  Previously, he was a Managing Director of Investments and a Senior PIM Portfolio Manager at the Dormeier Wealth Management Group of Wells Fargo Advisors.

In 2007, Dormeier’s technical research was awarded the prestigious Charles H. Dow Award. An award-winning author as well, Buff wrote “Investing with Volume Analysis“. Published in partnership with Financial Times Press, Pearson Publishing, and the Wharton School, it is to date the only book to win both Technical Analyst’s Book of the Year (2013) and Trader Planet’s top Book Resource (2012).

Buff’s work has also been featured in a variety of national and international publications and technical journals. Now, with Kingsview Wealth Management’s affiliation, Buff’s expertise and proprietary work on technical and volume analysis shall become much more accessible to journalists and other media alike.

As a portfolio manager, Buff was featured in “Technical Analysis and Behavior Finance in Fund Management”, an international book of interviews with 21 portfolio managers across the world who utilize technical analysis as a portfolio driver. In his new role with Kingsview Wealth Management, Buff’s unique performance-driven strategies will now be available to a wide audience of financial advisors and institutional clientele.

Buff has a Bachelor of Science (B.S.) in Business and a Bachelor of Applied Science (B.A.Sc.) in Urban and Regional Planning from Indiana State University.

When securities change hands in a securities auction market, the volume of shares bought always matches the volume sold on executed orders. When the price rises, the upward movement reflects that demand exceeds supply, or that buyers are in control. Likewise, when the price falls, it implies that supply exceeds demand, or that sellers are in control. Over time, these trends of supply and demand form accumulation and distribution patterns. What if there were a way to look deep inside price and volume trends to determine whether current prices are supported by volume? This is the objective of the Volume Price Confirmation Indicator (VPCI), a methodology that measures the intrinsic relationship between price and volume.

The Volume Price Confirmation Indicator (VPCI) exposes the relationship between the prevailing price trend and volume, as either confirming or contradicting the price trend, thereby giving notice of possible impending price movements. This paper discusses the derivation and components of the VPCI, explains how to use it, reviews comprehensive testing of the indicator, and presents further applications.

In exchange markets, price results from an agreement between buyers and sellers to exchange, despite their different appraisals of the exchanged item’s value. One opinion may have legitimate fundamental grounds for evaluation; the other may be pure nonsense. However, to the market, both are equal. Price represents the convictions, emotions and volition of investors.[1] It is not a constant, but rather is changed and influenced over time by information, opinions and emotions.

Market volume represents the number of shares traded over a given time period. It is a measurement of the participation, enthusiasm, and interest in a given security. Volume can be thought of as the force that drives the market. Force, or volume, is defined as power exerted against support or resistance.[2] In physics, force is a vector quantity that tends to produce acceleration.[3] The same is true of market volume. Volume substantiates, energizes, and empowers price. When volume increases, it confirms price direction; when volume decreases, it contradicts price direction. In theory, increases in volume generally precede significant price movements. This basic tenet of technical analysis, that volume precedes price, has been repeated as a mantra since the days of Charles Dow.[4] Within these two independently derived variables, price and volume, exists an intrinsic relationship. When examined conjointly, price and volume give indications of supply and demand that neither could provide independently.

Deriving the Components

The basic VPCI concept is derived by examining the difference between a volume-weighted moving average (VWMA) and the corresponding simple moving average (SMA). These differences expose information about the inherent relationship between price and volume. Although SMAs demonstrate a stock’s changing price levels, they do not reflect the amount of investor participation. With VWMAs, on the other hand, the price emphasis is adjusted proportionally to each day’s volume relative to the average volume over the range of study. The VWMA is calculated by weighting each time frame’s closing price by that time frame’s volume as a fraction of the total volume during the range:

VWMA = sum of { close(i) × [ volume(i) / total volume over the range ] }, where i indexes each day in the range.

As an example, consider calculating a two-day moving average, using both the SMA and VWMA, for a security trading at $10.00 a share with 100,000 shares changing hands on the first day, and at $12.00 a share with 300,000 shares changing hands on the second day. The SMA calculation is Day One’s price plus Day Two’s price divided by the number of days, or (10+12)/2, which equals 11. The VWMA calculation is Day One’s price ($10) multiplied by Day One’s volume expressed as a fraction of the total volume (100,000/400,000 = 1/4), plus Day Two’s price ($12) multiplied by Day Two’s volume expressed as a fraction of the total volume (300,000/400,000 = 3/4), which equals 11.5 (2.5 from Day One + 9 from Day Two).[5]
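
The two-day example can be reproduced in a few lines of Python (an illustration only):

```python
# The text's example: $10.00 on 100,000 shares, then $12.00 on 300,000 shares.
prices = [10.0, 12.0]
volumes = [100_000, 300_000]

sma = sum(prices) / len(prices)                                    # (10 + 12) / 2 = 11.0
vwma = sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)  # 2.5 + 9.0 = 11.5

print(sma, vwma)  # 11.0 11.5
```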

The VWMA measures investors’ commitments expressed through price, weighted by each day’s corresponding volume (participation) relative to the total volume (participation) over time. Thus, volume-weighted averages weight closing prices in exact proportion to the volume traded during each time period. Keeping in mind how VWMAs work, an investigation of the VPCI may begin.

The VPCI involves three calculations: 

  1. volume-price confirmation/contradiction (VPC+/-),
  2. volume-price ratio (VPR), and
  3. volume multiplier (VM).

The VPC is calculated by subtracting a long-term SMA from the same time frame’s VWMA. In essence, this calculation quantifies the otherwise unseen nexus between price and price proportionally weighted to volume. This difference, when positive, is the VPC+ (volume-price confirmation) and, when negative, the VPC- (volume-price contradiction). The computation captures the intrinsic relationship between price and volume symmetrically distributed over time, and the result is quite revealing. For example, a 50-day SMA might be $48.50, whereas the 50-day VWMA might be $50. The difference of 1.5 represents volume-price confirmation (VWMA – SMA). (see Chart 1) If the calculation were negative, it would represent volume-price contradiction. This calculation alone provides purely unadorned information about the otherwise unseen relationship between price and volume.

The next step is to calculate the volume price ratio (VPR). VPR accentuates the VPC+/- relative to the short-term price-volume relationship. The VPR is calculated by dividing the short-term VWMA by the short-term SMA. For example, assume the short-term timeframe is 10 days, and the 10-day VWMA is $68.75, while the 10-day SMA is $55. The VPR would equal 68.75/55, or 1.25. This factor will be multiplied by the VPC (+/-) calculated in the first step. Volume price ratios greater than 1 increase the weight of the VPC+/-. Volume-price ratios below 1 decrease the weight of the VPC+/-.

The third and final step is to calculate the volume multiplier (VM). The VM’s objective is to overweight the VPCI when volume is increasing and underweight the VPCI when volume is decreasing. This is done by dividing the short-term average volume by the long-term average volume. As an illustration, assume the short-term (10-day) average volume is 1.5 million shares a day, and the long-term (50-day) average volume is 750,000 shares per day. The VM equals 2 (1,500,000/750,000). This calculation is then multiplied by the VPC+/- after it has been multiplied by the VPR. Now we have all the information necessary to calculate the VPCI. The VPC+ confirmation of +1.5 is multiplied by the VPR of 1.25, giving 1.875. Then 1.875 is multiplied by the VM of 2, giving a VPCI of 3.75. Although this number indicates an issue under very strong volume-price confirmation, the information is best used relative to the current and prior price trend and relative to recent VPCI levels. Next, we discuss how to properly use the VPCI.
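
Putting the three steps together, a minimal pandas sketch of the VPCI might look like the following; the 10- and 50-day defaults follow the examples above (the test later in this paper uses 5- and 20-day windows), and the Series names are assumptions:

```python
import pandas as pd

def vpci(close: pd.Series, volume: pd.Series,
         short: int = 10, long: int = 50) -> pd.Series:
    """VPCI = VPC * VPR * VM, per the three steps described in the text."""
    def vwma(n: int) -> pd.Series:
        # volume-weighted moving average over an n-day window
        return (close * volume).rolling(n).sum() / volume.rolling(n).sum()

    vpc = vwma(long) - close.rolling(long).mean()    # confirmation/contradiction
    vpr = vwma(short) / close.rolling(short).mean()  # short-term volume-price ratio
    vm = volume.rolling(short).mean() / volume.rolling(long).mean()  # volume multiplier
    return vpc * vpr * vm

# With the text's numbers: VPC = 1.5, VPR = 1.25, VM = 2, so VPCI = 3.75.
```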

Using the VPCI

We have previously expressed price as the emotion, conviction and volition of investors. Logically, we could then also define a price trend as the emotion, conviction and volition of investors expressed over time. Generally, a buyer’s underlying emotion or motivation is greed. Greed is the desire to obtain a profit. An uptrend could be viewed then as an accumulation of greed over time.

Many times, but not always, an investor who creates supply, a seller, is motivated by the fear of losing value in his investment. Likewise, a downtrend would then be the accumulation of fear over time. We also spoke of volume as the force that sustains price. Force implies energy. A rising volume trend would represent a buildup in energy or fuel. A decrease in volume would then represent the loss of fuel, nonworking energy or entropy.

Greed, or an uptrend, needs fuel to build and sustain itself. Greed’s growth cannot be sustained without energy; an investor will lose interest and move on to better opportunities. An investor who is a seller, by contrast, may be bearish or fearful, but not necessarily. A seller could be motivated by greed, selling in order to participate in a more lucrative investment. Or a seller could be motivated by drives stronger than greed, such as lust or personal responsibilities, selling his investment to buy material pleasures or to satisfy those responsibilities. In this way greed (the bulls) needs fuel (volume) to expand, but fear (the bears) does not necessarily need volume to fall.

Confirming Signals

Several VPCI signals may be employed in conjunction with price trends and price indicators. These include a VPCI greater than zero, which shows whether the relationship between the price trend and volume confirms or contradicts the price trend.[6] More importantly, a rising or falling VPCI provides the trend direction of the VPCI, revealing the direction of confirmation or contradiction. A smoothed volume-weighted average of the VPCI, called “VPCI smoothed,” demonstrates how much the VPCI has changed from previous VPCI levels and is used to indicate momentum. Bollinger Bands[7] may also be applied to the VPCI, exposing VPCI extremes.
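
As one hedged reading of these signals, the sketch below computes a volume-weighted “VPCI smoothed” and flags crossovers; the eight-period smoothing span is an assumption, and `vpci` refers to the sketch given earlier:

```python
import pandas as pd

def vpci_signals(close: pd.Series, volume: pd.Series,
                 short: int = 10, long: int = 50, span: int = 8) -> pd.DataFrame:
    """VPCI, its volume-weighted smoothing, and crossover flags."""
    v = vpci(close, volume, short, long)   # from the earlier sketch
    smoothed = (v * volume).rolling(span).sum() / volume.rolling(span).sum()
    cross_up = (v > smoothed) & (v.shift(1) <= smoothed.shift(1))
    cross_down = (v < smoothed) & (v.shift(1) >= smoothed.shift(1))
    return pd.DataFrame({"vpci": v, "vpci_smoothed": smoothed,
                         "cross_up": cross_up, "cross_down": cross_down})
```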

Fundamentally, the VPCI reveals the proportional imbalances between price trends and volume-adjusted price trends. An uptrend with increasing volume is a market characterized by greed supported by the fuel needed to grow. An uptrend without volume is complacent, revealing greed deprived of the fuel needed to sustain itself. Without the influx of other investors (volume), existing investors will eventually lose interest and the uptrend should eventually break down.

A falling price trend reveals a market driven by fear. A falling price trend without volume reveals apathy, fear without increasing energy. Unlike greed, fear is self-sustaining and may endure for long periods without increasing fuel or energy. Adding energy to fear can be likened to adding fuel to a fire, and is generally bearish until the VPCI reverses. In such cases, weak-minded investors, overcome by fear, become irrationally fearful until the selling climax reaches a state of maximum homogeneity. At this point, ownership held by weak investors has been purged, producing a type of heat-death capitulation. These occurrences may be visualized as the VPCI falling below the lower standard-deviation[8] Bollinger Band of the VPCI, then rising back above the lower band and forming a “V” bottom.

It is important to note when using the VPCI that volume leads or precedes price action. Unlike most indicators, the VPCI will often give indications before price trends are clear. Thus, when a VPCI signal is given in an unclear price trend, it is best to wait until one is evident. At Point 1 in Chart 3, TM (Toyota Motor) is breaking out of a downtrend, and the VPCI confirms this breakout immediately as it rises, crossing over the VPCI smoothed and then the zero line. This is an example of the VPCI’s bullish confirmation of a price trend. Later, the VPCI begins to fall during the uptrend, suggesting complacency. By Point 2, the VPCI crosses under the VPCI smoothed, warning of a possible pause within the new uptrend. This is a classic example of a VPCI bearish contradiction. Before Point 3, the VPCI forms an interesting “V” bottom pattern. This is a bullish sign, often indicating the selloff has washed out many of the sellers. Later, at Point 3, the VPCI confirms the earlier bullish “V” pattern with a bullish crossover, leading to a strong bull rally.

Comparing the VPCI to other Price Volume Indicators

There are many price-volume indicators to which the VPCI could be compared. However, the most acclaimed is Joe Granville’s original on-balance volume (OBV) indicator.[9] Recognizing volume as the force behind price, Granville created OBV by adding volume on up days (measured by an up close) and subtracting volume on down days. OBV is price-directed volume, the accumulation of positive and negative volume flows based upon price direction. Granville’s original objective with on-balance volume was to uncover hidden coils in an otherwise non-eventful, non-trending market.[10] With his OBV indicator, Joe Granville became a renowned market strategist. In so doing, he popularized OBV and the wisdom of using volume in securities analysis. OBV is now a standard application in charting software, and there are many OBV practitioners. However, few are able to interpret the indicator’s indications as competently as Granville.

The VPCI differs from OBV in that it calculates the proportional imbalances between price trends and volume-weighted price trends. This exposes the influence volume has upon a price trend. Although both contain volume-derived data, they convey different information. In composition, the VPCI is not an accumulation of history like OBV but rather a snapshot of the influence of volume upon a price trend over a specified period of time. This enables the VPCI, like an oscillator, to give faster signals than accumulation indicators. In contrast to OBV, the VPCI’s objective is not to uncover hidden coils in trendless markets, but to evaluate the health of existing trends.

Comparing the VPCI to OBV

To illustrate the effectiveness and proper use of the VPCI, a test was conducted comparing the VPCI to OBV. The most general VPCI buy signal occurs when the VPCI crosses above the VPCI smoothed in an up-trending market, indicating the VPCI is rising relative to previous VPCI levels. The traditional OBV does not have a lagging trigger like the VPCI smoothed, so I amended the OBV by adding an eight-period simple moving average of OBV. The net effect gives OBV a trigger corresponding to the VPCI smoothed: OBV crossovers of OBV smoothed indicate OBV rising relative to previous OBV levels. Remember, the VPCI is designed to be used in a trending market, with a trending indicator. Thus we need two additional tools to complete this test. First, we need an indicator to verify whether we are in a trending market. A seven-period ADX (Average Directional Index, by Welles Wilder) fulfills this criterion by indicating an intense trend when ADX is at or above 30.[11] Next, we need a trend indicator to show the trend’s direction. Gerald Appel’s MACD (Moving Average Convergence Divergence) with the traditional (12, 26, 9) settings was used to provide buy entry signals for this test.[12]

Finally, we need a test subject that illustrates how these indicators work across a broad market. I can think of no better or more popular vehicle for this experiment than the SPDR S&P 500 exchange-traded fund. The test was conducted from inception (February 1993) until the end of 2006. Standard specifications were used on both indicators (OBV: 20 day; VPCI: 5/20 {5-day short-term trend and 5×4 = 20-day long-term trend}). Results were not optimized in any way. (Please note that the examples provided are for informational purposes only. This is in no way a solicitation or offer regarding the aforementioned security.) In this system, long positions are taken only when the above conditions are met and are accompanied by OBV crossovers in the first test, or by VPCI crossovers in the second test. Long positions are exited with crossunders of OBV smoothed in the first test, or with VPCI crossunders in the second. Although this test was constructed simply and traditionally, for both observational and credibility purposes, the results are quite stunning.
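
The mechanics of the entry condition can be outlined as follows (a sketch, not the exact test code); `adx7` is assumed to be a precomputed seven-period ADX Series, and `vpci_signals` is the earlier sketch:

```python
import pandas as pd

def macd_trend_up(close: pd.Series) -> pd.Series:
    """MACD(12, 26, 9) bullish state: the MACD line above its signal line."""
    macd = (close.ewm(span=12, adjust=False).mean()
            - close.ewm(span=26, adjust=False).mean())
    signal = macd.ewm(span=9, adjust=False).mean()
    return macd > signal

# Entry and exit per the second (VPCI) test, under the stated assumptions:
# sig = vpci_signals(close, volume, short=5, long=20)
# long_entry = (adx7 >= 30) & macd_trend_up(close) & sig["cross_up"]
# long_exit = sig["cross_down"]
```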

Excluding dividends and interest, OBV’s annualized rate of return in the above system was -1.57%, whereas the VPCI’s annualized return was 8.11%, an outperformance of over 9.5% annualized. The VPCI also improved reliability, giving profitable signals over 65 percent of the time, compared to only 42.86 percent for OBV. Another consideration in evaluating performance is risk. The VPCI had less than half the risk as measured by volatility: a standard deviation of 7.42, compared to 17.4 for OBV. It is not surprising, then, that the VPCI had much better risk-adjusted rates of return. The VPCI’s Sharpe Ratio from inception was .70, with a profit factor of 2.47, compared to OBV’s -0.09 Sharpe Ratio and profit factor of less than 1. Admittedly, this testing environment is an uneven match. The VPCI uses information from volume-weighted prices to gauge the health of existing trends, whereas OBV accumulates volume flows as directed by price changes to uncover hidden coils. Thus the conditions set up in this system, a trending market with apparent price direction, are those in which the VPCI is designed to succeed. Although OBV was not necessarily set up for failure either, this study does illustrate how less savvy practitioners often fail to use an indicator’s information correctly or fail to coordinate indicators properly.

What if an investor had just used the MACD buy and sell signals within this same system, without utilizing the VPCI information? In this example, the investor would have lost out on nearly 12% annualized return, the difference between the VPCI’s positive 8.11% and the MACD’s negative -3.88% rate of return, while significantly increasing risk. What if this investor had just employed a buy-and-hold approach? Although this investor would have realized a slightly higher return, he/she would have been exposed to much greater risk. The VPCI strategy returned nearly 90% of the buy-and-hold return while taking about 60% less risk as measured by standard deviation. Looking at risk-adjusted returns another way, the five-year Sharpe Ratio for the SPDR 500 was only .1, compared to .74 for the VPCI system. Additionally, the VPCI investor would have been invested only 35% of the time, leaving the investor free to pursue other investments. During the 65% of the time the investor was not invested, he/she would have needed only a 1.84% money-market yield to exceed the buy-and-hold strategy. Moreover, this investor would have experienced much smoother performance, without such precipitous capital drawdowns. The worst annualized VPCI return was a measly -2.71%, compared to the underlying investment’s worst year of -22.81%, more than a 20% difference in the rate of return! If the investor had held a money-market instrument while not invested in the SPDR S&P 500, this VPCI strategy would not have experienced a single down year.

Other Applications

Further testing not covered in this research report suggests the VPCI may be used broadly across most markets exhibiting clear and reliable price and volume data, such as individual equities, exchange-traded funds, and broad indices. The raw VPCI calculation may also be used as a multiplier or divider in conjunction with other indicators, such as moving averages, momentum indicators, or raw price and volume data. For example, if an investor has a trailing stop-loss order set at the five-week moving average of the lows, one could divide the stop price by the VPCI calculation. This would lower the stop price when price and volume are in confirmation, increasing the probability of keeping an issue under accumulation. When price and volume are in contradiction, however, dividing the stop loss by the VPCI would raise the stop price, preserving more capital. Similarly, using the VPCI as a multiplier with other price, volume, and momentum indicators may not only improve reliability but also increase responsiveness.
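
As a cautious illustration of the stop-loss idea, the helper below scales the stop instead of dividing by the raw VPCI value, since a raw division misbehaves when the VPCI is near zero or negative; the `scale` normalization is an assumption, not the author’s formula:

```python
def vpci_adjusted_stop(base_stop: float, vpci_value: float,
                       scale: float = 100.0) -> float:
    """Lower the stop under volume-price confirmation (positive VPCI),
    raise it under contradiction (negative VPCI)."""
    return base_stop / (1.0 + vpci_value / scale)

# Example: a $50 stop with VPCI = +3.75 drops to ~$48.19;
# with VPCI = -3.75 it rises to ~$51.95.
```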

Conclusion

The VPCI reconciles volume and price as determined by each of their proportional weights. This information may be used to deduce the likelihood of a current price trend continuing or reversing. I believe this study clearly demonstrates that adding the VPCI indicator to a trend-following system resulted in consistently improved performance across all major areas measured by the study. It is my opinion that, in the hands of a proficient investor, the Volume Price Confirmation Indicator is a capable tool providing information that may be useful in accelerating profits, reducing risk, and empowering the investor towards sound investment decisions.

End Notes

  1. Arms, Richard W. (1996) Trading Without Fear New York, NY (John Wiley & Sons)
  2. Ammer, C. (1997). The American Heritage Dictionary of Idioms. Boston: Houghton Mifflin Company
  3. The American Heritage Stedman’s Medical Dictionary. (2002). Boston: Houghton Mifflin Company
  4. Edwards, R.D., & Magee, J. (1992). Technical Analysis of Stock Trends. Boston: John Magee Inc
  5. Buff Dormeier, “Buff Up Your Moving Averages” Technical Analysis of Stocks & Commodities Volume 19-2, February (2001)
  6. Christopher Narcouzi, “Chaikin’s Money Flow” Technical Analysis of Stocks & Commodities Volume 18-8, August (2000)
  7. Bollinger, John (2002), Bollinger on Bollinger Bands (McGraw-Hill, New York, NY)
  8. A measure of the dispersion of a set of data from its mean. The more spread apart the data is, the higher the deviation
  9. Granville, Joseph E (1960). A Strategy of Daily Stock Market Timing for Maximum Profit
  10. Carl Ehrlich, “Using Oscillators with On-Balance Volume,” Technical Analysis of Stocks and Commodities, Volume 18, September (2000)
  11. Wilder, J. Welles (1978). New Concepts In Technical Trading Systems, Trend Research
  12. Murphy, John J. (1999). Technical Analysis of The Financial Markets, New York Institute of Finance

Technical analysis is only one form of analysis. Investors should also consider the merits of fundamental and quantitative analysis when making investment decisions. Technical analysis is based on the study of historical price movements and past trend patterns. There is no assurance that these movements or trends can or will be duplicated in the future. The solutions discussed may not be suitable for your personal situation, even if it is similar to the example presented. Investors should make their own decisions based on their specific investment objectives and financial circumstances. Wachovia Securities did not assist in the preparation of this report, and its accuracy and completeness are not guaranteed. The opinions expressed in this report are those of the author(s) and are not necessarily those of Wachovia Securities or its affiliates. The material has been prepared or is distributed solely for information purposes and is not a solicitation or an offer to buy any security or instrument or to participate in any trading strategy. Wachovia Securities, LLC, member New York Stock Exchange and SIPC, is a separate non-bank affiliate of Wachovia Corporation.

BACK TO TOP

Inferring Trading Strategies From Probability Distribution Functions

by John Ehlers

About the Author | John Ehlers

John Ehlers is Chief Scientist and President of MESA Software, Inc. He is a technical analyst and Electrical Engineer, with a BSEE and MSEE from the University of Missouri, who completed his doctoral work at The George Washington University, specializing in Fields & Waves and Information Theory. John retired as a Senior Engineering Fellow from Raytheon, and has been a private trader since 1976. With his engineering training, he quickly gravitated towards technical analysis of the market. He originally questioned what was magic about a 14-day RSI, or any other period. Eventually, he concluded there was no unique answer and that one should adapt to current market conditions by using the measured cycle.

John is a pioneer in introducing the MESA cycles-measuring algorithm and the use of digital signal processing in technical analysis. He discovered Maximum Entropy Spectrum Analysis (MESA) while attending an Information Theory seminar in 1978. He quickly reduced the theory to a computer program useful for trading; it was written originally for S-100 computers, and he sold the source code to a few brave traders who pioneered the use of PCs for technical trading. He converted the program for the Apple II computer to take advantage of its graphics capability and data availability via modems. The program has evolved with the increased capacity of modern computers.

John has written extensively about quantitative algorithmic trading using advanced DSP (Digital Signal Processing) and has spoken internationally on the subject. His books include MESA and Trading Market Cycles, Rocket Science for Traders, and Cybernetic Analysis for Stocks and Futures.  His approach is unique in its holding that any technique must first work on theoretical waveforms before testing against real-world data is attempted.

Background

The primary purpose of technical analysis is to observe market events and tally their consequences to formulate predictions. In this sense market technicians are dealing with statistical probabilities. In particular, technicians often use a type of indicator known as an oscillator to forecast short-term price movements.

An oscillator can be viewed as a high pass filter in that it removes lower frequency trends while allowing the higher frequency components, i.e., short-term price swings, to remain. On the other hand, moving averages act as low pass filters, removing short-term price movements while permitting longer-term trend components to be retained. Thus moving averages function as trend detectors, whereas oscillators act in the opposite manner, “de-trending” data in order to enhance short-term price movements. Oscillators and moving averages are filters that convert price inputs into output waveforms to magnify or emphasize certain aspects of the input data. The process of filtering necessarily removes information from the input data, and its application is not without consequences.

A significant issue with oscillators (as well as moving averages) for short term trading is that they introduce lag. While academically interesting, the consequences of lag are costly to the trader. Lag stems from the fact that oscillators by design are reactive rather than anticipatory. As a result, traders must wait for confirmation; a process that introduces additional lag into the ability to take action. It is now widely accepted that classical oscillators can be very accurate in hindsight but are typically inadequate for forecasting future short-term market direction, in large part due to lag.

Probability Distribution Functions

The basic shortcoming of classical oscillators is that they are reactive rather than anticipatory. As a result, the undesirable lag component in oscillators significantly degrades their usefulness as a tool for profitable short-term trading. What is needed is an effective mechanism for anticipating turning points.

The Probability Distribution Function (PDF) can be borrowed from the field of statistics and used to examine detrended market prices for the purpose of inferring trading strategies. The PDF offers an alternative approach to the classical oscillator; one that is non-causal in anticipating short-term turning points.

PDFs place events into “bins,” with each bin containing the number of occurrences on the y-axis and the range of events on the x-axis. For example, consider the square wave shown in Figure 1A. Although unrealistic in the real world, if one were to envision the square wave as “quantum” prices that can only have values of -1 or +1, the resultant PDF consists simply of two vertical “spikes” at -1 and +1, as shown in Figure 1B. Such a waveform could not be traded using conventional oscillators because any price movement would be over before the oscillator could yield a signal. However, as the PDFs below will show, the theoretical square wave is not far removed from real-world short-term cycles.

As a practical example, a theoretical sine wave can be used to more accurately model real-world detrended prices. An idealized sine wave is shown in Figure 1C and its corresponding PDF in Figure 1D. The PDF of the square wave and that of the sine wave are remarkably similar. In each case there is a high probability of the waveform being near its extremes, as can be seen in the large spikes in Figure 1D. These spikes correspond to short-term turning points in the detrended prices. The probability is high near the turning points because there is very little price movement in these phases of the cycle, with prices ranging only from about 0.8 to 1.0 and -0.8 to -1.0 in Figure 1C.

The high probability of short-term prices being near their extreme excursions is a principal difficulty in short-term cycle and swing trading. The move has mostly occurred before the oscillator can identify the turning point. The indicator “works,” but only in hindsight, limiting its usefulness for predicting future price movements.

A possible solution to this lag dilemma is to develop techniques to anticipate turning points. Although exceedingly difficult to accomplish with classical oscillators, the PDF, if properly shaped, affords us the opportunity to anticipate turning points using two alternative methods:

  1. Model the market data as a sine wave and shift the modeled waveform into the future by generating a leading cosine wave from it.
  2. Apply a transform to the detrended waveform to isolate the peak excursions, i.e., rare occurrences – and anticipate a short-term price reversion from the peak.

Each of these approaches will be examined below. However it is instructive to begin with an analogy for visualizing a theoretical sine wave PDF and then examine PDFs of actual market data. As will be shown, market data PDFs are neither Gaussian as commonly assumed nor random as asserted by the Efficient Market Hypothesis.

Measuring Probability Distribution Functions

An easy way to visualize how a PDF is measured, as in Figure 2B, is to envision the waveform as beads strung on parallel horizontal wires on vertical frames, as shown in Figure 2A. Rotate the wire-frame clockwise 90 degrees (1/4 turn) so the horizontal wires are now vertical, allowing the beads to fall to the bottom. The beads stack up in Figure 2B in direct proportion to their density at each horizontal wire in the waveform, with the largest number of occurrences at the extreme turning points of +1 and -1.

Measuring PDFs of detrended prices using a computer program is conceptually identical to stacking the beads in the wireframe structure. The amplitude of the detrended price waveform is quantized into “bins” (i.e. the vertical wires) and then the occurrences in each bin are summed to generate the measured PDF. The prices are normalized to fall between the highest point and the lowest point within the selected channel period.
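In code, the bead-stacking procedure reduces to a histogram. A minimal Python sketch, assuming an illustrative channel period and bin count:

import numpy as np

def measure_pdf(close: np.ndarray, channel: int = 20, nbins: int = 50) -> np.ndarray:
    # Count how often the normalized detrended price lands in each bin,
    # exactly as the beads stack against the rotated wire frame.
    counts = np.zeros(nbins)
    for i in range(channel - 1, len(close)):
        window = close[i - channel + 1:i + 1]
        hi, lo = window.max(), window.min()
        if hi == lo:
            continue
        x = (close[i] - lo) / (hi - lo)             # normalize to [0, 1] within the channel
        counts[min(int(x * nbins), nbins - 1)] += 1
    return counts / counts.sum()                    # empirical PDF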

Figure 3 shows actual price PDFs measured over thirty years using the continuous contract for US Treasury Bond Futures. Note that the distributions are similar to that of a sine wave in each case. The non-uniform shapes suggest that developing short term trading systems based on sine wave modeling could be successful.

Normalizing prices to their swings within a channel period is not the only way to detrend prices. An alternative method is to sum the closing-price gains on up days independently from the losses on down days; the differential of these sums can then be normalized to their sum. The result is a normalized channel and is the generic form of the classic RSI indicator. The measured PDF using this method of detrending on the same 30 years of US Treasury Bond data is shown in Figure 4. In this case, the PDF is more like the familiar bell-shaped curve of a Gaussian PDF. One could conclude from this that a short-term trading system based on cycles would be less than successful, as the high probability points are not near the maximum excursion turning points.
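This generic RSI detrender is compact enough to sketch directly; the 14-bar length is an assumption, and the output is kept on a -1 to +1 scale (the classic RSI is an affine rescaling of this quantity to 0-100):

import numpy as np

def generic_rsi(close: np.ndarray, length: int = 14) -> np.ndarray:
    # Differential of the up-day and down-day sums, normalized to their sum.
    out = np.full(len(close), np.nan)
    change = np.diff(close, prepend=close[0])
    for i in range(length, len(close)):
        d = change[i - length + 1:i + 1]
        ups, downs = d[d > 0].sum(), -d[d < 0].sum()
        if ups + downs > 0:
            out[i] = (ups - downs) / (ups + downs)
    return out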

Because the turning points have relatively low probability, an alternate strategy can be inferred. The idea is to buy when the detrended price crosses below a threshold near the lower bound, in anticipation of prices reversing into higher probability territory. Similarly, the strategy would sell when the detrended price crosses above a threshold near the upper bound. Note that this is not the same as using classical 30/70 or 20/80 thresholds for RSI signals, because the signal does not wait for a confirming cross back across the thresholds. Here we are anticipating a reversal to a higher probability occurrence; we expect a reversion to normalcy. Using this anticipatory method with a classic indicator such as the Stochastic oscillator can be costly because the Stochastic can easily remain at the extreme excursion point (or “rail” in engineering parlance) for long periods of time.

As previously mentioned, another way to detrend the price data is to use a high pass filter to remove its lower frequency trend components. Once detrended, the result must be normalized to a fixed excursion so that it can be properly binned before the PDF is measured. The resulting PDF is shown in Figure 5. In this case, the PDF shape is nearly uniform across all bins. A uniform PDF means the amplitude in one bin is just as likely to occur as in another. In this case neither a cycles-based strategy nor a strategy based on low probability events could be expected to be successful. The PDF must somehow be transformed to enhance low probability events in order to be useful in trading.

Transforming the PDF

Not all detrending techniques yield PDFs that suggest a successful trading technique. In much the same way that an oscillator can be applied to price data to enhance short-term turning points, a transformation function can be applied to the detrended prices to enhance identification of “black swan” (i.e., highly unlikely) events, and to develop successful trading strategies based on predicting a reversion back to normalcy following a black swan event.

For example, a PDF can be enhanced through the use of the Fisher Transform. This mathematical function alters input waveforms varying between the limits of -1 and +1, transforming almost any PDF into a waveform that has nearly Gaussian properties. The Fisher Transform equation, where x is the input and y is the output, is:

y = 0.5 * ln((1 + x) / (1 - x))

Unlike an oscillator, the Fisher Transform is a nonlinear function with no lag. The transform expands amplitudes of input waveforms near the -1 and +1 excursions so they can be identified as low probability events. As shown in Figure 6, the transform is nearly linear when not at the extremes. In simple terms, the Fisher Transform doesn’t do anything except at the low-probability extremes. Thus it can be surmised that if low probability events can be identified, trading strategies can be employed to anticipate a reversion to normal probability after their occurrence.
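The equation translates directly into code. The clamp below anticipates the strategy description later in the paper, which limits the input to just inside +/-1 so the transform cannot blow up:

import numpy as np

def fisher_transform(x: np.ndarray) -> np.ndarray:
    # y = 0.5 * ln((1 + x) / (1 - x)); defined only for x strictly inside (-1, 1).
    x = np.clip(x, -0.999, 0.999)
    return 0.5 * np.log((1 + x) / (1 - x))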

The effect of the Fisher Transform is demonstrated by applying it to the HighPass Filter approach that produced the PDF in Figure 5. The output is rescaled for proper binning to generate the new measured PDF. The new measured PDF is displayed in Figure 7, with the original PDF shown in the inset for reference. Here we have a waveform that suggests a trading strategy using the low probability events. When the transformed prices exceed an upper threshold the expectation is that staying beyond that threshold has a low probability. Therefore, exceeding the upper threshold presents a high probability selling opportunity. Conversely, when the transformed prices fall below a lower threshold the expectation is that staying below that threshold is a low probability and therefore falling below the lower threshold presents a buying opportunity.

Derived Trading Strategies

It is clear that no single short-term trading strategy is suitable for all cases because the PDFs can vary widely depending on the detrending approach. Since the PDF of data detrended by normalizing to peak values has the appearance of a theoretical sine wave, the logical trading strategy would be to assume the waveform is, in fact, a sine wave and then identify the sine wave turning points before they occur. On the other hand, data that is detrended using a generic RSI approach, or using a high pass filter with a Fisher Transform, should use a trading strategy based on a more statistical approach. Thus, for the RSI and Fisher Transform approaches, the logical strategy consists of buying when the detrended prices cross below a lower threshold and selling when the detrended prices cross above an upper threshold. Although somewhat counterintuitive, this second strategy is based on the idea that prices outside the threshold excursions are low probability events and the most likely consequence is that the prices will revert to the mean.

Both short term trading strategies share a common problem. The problem is that the detrending removes the trend component, and the trend can continue rather than having the prices revert to the mean. In this case, a short term reversal is exactly the wrong thing to do. Therefore an additional trading rule is required. The rule added to the strategies is to recognize when the prices have moved opposite to the short term position by a percentage of the entry price. If that occurs, the position is simply reversed and the new trade is allowed to go in the direction of the trend.

The “Channel” Cycle Strategy finds the highest close and the lowest close over the channel length by running a simple search algorithm over a fixed lookback period. The detrended price is then computed as the difference between the current close and the lowest close, normalized to the channel width, where the channel width is the difference between the highest close and the lowest close over the channel length. The detrended price is then BandPass filtered[1] to obtain a near sine wave whose period is the channel length. From the calculus it is known that d(Sin(ωt))/dt = ωCos(ωt). Since a simple one-bar difference is a rate of change, it is roughly equivalent to a derivative. Thus, an amplitude-corrected leading function is computed as the one-bar rate of change divided by the known angular frequency; in this case, the angular frequency is 2π divided by the channel length. Having the sine wave and the leading cosine wave, the major trading signals are the crossings of these two waveforms. The strategy also includes a reversal if the trade has an adverse excursion in excess of a selected percentage of the entry price.

The Generic “RSI” Strategy sums the up-close differences independently from the down-close differences over the selected RSI length. The RSI is computed as the difference of these two sums, normalized to their sum. A small amount of smoothing is introduced by a three-tap FIR filter. The main trading rules are to sell short if the Smoothed Signal crosses above the upper threshold and to buy if the Smoothed Signal crosses below the lower threshold. As before, the strategy also includes a reversal if the trade has an adverse excursion in excess of a selected percentage of the entry price.
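The leading-cosine construction at the heart of the Channel strategy can be sketched as follows. The BandPass filter referenced in the endnote is omitted for brevity (an assumption), so the “sine” below is simply the detrended channel position; the sketch illustrates only the phase-lead arithmetic, not the author's exact filter chain:

import numpy as np

def channel_lead(close: np.ndarray, channel: int = 20):
    # Detrend to the channel position, centered on zero.
    n = len(close)
    sine = np.full(n, np.nan)
    for i in range(channel - 1, n):
        w = close[i - channel + 1:i + 1]
        width = w.max() - w.min()
        if width > 0:
            sine[i] = 2 * (close[i] - w.min()) / width - 1.0
    # d(sin(wt))/dt = w*cos(wt): a one-bar difference divided by the known
    # angular frequency (2*pi / channel length) yields a leading cosine.
    omega = 2 * np.pi / channel
    lead = np.full(n, np.nan)
    lead[1:] = (sine[1:] - sine[:-1]) / omega
    return sine, lead   # trading signals: the crossings of these two series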

The High Pass Filter plus Fisher Transform (“Fisher”) strategy filters the closing prices in a high pass filter.[2] The filtered signal is then normalized to fall between -1 and +1 because this range is required for the Fisher Transform to be effective. The normalized amplitude is smoothed in a three-tap FIR filter. This smoothed signal is limited to be greater than -.999 and less than +.999 to avoid having the Fisher Transform blow up if its input is exactly plus or minus one. Finally, the Fisher Transform is computed. The main trading rules are to sell short if the Fisher Transform crosses above the upper threshold and to buy if the Fisher Transform crosses below the lower threshold. As before, the strategy also includes a reversal if the trade has an adverse excursion in excess of a selected percentage of the entry price.
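Assembling the Fisher strategy's steps in order gives a pipeline like the sketch below. The moving-average-based high pass filter and the FIR weights are stand-in assumptions; the endnote points to the author's own filter design:

import numpy as np

def fisher_signal(close: np.ndarray, hp_len: int = 20, norm_len: int = 50) -> np.ndarray:
    n = len(close)
    # 1. Crude high pass filter: close minus its moving average (assumption).
    hp = np.full(n, np.nan)
    for i in range(hp_len - 1, n):
        hp[i] = close[i] - close[i - hp_len + 1:i + 1].mean()
    # 2. Normalize to fall between -1 and +1 over a rolling window.
    norm = np.full(n, np.nan)
    for i in range(norm_len - 1, n):
        peak = np.nanmax(np.abs(hp[i - norm_len + 1:i + 1]))
        if peak > 0:
            norm[i] = hp[i] / peak
    # 3. Three-tap FIR smoothing (weights assumed).
    smooth = np.copy(norm)
    smooth[2:] = 0.25 * norm[2:] + 0.5 * norm[1:-1] + 0.25 * norm[:-2]
    # 4. Limit to (-0.999, +0.999) so the transform cannot blow up.
    smooth = np.clip(smooth, -0.999, 0.999)
    # 5. Fisher Transform; sell above the upper threshold, buy below the lower.
    return 0.5 * np.log((1 + smooth) / (1 - smooth))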

The three trading strategies were applied to the continuous contract of US Treasury Bond Futures using five years of data ending 12/7/07. The performance of the three systems is summarized in Table 1. All three systems show respectable performance, with the RSI strategy and Fisher strategy having similar performance with respect to percentage of profitable trades and profit factor (gross winnings divided by gross losses). All results are based on trading a single contract with no allowance for slippage and commission. It is emphasized that all settings were held constant over the entire five-year period. Since the trading strategies have only a small number of optimizable parameters, optimizing over a shorter period is possible without compromising the trade-to-parameter ratio required to avoid curve fitting. Thus, performance can be enhanced by optimizing over a shorter time span.

Annualized performance of the trading strategies was assessed by applying the real trades over the five year period to a Monte Carlo analysis for 260 days, an approximate trading year. In each case the Monte Carlo analysis used 10,000 iterations, simulating nearly 40 years of trading. Software to do this analysis was MCSPro[3] by Inside Edge Systems. Due to the central limit theorem, the probability distribution of annual profit has a Normal Distribution and the Drawdown has a Rayleigh Distribution. While the Monte Carlo analysis reveals the most likely annual profits and drawdowns, it can also assess the probability of breakeven or better. Furthermore, one can make a comparative reward/risk ratio by dividing the most likely annual profit by the most likely annual drawdown. One can also evaluate the amount of tolerable risk and required capitalization in small accounts from the size of the two or three sigma points in the drawdown.
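The resampling itself is straightforward to sketch: draw 260 daily outcomes with replacement from the realized P&L series and repeat 10,000 times. The following minimal rendering is not the MCSPro implementation, and it uses medians as a simple proxy for the most likely (modal) values:

import numpy as np

def monte_carlo(daily_pnl: np.ndarray, days: int = 260, iters: int = 10_000, seed: int = 0):
    # Bootstrap annual profit, maximum drawdown, and P(breakeven or better).
    rng = np.random.default_rng(seed)
    profits = np.empty(iters)
    drawdowns = np.empty(iters)
    for k in range(iters):
        path = np.cumsum(rng.choice(daily_pnl, size=days, replace=True))
        profits[k] = path[-1]
        drawdowns[k] = np.max(np.maximum.accumulate(path) - path)
    return np.median(profits), np.median(drawdowns), np.mean(profits >= 0)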

The Monte Carlo results for the Channel strategy are shown in Figure 8. The most likely annual profit is $11,650 and the most likely maximum drawdown is $7,647 for a reward to risk ratio of 1.52. The Channel strategy has an 88.3% chance of break even or better on an annualized basis.

The Monte Carlo results for the RSI strategy are shown in Figure 9. The most likely annual profit is $17,085 and the most likely maximum drawdown is $6,219. Since the profit is higher and the drawdown is lower than for the Channel strategy, the reward to risk ratio is much larger at 2.75. The RSI strategy also has a better 96.6% chance of break even or better on an annualized basis.

The Monte Carlo results for the Fisher strategy are shown in Figure 10. The most likely annual profit is $16,590 and the most likely maximum drawdown is $6,476. The reward to risk ratio of 2.56 is about the same as for the RSI strategy. The Fisher Transform strategy also has about the same chance of break even or better at 96.1%.

These studies show that the three trading strategies are robust across time and offer comparable performance when applied to a common symbol. To further demonstrate robustness across time as well as applicability to a completely different symbol, performance was evaluated on the S&P Futures, using the continuous contract from its inception in 1982. In this case, we show the equity curve produced by trading a single contract without compounding. There is no allowance for slippage and commission. The shapes of the equity curves are explained, in part, by the change of the point size from $500 per point to $250 per point, by inflation, by the increasing absolute value of the contract, and by increased volatility. The major point is that none of the three trading strategies had significant dropouts in equity growth over the entire lifetime of the contract.

The robust performance of these new trading strategies is particularly striking when compared to more conventional trading strategies. For example, Figure 14 shows the equity growth of a conventional RSI trading system that buys when the RSI crosses over the 20% level and sells when the RSI crosses below the 80% level. This system also reverses position when the trade has an adverse excursion more than a few percent from the entry price. This conventional RSI system was optimized for maximum profit over the life of the S&P Futures contract. Not only has the conventional RSI strategy had huge drawdowns, but its overall profit factor was only 1.05. Any one of the new strategies I have described offers significantly superior performance over the contract lifetime. This difference demonstrates the efficacy of the approach and the robustness of these new systems.

Conclusions

The PDF has been shown to offer an alternative approach to the classical oscillator, one that is non-causal in anticipating short-term turning points. Several specific trading strategies have been presented that demonstrate robust performance across long time-spans to accommodate varying market conditions; across a large number of trades to avoid curve fitting; and among different markets to demonstrate freedom from market personalities.

In each case the PDF can infer a trading strategy that is likely to be successful. When no strategy is suggested, the Fisher Transform can be applied to change the PDF to a Gaussian distribution. The Gaussian PDF then infers that a trading strategy using a reversion to the mean can be successful.

Endnotes

  1. John Ehlers, “Swiss Army Knife Indicator,” Stocks & Commodities Magazine, January 2006, V24:1, pp28-31, 50-53
  2. John Ehlers, “Swiss Army Knife Indicator,” Stocks & Commodities Magazine, January 2006, V24:1, pp28-31, 50-53
  3. MCSPro, Inside Edge Systems, Bill Brower, 200 Broad St., Stamford, CT 06901

Appendices

Bibliography

Arthur A. Merrill, “Filtered Waves,” The Analysis Press, Chappaqua, NY, 1977

MCS Pro, Inside Edge Systems, Bill Brower, 200 Broad Street, Stamford, CT 06901

www.eminiz.com, Corona Charts

Jonathan Y. Stein, “Digital Signal Processing,” John Wiley & Sons, New York, 2000

Perry J. Kaufman, “New Trading Systems and Methods,” John Wiley & Sons, New York, 2005

BACK TO TOP

An Empirical Study of Rotational Trading Using the %b Oscillator

by H. Parker Evans, CFA, CFP, CMT

About the Author | H. Parker Evans, CFA, CFP, CMT

H. Parker Evans, CFA, CFP, CMT is a Vice President and Senior Portfolio Manager with Fifth Third Private Bank in Clearwater, Florida. Parker has over twenty years’ experience as a professional investment advisor. He has a penchant for technical analysis of alpha persistence and mean reversion in securities prices.

Introduction

Academic finance is replete with studies supporting or denying the existence of serial correlation in securities prices.[1] In effect, such studies test the weak form efficient market hypothesis (EMH). Simply put, can investors use technical analysis to beat the market?

Before an attempt is made to answer that question, it is necessary to define “the market.” For the purposes of this paper, “the market” is the constituent stocks of the S&P 500 Index. The S&P 500 Index, after all, is probably the most widely recognized market proxy and in practice, investors index billions of dollars to it. S&P 500 stocks are liquid and extensively researched by a multitude of technical and fundamental analysts. Consequently, one might expect that these stocks would represent a highly efficient segment of the stock market.

Bollinger Bands and the %b Oscillator

The %b Oscillator is a technical indicator derived from the well-known, popular Bollinger Bands indicator. “Bollinger Bands are a technical trading tool created by John Bollinger in the early 1980s. They arose from the need for adaptive trading bands and the observation that volatility was dynamic, not static as was widely believed at the time.”[2] Bollinger Bands are moving average envelopes typically plotted two standard deviations above and below a moving average of price closes. In an end-of-day price chart, %b plots as an oscillator, measuring the closing price in relation to its upper and lower Bollinger Band. An analogous technical indicator is the raw stochastic %K oscillator.[3] Raw %K measures the closing price relative to the high and low price of a trading range of specified length. By definition, %K oscillates between 0 and 100. Zero means the stock closed at the low of the trading range, 100 means the stock closed at the high. Likewise for %b, except that on rare occasions a stock can close with %b below 0 or above 100, representing a two-sigma event. Conceptually, %b numerically identifies the closing stock price relative to its volatility-adjusted trading range.
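Concretely, %b on the 0-100 scale used in this paper can be computed as in the sketch below; the 65-day window and two-sigma band width match the study's settings described later, and the function name is illustrative:

import pandas as pd

def percent_b(close: pd.Series, length: int = 65, width: float = 2.0) -> pd.Series:
    # Close relative to its Bollinger Bands, scaled so 0 = lower band, 100 = upper.
    mid = close.rolling(length).mean()
    sd = close.rolling(length).std()
    upper, lower = mid + width * sd, mid - width * sd
    return 100 * (close - lower) / (upper - lower)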

In Figure 1, the price chart for Wal-Mart stock (WMT) covers five years of daily high-low-close prices. The top pane plots a simple 65-day moving average of closing prices, represented by the middle blue line. The related Bollinger Bands are plotted in red, exactly two sigma above and below the middle blue moving average line. The bottom pane plots %b as an oscillator. Here %b is defined as overbought when it is greater than 90 and oversold when less than 10. The overbought %b condition is highlighted in red and oversold is highlighted in green.

Rotational Trading

Rotational trading is a method of using rank-ordered asset lists to construct investment portfolios. For example, both Value Line and Zacks Investment Research offer well-known research products featuring proprietary stock timeliness rankings. These services assign a rank, one to five, to each asset in their coverage universe. Using these rankings, a rotational system might buy stocks ranked #1, sell when they drop below rank #2, and rotate those proceeds back into stocks ranked #1. For many years, Investor’s Business Daily has published proprietary relative strength rankings for stocks ranging from one to ninety-nine. Such increased granularity is useful for active rotational trading, as will be demonstrated further on.

As always, a complete trading system must address position sizing and answer: What percentage risk of total portfolio equity can be exposed on any given trade or asset?

Portfolio Selection Using Relative %b

The basis for using %b as a momentum oscillator stems from the empirical observation that extreme price excursions have a tendency for mean reversion, i.e. possible negative serial correlation. In Technical Analysis Explained (Pring, Martin J., McGraw-Hill, 2002), Martin Pring warned against relying solely on momentum oscillators when analyzing individual securities. “Momentum signals should always be used in conjunction with a trend reversal signal by the actual price.” This paper will test the opposite idea within a portfolio context. We will boldly buy weakness and sell strength without waiting for evidence of a reversal in price. To mix metaphors, the strategy will systematically “catch the falling knife” and sell the “dead cat” bounce without regard to any other technical indicator. Specifically the trading algorithm will buy stocks with the very lowest %b ranks and sell when they increase rank relative to other stocks. Understand that a portfolio of stocks will be bought that have the lowest %b relative to all other stocks in a specified selection universe.

In Technical Analysis from A to Z (Achelis, Steven B. Chicago: Irwin, 1995), John Bollinger states, “When prices move outside the bands, a continuation of the current trend is implied.” Because a reasonable observer could interpret this rule as a contradiction to what we propose to test, we will also consider what happens if we reverse our trading rule, buying strong stocks with the very highest %b (presumably stocks “outside the band”) and selling only when they drop in rank.

Now to answer the original question: by using the %b oscillator coupled with rotational trading rules, we can select stock portfolios that beat the risk-adjusted return of the S&P 500 index. We report empirical evidence supporting this thesis in the results section of this paper. In addition, an important purpose of this paper is to provide sufficient detail to allow other analysts to replicate our (back-tested) results and to modify or adapt our methods if desired. That detail comes next in the methods and materials section of this paper.

Methods & Materials

Sample Selection

Acquiring an appropriate sample for back testing proved daunting. Initially we ran some preliminary back tests of our proposed %b indicator on a sample consisting of those stocks in the S&P 500 as of February 2007. This back test generated very impressive results from 1990 forward. In fact, the results seemed too good to be true. We realized that other analysts could justifiably criticize the backtest sample as suffering from survivor bias[4] and look-ahead bias.[5] Look-ahead bias results from using information in a backtest that was unknown during the period analyzed. Clearly, investors in 1990 had no way to know what stocks would constitute the S&P 500 in 2007. Survivor bias results when a study fails to account for stocks that have ceased trading due to mergers, acquisitions or bankruptcies. Survivor bias also results when for other reasons an index selection committee deletes and replaces a constituent.

What we wanted for our sample was the full history of closing quotes for all stocks that were in the S&P 500 from 1990-2006 during the time those stocks were in the index, including the non-surviving stocks. We were unable to acquire that sample. Instead, we created a sample selection universe using the following protocol. Our sample contains end-of-day-prices for seventeen years, 1990-2006, on S&P 500 constituent stocks. From 1990-1997 we included only stocks that were on the January 1990 S&P 500 constituent list. From 1998-2006 we included prices for all stocks on the January 1998 constituent list. Starting in 2004 for the period 2004-2006, we added all prices for all stocks appearing on the January 2004 constituent list. For all spans and the full period, we included prices of non-surviving stocks up to the date that they ceased trading. Our sample contains 815 stocks, 490 of which were trading at year-end 2006.

Software Tools and Data Services

We downloaded constituent lists for the S&P 500 and prices for inactive, non-surviving stocks from the Bloomberg Professional Terminal. We downloaded surviving stock price histories from Yahoo Finance. We primarily used Amibroker[6], a popular technical analysis and charting software application, and the Amibroker Formula Language to design and test trading strategies and indicators. We also used Microsoft Excel for various purposes in our study.

Variables, Trading Algorithm, Code

We tested a system based on 65-trading-day (three months) %b against our sample selection universe. At the close of every trading day over the test period, our trading algorithm ranked all stocks from highest to lowest according to %b score. On the first trading day, January 2, 1990, the trading algorithm bought the 40 lowest ranked stocks, investing 2.5% of portfolio equity in each stock. Once purchased, the algorithm held any given stock until it moved up and out of the ranks of the 80 lowest ranked stocks. At that point, the algorithm sold the stock and rotated the proceeds back into one of the 40 lowest ranked stocks not already held. The back test ended December 31, 2006. The algorithm recorded trade executions at the closing price the next day after order entry. The algorithm continued to execute this rotational trading every trading day of the 17-year back test period. At the time of purchase, the amount invested in any stock purchase could not exceed 2.5% of current portfolio equity but could be less if available cash was less than 2.5%. There was no rule forcing rebalancing of existing positions. The system traded long only, without margin, and stayed 100% invested. We named this strategy “%b BW” (Buy Weakness).
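The daily rotation rule reduces to a few lines. The sketch below is a simplified Python rendering rather than the Amibroker Formula Language code actually used; ranking, next-day execution, the minimum holding period, and cash accounting are all abridged, and the names are hypothetical:

def rebalance(held: set, pb_rank: dict) -> set:
    # One day of %b BW rotation. pb_rank maps ticker -> rank (0 = lowest %b).
    held = {t for t in held if pb_rank.get(t, 9999) < 80}  # sell on leaving the bottom 80
    candidates = sorted(pb_rank, key=pb_rank.get)[:40]     # the 40 lowest-ranked stocks
    for t in candidates:
        if len(held) >= 40:
            break
        held.add(t)                                        # buy at up to 2.5% of equity
    return held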

Variable Initialization and Optimization

Note that in the fourth line of the code we applied a filter. This filter removes stocks from purchase consideration and forces a sale if the stock price was under $1.00 and the date was after January 1, 1998. Although this filter actually reduced system total return, we used it anyway because when inspecting the trade logs we noticed that the system was initiating trades in low price stocks that were no longer in the S&P 500 (though they were at one time). Because our sample price data is split-adjusted, we avoided applying the filter to prices before 1998, since many stocks before that time traded at actual prices much higher than their sub-$1.00 split-adjusted prices would indicate and were in fact in the S&P 500.

In order to reduce trade activity, we also required the algorithm to hold a stock at least four trading days before selling (SetOption(“holdminbars”, 4)). We chose 65-day %b, 80-rank “worst rank held”, and 2.5% position size without rigid optimization for maximum total return or any other specific outcome. In our judgment, system performance was reasonably robust across a relevant range of optimization values.

Our reported back test results assume a 0.1% cost per trade (0.2% round-trip). We noticed that very short (5-20 day) %b BW back tested impressively with a 0% assumed cost, but performance degraded dramatically when tested with a 0.1% cost per trade.

We also tested our rule in reverse by changing the second to last line of the code from – PB to + PB. We named this strategy “%b BS” (Buy Strength) since it buys strong stocks with high relative %b. Recall that high %b means that a stock price is near or above its top Bollinger Band.

Results

Table 1 presents the results of our back tests on the custom sample described previously. The first column represents a buy and hold strategy on the S&P 500 price index over the backtest period. The second column tests our proposed strategy, %b BW. The final column tests the %b BS Strategy.

Figure 2 is a profit distribution histogram of all trades executed by the %b BW (Buy Weakness) strategy over the 17-year back-test period. Table 2 lists the 14 trades returning the extreme losses in the histogram.

The top pane in Figure 3 is the weekly closing value for a unit of equity in the %b BW system. The lower panes plot rolling 52-week Alpha*, Beta, and R-squared on the closing value vs. the benchmark S&P 500 from 1990-2006.

Figure 4 plots a weekly comparative relative strength line[7] from 1990-2006 of the %b BW strategy using the S&P 500 Price Index as the base price. We delineate two major periods of relative underperformance.

Tests of Statistical Significance

Is the difference in return between the %b BW strategy and the S&P 500 statistically significant? To answer that question, we used a paired comparisons test of 4287 paired differences in daily returns from 1990-2006. The sample mean difference was .0533% per day (the mean daily alpha). The sample standard deviation of the mean difference was .7462% (the daily tracking error). The standard error of the sample mean difference was .7462% / 4287^.5 = .0114%. The calculated test statistic was z = (.0533/.0114) = 4.68. The two-tailed P value is less than 0.0001. The difference in returns is extremely statistically significant.

Is the risk-adjusted return of the %b trading strategy statistically significant? The Information Ratio[8], also known as the appraisal ratio, is a widely used risk metric that measures risk and return relative to an appropriate benchmark. The information ratio equals alpha divided by tracking error. We tested to determine if the information ratio (IR) of the %b strategy was greater than zero:

From Table 1 we see the information ratio for the %b strategy equaled 1.15. So our test statistic is t = 1.15 * 17^.5 = 4.74 with df = 16. The two-tailed P value equals 0.0003. The difference is extremely statistically significant.
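Both calculations reduce to a few lines of arithmetic, reproduced in the sketch below (function names are illustrative):

import math

def paired_z(mean_diff: float, sd_diff: float, n: int) -> float:
    # z = mean daily alpha / (daily tracking error / sqrt(n))
    return mean_diff / (sd_diff / math.sqrt(n))

def ir_t(information_ratio: float, years: int) -> float:
    # t = IR * sqrt(years), with df = years - 1
    return information_ratio * math.sqrt(years)

print(round(paired_z(0.0533, 0.7462, 4287), 2))  # 4.68
print(round(ir_t(1.15, 17), 2))                  # 4.74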

Discussion

The evidence supports our thesis that a rotational trading algorithm using relative %b rankings can select stock portfolios that beat the risk-adjusted return on the S&P 500. Moreover, those portfolios consist only of S&P 500 constituent stocks. For perspective, a search of the expansive Morningstar mutual fund database in February 2007 reveals that just three mutual funds had an annualized rate of return in excess of 18% over the past fifteen years. None of those returns exceeded 19%. The %b BW Strategy* returned 24.1% annualized with surprisingly little risk relative to the benchmark. The charts in Figures 3 and 4 as well as the Sharpe and Information Ratios reported in Table 1 provide the relevant risk assessment analytics.

*The historical performance of a simulated trading strategy is not a guarantee of future returns.

Admittedly, we have presented back test results that fly in the face of the well-worn trader’s axiom “Cut your losses short; let your profits run.” Table 2 confirms that the %b BW system offers no protection against ruinous losses at the asset level. The diversification of an equal-weight 40-stock portfolio affords the only downside protection, a striking demonstration of the critical importance of position sizing and diversification in system development.

While our results are statistically significant, the economic significance is less straightforward. The system trades frequently, averaging over three trades per day. For taxable investors, returns would be taxed entirely as unfavorable short-term capital gains. From Table 1 we see the average profit per trade is 1.2% net of an assumed .2% round-trip transaction cost. That is likely satisfactory only for a trader using an efficient broker[9]; perhaps more importantly, trade size must be sufficiently small to have only a modest impact on market prices. Assessing potential slippage[10] is clearly an important consideration when evaluating any system.

Finally, the results suggest that investors overreact, possibly to news or changing prices, in a three-month (65-trading-day) frame of reference. By design, our indicator look-back period corresponds with the three-month earnings report cycle for stocks as well as the performance reporting cycle for many asset managers, capturing possible earnings-announcement and window dressing[11] effects.

End Notes

  1. http://serial-correlation.behaviouralfinance.net/ retrieved from the web February 2006
  2. http://www.bollingerbands.com/ retrieved from the web February 2006
  3. http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:stochastic_oscillato retrieved from the web February 2006
  4. http://en.wikipedia.org/wiki/Survivorship_bias retrieved from the web February 2006
  5. http://www.investopedia.com/terms/l/lookaheadbias.asp retrieved from the web February 2006
  6. http://amibroker.com/ retrieved from the web February 2006
  7. http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:price_relative retrieved from the web February 2006
  8. http://www.ssga.com/library/esps/sflanneryinfowhitenoisega/page.html retrieved from the web February 2006
  9. http://www.interactivebrokers.com/en/accounts/fees/commission.php?ib_entity=llc#bundled retrieved from the web February 2006
  10. http://www.investopedia.com/terms/s/slippage.asp retrieved from the web February 2006
  11. http://www.investopedia.com/terms/w/windowdressing.asp retrieved from the web February 2006
BACK TO TOP

Ichimoku Kinko Hyo

by Véronique Lashinski, CMT

About the Author | Véronique Lashinski, CMT

Véronique Lashinski, CMT is a Vice President and Senior Research Analyst with Newedge USA, LLC. She is responsible for producing research on a broad range of markets and has over 15 years’ experience in derivatives. She has been a speaker at various industry conferences around the world, as well as at the Massachusetts Institute of Technology (MIT), the Market Technicians Association, and Bloomberg.

Her research paper on Japanese Clouds was published in the Market Technicians Association Journal of Technical Analysis in 2008. Her publications also include contributions to the magazine “The Technical Analyst” and a chapter in the books “Technical Analysis of the FX markets” and “Technical Analysis of the Commodities markets”.

Ms. Lashinski is a Director of the International Federation of Technical Analysts, a former Officer of the American Association of Professional Technical Analysts, and a member of the Market Technicians Association. She holds a Master of Science degree in Management from the Ecole de Management de Lyon, France, and is a Chartered Market Technician.

Goichi Hosoda invented the cloud charts, or Ichimoku Kinko Hyo charts, in Japan before World War II. The method uses moving averages based on the middle of the range over a period of time, then shifts the lines into the past and into the future.

In this paper, we will compare hypothetical trading results in some US commodity futures markets, when using the base moving average crossover, with a few combinations of the different filters provided by the method.

Outline: Ichimoku Kinko Hyo on Commodity Futures

I- Description/overview of the cloud lines, and basic trade signals derived from these lines.

II- Tests.

II-A Trade entry on Kijun Sen/Tenkan Sen crossover, with no other condition. Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition.

II-B Trade entry on Kijun Sen/Tenkan Sen crossover, adding both the Chikou Span and the cloud as filters. Exit when either condition is no longer fulfilled. Conclusion: did the Chikou Span improve results?

II-C Trade entry on Kijun Sen/Tenkan Sen crossover, adding the market position relative to the cloud at the time of the signal as a filter (above the cloud for buy, under the cloud for sell). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. Conclusion: what is the impact of delaying the entry until the market position relative to the cloud confirms the outlook (above the cloud being bullish, and under the cloud, bearish)?

II-D Trade entry on Kijun Sen/Tenkan Sen crossover, adding the market position relative to the cloud at the time of the signal as a filter (under the cloud for buy, above the cloud for sell). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. Conclusion: does an aggressive entry, attempting to capture the move early, make a difference?

II-E Trade entry on Kijun Sen/Tenkan Sen crossover, adding the Chikou Span as a filter (Chikou Span above prices for buy, under prices for sell). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition.

II-F Trade entry on Kijun Sen/Tenkan Sen crossover, adding the Chikou Span as a filter (Chikou Span above prices for buy, under prices for sell). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. In this case, all lines are calculated with the original six-day-week assumption. Conclusion: in the sample used, was it beneficial to have adapted the periods to the shorter working week?

I Description/Overview

Description/Overview of the cloud lines, and basic trade signals derived from these lines.

I-A Overview

A newspaper writer, Goichi Hosoda, invented the cloud charts, or Ichimoku Kinko Hyo charts, in Japan before World War II. The various lines are built from the middle of the range over different periods, with some of the lines shifted into the future. One more line is made using the close, plotted in the past.

Two of the lines are projected forward. The cloud is formed by the space between those two lines. As it is drawn in the future, it provides a unique, visual idea of support and resistance in the future, not available in other techniques.

This paper focuses on the five basic lines of the cloud chart, which are readily available in many charting systems. Using back testing, the author compares hypothetical results of trading systems based on the basic crossover in the method, using various combinations of the five lines, as added trade entry and/or exit filters. Hosoda’s original definitions were based on a six-day working week in Japan when he developed the method (which included more than the cloud charts presented below). As the author has adapted the cloud charts to a five-day working week in daily use, all the tests are based on the five-day working week assumption, except the last one, which uses the six-day working week.

I-B Definitions of the Lines and Interpretations

Tenkan-Sen/Turning line: (Highest high + lowest low)/2, for the past seven trading days (nine trading days, in the case of the six-day working week environment). In other words, this is the middle of the range over the past week and a half.

Kijun-Sen/Base line: (Highest high + lowest low)/2, for the past 22 trading days. (The period is changed to 26 trading days, in the case of the six-day working week environment.)

This is the middle of the range, but this time over the past month.

Crossovers between the Tenkan Sen and the Kijun Sen produce buy and sell signals, in a similar way as moving averages do in Western techniques. (see figure 1)

Chikou Span/Lagging Span: Today’s close, plotted 22 trading days behind. (The period is changed to 26 trading days, when operating in a six-day working week environment.)

The position of the Chikou Span relative to prices gives an idea of market strength: when the Chikou Span is above the market prices, it is an indication of market strength (and vice versa for weakness). In other terms, prices 22 (26) days ago are relevant, and represent current support/resistance. (see figure 2)

Senkou Span A: (Tenkan-Sen + Kijun-Sen)/2, plotted 22 trading days ahead. (The period is changed to 26 trading days, when assuming a six-day working week.)

Senkou Span B: (Highest high + lowest low)/2, for the past 44 trading days, plotted 22 days ahead. (The period is changed to 52 trading days, plotted 26 days ahead, in the case of a six-day working week.) The area between Senkou Span A and Senkou Span B is colored and represents “the cloud.” (see figure 3) It represents key support (if the cloud sits below prices) or resistance (if the cloud sits above prices).
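For reference, the five lines under the paper's five-day-week periods (7/22/44) can be computed as in the following sketch, a straightforward pandas rendering of the definitions above (the author's own calculations may differ in detail):

import pandas as pd

def ichimoku(high: pd.Series, low: pd.Series, close: pd.Series,
             tenkan: int = 7, kijun: int = 22, senkou_b: int = 44) -> pd.DataFrame:
    mid = lambda n: (high.rolling(n).max() + low.rolling(n).min()) / 2
    tenkan_sen = mid(tenkan)                                 # turning line
    kijun_sen = mid(kijun)                                   # base line
    chikou = close.shift(-kijun)                             # today's close, plotted 22 days back
    senkou_a = ((tenkan_sen + kijun_sen) / 2).shift(kijun)   # plotted 22 days ahead
    senkou_b_line = mid(senkou_b).shift(kijun)               # plotted 22 days ahead
    return pd.DataFrame({"tenkan": tenkan_sen, "kijun": kijun_sen, "chikou": chikou,
                         "senkou_a": senkou_a, "senkou_b": senkou_b_line})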

The position of the market relative to the cloud confirms the trend: uptrend if prices are above the cloud, downtrend if they are below the cloud. Some of the cloud attributes further qualify the strength of support/resistance. For example, if the market is above a rising cloud, the current uptrend has a better chance to continue. (see figure 4)

If the market is under a declining cloud, the current downtrend has a better chance to continue.

A very thin area in the cloud is a point of vulnerability of the current trend: with both lines close to each other, only a small move will be needed for the market to cross the cloud. (see figure 5)

The position of the Kijun Sen/Tenkan Sen crossover (see figure 1) relative to the cloud is significant. For example, a bearish crossover is a weak signal if it happens above the cloud (i.e. above significant support), normal if it happens inside the cloud, and strong if it happens below the cloud (see figure 6). The latter can be interpreted as an attempt to capture resumptions of the major trend, after short-term counter-trend corrective moves (and vice versa for buy signals).

The method puts the emphasis on the middle of the range. We note that in corrections, the cloud is typically close to the classic Fibonacci retracements. (see figure 7) The main difference is that while the Fibonacci retracements are static, in particular the 50% retracement, which is the mid-point of the move, the cloud will vary as levels are dropped out of the calculation, and new ones added.

II – Tests

Testing Methodology:

We used US commodity futures contracts. The ending date for all tests was October 3, 2007. For all contracts, we used equalized continuations, going back 1,000 days from October 3, 2007. The results are theoretical, as the impact of spreads is removed; this can be substantial in commodities. However, this was necessary to provide a sufficient number of trades for comparison purposes. For simplification, we used continuations based on trading activity. The rollover from one contract to the next is made at the time of higher tick activity in the next contract. The adjustment (which removes the spread) is the difference between the two contracts at the time of rollover. The tests assumed no slippage, and no transaction costs were included. (see table 1)

Finally, due to a much larger tick value in Nymex Natural Gas, that commodity was removed from the final calculations, as Natural Gas had too much impact on the total outcome.

II-A

II-A Trade entry on Kijun Sen/Tenkan Sen crossover, with no other condition. Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition: this signal is named “tk1” in this paper. (see figure 8)

The net profits are $146,533 for the long side and -$57,191 for the short side. The total profit, $89,342, was the highest of the six cases studied. However, this is immediately mitigated by the fact that the total loss on the short side is the second largest of the twelve cases studied, and this method also has the highest maximum drawdown, at $352,000. This illustrates that this method would benefit from filters aiming to reduce risk and preserve capital.

Not surprisingly, this method also produces more than double the number of trades of the other methods (or, in one case, very close to double).

II-B

Trade entry on Kijun Sen/Tenkan Sen crossover, adding both the Chikou Span and the cloud as filters. Exit when either condition is no longer fulfilled. This signal is named “tk2” in this paper. (see figure 9)

What is the impact of waiting for both the cloud and Chikou Span filters, and of exiting trades aggressively on adverse conditions?

This is the only method where filters were used on trade exit. The result is a clear decline in losses: both the duration of losing trades and the amount of losses are reduced. Both the average loss and the maximum drawdown are the smallest among the methods tested in this study. This makes this method particularly attractive. The total profit is not the highest, but in light of the risk reduction and capital preservation it is quite acceptable at $55,289, which compares to the highest profit in this test (see table 1) of $89,342 and the lowest of $15,010.

II-C

Trade entry on Kijun Sen/Tenkan Sen crossover, adding the position relative to the cloud as a filter (above the cloud for buy, under the cloud for sell). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. This signal is named “tk3” in this paper. (see figure 10)

What is the impact of delaying the entry until the cloud confirms the outlook?

We can think of this system as attempting to capture the resumption of established medium-term trends after the end of short-term counter-trend corrections.

This method provides the second highest total net profit, coming from the highest average profit of all six methods and the second highest percentage of winning trades. The number of trades is less than half the number in the first test, “tk1”, which used neither the cloud nor the Chikou Span as an entry filter; this reduction was quite beneficial.

Despite the entry filter, the average trade duration is the second highest of all six methods. The average loss is also the second highest, which substantially impacts the overall results of this method. A comparison with the second test, “tk2”, immediately above suggests that this method would strongly benefit from earlier exits of losing trades.

Separately, the trade system used the position of the daily settlement relative to the cloud as the trade entry filter, not the position of the crossover relative to the cloud, which would have been a stronger filter. This was chosen for easier calculation. While it may not seem like a substantial difference, figure 10 illustrates that trades are entered in sideways markets, where the cloud is not at its best. When the cloud is thin, and the crossover occurs either inside or under the cloud, strength towards the close can result in a close above the cloud, and longs are entered. The resulting losses remain small, but without them, tk3 would have had higher total profits in this test.

II-D

Trade entry on Kijun Sen/Tenkan Sen crossover, adding the position relative to the cloud as a filter (under the cloud for buy, above the cloud for sell). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. This signal is named “tk4” in this paper. (see figure 11)

What is the impact of an aggressive entry, attempting to capture the move early in the trend? 

This resulted in the lowest total net profit in this test (see table 1), with a low percentage of winners. However, closer examination reveals that this system ranked fifth (fairly low) for trade count, trade duration, average loss, and maximum drawdown, and ranked second for average profit, which are all potentially encouraging results.

The Achilles’ heel of this system is that short-term corrections result in trade exits, but if the longer-term trend is still up, the trade will not necessarily be re-entered (for example, if the market is no longer under the cloud but has risen inside or above it, for long positions, and vice versa for short positions). A substantial part of the trend, the later part, when it is confirmed, is missed. Other re-entry conditions could be considered in a more complex system.

II-E

Trade entry on Kijun Sen/Tenkan Sen crossover, adding the Chikou Span as a filter (Chikou Span above prices for buy, and under prices for sell). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. This signal is named “tk5” in this paper. (see figure 12)

This method produced middle-of-the-road results and had average scores. Its weak points were the fairly low average profit and the high trade count.

II- F

Trade entry on Kijun Sen/Tenkan Sen crossover, adding the Chikou Span as a filter (Chikou Span above prices for buy, under prices for sell). Exit on reverse Kijun Sen/Tenkan Sen crossover, with no other condition. (see figure 13)

In this case, all lines are calculated with the original six-day-week assumption. This signal is named “tk6” in this paper.

In the sample used, was it beneficial to have adapted the periods to the shorter working week?

This system can be immediately compared with the fifth test, “tk5”. The comparison favors the switch to the shorter, five-day working week.

Indeed, this system had the unwanted privilege of having the highest average loss, the lowest average profit, and the lowest percentage of winning trades of all systems in our sample.

III – General Conclusion

The cloud system is typically used as a visual method by the analyst, and has shown itself to be a worthwhile analytical tool. The author especially likes that the cloud method uses the entire range, as opposed to the market closing price, the default source for most typical Western indicators. As such, cloud charts are one way to diversify the data used. However, there are times when the cloud chart is of little use other than highlighting the lack of a medium-term trend. Indeed, this is a trend-following system, and as such, as with a typical Western moving average crossover system, a medium-term trend needs to exist. In medium-term sideways markets, the analyst will immediately “see” that the cloud is a tangled mess of lines, and will switch to another method. This would need to be somehow replicated in trading systems.

When considering building a trade system with this method, the author recommends testing with the addition of a trend indicator, such as the ADX. Further, having tighter exit conditions (as in tk2) dramatically reduced the maximum drawdown for both longs and shorts, making tk2 the best method in our sample in terms of risk management and capital preservation. The author strongly advocates that any further testing be made with tighter exit signals than the simple reverse Kijun Sen/Tenkan Sen crossover.

References

Elliott, Nicole, and Yuichiro Harada. 2001. Market Technician, Issue 40 (April), Society of Technical Analysts

Elliott, Nicole. 2002. "Option Strategies Designed Around Ichimoku Kinko Hyo Clouds." International Federation of Technical Analysts (October)

Muranaka, Ken. 2000. "Ichimoku Charts." Technical Analysis of Stocks & Commodities

Nippon Technical Analysis Association. 1989. Analysis of Stock Prices in Japan

Disclaimer

The opinions, views, and forecasts expressed in this report reflect the personal views of the author(s) and do not necessarily reflect the views of Newedge USA, LLC or any other branch or subsidiary of Newedge Group (collectively, "Newedge"). Newedge, its affiliates, and any of their employees may, from time to time, have transactions and positions in, make a market in, or effect transactions in any investment or related investment covered by this report. Newedge makes no representation or warranty regarding the correctness of any information contained herein, or the appropriateness of any transaction for any person. Nothing herein shall be construed as a recommendation to buy or sell any financial instrument or security.


Using Style Momentum to Generate Alpha

by Samuel L. Tibbs, Ph.D.

About the Author | Samuel L. Tibbs, Ph.D.

Samuel L. Tibbs, Ph.D. is an Assistant Professor of Finance at East Carolina University where he teaches investments and corporate finance. His personal interest in stock investing motivated him to earn his Ph.D. and CFA Charter.

by Stanley G. Eakins, Ph.D.

About the Author | Stanley G. Eakins, Ph.D.

Stanley G. Eakins, Ph.D. has experience as a financial practitioner, serving as vice president and comptroller at the First National Bank of Fairbanks and as a commercial and real estate loan officer. A founder of Denali Title and Escrow Agency, a title insurance company in Fairbanks, Alaska, he also ran the operations side of a bank and was the chief financial officer for a multi-million dollar construction and development company.

by William DeShurko, CFP

About the Author | William DeShurko, CFP

Mr. William DeShurko, CFP, is the President of 401 Advisor, LLC, a Registered Investment Advisor. He has an Economics degree from the University of Rochester and has been in the financial services industry since 1987. He is the author of the book The Naked Truth about Your Money and is a regular contributor at HorsesMouth.com. Mr. DeShurko started his own firm in 1993 and adopted a momentum-based investment strategy for his practice in 2004.

Abstract

Russell style indexes exhibit significant momentum, particularly after medium-term out- and underperformance. The existence of this momentum offers a diversified, index-based, low-cost means to exploit it by incorporating relative style index performance into tactical allocation strategies. Such style index momentum trading strategies have outperformed on both a raw and a risk-adjusted return basis, with the long-minus-short portfolio generating an average 9.25% annual return over the 34-year period analyzed. Although the excess returns vary, they are robust through time and after controlling for potentially confounding effects. Additionally, the returns are not driven by any single style index, and portfolio reshuffling is, on average, required only every six months.

Introduction

Prior research has shown the ability of various momentum strategies to generate excess returns at the firm, industry, and country level, but little research has been done using investment style data at the index level. (Swinkels [2004] provides an informative survey of the momentum literature.) Our paper extends this literature by examining whether momentum is present in Russell style indexes. This contribution is meaningful because it provides a diversified, index-based, low-cost trading strategy to exploit such momentum.

At the firm level, Lewellen [2002] shows that stocks partitioned based on size and book-to-market ratio exhibit momentum as strong as that in individual stocks and industries. Also, Chen and De Bondt [2004] provide evidence of style momentum within the S&P-500 index by constructing portfolios based on style criteria. However, constructing such portfolios can be costly, significantly eroding returns. Our focus is on indexes easily represented by exchange-traded funds, thereby producing a significant cost advantage and providing a low-expense, diversified means to exploit style momentum by incorporating relative style index performance into tactical allocation strategies.

Using Russell Large-Cap and Small-Cap style index data, Arshanapalli, Switzer, and Panju [2007] develop a market-timing strategy using a multinomial timing model based on macroeconomic and fundamental public information. While their multinomial model does include prior market return variables to time their style index allocation decisions, their paper differs from ours in several ways. First, they do not focus on the importance of style index momentum, nor do they discuss the significance of the prior market return variables in their model. Second, the beauty of our market-timing strategy is its simplicity: only the raw prior return of the Russell style indexes is required to make the asset allocation decision. In contrast, Arshanapalli, Switzer, and Panju [2007] require variables such as the change in the Conference Board Consumer Confidence Index, the U.S. bond default premium, the U.S. bond horizon premium, the S&P 500 earnings yield gap, and the change in the Consumer Price Index to construct their model. Further, their model requires generating conditional probabilities using a multinomial logit and assigning arbitrary cutoff probabilities when constructing trading rules. Additionally, Arshanapalli, Switzer, and Panju [2007] do not analyze short or long-minus-short portfolios, nor do they consider Russell Mid-Cap Value/Growth portfolios. Lastly, the vast majority of their analysis covers a shorter time period, 1979-2000 versus our 1972-2005, which fails to include the two most severe post-World War II market declines, the 1973-1974 and 2000-2002 crashes. The inclusion of those periods further verifies the robustness of our analysis.

We are also motivated to test for the existence of style index momentum by the proliferation of style index benchmarks. Both Lipper and Morningstar use style benchmarks to rate mutual fund performance. Our decision to focus specifically on Russell style indexes was influenced by their popularity: as of 2006, 54.5% of institutionally managed U.S. equity funds (over $3.8 trillion in assets) were benchmarked against Russell indexes.[1]

Furthermore, Barberis and Shleifer [2003] provide a theoretical basis for our analysis. They model an economy with fundamental traders and positive-feedback traders who chase relative style returns. The result is that "[p]rices deviate substantially from fundamental values as styles become popular or unpopular" (p. 190). Our results, using Russell style index data, provide additional support for their model.

Style Index Data and Portfolio Structure

Using monthly data from January 1969 to December 2005, we examine momentum across a broad range of economic and market conditions. For the years 1969 to 1996 we use the constructed style index data from Chan, Karceski, and Lakonishok [2000].[2] For the years 1997 to 2005 we use the actual Russell index data. The indexes are the Russell 2000 Growth (Value) for Small-Cap Growth (Value), the Russell Mid-Cap Growth (Value) for Mid-Cap Growth (Value), and the Russell Top 200 Growth (Value) for Large-Cap Growth (Value). For the remainder of this paper these portfolios are denoted SG, SV, MG, MV, LG, and LV, respectively. Note that, in total, the data cover 37 years, but the results cover a 34-year period because formation periods of up to 36 months are analyzed.

To gauge the extent to which momentum may exist, we use various formation periods to rank each of the six indexes based on its return over that period. For each index held, we then calculate subsequent returns over various holding periods. For example, a 24,6 portfolio means we ranked the style indexes based on 24-month prior performance, then held the single selected style index portfolio[3] for six months based on its formation-period performance. After the style index is held for six months, the indexes are re-ranked on prior 24-month performance, a single index is again selected and held for another six months, and the process continues for the full period covered. The top (bottom) ranked portfolio consists of the style indexes selected, and held in six-month increments, through time based on the highest (lowest) performance in the 24-month formation period. A sketch of these mechanics follows.
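
To make the mechanics concrete, here is a minimal Python sketch of the formation/holding procedure just described. It is an illustration under stated assumptions, not the authors' code: returns are compounded within the formation window, and the input is assumed to be a monthly return DataFrame with one column per style index (SG, SV, MG, MV, LG, LV).

    import pandas as pd

    def momentum_portfolio(returns, formation=24, holding=6):
        # returns: monthly simple returns, one column per style index
        picks, realized = [], []
        t = formation
        while t + holding <= len(returns):
            window = returns.iloc[t - formation:t]
            winner = (1 + window).prod().idxmax()                 # top formation-period index
            picks.append(winner)
            realized.append(returns[winner].iloc[t:t + holding])  # hold it for `holding` months
            t += holding                                          # then re-rank
        return picks, pd.concat(realized)

Calling momentum_portfolio(returns, formation=12, holding=1) reproduces the 12,1 procedure that the remainder of the paper focuses on.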

Profitability of Various Style Index Momentum Strategies

To analyze the performance of style index momentum trading strategies, we rank the style indexes on formation-period performance, then buy the top-performing index and short the bottom-performing index. The long-top-minus-short-bottom (Long-Short) position is then held for the designated holding period. Exhibit 1 reports the average monthly returns to Long-Short portfolios across various portfolio formation and holding periods.

Results are generally positive and statistically significant, especially for the shorter holding periods. Across the formation periods, Long-Short returns peak at 12 months of prior performance; across the holding periods, they peak at one month. The top-performing Long-Short portfolio was therefore the 12,1, with an average monthly return of 0.85% (p-value < 1%). Based on these results, the remainder of the paper focuses on portfolios composed of one style index with a 12-month formation and one-month holding period. While this 12,1 portfolio was the highest performer over the 34-year period, it was not always the top performer when various sub-periods were analyzed.[4] However, the top-performing strategy was consistently driven by medium-term momentum, with prior performance in the 8- to 14-month range.
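
The Long-Short construction can be sketched in the same style, again as an illustrative reading rather than the authors' code; a one-sample t-test is a standard way to check whether the mean monthly spread differs from zero.

    from scipy import stats

    def long_short(returns, formation=12, holding=1):
        # long the top-ranked index, short the bottom-ranked, re-ranking every `holding` months
        spreads = []
        for t in range(formation, len(returns) - holding + 1, holding):
            perf = (1 + returns.iloc[t - formation:t]).prod()
            top, bottom = perf.idxmax(), perf.idxmin()
            spreads.append((returns[top] - returns[bottom]).iloc[t:t + holding])
        spreads = pd.concat(spreads)
        t_stat, p_value = stats.ttest_1samp(spreads, 0.0)   # H0: mean spread = 0
        return spreads.mean(), t_stat, p_value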

Exhibit 2 shows the monthly level and persistence of return outperformance and underperformance for the six portfolios, ranked on their prior 12-month performance relative to the average of the six Russell indexes. The average monthly return is presented for each of the 12 months of the formation period and for the 36 months after each style index is ranked. For the top and bottom ranked portfolios, the average cumulative prior 12-month return was 28.10% and 1.12%, respectively. In the first month of the holding period, the average return for the top and bottom portfolios was 1.57% and 0.68%, respectively.

All portfolios revert to the mean, but the portfolio with the greatest (lowest) prior 12-month relative performance exhibits the greatest outperformance (underperformance) persistence. This persistence is particularly pronounced for the top style index ranked by 12-month formation-period performance, which continues to outperform all other portfolios for 14 months. Also, the top and bottom ranked portfolios have the greatest spread between portfolio performance and the average index return in the first month of the holding period, which is consistent with the results in Exhibit 1.

Performance of 12,1 Style Index Momentum Portfolios

Exhibit 3 reports the annualized results for the six 12,1 portfolios, the Long-Short 12,1 portfolio, the six Russell indexes used to build those portfolios, and other indexes for comparison. As predicted by style index momentum, 12,1 portfolio performance increases monotonically with prior 12-month performance. The Long-Short 12,1 portfolio has an annualized return of 9.25% and a beta estimate of -0.01. The top 12,1 portfolio also outperforms all of the Russell style indexes and the other indexes on return, Sharpe ratio, Treynor ratio, and Jensen's alpha.

More importantly, on a risk-adjusted basis the six ranked 12,1 portfolios improve monotonically with prior 12-month performance. The Sharpe ratio, Treynor ratio, and Jensen's alpha all show such improvement, indicating that style index momentum provides not only excess raw returns but excess returns on a risk-adjusted basis as well.
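
For readers replicating Exhibit 3, the three risk-adjusted measures can be computed from aligned monthly series as below. These are the standard textbook definitions; the paper does not publish its computation code.

    import numpy as np

    def risk_adjusted(port, market, rf):
        # port, market, rf: aligned monthly return series
        ex_p, ex_m = port - rf, market - rf
        cov = np.cov(ex_p, ex_m)
        beta = cov[0, 1] / cov[1, 1]
        sharpe = ex_p.mean() / ex_p.std()           # excess return per unit of total risk
        treynor = ex_p.mean() / beta                # excess return per unit of market risk
        jensen = ex_p.mean() - beta * ex_m.mean()   # CAPM alpha
        return sharpe, treynor, jensen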

Using Fama-French three-factor models, we further analyze the top, bottom, and Long-Short 12,1 portfolio returns over the 34-year period.[5] Exhibit 4 reports a monthly alpha of 0.53% (6.60% annualized) for the top 12,1 portfolio and -0.41% (-4.81% annualized) for the bottom 12,1 portfolio, both statistically significant with p-values < 1%. The Long-Short portfolio produced a monthly alpha of 0.45% (5.56% annualized), statistically significant at the 5% level. These results again provide evidence of momentum in style indexes even after controlling for market, size, and book-to-market factors.

Allocation and Average Return of Selected Style Indexes

Exhibit 5 reports each of the six 12,1 portfolios' allocations to (Panel A) and return contributions from (Panel B) the six Russell indexes. For the top-performing portfolio, the largest allocation, and the largest across all 12,1 portfolios, was to SV at 34.1%. This means that SV had the highest prior 12-month performance 34.1% of the time and was therefore held in the top 12,1 portfolio for 34.1% of the 34-year period analyzed. SV was also the highest-performing style index over the 34-year period, but it was not the leading average return contributor to the 12,1 portfolio performance. The largest contributor was MG, followed by SG, LV, and then SV, with average monthly returns of 2.11%, 1.97%, 1.78%, and 1.69%, respectively. This indicates that the momentum exhibited is not simply an SV phenomenon. For the bottom-performing 12,1 portfolio, the largest allocation was to LG at 32.8% and the smallest to MV at 3.7%, with average monthly returns of 0.11% and 0.70%, respectively.

Persistence Through Time of 12,1 Portfolios

To show succinctly that the 34-year results are not driven by any particular time period and are robust through time, Exhibit 6 reports the top, bottom, and Long-Short 12,1 portfolios by sub-period. Top is the 12,1 portfolio with the highest formation-period performance, bottom is the 12,1 portfolio with the lowest, and Long-Short is the difference. On an annualized basis, Long-Short returns for the sub-periods analyzed vary from 3.08% to 13.71% and average 9.25%. Excluding the two largest return periods, from '72 to '80, the Long-Short portfolio still returns an annualized 5.08%. On an individual calendar-year basis (not reported for brevity), the worst Long-Short portfolio return was -20.35% in 2000 and the best was 50.11% in 1999.

Average Holding Period

To further evaluate the momentum persistence, we analyzed the average holding periods of the top and bottom 12,1 portfolios. Even though the style indexes were evaluated monthly for reshuffling, on average reshuffling was required only about twice a year: the top and bottom positions are held for an average of 5.65 and 6.10 months, respectively. Interestingly, LG had the longest holding period for both the top and bottom positions; the bottom position was held for 26 months, from March 1976 to April 1978, and the top position for 24 months, from May 1989 to April 1991. The relatively infrequent need for rebalancing, combined with the low cost of exchange-traded funds, supports the viability of this momentum trading strategy.

Conclusion

Style index momentum is particularly interesting because it can be exploited with a diversified, low-cost trading strategy. This inexpensive and diversified option gives money managers, regardless of assets under management, the opportunity to include such a strategy in their tactical asset allocation decisions. Such style index momentum trading strategies have outperformed on both a raw and a risk-adjusted return basis, with the long-minus-short portfolio generating an average 9.25% annual return over the 34-year period analyzed. Although the excess returns vary, they are robust through time and after controlling for potentially confounding effects.

Appendix

R_{i,t} - R_{rf,t} = \alpha_i + \beta_i (R_{m,t} - R_{rf,t}) + s_i SMB_t + h_i HML_t + \epsilon_{i,t}

where the dependent variable (R_{i,t} - R_{rf,t}) is the 12,1 portfolio return minus the one-month Treasury bill rate; R_{m,t} - R_{rf,t} is the market factor (the CRSP value-weighted index minus the one-month Treasury bill rate); SMB (small minus big) is the size factor; and HML (high minus low) is the book-to-market factor. The \alpha_i represents the 12,1 portfolio return in excess of the one-month Treasury bill rate not explained by the risk factors in the model.
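
A minimal sketch of estimating this regression with statsmodels follows; the column names (port_ret, rf, mkt_rf, smb, hml) are illustrative assumptions about how the monthly data might be stored, not the authors' variable names.

    import statsmodels.api as sm

    # Fama-French three-factor regression for a 12,1 portfolio
    y = data['port_ret'] - data['rf']                    # R_{i,t} - R_{rf,t}
    X = sm.add_constant(data[['mkt_rf', 'smb', 'hml']])  # market, size, and value factors
    fit = sm.OLS(y, X, missing='drop').fit()
    alpha_monthly = fit.params['const']                  # the alpha_i defined above
    alpha_annual = (1 + alpha_monthly) ** 12 - 1         # e.g., 0.53% monthly is about 6.6% a year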

Endnotes

  1. Russell indexes Rank #1 as Institutional Benchmarks, http://www.russell.com/news/Press_Releases/PR20060629_US_p.asp
  2. We would like to thank Jason Karceski for providing us with the constructed index data from January 1969 to December 1996 used in Chan, Karceski, and Lakonishok [2000].
  3. We analyzed holding multiple indexes simultaneously, but only single-index portfolios are reported because they exhibited larger momentum and greater significance than multiple-index holdings.
  4. We evaluated all formation periods from -36 to -1 months and holding periods from +1 to +36 months. However, for brevity we only report months at common breakpoints.
  5. We would like to thank Kenneth French for providing HML and SMB factor data on his website, http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html

References

Arshanapalli, Bala G., Lorne N. Switzer, and Karim Panju. 2007. “Equity-Style Timing: A Multi-Style Rotation Model for the Russell Large-Cap and Small-Cap Growth and Value Style Indexes.” Journal of Asset Management, Vol. 8: 9–23

Barberis, Nicholas, and Andrei Shleifer. 2003. “Style Investing.” Journal of Financial Economics, Vol. 68, No. 2 (May): 161–199

Chan, Louis K.C., Jason Karceski, and Josef Lakonishok. 2000. “New Paradigm or Same Old Hype in Equity Investing?” Financial Analysts Journal, Vol. 56, No. 4 (July/August): 23–36

Chen, Hsiu-Lang, and Werner De Bondt. 2004. “Style Momentum Within the S&P-500 Index.” Journal of Empirical Finance, Vol. 11, No. 4 (September): 483–507

Lewellen, Jonathan. 2002. “Momentum and Autocorrelation in Stock Returns.” The Review of Financial Studies, Vol. 15, No. 2, (Special Issue: Conference on Market Frictions and Behavioral Finance): 533–563

Swinkels, Laurens. 2004. “Momentum Investing: A Survey.” Journal of Asset Management, Vol. 5, No. 2: 120–143


Benner’s Prophecies of Future Ups and Downs in Prices[1]

by Samuel Benner

About the Author | Samuel Benner

Samuel Benner was the author of Benner’s Prophecies of Future Ups and Downs in Prices, first published in 1875 and widely viewed as the first market analysis book written in North America.

Panic

Panics in the commercial and financial world have been compared to comets in the astronomical world. It has been said of comets that they have no regularity of movement, no cycles, and that their movements are beyond the domain of astronomical science to find out. However, the writer claims that Commercial Revulsions in this country, which are attended with financial panics, can be predicted with much certainty; and the prediction in this book of a commercial revolution and financial crisis in 1891 is based upon the inevitable cycle which is ever true to the laws of trade, as affected and ruled by the operations of the laws of natural causes.

The panic of 1873 was a commercial revolution; our paper money was not based upon specie, and banks only suspended currency payments for a time in this crisis. As it is not in the nature of things in succeeding cycles to operate in the same manner, the writer claims that the “signs of the times” indicate that the coming predicted disturbance in the business world will be not only an agricultural, manufacturing, mining, trading, and industrial revulsion, but also a financial catastrophe, producing a universal suspension of payments and bank closures.

It is not necessary to give a detailed account of the effects of disorderly banking in our colonial and revolutionary history, and the different panics prior to the war of 1812, to establish cycles in commerce and finance.

Such a history would fill many pages without answering the purpose of this book, and would be as intricate and difficult to understand as the prices of stocks and gold in Wall Street.

The war of 1812 was the period in the history of the United States of America when it was deemed a necessity for this country to become a manufacturing nation, as a balance wheel, to maintain the prosperity of agriculture and commerce, and also to declare her independence forever from any nation upon the earth.

It is a doleful commentary upon the times that such calamities in the history of our country, as hereafter mentioned, should have occurred amidst a profusion of all the elements of wealth, prosperity in trades and manufactures, and independence in the arts and sciences.

It will only be necessary for the purposes of this book to state that the business of this country before, during, and after the war of 1812 had culminated in the year 1819, as commercial history will show; and that a reaction in business followed this year, the beginning year in our cycles of commerce and panic.

However, we deem it important to notice at this period the operations of banking in brief as a good criterion of the prosperity or adversity in general business, and the fluctuations in the activity of industry and commerce.

In the Report of Finances for 1854 and 1855, it is stated that from the adoption of the Federal Constitution in 1787 to the year 1798, no people enjoyed more happiness or prosperity than the people of the United States, nor did any country ever flourish more within that space of time. During all this time, and up to the year 1800, coin constituted the bulk of the circulation; after this year the banks came, and all things became changed; like the Upas tree, they have withered and impaired the healthful condition of the country, and destroyed the credit and confidence which men had in one another.

The bank-note circulation began to exceed the total specie in the country in the years 1815, ’16, and ’17, and in the year 1818, the bank mania had reached its height; more than two hundred new banks were projected in various parts of the Union. The united issues of the United States Bank, and of the local banks, drove specie from the country in large quantities, and in the year 1819, when the culmination in general business had been reached, and contraction of the currency began to be felt, multitudes of banks and individuals were broken. The panic, producing a disastrous revulsion in trade, caused the failure of nine-tenths of all the merchants in this country and others engaged in business, and spread ruin far and wide over the land.

Two-thirds of the real estate passed from the hands of the owners to their creditors.

A banker, in a letter to the Secretary of State, in 1830, describes the times as follows:

“The disasters of 1819 which seriously affected the circumstances, property, and industry of every district of the United States will be long recollected. A sudden and pressing scarcity of money prevailed in the spring of 1822; numerous and very extensive failures took place in 1825; there was great revulsion among the banks and other monied institutions in 1826. The scarcity of money among the trades in 1827 was disastrous and alarming; 1828 was characterized by failures among the manufactures and trades in all branches of business.”

After the year 1828 business continued to be depressed, vibrating according to circumstances until 1834, a year of extreme dullness in all branches of trade; after which our stock of precious metals increased very fast, business revived, and in the years 1835 and ’36 the imports of gold and silver increased to an enormous extent; as the banks increased their reserves of specie, they also correspondingly issued bank notes, and each increased issue of paper money led to the establishment of new banks.

The State banks that had numbered in 1830 only three hundred and twenty-nine, with a capital of one hundred and ten millions, increased, according to the treasury report, by the first of January, 1837, to six hundred and twenty-four, or, including branches, to seven hundred and eighty-eight, with a capital paid in of two hundred and ninety millions.

Mark the result and culmination: a panic! In the month of May, 1837, a suspension of specie payments by all the banks, and a general commercial revulsion throughout the country, involving the fortunes of merchants, manufacturers, and all classes engaged in trade, in consequence of a ruinous fall in prices. This year of reaction makes the second year in our panic cycles, and is eighteen years from 1819.

It is not necessary to go over almost the same history again to show that business was depressed, and trade was stagnant after 1837 down to the year 1843, and then up and down to the year 1850, a year of extreme dullness in all branches of trade and industry, after which year a change came, and business was again prosperous to the year 1857, when we again experienced a commercial and financial crisis and reaction, not only in this country but all over the world, making the third year in our cycles, and twenty years from 1837.

History repeats itself with marvelous accuracy in detail from one panic year to another. The general direction of business after the panic of 1857 was on the same downward grade that had characterized the times after the panics of 1819 and 1837, until all business had culminated in depression in the year 1861, after which trade again improved, and was very active during the war of the rebellion and up to the year 1865, when a temporary reaction set in. Reader, let me observe here that if then had been the time for a commercial revulsion and panic in money, the catastrophe would have been the most deplorable national calamity upon record. However, the cycle was not then complete. And the commerce and trade of the country continued to be semi-prosperous until 1870, after which year commercial activity was the order of the day; all branches of business and manufacture flourished and were prosperous; our railroad building astonished the world in the years 1871 and ’72; but the end must come, and in September 1873 we had the culmination, a crashing panic, and reaction in all trades, manufactures, railroads, and industries, which is still going on, and we have not reached hard pan.

These are facts of late history, and are so fresh in the recollection of the mind of the reader, that it is only necessary to refer to them. The panic of 1873 makes the fourth year in our panic cycles, and sixteen years from 1857.

As to whether it is the paper money, or the manufacturing and trading industries of the country which call out and into use the paper money, that produces these periodical inflations and contractions, by which trade is stimulated and deranged, and extremes in business activity are brought about, is a matter for the statesman and historian to ascertain and record; [ed. note – the first book written was by a technical analyst and not a fundamentalist.] it is only sufficient for our purposes to point out the years, and to show that the preceding years were prosperous and profitable years in trade; while the succeeding years, for a certain length of time, were years of depression and loss in business; and we observe that since the business of the country has abandoned specie (ed. note: gold and silver) as a currency, and adopted paper money in lieu thereof, the manufacturing interests have attained larger proportions, and that there is more regularity and system in the return of the advance and decline in general business, and that the culminating years in activity and depression can be calculated and ascertained with greater certainty.

The panics of 1819, ’37, ’57, and ’73, during this period of years, stand out upon the pages of history of this country in their magnitude compared with other panics. (ed. Text removed.)

Commencing with the commercial revulsion of 1819, we find it was eighteen years to the crisis of 1837, twenty years to the crisis of 1857, and sixteen years to the crisis of 1873, making the order of cycles sixteen, eighteen, and twenty years, and repeat. The cycle of twenty years was completed in 1857, and the cycle of sixteen years ending in 1873 was the commencement of the repetition of the same order. It takes panics fifty-four years in their order to make a revolution, or to return in the same order [ed. note: sixteen plus eighteen plus twenty years make the fifty-four-year revolution]; the present cycle, consisting of eighteen years, will end in 1891, when the next panic will burst upon us with all its train of woes.[2]

Ed. note… Benner’s book continues for another 70 pages.

Endnotes

  1. [Editor Note … Benner’s use of the word depression means bear market trend or economic contraction. The 1819 bank collapse from the cost of war, excess currency in circulation, and money moving out of the country is well defined. Benner’s economic forecast for a recession in 1891, though written in 1884, is only off a year. The true brilliance is Benner’s cyclical analysis work throughout this book. Benner’s book was first published in 1875 and is widely viewed as the first market analysis book written in North America. This excerpt begins on page 96 after a detailed study of high to low price cycles in Steel, Hogs, Corn, and Cotton. Long term cycle analysts of today’s markets will find Benner provides annual market data from 1821 in this book.]
  2. [Ed. note… Giving Benner plus or minus a year, he hit the cycle low and high forecasts. In 1837, 1857, 1873, and 1893 a New York residential boom ended in a panic bust where housing prices collapsed. New York’s housing busts were caused by recessions (or “panics”) in the national economy recorded in the years 1837, 1857, 1873, and 1892-93. The advertisement referred to can be found at http://www.brownstoner.com/brownstoner/archives/2007/08/not_new_yorks_f.php]