JOURNAL OF
TECHNICAL ANALYSIS
Issue 57, Winter/Spring 2002

Editorial Board
Michael Carr, CMT
Matthew Claassen
Julie Dahlquist, Ph.D., CMT
J. Ronald Davis
Golum Investors, Inc.
Cynthia A Kase, CMT, MFTA
Expert Consultant
Cornelius Luca
John R. McGinley, CMT
Michael J. Moody, CMT
Dorsey, Wright & Associates
Jeffrey Morton, CMT, MD
Ken Tower, CMT
Chief Executive Officer
Avner Wolf, Ph.D
Barbara I. Gomperts

CMT Association, Inc.
25 Broadway, Suite 10-036, New York, New York 10004
www.cmtassociation.org
Published by Chartered Market Technician Association, LLC
The Journal of Technical Analysis is published by the Chartered Market Technicians Association, LLC, 25 Broadway, Suite 10-036, New York, NY 10004. Its purpose is to promote the investigation and analysis of the price and volume activities of the world’s financial markets. The Journal of Technical Analysis is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, and several other countries in Europe and Asia. The Journal of Technical Analysis is copyrighted by the CMT Association and registered with the Library of Congress. All rights are reserved.
Letter from the Editor
by Charles D. Kirkpatrick

You may have noticed the new cover and the new name, “Journal of Technical Analysis,” for our revered MTA publication. There have been other changes as well.
First, you have a new editor. Hank Pruden, your previous editor, first managed the MTA Journal in 1993, almost nine years ago. What a wonderful nine years for the Journal. Over those years he, Dave Upshaw, and many other reviewers produced a professional and useful Journal for MTA research, one that has flourished amongst a group of practitioners not normally associated with research itself. This was an admirable, arduous, and unrewarded feat, one dedicated to the MTA and its professional image, and one that humbles me. Thank you, Hank, Dave, and all the others for your hard work over the past nine years.
Second, you may have noticed that we now have two finance professors as manuscript reviewers. We hope to entice even more. As we upgrade the articles in the Journal to satisfy more stringent criteria for content and analysis, it is imperative that we include those from the academic world to help us. We have begun to do so and welcome Professors Avner Wolf and Julie Dahlquist to our circle.
Third, we begin this editorial reign with a collection of all the Charles H. Dow Award papers, including this year’s. This may seem presumptuous at first, because I have twice won the award myself, but in truth, the award has never been given much notice within the MTA or elsewhere. It is an award that recognizes good writing and research, and as such, it should be a cornerstone for this publication that attempts to do likewise. Thus, we have reproduced each winning article in its original form, with updates when necessary. Only one of the winners, Bill Scheinman, is no longer with us; the remaining winners are still very much in the business of technical analysis. We hope these papers will inspire you not only to compete for the Dow Award but also to provide us with additional research studies that we can share with our members and the rest of the investment world.
Charles D. Kirkpatrick II, CMT, Editor
Charles H. Dow Award Winner • May 1993
by Charles D. Kirkpatrick

About the Author | Charles D. Kirkpatrick
Charles Kirkpatrick, who holds the Chartered Market Technician (CMT) designation, is the president of Kirkpatrick & Company, Inc., and has been a featured speaker before such professional organizations as the New York Society of Security Analysts, Financial Analysts Federation, CMT Association, the Foundation for the Study of Cycles, and numerous colleges and universities. He is a former Board Member of the CMT Association, former editor of the Journal of Technical Analysis and former Board Member of the Technical Analysis Educational Foundation, responsible for the development of courses in technical analysis at major business schools.
Throughout his 45 years in the investment field, Charlie has received recognition from both the national media and his peers. He has been featured on Wall $treet Week, CNBC, and in the magazine Technical Analysis of Stocks and Commodities, has been quoted in such publications as The Wall Street Journal, BusinessWeek, Forbes, Futures magazine, Money magazine and The New York Times, and has written articles for Barron’s and the Market Technicians Journal. He is the only person to win the annual Charles H. Dow Award twice, for articles on technical analysis in 1993 and 2001. In 2008, he won the CMT Association’s Annual Award for “outstanding contributions to the field of technical analysis.”
In 1970 Mr. Kirkpatrick co-founded the Market Forecasting division of Lynch, Jones & Ryan and in 1978 started his own market forecasting and brokerage firm, Kirkpatrick & Company, Inc., which published an investment-strategy letter, provided computerized stock-selection methods to institutional portfolio managers, managed a hedge fund, and traded options on the PHLX and CBOE. While currently retired from the investment management, brokerage and trading businesses, he continues to publish his Market Strategist letter, calculate his award-winning stock-selection lists, write books and articles on trading and investing, and as an Adjunct Professor of Finance, teach technical analysis at Brandeis University International Business School.
A graduate of Phillips Exeter Academy, Harvard College (AB), and the Wharton School at the University of Pennsylvania (MBA), Mr. Kirkpatrick lives in Kittery, Maine.
Using the stock market principles outlined by Charles H. Dow, how could we look at the long wave in stock prices?
Dow published The Wall Street Journal beginning in 1889 and, unfortunately, died in 1902. He wrote during a period of generally rising stock prices, from the depression lows in the 1870s to the then all-time high in 1901. During that period Dow formulated his theory of the stock market. It consisted of two important components: the cyclical nature of the markets and, in the longer cycle (the “third wave”), the need for confirmation between economically different sectors, specifically the industrials and the railroads.
Following an earlier analogy between the stock market and ocean waves during the tidal cycle, Dow hypothesized in his famous Wall Street Journal editorial of January 4, 1902:
“Nothing is more certain than that the market has three well-defined movements which fit into each other. The first is the variation due to local causes and the balance of buying and selling at that particular time. The secondary movement covers a period ranging from 10 days to 60 days, averaging probably between 30 and 40 days. The third movement is the great swing covering from four to six years.”
Some technicians, especially cycle analysts, would quibble with the simplicity of Dow’s breakdown since there is evidence of other waves with periodicity between 40 days and four years. However, cycle analysts would also have to acknowledge that Dow’s breakdown is certainly accurate, though perhaps not inclusive, and that the periods he mentions are, remarkably, still the dominant cyclical movements today.
But Dow stopped short at the four- to six-year cycle, essentially the business cycle. He assumed that stock prices had an underlying uptrend about which these cycles oscillated. This was consistent with his experience at the time. Stock prices (see Chart A, Dow Jones Industrial, 1885-1902) had wild gyrations during the late 19th century, but the underlying trend was generally upward. He undoubtedly would have added a fourth wave, or “long wave,” had he lived to see the 1929-32 crash.
Aside from recognizing that the stock market had a pattern, which is the basis for technical analysis, Dow also recognized, in his theory of confirmation between the Industrial Average and the Railroad Average, that there must be an economic rationale for any signals given by the stock market price action. Most pure technicians conveniently overlook this because it diverges from a strict price analysis. Unfortunately, investment analysts have evolved into three camps since Dow – technicians, fundamentalists and academics – and, as seems to be the way of human nature, each camp generally disregards the others’ work to reinforce its own identity. However, Dow was above all that (or, at least, before it) and considered the economic rationale for a cyclical turn in the stock market just as important as the technical.
In the post-1929 era, we now know that the underlying long-term uptrend in stock prices can be severely interrupted. From looking at stock prices going back several hundred years we also note that the 1929-1932 decline was not an anomaly. Declines of that magnitude occur with frightening regularity, roughly every 40 to 60 years (see Chart B, Dow Jones Industrial, Reconstructed, 1700-1940). We call this cycle the “long wave” and ponder how Dow would have analyzed it.
As an aside, there are still many analysts, especially academics, who believe that the long wave is imaginary. Their thesis is based on the assumption that markets don’t have a “memory.” They argue that today’s prices are totally independent of yesterday’s, of last week’s, of last year’s and certainly of prices 50 years ago. Furthermore, since Fourier transforms and other sophisticated mathematical techniques have been unable to identify with certainty such cyclicality, it probably doesn’t exist. On the other hand, new experiments, especially those with non-linear mathematics, are beginning to knock down the “no memory” thesis. Edgar Peters, in his book Chaos and Order in the Capital Markets, suggests that the stock market has at least a four-year memory. Professors MacKinlay and Lo from Wharton and MIT have demonstrated that stock price action is inconsistent with a “no memory” thesis and are now using non-linear mathematics to study prices. Professor Zhuanxin Ding from the University of California has shown that stock prices act as if they had long memories. Even simple moving averages, as studied by two professors at the University of Wisconsin, Dr. William Brock and Blake LeBaron, can generate profitable trading signals from prices alone, an inconsistency with the “no memory” thesis. The Economist wrote in a special section on the Frontiers of Finance on October 9, 1993:
“This was a shock for economists. Might chartists, that disreputable band of mystics, hoodwinking innocent fund managers with their entrail-gazing techniques and their obfuscatory waffle about double-tops and channel break-outs, be right more often than by chance? How could it be?”
Whether we believe in price memory or not, charts of stock prices since the South Sea Bubble in 1720 show that there are obviously times when the stock market experiences enormous, speculative rises and subsequent, disastrous declines. These major events occur at periods considerably longer than Dow’s four- to six-year movements. Furthermore, when we look at other economic data, such as commodity prices, GNP (even U. S. Post Office revenue), etc., we see the same long-term periodicity.
How would Charles Dow have looked at this long wave price action for signals? Probably he would have begun by looking only at the highs and lows of each four- to six-year cycle. Intermediate-term motion would be largely irrelevant to the long wave. Simplistically, he would likely have stated that the long wave was up when the tops and bottoms of the four-year cycles were making new highs, and conversely, when the tops and bottoms were making new lows, the long wave was declining.
In the last 60 years, this approach would have missed the 1929 crash, but the ultra long-term investor would have sold his stocks in 1930 when the 1929 low, a four-year cycle low, was broken. It would also have told the investor in 1950 that the long wave was turning upward, that it was time to invest in the stock market. Unfortunately, there would have been several false signals. For example, in the 1970s, two four-year cycle lows broke below previous four-year lows, wrongly suggesting that the long wave was headed down again. Also, in the 1930s, after the initial bottom in 1932, several four-year cycle lows were broken between 1937 and 1949, suggesting that the long-term cycle was turning down again after having bottomed in 1932.
False signals also occurred in Dow’s original work on the four-year cycles and are the reason for his turn to confirmation between the Industrial and the Railroad averages. He based his confirming signals on the economic assumption that expansion in industrial profits could be a temporary anomaly but not if the produced goods were being shipped, by railroads, to customers. A confirmation between the two averages in either direction suggested that the new trend was real.
Unfortunately, over the long wave, the theory of industrials versus railroads breaks down. First, over time, railroads are not always the principal form of transportation for goods (How do you ship the service industry? And how about canals in the 1830s?), and second, the apparent cause of the long wave has more to do with capital formation, debt and money than with industrial production.
Money has a price too – the interest rate. Interestingly, interest rates over the past several hundred years have also had a long wave that has corresponded in period, if not in turning points, with the stock market (see Chart C, U.S. Long-Term Interest Rates, Reconstructed, 1700-1940). For this reason, we assume Dow would have looked to the interest rate market for confirmation of a trend change in the long-term stock market.
Looking at interest rate trends, however, is not as simple as looking for a confirmation in trend between industrials and rails. Long wave interest rate cycles do not overlap precisely with long wave stock price cycles (see Chart D, U.S. Long-Term Interest Rates & Dow Jones Industrial Average, Reconstructed, 1700-1940). They will not “confirm” a move to new highs or lows as the rails will the industrials. It is important that one understand more about the history of the long wave direction in interest rates before a signal can be confirmed for the stock market.
The confusing aspect between long wave interest rates and the stock market is that sometimes both can be moving in the same direction and sometimes each can be moving in opposite directions. This is because stock prices have a corporate profit or growth component, as well as an interest rate or alternative investment component. In the former, stock prices rise as a result of economic growth, industrial expansion and profitability along with interest rates; in the latter, stock prices rise as an alternative investment to falling yields on fixed income securities. The latter, as we shall see, is more dangerous.
When we look at the evidence over the past several hundred years we see alternating periods of rising and falling interest rates. These are called “secular” moves and have to do with the expansion and contraction of capital and debt.
Notice in Chart D that the peak in interest rates always precedes the long wave peak in stock prices by many years. When interest rates and the stock market are both rising together, the industrial growth component is dominant. The period after interest rates peak is when stock prices rise as an alternative investment. During that period declining interest rates force yield-conscious investors into alternative investments of lesser quality in order to maintain yield. Since stocks are the riskiest and lowest-quality investments, they become the final alternative, especially when their prices continue to appreciate as a result of increasing cash flow into the stock market. The recent conversion of government-guaranteed CD deposits into stock mutual funds is typical of this period. Unfortunately, it eventually leads to the declining long wave in stock prices.
Each declining stock market wave has occurred only during a secular decline in interest rates. Over the past several hundred years, there has been no long wave decline in stock prices while interest rates were rising. Declining interest rates can at first fuel financial speculation and an enormous rise as yield is chased down the quality scale, but eventually declining interest rates are unhealthy for the long wave in stock prices. With this in mind, Dow would likely have developed the following confirmation rules for the long wave in stock prices (a brief sketch in code follows the list):
- When four-year stock price cycles reach new highs and business-cycle interest rates are rising, the long wave is rising.
- When four-year stock price cycles break below previous lows and business-cycle interest rates are rising, the long wave is rising.
- When four-year stock price cycles break above previous highs and business-cycle interest rates are declining, the long wave has been given a warning but is still rising.
- When four-year stock price cycles break down below previous lows and business-cycle interest rates are declining, the long wave is declining.
- After a decline, the long wave will not turn up until business cycle interest rates also turn up.
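As a minimal illustration, the five rules above can be expressed mechanically. In the Python sketch below, the function name, the "new_high"/"new_low" labels and the example inputs are assumptions made for illustration; nothing of the kind appears in the original study.

```python
def long_wave_signal(price_breakout, rate_trend, prior_state="rising"):
    """Rough sketch of the five long-wave confirmation rules (illustrative only).

    price_breakout: "new_high" if the four-year stock cycle breaks above its
                    prior high, "new_low" if it breaks below its prior low,
                    or None if neither.
    rate_trend:     "rising" or "falling" business-cycle interest rates.
    prior_state:    the long wave's state before this observation.
    """
    if rate_trend == "rising":
        # Rules 1 and 2: with rates rising, new highs or even new lows in the
        # four-year stock cycles leave the long wave rising.  Rule 5 is also
        # handled here: a prior decline turns up only once rates turn up.
        return "rising"
    # Business-cycle interest rates are declining from here on.
    if prior_state == "declining":
        # Rule 5: after a decline, upward breakouts against falling rates are
        # taken skeptically; the long wave stays down until rates turn up.
        return "declining"
    if price_breakout == "new_high":
        return "rising (warning)"   # Rule 3
    if price_breakout == "new_low":
        return "declining"          # Rule 4
    return prior_state              # no breakout either way: carry the state forward


# Example keyed to the walk-through that follows: August 1930, with rates
# falling since 1920 and the 1929 four-year-cycle low broken.
print(long_wave_signal("new_low", "falling", prior_state="rising"))  # declining
```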
Using this set of rules, let’s walk through the past 75 years using the accompanying Chart E of long-term U.S. interest rates and the Dow Jones Industrial Average since 1900.
From Dow’s death in 1902 both interest rates and the stock market rose.
According to rule #1, the long wave was rising. Interest rates peaked in 1920 and declined through 1946. Declining interest rates are a warning to be confirmed later by a breakdown in the stock market. Thus, under rule #4, when the stock market broke to new lows in August 1930 (DJIA monthly mean = 231), it confirmed the long wave downturn.
During the 1930s and 1940s, while the initial bottom in 1932 turned out to be the actual bottom, the gyrations were large and the stock market trend generally flat. Interest rates declined until the end of World War II. Any upward breakout had to be taken skeptically (rule #5).
Finally, in March 1950, interest rates broke above their earlier business-cycle high (rules #5 and #1). Since rising interest rates are always accompanied by a rising stock market long wave, this was the buy signal. The DJIA was 249 at the time.
In the 1970s, the stock market broke below its prior four-year cycle lows in 1970 and in 1974. However, interest rates were still rising and thus the long wave was still rising (rule #2).
Interest rates finally peaked in September 1981. This was a warning (rule #3), similar to the interest rate peak in 1920, that the long wave was ending. Currently, the stock market has yet to break below a previous four-year cycle low and thereby confirm a new decline in the long wave.
The last four-year low was 2340 in the DJIA in 1990*. Should it be broken before a higher low is established, we will have confirmation of the downturn in the long wave.
Would Charles Dow have looked at the long wave in this manner? We don’t know. But his principle of first observing price action simplistically and then confirming it with other markets, using some economic justification, gives us an excellent background for analysis of the long wave and teaches us to remain broad-minded and rational. His legacy is more than just a stock market theory. It is a way of thinking that transcends the narrow confines and pettiness of much investment analysis.
March 30, 1994
*Note: 8064 in the DJIA in 2001; the NASDAQ has already begun its long wave decline.
CDK, 2002
Charles H. Dow Award Winner • May 1995
by William X. Scheinman
About the Author | William X. Scheinman
William X. Scheinman was a registered investment advisor from 1968. He moved from Wall Street to Reno, Nevada, in December 1974, where he advised financial institutions worldwide. One of the founding members of the Market Technicians Association, Bill was the founder of the African-American Students Foundation, Inc., which between 1956 and 1961 brought more than 1,000 students from all over Africa to the United States to attend institutions of higher learning. He was also the founder of the African-American Leadership Foundation, Inc. Bill died on May 24, 1999.
“The nature of risk is highly sensitive to whether we act before or after we have all the information in hand. This is just another way of saying that risk and time are only opposite sides of the same coin, because the availability of information increases with the passage of time. Thus, risk, time and information interact upon one another in complex and subtle ways.”
From keynote address by Peter L. Bernstein upon receiving the Inaugural Distinguished Scholar Award from the Southwestern Economic Association, Dallas, March 4, 1994.
The reader should keep in mind that any discussion of the financial markets is of necessity a discussion of constantly changing statistics and other data. This article was originally written in May 1994 and was submitted to the MTA Journal at that time. Therefore, while the data used herein were current as of May 20, 1994, such data applied to any specific situation described may no longer be applicable. The same caveat applies to the Sequel, which was written and submitted at the market close on November 11, 1994, and briefly discusses how each of the theories or methods described herein worked or failed to work during the period subsequent to May 20, 1994.
SYNOPSIS
This article outlines the core theories of Charles H. Dow and Edson Gould. Three of Gould’s methods used to forecast stock prices, which are based on quantifying investor psychology, are described and then illustrated using current data. Several forecasts are then made based on how Gould’s three methods and those of the author combine, in the author’s opinion, to operate in current financial markets. Future levels of interest rates, stock prices, an industry group, the technology sector, as well as two individual stocks, are estimated. A sequel, written six months after the original article was submitted, discusses how the forecasts turned out.
DOW’S THEORIES
The granddaddy of all stock market technical studies is the Dow Theory, which was originated by Charles H. Dow around the turn of the century. According to Dow, major bull or bear trends are indicated when the Dow Jones Industrial and Transportation averages, one after the other, set new highs or lows. A divergence between the indices often indicates a potential turning point in the underlying trend of the stock market. Dow set the stage for the later theories, still used and elaborated on by market analysts today, of what may broadly be defined as divergence analysis. That is the study of divergences among and between a broader universe of indices and indicators than were available to Dow. Dow’s theory was used in the context of his basic commandment: “To know values is to know the meaning of the market.”[1] But Dow also said that wise investors, knowing values above all else, buy them when there is no competition from the crowd. Indeed, they buy them from the crowd during periods of mass pessimism, and sell them to the crowd in return for cash during late stages of advancing markets. The stock market as a whole, said Dow, “represents a serious, well-considered effort on the part of farsighted and well-informed men to adjust prices to such values as exist or which are expected to exist in the not too remote future.”[2]
GOULD’S THEORIES
Edson Gould, who first studied the Dow Theory, was a practicing market analyst for more than fifty years between the early 1920s and late 1970s. His main focus was on forecasting the stock market. Though a student of physics and the harmonics of music, as well as business cycles and Greek civilization, each of which he believed helped explain certain aspects of how the stock market behaved, he came to believe, after reading Gustave LeBon’s classic, The Crowd,[3] that “the action of the stock market is nothing more nor less than a manifestation of mass crowd psychology in action.”[4]
The methods and techniques Gould utilized in his service, Findings & Forecasts, attempted to “…integrate the many economic, monetary and psychological factors that set the level and cause the changes in stock prices.”[5] He regarded the economic factors as important but typically late so far as the stock market is concerned. He regarded the monetary factors as crucial for the stock market and typically early. Whereas, he believed that, “Of all three sets of factors, the psychological factors are by far the most important – in fact, the dominant factors affecting the cyclical swings of stock prices.”[6]
THESIS
It follows from the above that one of the most important aspects of all in successfully analyzing the stock market is measuring investor sentiment. The consensus view, the most difficult factor of all to gauge accurately, can be glimpsed at times – and only in part – through not only such transaction-based data as put/call ratios, premiums and open interest, but also poll-based data such as the weekly Investors Intelligence reports of what percentage of investment advisors are bullish or bearish. Whereas the author regularly screens such data for extremes, the theories and methods which are derived from Gould and are discussed below are, in and of themselves, measures of the behavior of the investment crowd and, in his opinion, more practically useful in making and implementing investment decisions. And inasmuch as they are also applied to the monetary factors, a bond market opinion is derived therefrom, as well.
The index and stock charts used to illustrate this article are of:
- Treasury Bonds Nearest Futures, Monthly
- Treasury Bonds Nearest Futures, Weekly
- New York Stock Exchange Financials Index, Weekly
- Standard & Poor’s 40 Utility Stock Composite, Weekly
- Standard & Poors 400 Industrial Stocks Composite, Monthly
- Drug Shares Index, Weekly Close (Sum of BMY, LLY, MKC, MRK, PFE, UPJ x 4.50541)
- TXB-Hambrecht & Quist Technology Stock Index Less CBOE Biotechnology Stock Index, Weekly Average
- Merck (MRK), Weekly
- U.S. Robotics (USRX), Daily
GOULD’S METHODS AND TECHNIQUES
Edson Gould is, perhaps, best known for his monetary rule and valuation barometer. His Three-Step-and-Stumble Rule states that, “Whenever any one of the three rates set by monetary authorities – the rediscount rate, the rate for bank reserve requirements, and margin requirements on stocks – increases three times in succession … invariably … the stock market has subsequently … suffered a sizable setback.”[7] Whereas his Senti-Meter is “the ratio of the Dow Jones Industrial Average to the average rate of annual cash dividends paid on that average.”[8] When the Senti-Meter reads $30 per $1 of dividends or more, it indicates a high and risky market. A reading of $15 or less indicates a relatively low and cheap market.
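The Senti-Meter is simple arithmetic, and a short calculation makes Gould's thresholds concrete. The figures in the sketch below are hypothetical, chosen only to illustrate the $30 and $15 readings; the function name is an assumption for illustration.

```python
def senti_meter(djia_level, annual_cash_dividends_on_djia):
    """Gould's Senti-Meter: dollars of DJIA price paid per $1 of annual dividends."""
    return djia_level / annual_cash_dividends_on_djia


# Hypothetical figures for illustration only.
ratio = senti_meter(3700.0, 110.0)   # about 33.6
if ratio >= 30:
    reading = "high and risky market"
elif ratio <= 15:
    reading = "relatively low and cheap market"
else:
    reading = "middle ground"
print(round(ratio, 1), "-", reading)
```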
Lesser known and, perhaps, too arcane for many are three of Gould’s methods and techniques that the author has found more practically useful in helping decide when, and at what levels, a given stock or price index is “too” high or “too” low and what constitutes a sentiment extreme. With this background in mind, let’s examine Gould’s theories and applications of Resistance Line Measurement, Unit Measurement and the Rule of Three, as well as the author’s theory of the Cut-in-Half principle and its opposites.
RESISTANCE LINE MEASUREMENT
According to Gould, “ … the market continually reveals a quantum of mass psychology comprising time and price. It follows that a sharp decline in a short period of time generates as much bearishness as a slow and minor decline over a long period of time.”[9] This theory, then, is based on three principal determinants of crowd psychology in the market place: price change itself, elapsed time to achieve it and the perceived amount of risk.
The resistance line theory attempts to measure these three elements of mass psychology mathematically, weighing both the vertical price change and the horizontal elapse of time. This measure of potential risk or reward must be keyed off whatever pair of prices the investor regards as an important high and low of the particular security’s price history. Four of the charts discussed below illustrate how the resistance lines are applied.
The theory is that a trendline rising at one-third (or two-thirds) the rate of an advance movement is likely to produce resistance to subsequent decline, but, if violated, the decline will accelerate from the point of penetration. Similarly, a trendline declining at one-third (or two-thirds) the rate of a decline movement may provide resistance to a subsequent advance, but, if penetrated, the advance will accelerate from that point. Sometimes these resistance lines work, sometimes they don’t; they are not foolproof. But the author uses resistance lines because they seem to be more accurate than ordinary trendlines and – most importantly – because they can be drawn before the subsequent price action takes place.
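A speed resistance line can be drawn before the subsequent price action occurs because it depends only on the chosen low-high (or high-low) pair and the rate of the move between them. The sketch below is a simplified illustration that measures elapsed time in bars; the function and example prices are assumptions, not taken from the article's charts.

```python
def speed_resistance_lines(low, high, bars_low_to_high):
    """Rising 1/3 and 2/3 speed resistance lines keyed off an important low-high pair.

    Returns two functions giving each line's value a given number of bars
    after the low; the lines rise at 1/3 and 2/3 of the advance's rate.
    """
    rate = (high - low) / float(bars_low_to_high)   # price change per bar during the advance

    def make_line(fraction):
        return lambda bars_after_low: low + fraction * rate * bars_after_low

    return make_line(1.0 / 3.0), make_line(2.0 / 3.0)


# Illustration with made-up numbers: an advance from 100 to 160 over 60 bars.
one_third, two_thirds = speed_resistance_lines(100.0, 160.0, 60)
print(one_third(60), two_thirds(60))   # 120.0 and 140.0 at the time of the high
```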
LONG TREASURY BONDS
Let’s see how resistance line theory may be helpful this year in gauging when Treasury Bonds Nearest Futures, which have been falling mostly since their September 1993 peak, etch a major low. Inasmuch as these Treasuries, Monthly (Chart 1) made a major low in 1981 at 55.156 and more than doubled it at the 1993 high of 122.313, the most important set of resistance lines derives from that low and that high. Referring to the chart, we observe that the May 11, 1994 low of 101.125 slightly broke the rising 2/3 speed line before reversing upward to close May 20 at 105.000. Important here is the fact that this same resistance line approximately defined each of the 1987, 1990 and 1991 Treasury lows. Translated into an opinion on May 20, this means that Treasuries won’t decisively break par this year. Should they do so, it might imply the onset of a renewed inflationary cycle.
Moreover, the second of Gould’s methods, the unit measurement principle, also helps to determine the importance of the early-May intraday low of 101.125.
UNIT MEASUREMENT
This technique is sometimes helpful in estimating terminal phases of advances and declines, of both individual stocks and market indices. In other words, what constitutes a price which is “too” high or “too” low. Its measurements are expressed in terms of bull and bear “units.” A bull unit consists of the number of points of an initial advance by a stock or price index following the bottom of a prior important decline, succeeded by a subsequent reaction which, however, remains above that bottom and then is followed by a second advance that goes beyond the first one. A bear unit is formed in the same manner but in the opposite direction. These measurements sometimes portend the length of an overall advance (or decline) and indicate levels at which a trend may meet resistance, or, at times, an extreme reversal.
Price action with the primary trend frequently “works off” units three times (sometimes four times), in accordance with the Rule of Three, the basis of which is discussed below. In other words, for a move with the trend, expect three units, but be prepared for the fourth. One other important point about unit measurement is that recognition of the 2-unit level, by a sharp reaction from it, often indicates that following such a reaction the security will go all the way and work off three, or four units. Whereas recognition of that level which is defined by 2-1/3 units, without recognition of the 2-unit level (by resistance from it), is grounds for caution, especially for trend followers, since that is often the hallmark of a contratrend move.
Now we are equipped to develop a second opinion about Treasury Bonds Nearest Futures, Weekly, which is illustrated in Chart 2. Referring to the chart, we observe that these Treasuries etched a bear unit of 4-7/8 points by their initial September 7-22, 1993 decline from 122.313 to 117.438. According to unit measurement theory, then, the contratrend, upward reaction from the 2-bear unit level of 112.563, which was reached at the November 23 low of 112.031, implied that Treasuries would go down all the way – to work off either 3 or 4 bear units to 107.688, or 102.813, respectively. The actual intraday May 11 low of 101.125 – less than 2 percent away from the theoretically maximum 4-bear unit count – constitutes close recognition of that level and also leads to the conclusion that that was a low in Treasuries of major importance.
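The bear-unit arithmetic in the preceding paragraph reduces to a few lines of code. The prices below are the Treasury figures quoted above; the function itself is only a sketch of the counting rule, not anything prescribed by Gould or the author.

```python
def bear_unit_counts(peak, first_reaction_low, max_units=4):
    """Downside unit counts: the bear unit is the initial decline from an
    important peak, and successive counts mark levels where the decline
    may pause or end."""
    unit = peak - first_reaction_low
    return {n: round(peak - n * unit, 3) for n in range(1, max_units + 1)}


# Treasury Bonds Nearest Futures, September 1993: peak 122.313, initial
# decline to 117.438 (a bear unit of 4.875 points).
print(bear_unit_counts(122.313, 117.438))
# {1: 117.438, 2: 112.563, 3: 107.688, 4: 102.813}
```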
INTEREST-SENSITIVE STOCK MARKET INDICES
Because of the importance of the monetary factors, ideally the resistance line and unit measurement theories should also be reflected in interest-sensitive stock market indices. Sure enough, so far in 1994 the New York Stock Exchange Financials and Standard & Poors 40 Utilities indices (both on Chart 3) have faithfully reflected resistance line measurement and unit measurement theory, respectively. Referring to the chart, we observe that the Financials’ week of April 8, 1994 low of 200.01 and all subsequent lows, which were higher (itself a positive divergence), reversed upward above the rising 2/3 speed resistance line from the 1990 low. Gould always said that the ability of a price index to stay above its rising 2/3 speed resistance lines during reactions was the hallmark of a powerful advance.
Whereas the S&P 40 Utilities, which etched a bear unit of 10.29 points by the September 17-October 15, 1993 decline from 189.49 to 179.20, worked off a fairly precise 4-bear unit count to 148.33, compared to the actual May 13 low of 146.85. Close enough. Moreover, these Utilities also respected their rising 2/3 speed resistance line from the 1981 low, which approximated this 4-bear unit count.
It logically follows from these two theories that, should the aforesaid risk parameters of these three interest-sensitive indices – Treasuries, NYSE Financials, S&P 40 Utilities – be decisively penetrated on the downside on a closing basis, the bear market in bonds would not only have more to go but stocks might have begun a bear market as well. However, the author does not believe that is the case as of May 20, 1994, as we examine next.
A STOCK MARKET ROAD-MAP
Gould also said that over longer periods of time unit measurement was useful, too: “A ‘grand unit’ is, as the name implies, a big unit sometimes taking months to complete and years to confirm.”[10] We think this theory has been remarkably accurate since the stock market’s 1982 low and that it is relevant now. Referring to Standard & Poors 400 Industrials (Chart 4) we observe that during the 14-month long 1982-1983 advance from 113.08 to 195.25, a grand bull unit of 82.19 points was etched – the 10-month-long 1983-1984 decline to 167.64 remaining above the 1982 low.
Thereafter, the S&P 400 steadily rose until hitting the 1986 peak of 282.87, which was less than 2 percent above the 2-bull unit count at 277.46. The subsequent 12 percent reaction to that year’s September low of 252.07 constituted recognition that the unit measurement principle was operative, and that the S&P 400 would go on to work off at least 3 or 4 bull units.
From the 1986 low, the S&P 400 gathered steam and began to accelerate in 1987, reaching the 3-bull unit count of 359.65 in June. That level was potentially an important peak level in accordance with the theory – expect three units. During the next two months the S&P 400 overshot the 3-bull unit count but peaked 9.7 percent higher (intraday) in August, followed by the crash.
From the 1987 crash low, stocks steadily rose until hitting the July 1990 peak of 438.56 which was less than one percent below the 4-bull unit count to 441.84. That was a perfect fourth and final move, according to Gould’s unit measurement theory. During the next three months stocks fell by 21 percent.
Of current relevance, in the author’s opinion and experience, is that sometimes unit counts will work off a double set of units, i.e., 6 or 8 units. This appears to be the current case for the S&P 400 Industrials, which, rising from the October 1990 low of 345.79, recognized the five-bull unit count to 524.03 repeatedly last year by resisting further advance. However, by late-1993 that level was decisively exceeded. This means to us that the theory is saying the stock market should continue to rise until reaching at least the 6-bull unit count to 606.22, before the bull market which began from the 1982 low is over.
In 1994 the S&P 400 Industrials advanced further to reach the 560.88 level in February, before reacting to the April 20th low of 507.36, a drop of 9.5 percent. Referring again to Chart 4, we further observe that during the February-April reaction the rising 2/3 speed resistance line from the 1990 low, which during April was at the 500 level, was effective in defining that month’s low. This means we believe current risk from the May 20th close of 530.88 approximates four percent, say 510, whereas potential reward – to 606.22 – would be a gain of 14 percent. Those seem like good odds.
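The risk/reward arithmetic behind “those seem like good odds” is straightforward, and the brief sketch below, using the levels quoted in the paragraph above, makes it explicit; the function is an illustration only.

```python
def risk_reward(current, support, target):
    """Percent risk to support, percent reward to target, and the reward/risk ratio."""
    risk = (current - support) / current * 100.0
    reward = (target - current) / current * 100.0
    return risk, reward, reward / risk


# Levels quoted above: the May 20, 1994 close of 530.88, the rising 2/3 speed
# line near 510, and the 6-bull-unit count at 606.22.
risk, reward, ratio = risk_reward(530.88, 510.0, 606.22)
print(round(risk, 1), round(reward, 1), round(ratio, 1))   # roughly 3.9, 14.2, 3.6
```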
THE RULE OF THREE
Now, we examine the third of Edson Gould’s theories, the Rule of Three. For reasons about which people have speculated for thousands of years, the numbers “three” and “four” have a meaning of finality about them. For example, Aristotle said, the “Triad is the number of the whole, inasmuch as it contains a beginning, a middle and an end.” This concept may be deeply rooted in the natural family unit of father, mother and child, which is given religious expression in the concept of the Holy Trinity. The financial markets, which, after all, reflect human emotions, also frequently act in the same way. Sometimes there is a fourth movement, which usually is characterized as a “now or never” action, climactic in nature. (Three strikes and you’re out; four balls take a walk.) That financial markets and individual stocks typically – but not always – move in a series of three or four steps is apparent in both very short-term moves and those encompassing months and even years.
DRUG SHARES INDEX
However, to simply illustrate the Rule of Three we next examine the Drug Shares Index (Chart 5), a composite of Bristol-Myers, Lilly, Marion Merrell Dow, Merck, Pfizer and Upjohn. Between January 8, 1992 and August 13, 1993 the Drugs dropped 42.8 percent in a classic bear market, which consisted of four steps down. Also helping define the fourth step as the final one was the fact that the August 13, 1993 low of 1011.46 closely approximated the 3-bear unit count from the 1992 peak, at 1012.93.
After rallying 20 percent from the 1993 low, to 1217.04 on January 14, 1994, the Drugs came down again to etch a successful test of last year’s low, at the April 15, 1994 low of 1014.84. In other words, we are confident that a classic double bottom has been put in place for this group. Additionally, as illustrated later, another yardstick of extreme investor behavior targeted both Lilly and Merck as having etched final lows last year and this year.
TECHNOLOGY AND GROWTH STOCKS
No discussion of the stock market would be complete without addressing the role of the technology and growth sectors. They are important not only because they often represent the fastest-growing companies, but also, as I stated in my book which was first published in 1970, “…glamour/growth stocks which, because they are highly volatile – ordinarily two-and-a-half times more so than those in the DJIA – are favorite vehicles of sophisticated investors.”[11] This volatility provides greater time opportunity than is available in the behavior of most other stocks.
Edson Gould “put together the first ‘glamour average’ back in 1960,”[12] though, surprisingly, the pamphlet, A Vital Anatomy, from which we’ve also earlier quoted various Gould statements about his theories and methods, says nothing whatsoever about “glamour” stocks. Having originally gotten this idea from Gould in the late 1960s, I created my own “Glamour Price Index,” which consisted of the stocks of eleven highly-regarded, well-known, technology-oriented companies.[13] However, in the most recent edition of my book, I noted that in recent years I’ve scrapped my original “Glamour” and several other technology- or growth-based indices in favor of the more representative Hambrecht & Quist Technology Stock Index[14] and its sub-index of even more rapidly growing, smaller companies, the H&Q Growth Stocks Index. But inasmuch as the H&Q indices include stocks in the biotechnology sector, which I believe march to a different tune than other growth and technology types, I also have created two other indices which consist of the numerical values of each of the respective H&Q indices less the CBOE Biotechnology Stock Index. Hence, in the technology and growth sectors, we examine these five different indices:
- H&Q Technology Stock Index, which is comprised of the publicly traded stocks of 200 technology companies, broadly defined in five basic groups: Computer Hardware, Computer Software, Communications, Semiconductors, Health Care (within which is a Biotechnology sub-index). The index was originally conceived in the 1970s as a price-weighted index. In 1985 it was reconstructed and market-capitalization weighted. Changes in the index occur as mergers, acquisitions and failures dictate – not infrequently.
- H&Q Growth Stock Index is a subset of the Technology Index and is comprised of all companies in the Technology Index which have annual revenues of less than $300 million. Companies are removed every January if they have passed $300 million in revenues.
- CBOE Biotechnology Stock Index
- TXB Index, which is the H&Q Technologies Excluding Biotechs
- GXB Index, which is H&Q Growth Stocks Excluding Biotechs
THE TXB INDEX
Of these five indices, the author thinks the TXB Index is both the most representative of the overall technology sector and the most orthodox in reflecting investor psychology. We examine it next. Referring to Chart 6, we observe that between their respective 1990 lows and 1994 highs (through May 20, 1994), whereas the DJIA gained almost 71 percent and the Dow Transports rose more than 131 percent, the TXB Index more than tripled. So much for volatility!
We also observe that at its March 18, 1994 peak, the TXB Index completed a third step up from its 1991 low. In accordance with the Rule of Three, this allows either for the possibility that that was a final step or for the emergence of a fourth and final higher high after the current reaction is over. We favor the latter possibility and believe Gould’s two other theories provide well-defined potential risk and reward parameters for the outcome we envisage.
As to risk in the TXB Index, which at May 20, 1994 was down 14-1/2 percent from its March 18 peak, it must not break below the rising 2/3 speed resistance line from the 1991 low in order to maintain its bullish uptrend, in accordance with Resistance Line Theory. Inasmuch as TXB closed at 303.15 on May 20, and the aforesaid 2/3 speed line was nearing the 287 level, that means we think that risk as of this date approximates 5 percent.
Whereas the potential reward of a possible fourth and final rise in the TXB Index, we think, may be estimated through the unit measurement method. Referring again to Chart 6, we further observe that the TXB Index etched a bull unit of 99.96 points by its initial advance from the September 20, 1991 low of 112.29 to the April 24, 1992 high of 212.25. The 2-bull unit level of 312.21 was briefly recognized by its one-week reaction from near that level in early 1994, before advancing to the higher March 18 all-time high. Assuming, then, that the aforesaid resistance line risk parameter holds on the current reaction, we believe that potential reward from the May 20 level is about 35 percent, to the 3-bull unit count at 412.17.
These sound like favorable odds of 7-to-1 between possible risk and reward, in the author’s opinion. I note, too, that an overhead trendline, projected through the 1992, 1993 and March-1994 peaks and paralleling the rising 2/3 speed resistance line, approximates the 400 level by year-end 1994 as well. In other words, the author believes that the TXB Index will rise by about one-third before this sector is vulnerable to a bear market.
BEAR MARKET
Once the reward area is approximated, that is where I think a bear market in technology and growth, as well as one for the stock market overall, may begin. That there will likely be a bear market between now and 1995 is suggested by the fact that every single “5” year in this century has been an up year, which means that there “should” be an intervening bear market before the 1995 bull market begins. However, an alternative scenario is simply that it will take between now and year-end 1994 for the reward area to be reached.
If that proves to be the case and the stock market rises to record levels and nears the potential reward areas we have outlined herein, by year-end 1994 (possibly narrowly extending into early 1995), that would be the fourth consecutive up year – a possible “final” up year, according to the Rule of Three. In that event, 1995, especially if perceived by “too” many as always an up year since it is a “5” year, would then set the stage for it to become the first down “5” year during the past century, in the author’s opinion, i.e., 1995 happens in second-half 1994.
THE CUT-IN-HALF RULE AND ITS OPPOSITE
The fourth gauge for measuring investor extremes is conceptually the simplest of all – the Cut-In-Half Rule and its opposite. Briefly stated, when an important stock or price index loses 50 percent of its value, a rally or even major reversal often originates from near that level. Keep in mind that the Cut-In-Half Rule and a 50 Percent Retracement are quite different. For example, two stocks each base at the level of 50 and both rise to 100. If one declines to 75, before advancing once again, it has retraced 50 percent of its advance from 50 to 100. However, if the other one drops back to 50 from 100, it has been “cut-in-half.” A textbook example of the Cut-in-Half Rule is shown in Chart 7 of Merck, which we discuss below.
Why the Cut-In-Half Rule and its related spinoffs often work is probably because the investor crowd quantifies 50 percent off the top as “too” cheap. Whereas the opposite is that after an important stock or index doubles, it often runs into trouble. At that point, investors tend to take at least some profits. But since some indices, individual stocks, commodities and interest-bearing securities are more volatile than others, this same yardstick is sometimes extended on the way up to a triple, quadruple, quintuple, or even a sextuple (with low-priced stocks sometimes squaring their lows). Whereas, on the way down, there is sometimes a double cut-in-half (off 75 percent) or – more rarely – a triple cut-in-half (87-1/2 percent off the top). Keep in mind, however, that in applying these Cut-In-Half yardsticks as potential long entry points, one should be satisfied that the company’s balance sheet is not in serious question.
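These yardsticks reduce to simple multiplication, as the sketch below shows. The function and its labels are assumptions for illustration; the Merck figure in the example is the one discussed in the stock selections that follow.

```python
def crowd_levels(base_low=None, peak=None):
    """Cut-in-half levels measured down from a peak and doubling/tripling
    levels measured up from a base low (illustrative arithmetic only)."""
    levels = {}
    if peak is not None:
        levels["cut_in_half"] = peak * 0.5            # 50 percent off the top
        levels["double_cut_in_half"] = peak * 0.25    # 75 percent off the top
        levels["triple_cut_in_half"] = peak * 0.125   # 87-1/2 percent off the top
    if base_low is not None:
        levels["double"] = base_low * 2
        levels["triple"] = base_low * 3
        levels["quadruple"] = base_low * 4
    return levels


# Merck's January 1992 peak of 56-9/16 implies a cut-in-half level of 28-9/32.
print(crowd_levels(peak=56 + 9 / 16)["cut_in_half"])   # 28.28125
```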
STOCK SELECTIONS
Though it can be repeatedly demonstrated that these four theories of investor behavior are constantly operating in all financial markets, in the author’s opinion, it does not necessarily follow that one can readily use them in every instance. Sometimes the units are not readily discernible and the resistance lines don’t work. Moreover, sometimes there is a fifth step in an overall advance or decline movement, which appears to contradict the Rule of Three – though a case might be made that such a fifth step represents an undercut (or overcut) test of the fourth step.
However, after using these theories over time to make day-to-day investment decisions, I have found that they are valuable when discernible and add confidence to a decision. That is particularly the case when more than one theory appears to be operative in a given situation.
For example, referring to Chart 7 of Merck (MRK @ 30-1/4), we can observe that when it closely approximated its theoretical cut-in-half level of 28-9/32 during August 1993, at the actual low of 28-5/8, on a fourth (and presumably final) step down from the January 3, 1992 peak of 56-9/16, it appears to have constituted a classic buying juncture. Thereafter rallying to 38 by January 5, 1994, Merck subsequently tested last year’s low at this year’s April 15 and 18 lows, both at 28-1/8. This makes me confident that the cut-in-half level was, or approximated, a final low, especially since there is no great mystery about Merck’s fundamentals and fifty percent off the top seems a reasonable – if not “too” great – discount for those investors critical of the Clintons’ health care plans.
More volatile technology and growth stocks sometimes reflect these theoretical principles of how crowd behavior plays out in the financial markets in an extraordinary way. For example, referring to Chart 8 of U.S. Robotics (USRX @ 30-3/4), a world-wide leader in data communications, we can observe that after doubling the late-1991 low of 12-1/4 by the early-1992 high of 24-1/4, Robotics dropped sharply (off 45 percent). Whereas the early-1993 high of 25-1/2 almost doubled the summer-1992 low of 13-3/8. Then the March 1993 low of 17 was slightly more than doubled at the October high of 35-1/4, whereas the subsequent decline to 23 worked off an almost-perfect 4-bear unit count to 23-1/4.
Moreover, this year’s high of 46 (on March 8) was a perfect double, from which a reaction has begun, with a bear unit of 5-1/4 points already etched and confirmed (by a lower low), and more recently appearing to recognize the 3-bear unit level of 30-1/4, by the May 10-11 lows of 29-1/4, as the new low from which to key off. That the stock of a single company could have gone through so many extreme bull and bear moves, in such a short period of time, shows not only that Alvin Toffler’s “Future Shock”[15] has arrived on Wall Street but also that traditional Wall Street research is incapable of dealing with it effectively. The arrival of “future shock,” what some now call the information age, also presents a challenge to stock market technicians – to do their homework in order to stay ahead of the curve.
May 22, 1994
SEQUEL
At Market Close November 11, 1994: What Happened During the Subsequent Six Months
Treasury Bonds Nearest Futures (Charts 1 and 2) perfectly tested their May 11 low at their virtually identical July 11 low of 100.0625, compared to the May 11 low of 100.2500 (the numerical value of the futures is about one point lower than shown on the chart because the Nearest Futures had rolled over from June’s to September’s, and currently December’s). Thereafter Treasuries rallied back to the August 5 high of 105.21875, then slowly eroded until par was broken at the September 22 close of 99.40625.
At that point we conceded that the 13-year-long uptrend in Treasuries was clearly broken and that the major trend of bonds should be assumed to be down. By November 11, Treasuries had slumped even more, closing at 96.0625. Moreover, since this break of the grand resistance line of Treasuries also took out the 4-bear unit count level, it implies to us that ultimately at least six bear units will be worked off. That level is 93.0625. However, we never changed our positive stock market opinion because the two interest-sensitive stock market bellwethers we mostly rely on remained intact, notwithstanding the break in bonds.
The NYSE Financials (Chart 3), which had hit an intraday low of 199.95 on April 4, closing May 20 at 214.27, slightly exceeded the 220 level during four days in June, then also slowly eroded until closing November 11 exactly at 199.56. While this does constitute a break of its resistance line, and hence is clearly negative as of November 11, it seems such an obvious “test” of the April 4 low, that it is conceivable to us the Financials may be able to mount at least a weak rally from here.
We draw this tentative conclusion because the third interest-sensitive index, the Standard & Poors 40 Utility Stock Index (also on Chart 3), which closed November 11 at 148.51 – still above its May 13 low – we don’t think will take out that level. Not only has the 4-bear unit count level of 148.33 been repeatedly and successfully tested during 15 trading days in October and November, but it is also defined by these Utilities’ rising 2/3 speed resistance line from the March 1980 low. That is about as precise recognition of a Gouldian-defined risk parameter as it ever gets! Naturally, this also means that a decisive break of it would undoubtedly require some change in our current stock market opinion.
Our Stock Market Road-Map for the Standard & Poors 400 Industrials (Chart 4 and Chart 9) successfully tested the April 20 low (507.30) at the June 27 low of 511.90, thereafter rising to an all-time high of 564.50 on October 31. Closing November 11 at 550.87, potential reward has now moved down to only 10 percent whereas near-term risk remains about 4 percent (the rising 2/3 speed resistance line moving up to about 527). Not quite as good odds as on May 20.
The Drug Shares Index (Chart 5) was up almost 18 percent at November 11 from May 20 and we think is headed substantially higher. Though the Drugs have gained almost 30 percent from their April low at the November 11 close of 1316.70, we think they will work off at least three bull units, a further gain of 25 percent from here. Three bull units were worked off on the way down, so why not three bull units on the way up?
The TXB Index (Technologies Less Biotechs) (Chart 6 and Chart 9), at its June 23 daily close of 293.97, never broke below its 2/3 speed resistance line risk parameter and subsequently rose 28.9 percent from that low to 378.83 on November 9. Obviously the odds of further gain from here have sharply deteriorated, potential remaining reward only a possible additional 8.8 percent, in our opinion. We have chosen to deal with this change of the odds by building cash as specific technology and growth components reach their individual, respective potential reward zones.
Merck (MRK @ 36-3/4) (Chart 7) hit a recovery high of 37-5/8 on November 10 and we believe is headed into the 44-45 zone. That is defined by both a bull unit count and an overhead declining 1/3 speed resistance line.
Whereas U.S. Robotics (USRX @ 38-3/4) (Chart 8) worked off a fourth bear unit at its June 2nd low of 24, then etched a new bull unit of 5-1/2 points by its subsequent initial rise to 29-1/2. USRX went on to slightly exceed the 3-bull unit count of 40-1/2 at the November 9 high of 42-1/4. The maximum upside potential we see from here is a 4-bull unit count to 46, which would also be a prospective double top with the early-1994 peak.
CONCLUSION
I believe that this real-time experience in using the Gouldian theories amply demonstrates both their usefulness and their drawbacks, though it only scratches the surface of their potential applications. Their key advantage is that Gould’s quantifications of investor sentiment help one both to reach and to act upon specific investment conclusions on a case-by-case basis, without being held hostage to an endless, self-imposed debate about what to do.
REFERENCES
- Why Most Investors Are Mostly Wrong Most of the Time, W. X. Scheinman, 1991, Fraser Publishing Company (p. 139)
- Scheinman, ibid
- The Crowd, G. LeBon, 1896, Fraser Publishing Company (1982)
- A Vital Anatomy, E. Gould, (Undated), Anametrics, Inc.
- Gould, ibid
- Gould, ibid
- Gould, ibid
- Gould, ibid
- Gould, ibid
- Gould, ibid
- Scheinman, ibid
- Gould, ibid
- Scheinman, ibid
- Hambrecht & Quist Technology and Growth Indices, Michael De Witt and Shiela Ennis, Hambrecht & Quist Incorporated, January 1993
- Future Shock, A. Toffler, 1971, Bantam Doubleday
BIBLIOGRAPHY
Numbers
- Jung, C.G., Collected Works of C.G. Jung, General Index, (Volume 20, pp. 485-489, “Numbers”), Princeton University Press, 1979
- Menninger, K., Number Words and Number Symbols; A Cultural History of Numbers, The M.I.T Press, 1970
- Von Franz, M-L, Number and Time, Northwestern University Press, 1974
Technology
- Veblen, T., Imperial Germany and the Industrial Revolution, Transaction Publishers (1990 reprint)
Charles H. Dow Award Winner • May 1996
by Tim Hayes, CMT

About the Author | Tim Hayes, CMT
Timothy Hayes, CMT, is NDR’s Chief Global Investment Strategist. He has been with the firm since 1986. Tim directs NDR’s global asset allocation services, develops strategy and major investment themes, and establishes NDR’s weightings for global asset allocation, presenting his views on the cyclical and secular outlook globally.
Tim’s recommendations, strategies, and timely market commentaries and studies are featured in Global Strategy publications, which focus on global allocation and the most significant global developments.
Tim has made many appearances on CNBC and Bloomberg TV, and his market views have been featured in numerous podcasts and in financial media in the U.S. and internationally.
He is the author of The Research-Driven Investor, published in November 2000, and he has contributed to several other books. His research articles have appeared in the Journal of Technical Analysis, Technical Analysis of Stocks and Commodities, and other publications.
Tim received his Bachelor of Arts degree from Kenyon College, and he is a Chartered Market Technician. In 2008, the Investorside Research Association honored Tim with its Research Excellence Award for “Exiting Equities in ‘07.” In 1996, the Market Technicians Association awarded Tim the Charles H. Dow Award for groundbreaking research, recognizing an outstanding original work that best expounds on the principles of technical analysis.
“This indicator has always produced huge profits! In fact, you would have doubled your money in just six months!”
Such a claim could be a sales pitch. It could also be an analyst's enthusiasm about some work just completed. But in either case, such claims appear to be meeting increasing skepticism, perhaps because enough have proven to be based more on fiction than on quantifiable fact, perhaps because enough investors have been burned by indicators that have failed to pan out when put to real-time use, perhaps because the combination of ever-strengthening computing power and ever-increasing program complexity has made excessive optimization as easy, and as dangerous, as ever.
In any case, the need to quantify accurately and thoroughly is greater than ever. Honest and reliable quantification methods, used in the correct way, are needed for increased research credibility. They are needed to impart objectivity. They are needed for effective analysis and for the sound backing of research findings. The alternative is the purely subjective approach that uses trendlines and chart patterns alone, making no attempt to quantify historical activity. But when the quantification process fails to deliver, instead producing misleading messages, the subjective approach is no worse an alternative – a misguided quantification effort can be worse than none at all. The predicament, then, is how to truly add value through quantification.
THE CONCERNS
The major reason for quantifying results is to assess the reliability and value of a current or potential indicator, and the major reason we have indicators is to help us interpret the historical data. The more effective the interpretation of historical market activity, the more accurate the projection about a market’s future course. An indicator can be a useful source of input for developing a market outlook if quantitative methods back its reliability.
But for several reasons, quantification must be handled with care. The initial concern is the data used to develop an indicator. If it’s inaccurate, incomplete, or subject to revision, it can do more harm than good, issuing misleading messages about the market that’s under analysis. The data should be clean and contain as much history as possible. When it comes to data, more is better – the greater the data history, the more numerous the like occurrences, and the greater the number of market cycles under study.
This leads to the second quantification concern, and that’s sample size. The data may be extensive and clean, and the analysis may yield an indicator that foretold the market’s direction with 100% accuracy. But if, for example, the record was based on just three cases, the results would lack statistical significance and predictive value. In contrast, there would be fewer questions regarding the statistical validity of results based on more than 30 observations.
The third consideration is the benchmark, or the standard for comparison. The test of an indicator is not whether it would have produced a profit, but whether the profit would have been any better than a random approach, or no approach at all. Without a benchmark, “random walk” suspicions may haunt the results.[1]
The fourth general concern is the indicator’s robustness, or fitness – the consistency of the results of indicators with similar formulas. If, for example, the analysis would lead to an indicator that used a 30-week moving average to produce signals with an excellent hypothetical track record, how different would the results be using moving averages of 28, 29, 31, or 32 weeks? If the answer was “dramatically worse,” then the indicator’s robustness would be thrown into question, raising the possibility that the historical result was an exception to the rule rather than a good example of the rule. An indicator can be considered “fit” if various alterations of the formula would produce similar results.
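As a rough illustration of this robustness check, the sketch below sweeps a band of moving-average lengths around a 30-week candidate and compares the hypothetical results. The synthetic weekly price series and the simple long/flat crossover rule are stand-ins for illustration, not the indicator discussed above.

```python
# A minimal robustness sweep: does a candidate moving-average rule hold up
# when its one free parameter is nudged?  The synthetic weekly price series
# and the long/flat crossover rule are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.001, 0.02, 1040)))  # ~20 "years" of weekly closes

def hypothetical_gain(prices, window):
    """Total return of holding the market only when price is above its moving average."""
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    px = prices[window - 1:]
    weekly_ret = px[1:] / px[:-1] - 1.0
    in_market = px[:-1] > ma[:-1]          # signal known at the start of each week
    return float(np.prod(1.0 + weekly_ret * in_market) - 1.0)

for window in range(28, 33):               # the 30-week candidate plus its neighbors
    print(f"{window}-week MA: {hypothetical_gain(prices, window):+.1%}")
# A "fit" rule should show broadly similar figures across the neighboring windows.
```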
Moreover, the non-robust indicator may be a symptom of the fifth concern, and that's the optimization process. In recent years, much has been written about the dangers of excessive curve-fitting and over-optimization, often the result of unharnessed computing power. As analytical programs have become increasingly complex and able to crunch through an ever-expanding multitude of iterations, it has become easy to over-optimize. The risk is that, armed with numerous variables to test in minuscule increments, a program may be able to pick out an impressive result that may in fact be attributable to little more than chance. The accuracy rate and gain per annum columns of Figure 1 compare results that include an impressive-looking indicator that stands in isolation (top) with indicators that look less impressive but have similar formulas (bottom). One could have far more confidence using an indicator from the latter group even though none of them could match the results using the impressive-looking indicator from the top group.
What follows from these five concerns is the final general concern of whether the indicator will hold up on a real-time basis. One approach is to build the indicator and then let it operate for a period of time as a real-time test. At the end of the test period, its effectiveness would be assessed. To increase the chances that it will hold up on a real-time basis, the alternatives include out-of-sample testing and blind simulation. An out-of-sample approach might, for example, require optimization over the first half of the date range and then a real-time simulation over the second half. The results from the two halves would then be compared. A blind-simulation approach might include optimization over one period followed by several tests of the indicator over different periods.
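The sketch below illustrates the out-of-sample idea in its simplest form: optimize a single parameter over the first half of a synthetic history, then measure how the chosen value fares over the second half. The moving-average rule and the data are assumptions for illustration only.

```python
# Optimize on the first half of the history, then check the same parameter on
# the second half.  Synthetic data and a simple moving-average rule stand in
# for a real indicator here.
import numpy as np

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0.001, 0.02, 1040)))

def gain(prices, window):
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    px = prices[window - 1:]
    rets = px[1:] / px[:-1] - 1.0
    return float(np.prod(1.0 + rets * (px[:-1] > ma[:-1])) - 1.0)

half = len(prices) // 2
in_sample, out_sample = prices[:half], prices[half:]

best = max(range(10, 61), key=lambda w: gain(in_sample, w))   # "optimization" period
print(f"best in-sample window: {best} weeks, gain {gain(in_sample, best):+.1%}")
print(f"same window out-of-sample: gain {gain(out_sample, best):+.1%}")
# Expect the out-of-sample figure to be less impressive; the question is
# whether the deterioration can be lived with.
```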
Whatever the approach, real-time results are likely to be less impressive than results during an optimization period. The reality of any indicator developed through optimization is that, as history never repeats itself exactly, it is unlikely that any optimized indicator will do as well in the real-time future. The indicator’s creator and user must decide how much deterioration can be lived with, which will help determine whether to keep the indicator or go back to the drawing board.
TRADE-SIGNAL ANALYSIS
With the general concerns in mind, the various quantification methods can be put to use. The first, and perhaps most widely used, is the approach that relies on buy and sell signals, as shown in Figure 2.[2] When the indicator meets the condition that it deems to be bullish for the market in question, it flashes a buy signal, and that signal remains in effect until the indicator meets the condition that it deems to be bearish. A sell signal is then generated and remains in effect until the next buy signal. Since a buy signal is always followed by a sell signal, and since a sell signal is always followed by a buy signal, the approach lends itself to quantification as though the indicator was a trading system, with a long position assumed on a buy signal and closed out on a sell signal, at which point a short position would be held until the next buy signal.
The method’s greatest benefit is that it clearly reveals the indicator’s accuracy rate, a statistic that’s appealing for its simplicity – all else being equal, an indicator that had generated hypothetical profits on 30 of 40 trades would be more appealing than an indicator that had produced hypothetical profits on 15 of 40 trades. Also, the simulated trading system can be used for comparing a number of other statistics, such as the hypothetical per annum return that would have been produced by using the indicator. The per annum return can then be compared to the gain per annum of the benchmark index.
But the method's greatest benefit may also be its biggest drawback. No single indicator should ever be used as a mechanical trading system – as stated earlier, indicators should instead be used as tools for interpreting market activity. Yet the hypothetical and the actual can easily be confused. Although the signal-based method specifies how a market has done over the periods from one signal to the next, the results are not actual records of real-time trading performance. If they were, they would have to account for the transaction costs per trade, with a negative effect on trading results. Figure 3 summarizes the indicator's hypothetical trade results before and after the inclusion of a quarter-percent transaction cost, illustrating the impact that transaction costs can have on results. The more numerous the signals, the greater the impact.
Also, as noted in the results, another concern is the maximum drawdown, or the maximum loss between any consecutive signals. But again, as long as it is clear that the indicator is for perspective and not for dictating precise trading actions, indicators with trading signals can provide useful input when determining good periods for entering and exiting the market in question.
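A minimal piece of bookkeeping along these lines is sketched below: given an alternating sequence of buy- and sell-signal prices (made-up numbers), it tallies the accuracy rate, the hypothetical gain before and after a quarter-percent per-trade cost, and the worst signal-to-signal loss as a rough stand-in for maximum drawdown.

```python
# Accounting for a buy/sell signal record, in the spirit described above.
# The alternating signal prices below are invented for illustration.
signal_prices = [100, 118, 110, 104, 99, 131, 140, 128]   # buy, sell, buy, sell, ...

trades, wins = [], 0
for i in range(len(signal_prices) - 1):
    entry, exit_ = signal_prices[i], signal_prices[i + 1]
    long_side = (i % 2 == 0)                               # even index = buy signal
    ret = (exit_ / entry - 1.0) if long_side else (entry - exit_) / entry
    trades.append(ret)
    wins += ret > 0

cost = 0.0025                                              # quarter-percent per trade
gross, net = 1.0, 1.0
for r in trades:
    gross *= 1.0 + r
    net *= (1.0 + r) * (1.0 - cost)

print(f"accuracy: {wins}/{len(trades)} trades profitable")
print(f"hypothetical gain, no costs: {gross - 1.0:+.1%}; with costs: {net - 1.0:+.1%}")
print(f"worst loss between consecutive signals: {min(trades):+.1%}")
```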
ZONE ANALYSIS
In contrast to indicators based on trading signals, indicators based on zone analysis leave little room for doubt about their purpose – they don’t even have buy and sell signals. Rather, zone analysis recognizes black, white and one or more shades of gray. It quantifies the market’s performance with the indicator in various zones, which can be given such labels as “bullish,” “bearish” or “neutral” depending upon the market’s per annum performance during all of the periods in each zone. Each period in a zone spans from the first time the indicator enters the zone to the next observation outside of the zone. Unlike the signal-based approach, the indicator can move from a bullish zone to a neutral zone and back to a bullish zone. An intervening move into a bearish zone is not required.
Zone analysis is therefore appealing for its ability to provide useful perspective without a simulated trading system. The results simply indicate how the market has done with the indicator in each zone. But this type of analysis has land mines of its own. In determining the appropriate levels, the most statistically-preferable approach would be to identify the levels that would keep the indicator in each zone for roughly an equal amount of time. In many cases, however, the greatest gains and losses will occur in extreme zones visited for a small percentage of time, which can be problematic for several reasons:
- if the time spent in the zone is less than a year, the per annum gain can present an inflated picture of performance;
- if the small amount of time meant that the indicator made only one sortie into the zone, or even a few, the lack of observations would cast suspicion on the indicator's future reliability;
- the indicator’s usefulness must be questioned if it’s neutral for the vast majority of time.
A good compromise between optimal hypothetical returns and statistical relevance would be an indicator that spends about 30% of its time in the high and low zones, like the indicator in Figure 4. For an indicator with more than four years of data, that would ensure at least a year's worth of time in the high and low zones and would make a deficiency of observations less likely. In effect, the time-in-zone limit prevents excessive optimization by excluding the zone-level possibilities that would look the most impressive based on per annum gain alone.
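The sketch below illustrates one way to implement this compromise: zone boundaries are taken from the indicator's own 15th and 85th percentiles, so roughly 30% of the time is spent in the two extreme zones, and annualized market performance is then tabulated for each zone. The indicator, the market returns, and the percentile choice are all illustrative assumptions, not the indicator in Figure 4.

```python
# Zone analysis under a time-in-zone constraint: boundaries from percentiles,
# then per annum market performance while the indicator sits in each zone.
# All data here is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
weeks = 1040
indicator = np.cumsum(rng.normal(0, 1, weeks))                        # stand-in indicator
market_ret = rng.normal(0.001, 0.02, weeks) + 0.002 * np.sign(indicator)  # toy relationship

lower, upper = np.percentile(indicator, [15, 85])                     # ~30% of time in the tails

def per_annum(returns):
    if len(returns) == 0:
        return float("nan")
    years = len(returns) / 52.0
    return np.prod(1.0 + returns) ** (1.0 / years) - 1.0

for label, mask in [("upper zone", indicator >= upper),
                    ("middle zone", (indicator > lower) & (indicator < upper)),
                    ("lower zone", indicator <= lower)]:
    print(f"{label}: {mask.mean():.0%} of time, "
          f"{per_annum(market_ret[mask]):+.1%} per annum")
```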
Another consideration is that in some cases, a closer examination of the zone performance reveals that the bullish-zone gains and bearish-zone losses occurred with the indicator moving in particular directions. In those cases, the bullish or bearish messages suggested by the per annum results would be misleading for a good portion of the time, as the market might actually have had a consistent tendency, for example, to fall after the indicator’s first move into the bullish zone and to rise after its first move into the bearish zone.
It can therefore be useful to subdivide the zones into rising-in-zone and falling-in-zone, which can have the added benefit of making the information in the neutral zone more useful. This requires definitions for “rising” and “falling.” One way to define those terms is through the indicator’s rate of change. In Figure 5, which applies the approach to the primary stock market model used by Ned Davis Research, the indicator is “rising” in the zone if it’s higher than it was five weeks ago and “falling” if it’s lower. Again, the time spent in the zones and the number of cases are foremost concerns when using this approach.
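A sketch of this refinement follows: each week is classified by zone and by whether the indicator is above or below its level five weeks earlier, and annualized performance is tabulated for each zone/direction pair. The data and zone boundaries are again synthetic placeholders, not the Ned Davis Research model in Figure 5.

```python
# Rising-in-zone / falling-in-zone: "rising" means the indicator is above its
# level five weeks ago.  Indicator, market returns and zones are synthetic.
import numpy as np

rng = np.random.default_rng(4)
weeks = 1040
indicator = np.cumsum(rng.normal(0, 1, weeks))
market_ret = rng.normal(0.001, 0.02, weeks)

lower, upper = np.percentile(indicator, [15, 85])
zone = np.where(indicator >= upper, "upper",
                np.where(indicator <= lower, "lower", "middle"))
rising = np.concatenate([np.full(5, False), indicator[5:] > indicator[:-5]])

def per_annum(r):
    return np.prod(1.0 + r) ** (52.0 / len(r)) - 1.0 if len(r) else float("nan")

for z in ("upper", "middle", "lower"):
    for direction, mask in (("rising", rising), ("falling", ~rising)):
        sel = (zone == z) & mask
        print(f"{z:>6} zone, {direction:>7}: {sel.mean():5.1%} of weeks, "
              f"{per_annum(market_ret[sel]):+.1%} per annum")
```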
Alternatively, “rising” and “falling” can be defined using percentage reversals from extremes, in effect using zones and trading signals to confirm one another. In Figure 6, for example, the CRB Index indicator is “rising” and on a sell signal once the indicator has risen from a trough whereas it’s “falling” and on a buy signal after the indicator has declined from a peak. Even though the reversal requirements resulted from optimization, the indicator includes a few poorly-timed signals and would be risky to use on its own. But the signals could be used to provide confirmation with the indicator in its bullish or bearish zone, in this case the same zones as those used in Figure 4. For example, in late 1972 and early 1973 the indicator would have been rising and in the upper zone, a confirmed bearish message. The indicator would then have peaked and started to lose upside momentum, generating a “falling” signal and losing the confirmation. That signal would not be confirmed until the indicator’s subsequent drop into its lower zone.
The chart's box shows the negative hypothetical returns with the indicator on a sell signal while in the upper zone, and on a buy signal while in the lower zone. In contrast to the rate-of-change approach to subdividing zones, this method fails to address the market action with the indicator in the middle zone. But it does illustrate how zone analysis can be used in conjunction with trade-signal analysis to gauge the strength of an indicator's message.
SUBSEQUENT-PERFORMANCE ANALYSIS
In addition to using signals and zones, results can be quantified by gauging market performance over various periods following a specified condition. In contrast to the trade-signal and zone-based quantification methods, a system based on subsequent performance calculates market performance after different specified time periods have elapsed. Once the longest of the time periods passes, the quantification process becomes inactive, remaining dormant until the indicator generates a new signal. In contrast, the other two approaches are always active, calculating market performance with every data update.
The subsequent-performance approach is thus applicable to indicators that are more useful for providing indications about one side of a market, indicating market advances or market declines. And it’s especially useful for indicators with signals that are most effective for a limited amount of time, after which they lose their relevance. The results for a good buy-signal indicator are shown in Figure 7, which lists market performance over several periods following signals produced by a 1.91 ratio of the 10-day advance total to the 10-day decline total.
In its most basic form, the results might list performance over the next five trading days, 10 trading days, etc., summarizing those results with the average gain for each period. However, the results can be misleading if several other questions are not addressed. First of all, how is the average determined? If the mean and the median are close, as they are in Figure 7, then the mean is an acceptable measure. But if the mean is skewed in one direction by one or a few extreme observations, then the median is usually preferable. In both cases, the more observations the better.
Secondly, what’s the benchmark? While the zone approach uses relative performance to quantify results, trade-signal analysis includes a comparison of per annum gains with the buy-hold statistic. Likewise, the subsequent-performance approach can use an all-period gain statistic as a benchmark. In Figure 7, for instance, the average 10-day gain in the Dow Industrials has been 2% following a signal, nearly seven times the 0.3% mean gain for all 10-day periods. This indicates that the market has tended to perform better than normal following signals. That could not be said if the 10-day gain was 0.4% following signals.
A third question is how much risk has there been following a buy-signal system, or reward following a sell-signal system? Using a buy-signal system as an example, one way to address the question would be to list the percentage of cases in which the market was higher over the subsequent period, and to then compare that with the percentage of cases in which the market was higher over any period of the same length. Again using the 10-day span in Figure 7 as an example, the market has been higher after 75% of the signals, yet the market has been up in only 58% of all 10-day periods, supporting the significance of signals. Additional risk information could be provided by determining the average drawdown per signal – i.e., the mean maximum loss from high to low following signals. The mean for the 10-day period, for example, was a maximum loss of 0.7% per signal, suggesting that at some point during the 10-day span, a decline of 0.7% could be considered normal. The opposite approaches could be used with sell-signal indicators, with the results reflecting the chances for the market to follow sell signals by rising, and to what extent.
Along with those questions, the potential for double-counting must be recognized. If, for example, a signal is generated in January and a second signal is generated in February, the four-month performance following the January signal would be the same as the three-month performance following the February signal. This raises the question of whether the three-month return reflects the impact of the first signal or the second one. Moreover, such signal clusters give heavier weight to particular periods of market performance, making the summary statistics more difficult to interpret. Problems related to double-counting can be reduced or eliminated by adding a time requirement. For the signals in Figure 7, for instance, the condition must be met for the first time in 50 days – if the ratio reaches 1.92, drops to 1.90, and then returns to 1.92 two days later, only the first day will have a signal. The time requirement eliminates the potential for double-counting in any of the periods of less than 50 days, though the longer periods still contain some overlap in this example.
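The sketch below pulls these pieces together for a hypothetical breadth ratio: it flags days on which the ratio crosses a threshold (approximating the 50-day "first time" filter by requiring 50 days since the prior signal), then compares mean and median forward returns, and the percentage of winning cases, against the all-period benchmark. The data and the ratio series are illustrative assumptions, not the Figure 7 record.

```python
# Subsequent-performance bookkeeping: signal days, forward returns over fixed
# horizons, and the all-period benchmark.  Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(3)
days = 5000
ratio = 1.0 + np.abs(rng.normal(0.3, 0.35, days))      # stand-in advance/decline ratio
close = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, days)))

threshold, rearm = 1.91, 50
signals, last = [], -rearm
for t in range(days):
    if ratio[t] >= threshold and t - last >= rearm:     # rough "first time in 50 days" filter
        signals.append(t)
        last = t

for horizon in (5, 10, 21, 63):                         # trading-day windows
    fwd = [close[t + horizon] / close[t] - 1.0 for t in signals if t + horizon < days]
    allp = close[horizon:] / close[:-horizon] - 1.0     # benchmark: every window of that length
    print(f"{horizon:>3}-day: mean {np.mean(fwd):+.2%}, median {np.median(fwd):+.2%}, "
          f"up {np.mean(np.array(fwd) > 0):.0%} of cases, "
          f"all-period mean {allp.mean():+.2%} (up {np.mean(allp > 0):.0%})")
```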
Another application of subsequent-performance analysis is shown in Figure 8, which is not prone to any double-counting. The signals require that three conditions be met, all for the first time in a year – the Dow Industrials must reach its highest level in a year, another index must reach its highest level in a year, and the joint high must be the first in a year. The significance for the various indices can then be compared in conjunction with their benchmarks – i.e., the various all-period gains. Figure 9 uses 12 of those indices to show how subsequent-performance analysis for both buy signals and sell signals can be used together in an indicator. For each time span, the chart's box lists the market's performance after buy signals, after sell signals, and for all periods.
REVERSAL-PROBABILITY ANALYSIS
Finally, the subsequent performance approach is useful for assessing the chances of a market reversal. In Figure 10, the “signal” is the market’s year-to-year change at the end of the year, with the signals (years) categorized by the amount of change – years with any amount of change, those with gains of more than 5%, etc. In this case, the subsequent-performance analysis is limited to the year after the various one-year gains. But the analysis takes an additional step in assessing the chances for a bull market peak within the one- and two-year periods after the years with market gains, or a bear market bottom within the one- and two-year periods after the years with market declines.
This analysis requires the use of tops and bottoms identified with objective criteria for bull and bear markets in the Dow Industrials. The reversal dates show that starting with 1900, there have been 30 bull market peaks and 30 bear market bottoms, with no more than a single peak and a single trough in any year. This means that for any given year until 1995, there was a 31% chance for the year to contain a bull market peak and a 31% chance for the year to contain a bear market bottom (30 years with reversals / 95 years).
Using this percentage as a benchmark, it can then be determined whether there’s been a significant increase in the chances for a peak or trough in the year after a one-year gain or loss of at least a certain amount. The chart’s boxes show the peak chances following up years and the trough chances following down years, dividing the number of cases by the number of peaks or troughs. For example, prior to 1995, there had been 31 years with gains in excess of 15% starting with 1899. After those years, there was a 52% chance for a bull market peak in the subsequent year (16 following-years with peaks / 31 years with gains of more than 15%). The chances for a peak within two years increased to 74%, which can be compared to the benchmark chance for at least one peak in 61% of the two-year periods (since several two-year periods contained more than one top, this is not the exact double of the chances for a peak in any given year).
A major difference in this analysis is that in contrast to signals and zones, which depend upon the action of an indicator, this approach depends entirely on time. Each signal occurs after a fixed amount of time (one year), with the signals classified by what they show (a gain of more than 5%, etc.). Depending upon the classification, the risk of a peak or trough can then be assessed.
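A toy version of this tally is sketched below: calendar years are classified by their gain, and the frequency of a market peak in the following year is compared with the unconditional frequency of a peak in any year. The year-by-year gains and the list of peak years are placeholders, not the Dow record used in the paper.

```python
# Reversal-probability tally: how often does a bull-market peak follow a year
# with a gain above a stated amount?  The year lists below are hypothetical.
yearly_gain = {1980: 0.15, 1981: -0.09, 1982: 0.20, 1983: 0.20, 1984: -0.04,
               1985: 0.28, 1986: 0.23, 1987: 0.02, 1988: 0.12, 1989: 0.27}
peak_years = {1981, 1983, 1987, 1990}                  # hypothetical bull-market peaks

years = sorted(yearly_gain)
baseline = sum(y in peak_years for y in years) / len(years)

big_up = [y for y in years if yearly_gain[y] > 0.15]   # years with gains of more than 15%
hits = sum((y + 1) in peak_years for y in big_up)

print(f"unconditional chance of a peak in any year: {baseline:.0%}")
print(f"chance of a peak in the year after a >15% gain: {hits}/{len(big_up)} "
      f"= {hits / len(big_up):.0%}")
```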
CONCLUSION
Each one of these methods can help in the effort to assess a market's upside and downside potential, with the method selected having a lot to do with the nature of the indicator, the time frame, and the frequency of occurrences. The different analytical methods could be used to confirm one another, the confirmation building as the green lights appeared. An alternative would be a common denominator approach in which several of the approaches would be applied to an indicator using a common parameter (e.g., a buy signal at 100). Although the parameter would most likely be less than optimal for any of the individual methods, excessive optimization would be held in check. But whatever approaches are used, it needs to be stressed that each one of them has its own means of deceiving. By better understanding the potential pitfalls of each approach, indicator development can be enhanced, indicator attributes and drawbacks can be better assessed, and the indicator messages can be better interpreted.
The process of developing a market outlook must be based entirely on research, not sales. The goal of research is to determine if something works. The goal of sales is to show that it does work. Yet in market analysis, the lines can blur if the analyst decides how the market is supposed to perform and then sells himself on this view by focusing only on the evidence that supports it. What's worse is the potential to sell oneself on the value of an indicator by focusing only on those statistics that support one's view, regardless of their statistical validity. As shown by the various hazards associated with the methods described in this paper, such self-deception is not difficult.
Our goals should be objectivity, accuracy, and thoroughness. Using a sound research approach, we can determine the relative value of using any particular indicator in various ways. And we can assess the indicator’s value and role relative to all the other indicators analyzed and quantified in a similar way. The indicator spectrum can then provide more useful input toward a research-based market view.
FOOTNOTES
- Reference to Burton Malkiel's A Random Walk Down Wall Street, which argues that stock prices move randomly and thus cannot be forecast through technical means.
- The charts that accompany this paper were produced with the Ned Davis Research computer program.
Since winning the third Dow Award in 1996, Tim Hayes has expanded upon "The Quantification Predicament" in writing his first book, "The Research-Driven Investor," published in November 2000 by McGraw-Hill. As a Global Equity Strategist for Ned Davis Research, Tim and his team have developed numerous U.S. and global asset allocation indicators and models in recent years, while also developing global market and sector ranking systems and indicators based on 18 market sectors in 16 countries.
Charles H. Dow Award Winner • May 1998
by Christopher Carolan
The crash of the Hong Kong stock market in October 1997, with its obvious parallels to similar events in the U.S. in 1987 and 1929, once again raises the specter of October as a dark and ominous month for stocks. Is it merely a coincidence that these three crashes all occurred in October? Is there a timing pattern among autumn panics useful to market participants? This article expands upon the observation, originally contained in Chapter 1 of the author's book, The Spiral Calendar[1], outlining the correlation between the lunar calendar and the stock market panics of 1929 and 1987. This paper examines how the 1997 Hong Kong panic conforms to that earlier model and also examines the great autumn panics of the 19th century. Finally, it takes a look at the peculiar international character of panics and its implications for their possible causes.
DEFINITION OF TERMS
Panic. The focus of this article is on short-term equity market panics. The crashes of 1929 and 1987 are the obvious examples. I define these panics as one-to-three day, free-fall drops of approximately 20% in the major averages. The term “panic” is preferred over “crash” as the definition of panic stresses the suddenness and irrationality of the event. Panics were originally ascribed to the god Pan simply because there were no obvious fundamental causes for their occurrence.
Collapse. Collapse is used to signify the larger macro market decline lasting weeks or months within which the panic occurs. An example would be the Hong Kong panic of October 1997, occurring within the larger Asian equity and currency collapse that ran from July 1997 to January 1998.
Annual Lunar Calendar. The annual lunar calendar used here is based on the Babylonian calendar, which was the model for the later Jewish calendar. This annual lunar calendar labels the date of the first new moon following the spring equinox as month one, day one; or 1-1. The following date is 1-2. The date of the second new moon after the spring equinox is 2-1, etc. The difficulty with annual lunar calendars, and one of the reasons for their abandonment, is that the solar year does not contain a whole number of lunar months. Thus, some years in an annual lunar calendar have 12 months, others 13. For our purposes, which focus on the autumn months, this issue is inconsequential. All calculations use Eastern Standard Time to determine the dates of the lunar phases.
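For readers who wish to reproduce the calendar, the sketch below approximates it using the mean synodic month measured from a reference new moon, with times shifted to Eastern <br>Standard Time. An exact study would substitute ephemeris-quality new-moon and equinox times, since the mean-month approximation can be off by several hours and occasionally by a calendar day; the reference new moon, the hand-supplied equinox date, and the fixed UT-5 offset are assumptions of the sketch, not part of the paper's method.

```python
# Approximate annual lunar calendar: month 1, day 1 is the first new moon
# after the spring equinox.  New-moon times are approximated with the mean
# synodic month from a reference new moon (Jan 6, 2000, 18:14 UT = 13:14 EST);
# a serious study would use exact ephemeris times instead.
from datetime import datetime, timedelta
import math

SYNODIC_MONTH = 29.530588                      # mean synodic month, in days
REF_NEW_MOON = datetime(2000, 1, 6, 13, 14)    # reference new moon, expressed in EST

def new_moons_after(start, count=14):
    """Approximate new-moon times (EST) at or after `start`."""
    k = (start - REF_NEW_MOON).total_seconds() / 86400.0 / SYNODIC_MONTH
    first = math.ceil(k)
    return [REF_NEW_MOON + timedelta(days=SYNODIC_MONTH * (first + i)) for i in range(count)]

def annual_lunar_date(d, spring_equinox):
    """(month, day): month 1 begins on the first new moon after the spring equinox."""
    moons = new_moons_after(spring_equinox)
    month = max(i for i, m in enumerate(moons) if m.date() <= d.date())
    return month + 1, (d.date() - moons[month].date()).days + 1

equinox_1987 = datetime(1987, 3, 21)           # approximate equinox date, supplied by hand
print(annual_lunar_date(datetime(1987, 10, 19), equinox_1987))   # Black Monday 1987

# The paper's autumn "panic window": the eighth new moon minus 55 hours, +/- 12 hours.
eighth_new_moon = new_moons_after(equinox_1987)[7]
print("panic-window center:", eighth_new_moon - timedelta(hours=55))
```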
In 1992, this author demonstrated how the panic dates of “Black Tuesday,” October 29, 1929, and “Black Monday,” October 19, 1987 occurred on the same annual lunar calendar date, 7-28. Additionally, the other similar points in the comparisons of those two years, the spring lows, summer highs and autumn failure highs all occurred within one day on the lunar calendar. Figure 1 shows those years in a chart aligned with the lunar calendar, where similar lunar dates are juxtaposed above each other. The panics are marked with arrows. The other similar features are denoted with dashed lines. The chart also includes Hong Kong’s Hang Seng Index for the panic year 1997.
These price moves are extraordinarily large over a very short period of time. Are these panics the largest such declines, or do we selectively remember the October panics and forget those of other months? A scan of daily data of the Dow Jones Industrial Average from 1915, the Hang Seng Index from 1980, the Japanese Nikkei Index from 1950, and the German DAX Index from 1960 for the 10 largest single-day percentage drops is shown in Table 1. Seven of those ten declines were days associated with one of the three panics. Two of the others, the spring 1989 declines in the Hong Kong market, were tied to a fundamental news event, the Tiananmen crisis in China. The final entry is from the German market during the "minicrash" of October 1989, an October event similar to the others, but smaller in magnitude. The point to stress here is that in their breadth and ferocity, these panics lie outside the boundaries of normal price action.
There are no other comparable one-to-three day declines of this magnitude in the data. They represent the very largest percentage drops in the database. This is not normal market behavior. What else ties these events together? The panics occupy virtually identical positions on the annual lunar calendar.
Table 2 shows the percentage declines for each panic in the key four-day time span around the lows. The lunar dates 7-27 and 7-28 are the "dark days," encompassing the various Black Tuesdays of N.Y. in 1929 and Hong Kong in 1997, and the Black and Blue Mondays in N.Y. in 1987 and 1997 respectively. In each case, lunar date 7-28 marked the end of the panic, and the next two days, 7-29 and 7-30 (or 8-1; some lunar months have 29 days, others 30), saw significant retracement rallies. Table 3 groups the data into two-day segments and includes the percentage of these retracement rallies. This table shows the striking similarity of these panics and how that similarity conforms to the annual lunar calendar.
Table 4 pinpoints the precise timing of the panic lows on the lunar calendar. The timing from 1929 is gathered from the news accounts that described stock prices as rallying sharply off their lows in the last fifteen minutes of trading on Black Tuesday, October 29. The 1987 and 1997 times are from available databases for the Dow Industrials and are corrected to Eastern Standard Time. The table also shows the date and time of the nearest lunar phase, the eighth new moon on the annual lunar calendar, as well as the difference in hours between the stock market's low and the moon's phase. The timings of these three great panic lows are within twenty-four hours of each other. In other words, all three lows fall within the same one-half of one percent of the calendar year.
A REVIEW OF THE PRE-1915 AUTUMN PANICS
The Panic of 1907
The so-called panic of 1907 does not fit our short-term panic criteria. There was no market decline of approximately 20% in the span of one to three days. The largest single-day declines were 3% in the Dow Jones Industrial Average during the collapse. There was a collapse and coincident banking panics, most of which occurred in October of that year. Sobel, in Panic on Wall Street[2], describes the ending of the collapse. J.P. Morgan put together his plan to save the banking system on November 3-5, 1907, 7-28 through 7-30 on the annual lunar calendar. After the market was closed for Election Day on November 5 (7-30), stocks rallied strongly on lunar 8-1. The crisis was over. The timing of the end of the crisis is consistent with the lunar panic model. The day Morgan realized the banking system was not going to fail, he put into motion a plan to save the banks, which ultimately arrested the decline. That day was lunar 7-28, the same date as the lows of the later 20th century panics.
The Crash of 1873
September 18 and 19, 1873 were labeled "Black Thursday" and "Black Friday" in the collapse of 1873. The Friday selling took prices of major stocks 5 to 25 percent below Thursday's already collapsed levels. This panic was considered the greatest on Wall Street until 1929. The news accounts describe the same type of free fall and despair as the 20th century counterparts. The annual lunar calendar dates of "Black Thursday" and "Black Friday" were 6-27 and 6-28, one month earlier, but exactly the same lunar days as the 20th century examples. News accounts describe a temporary bottom late on Friday. Saturday, September 20 brought renewed selling and the closure of the exchange after a shortened two-hour trading day. The stock exchange remained closed for a week thereafter, though on Monday, September 22, prices rose sharply in trading in the streets. The timing of the 1873 Autumn panic is consistent with the 20th century results, though exactly one month earlier.
The Crash of 1857
The collapse of 1857 was not a stock market free-fall in the sense of the 20th century panics outlined above. It was a very sharp drop in stocks over a period of nine weeks, accompanied by a number of runs on banks, persistent pressure on the banking system, and sharply rising interest rates. Also, it was international in scope, a facet we'll address later. Though the selling in the equity markets did not climax in a free-fall panic, the pressure on the banking system did, as the N.Y. banking panic broke out on October 13 and mayhem continued for two days thereafter. Sobel, in Panic on Wall Street[3], quotes George Strong writing on October 15: "Wall Street blue with collapse. Everything flaccid like a defunct Actina." On the annual lunar calendar, October 13 and 14, 1857 are 7-27 and 7-28, the same "dark days" as the 20th century examples.
CAUSATION
The correlation between the annual lunar calendar and the timing of the three 20th century panics, as well as the supportive data from the 19th century, does not prove that an annual lunar calendar position is the cause of those panics. A few examples of anything cannot statistically prove a hypothesis. However, it should be realized that each occurrence is not a 50-50, or true/false, proposition. If the Hong Kong panic had occurred on any of the 360 days of 1997 other than lunar 7-27, 7-28, 6-27, or 6-28, then this model would be effectively discredited. Yet the 1997 Hong Kong panic climaxed 5 hours after the timing of the 1987 panic and 20 hours after the 1929 panic on the lunar calendar.
Previous theories explaining panics have not fared well when the next panic came along. In the 19th century, it was widely believed that panics occurred in October specifically because banks' cash positions were weakened as farmers were paid for the new crop. Today, agriculture makes up a much smaller fraction of the world economy than before, yet October panics are still with us. The Federal Reserve System was set up in the belief that if banking panics were prevented, stock market panics would cease to exist as well. That causal theory was disproved by the 1929 crash. The 1929 panic was blamed on low margin levels, yet 1987 happened anyway. In 1987, the finger was pointed at program trading. However, the 1997 panic occurred without any appreciable role by program traders.
The lunar calendar model of panics, alone among theories, not only survived the next panic intact, but its basic tenet was remarkably affirmed by the precise timing of the 1997 low.
The timings of financial collapses do not show a pattern. The 1997 Asian collapse began in July, while the crisis of 1987 and the collapse of 1857 began in August. The 1929 and 1873 examples began in September. Yet in each case, the start of the collapse did not result in immediate widespread panic. Those panics seem to wait for a particular time period on the calendar, the 27th and 28th days of the autumn lunar months, usually October, but in one instance September.
THE INTERNATIONAL QUESTION
The international character of financial crises has been a difficult problem for those who have sought to ascribe causes to collapses and panics. Kindleberger, in Manias, Panics and Crashes, writes, "Time and again, observers like Juglar, Mitchell and Morgenstern have observed that financial crises tend to be international, either running parallel from country to country or spreading by one means or another from the country where they originate to other countries."[4] And "What is remarkable is that securities prices do the same even when only a few securities can be said to be truly international, that is, are traded on several markets, their prices joined by arbitrage. In 1929 all stock markets crashed simultaneously; the same was largely true in October 1987…It is striking that share prices behaved in parallel almost sixty years apart, even though share prices were thought not to have been integrated in the 1920s as they were in the 1980s."[5]
The panics of 1987 and 1997 highlighted the international quality of panics. Traders the world over saw these markets dive and then rally in unison. In this wired world, that interconnection is not so extraordinary, though Kindleberger is surprised by the international nature of the 1929 collapse. An examination of the 1857 collapse is more revealing. Kindleberger notes, “What is striking is the concentrated nature of the crises…Clapham observes that it broke out almost at the same moment in the United States, England, and Central Europe, and was felt in South America, South Africa and the Far East.[6] ” Aside from the international nature of the macro collapse, the 1857 collapse affords a unique, controlled database of market behavior in the “dark days” of lunar 7-27 and 7-28 on two continents. In 1857, the Atlantic cable linked America with England by telegraph. In the early days of the collapse, the telegraph cable failed and all communication was done by ship for the remainder of the crisis. The London Times and The New York Times from the period leading up to and through the N.Y. banking panic provide striking evidence of two markets in distress. Wall Street began its rally from the depths of the collapse on October 13, 1857 (lunar 7-27) at the same time the banking panic broke out in N.Y. Table 5 is reprinted from The New York Times of October 14, illustrating the sharp rise in prices underway as contrasted with the lows of October 13. I’ve added the column on the right showing the month’s-end prices. Some issues had made their lows earlier in September, but others were at or near their lows on October 13. What’s clear is that prices began to rally from their depressed levels on lunar 7-27, coincident with the outbreak of the banking panic. This sequence parallels the 1987 experience, when U.S. bond prices began a sharp rally from their lows on lunar 7-27, coincident with the outbreak of the stock panic.
At this same time, Europe was aware of, and sharing in, the collapse in America. In the week leading up to October 13, the Bank of England raised its discount rate twice, while Paris, Hamburg and Amsterdam each raised their rates once. Though debt and equity prices traded down sharply, there was no free-fall panic. London stocks and debt bottomed decisively on October 13 at the beginning of the trading day. The London Times of November 2, 1857 summarized the events of October and printed the table of prices labeled here as Table 6. To that table is added the date of the month's low for each security. Here is the commentary accompanying the table: "The range of Consols (government debt) has been unusually extensive, showing a difference of 4 percent between the highest and lowest prices, although at the conclusion (of the month) the market has returned to the precise position in which it stood at the commencement…In railway shares the fluctuations have also been violent, and the rebound, except in a few cases has not been equal to that of the funds."
The London Times offered this account of the trading in debt on October 13 in its October 14 edition. “The fluctuations in the funds today (Oct. 13) have again been most rapid and extensive. The market opened with a great weakness at a fall of nearly one and a quarter percent from the heavy prices of last evening. But there was subsequently a considerable reaction and a more healthful tone became apparent in all departments of business.” Now, here’s the account of stock trading on October 13 from The New York Times of October 14. “The stock market this afternoon advanced from 1 to 3 percent, the conviction being general that the basis of business would be changed tomorrow and that a large amount of money held in abeyance since the panic first paralyzed confidence will be set free now that the worst is known…”
The cause of the market low in New York on October 13 is ascribed to the banking panic, yet London bottomed on the same day. The selling, motivated by fear, was pervasive on both sides of the Atlantic leading up to October 13. That selling ceased and a vigorous rally commenced on the same day, continents apart, with neither market having access to any timely information from the other. Word of the N.Y. banking panic did not reach London until October 26, and was then reported in The Times the following day.
The sudden, international cessation of distress selling that is a hallmark of 20th century panics also occurred in the crisis of 1857, at a time when no timely communication existed. The international character of panics has been a stumbling block to those who subscribe to local, “fundamental” causes for these panics. Contrarily, a lunar-based model for panics would seem to require an international manifestation of the phenomenon. If the moon is affecting market participants, it should affect them the world over. All the panic examples cited here, from 1857 through 1997, have been international, yet the dearth of communication technology in 1857 provides a datum that cannot be explained as a serial reaction. The international character of panics is distinctively supportive of the lunar model.
USES
Put simply, every market participant should have his calendar marked with the "dark days" of lunar 7-27 and 7-28. Even better, everyone should calculate the time of the eighth new moon and subtract 55 hours from that point. A time window of plus or minus twelve hours from that point is the lunar calendar model for an autumn panic's low point. There may not be another October stock panic for sixty years or longer, and the lunar model offers no clues as to which years will see a panic. Yet there can be no doubt, as the trillions of dollars lost during these panics make plain, that market intelligence which can pinpoint when an unfolding panic will climax is invaluable. In 1997, as worldwide markets became unglued in October, the lunar calendar model provided by 1987 and 1929 pointed to late Monday, October 27 as the ideal low point. The dramatic early Tuesday morning low of October 28 demonstrated the model's effectiveness in real time.
Calendars are complex mechanisms. Calendar research must recognize the importance of both lunar and solar calendars. The annual lunar model for panics points to the 27th and 28th days of the lunar month as the dark days, yet that is only true in the autumn season, the 6th or 7th lunar month. Past studies that purport to find no lunar relationship in markets have treated all lunar phases alike, lumping spring and fall together as well as summer and winter. Likewise, specific seasonal analysis tends to ignore the concurrent lunar calendar. Those who dismiss the notion that October is a rough month for stocks point out that, overall, it is not statistically the worst month, falling on average 0.5% since 1915. A proper approach to calendar research suggests that distinctions should be made among Octobers based on the lunar calendar. Here is the lunar distinction: when there is no full moon between October 3 and 19 inclusive, the Dow has been up 1.5% on average in October since 1915; in those years with a full moon between those dates, the Dow's average change is a loss of 1.9%. Seasonal analysis should recognize the lunar distinctions and vice versa. The annual lunar calendar makes those distinctions. When autumn panics are viewed through its prism, the results are remarkable.
FOOTNOTES
- Carolan. The Spiral Calendar. New Classics Library. 1992
- Sobel. Panic On Wall Street. Dutton. 1988. pp. 318-320
- ibid. p. 106
- Kindleberger. Manias, Panics and Crashes. Basic Books. 1989. p. 131
- ibid. p. 131
- ibid. p. 143
Charles H. Dow Award Winners • May 1999
by Eric Bjorgen
by Steve Leuthold

About the Author | Steve Leuthold
Steve Leuthold has been an investment strategist, manager, and researcher for over 45 years. He is Founder of The Leuthold Group, LLC, an institutional investment research firm established in 1981.
In 1987, Steve initiated a small investment management operation that is driven almost exclusively by The Leuthold Group’s own internal research. Steve served as Chief Investment Officer of the registered investment advisor, was senior executive of the investment portfolio management team, and was a member of Leuthold Funds’ Board of Directors through 2012. Since 1987, assets under management climbed to over $5 billion, including seven mutual funds, and around 200 privately managed accounts. Nearly 85% of assets are managed per the firm’s proprietary active asset allocation methodology.
Considered an industry expert, Steve is frequently cited in leading trade journals, makes appearances on broadcast media financial programs, and is regularly invited to speak at meetings and investment conferences throughout the country. In the past, he has served as a contributing editor, authored articles for major industry publications, and has conducted seminars for industry-related university curriculums including the Universities of Minnesota, Wisconsin, Arizona and St. Thomas.
Steve is the author of many books and articles, including The Myths of Inflation and Investing. He has been a frequent contributor to leading trade journals, including The Wall Street Journal, Barron’s, The Journal of Portfolio Management, The Financial Analysts Journal, Newsweek, and Business Week.
In 1999, Steve and former Leuthold team member Eric Bjorgen co-authored a special study, “Corporate Insiders’ Big Block Transactions”, for which they won the prestigious Charles H. Dow Award. In addition, Steve Leuthold’s Financial Analyst Journal article, “Inflation, Deflation and Interest Rates,” was awarded the 1982 Graham and Dodd Scroll by the Financial Analysts Federation.
Most recently Steve has been spending his time between his Family Office (Leuthold Strategies, LLC) in Minneapolis and continued efforts in obtaining forestland in the U.S. to devote to Wilderness Forests via his Family Foundation. He has spent many hours with the Nature Conservancy of Maine, New York, and New Hampshire to find available land to purchase with the purpose of preserving natural land and wilderness for generations to come. His Family Office is a small investment firm that manages family assets and the Leuthold Family Foundation, as well as private clients.
Steve remains very active in his business, but hobbies include his garden in Maine (five varieties of potatoes), environmental and animal welfare activities, as well as reliving his past as a Rock ‘n’ Roll pioneer. After graduating from Albert Lea High School (Minnesota) in 1956, Steve pursued a higher education at the University of Wisconsin and graduated from the University of Minnesota. He has four children, Kurt, Mike, and Russell Leuthold and Linda Donerkiel. He spends much of his time with his partner of 20 years, Sharon Hovey.
SUMMARY
- The Leuthold Group has been compiling corporate insiders' big block transactions since 1982 in order to gauge the sentiment of the "smart money."
…"Big Block" transactions are defined as those involving more than 100,000 shares or having a total transaction value greater than $1,000,000.
- Corporate insiders have recently been selling at levels approaching historical extremes. The short-term outlook for the stock market is beginning to look negative by this measure.
…The increased levels are partially due to the SEC's 1997 code revision, which shortens the holding period of restricted shares. Additionally, a lower long-term maximum capital gains tax rate (20%) took effect in '97 and has likely resulted in increased selling by insiders.
- Since 1983, when net selling measured in dollars has reached historically high levels, the stock market has performed poorly over the next 12 months.
…Normalizing the data allows a better historical perspective by adjusting for the growth of the stock market over time. When normalized, the current dollar volume of net selling, while still high, is significantly below the levels of 1983, 1989 and the selling extremes of mid-1998.
- When net selling (measured in dollars) reaches historically low levels, the stock market has demonstrated significant above-average performance over the next 12 months.
…The 10-week moving average is particularly useful for signaling bear market bottoms. This measure has signaled "net buying" from corporate insiders a total of three times in the last 15 years; all were within weeks of bear market bottoms.
- The number of net buy/sell transactions is also currently near historic highs. Even when the data is normalized, the number of selling transactions has been increasing since 1991.
- Conclusion: Quantitatively testing the normalized historical data for insiders' net transaction levels measured in dollars confirms that when historically high or low extreme levels are hit, they offer excellent trigger points for asset allocators and market timers.
INTRODUCTION
The efficient market hypothesis holds that the market discounts information as soon as it is made public. While the degree of efficiency in the U.S. stock market is debatable, most would agree that corporate insiders possess superior knowledge about their own company’s prospects for the future. Clearly insider trading laws prohibit using “material, non-public information” for financial gain, but hunches about the success or failure of a new product line, for instance, can come into play when a corporate insider decides whether to accumulate or sell company stock.
Consequently, monitoring the significant buying and selling of company insiders should lend some insight concerning a firm's financial health and growth prospects…insight that might not be gleaned from the latest quarterly statements. It then follows that, since the stock market is the sum of all public firms, the aggregate buying and selling patterns of all insiders from these firms should lend insight about the future prospects of the stock market. This study examines the merits of this assumption.
Since 1982, the Leuthold Group has been tracking corporate insiders' big block transactions on a weekly basis. It is one of the components that we use in the weekly Major Trend Index's "Sentiment" category. We are the only research firm that we are aware of that compiles this type of data. The SEC makes information on insiders' transactions available weekly. By law, all corporate insiders (and beneficial owners who hold 10% or more of outstanding shares) are required to file Form 4 by the 10th day of the month following a transaction. Each week the latest filings are compiled and published in Vickers Weekly Insider. This is where we begin.
COMPILING THE INSIDERS BIG BLOCK DATA
For our purposes of gauging insiders' sentiment, we ignore small transactions – focusing only on big block transactions involving the buying/selling of more than 100,000 shares or those transactions with a total value of $1 million or more. We ignore the transactions of corporations, foundations, trusts and other institutional shareholders, since these transactions are often motivated by factors that have nothing to do with the financial prospects of a company.
The resulting list of buy and sell transactions is logged and then summed up to yield a weekly aggregate net dollar amount of buys vs. sells. Most of the time the net selling is much greater than the net buying, but this is not always the case. We then tabulate the aggregate net number (frequency) of buys vs. sells meeting the above criteria. An example of a recent week's list of qualifying transactions appears in the appendix to illustrate the individual transactions that we look at in deriving a weekly reading.
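A minimal sketch of this screening and weekly aggregation appears below; the transaction records, field names, and the list of institutional filer types are invented for illustration and are not the Vickers data described above.

```python
# Screen for big block transactions, then aggregate net dollars and net count
# by week.  All records below are made up for illustration.
BIG_BLOCK_SHARES = 100_000
BIG_BLOCK_DOLLARS = 1_000_000

transactions = [
    # (week, filer_type, side, shares, price)
    ("1999-03-24", "officer",     "sell", 250_000, 30.0),
    ("1999-03-24", "director",    "buy",  120_000, 12.5),
    ("1999-03-24", "corporation", "sell", 500_000, 20.0),   # excluded: institutional filer
    ("1999-03-24", "officer",     "sell",  40_000, 50.0),   # $2.0M: qualifies on value
]

def qualifies(filer_type, shares, price):
    """Big block test: individual insider, more than 100,000 shares or more than $1 million."""
    if filer_type in {"corporation", "foundation", "trust", "institution"}:
        return False
    return shares > BIG_BLOCK_SHARES or shares * price > BIG_BLOCK_DOLLARS

net_dollars, net_count = {}, {}
for week, filer, side, shares, price in transactions:
    if not qualifies(filer, shares, price):
        continue
    sign = 1 if side == "sell" else -1          # sells counted as positive, echoing the chart convention
    net_dollars[week] = net_dollars.get(week, 0.0) + sign * shares * price
    net_count[week] = net_count.get(week, 0) + sign

for week in net_dollars:
    print(week, f"net dollars {net_dollars[week]:+,.0f}", f"net transactions {net_count[week]:+d}")
```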
PART A: THE RAW DATA
Chart 1 depicts the weekly net dollar amount of insiders' buying and selling activity (vertical bars) and the 10-week average (lower line) vs. the S&P 500 on a semilog scale. Note: on this chart and all that follow, points above the zero axis represent net selling and the very infrequent points below the axis represent net buying.
Observations
- Over the course of the last sixteen years, the weekly data (represented by the bars) shows that weeks of net selling outnumber weeks of net buying by about 12:1. This is primarily because insiders’ sell transactions include the sale of stock resulting from the exercise of options, although no corresponding buy transaction occurs when options are issued. In the last seven years, the sell/buy ratio has climbed even higher (to about 50:1), partly due to increasing use of options to compensate corporate insiders.
- The 10-week average (lower line) passes below the zero axis into net buying territory on only three occasions within the last 16 years (marked with arrows). Each time this occurred, a bear market bottom occurred within a short period, sometimes within weeks. Intermittent weeks of strong net buying and low net selling accompanied these bear market lows, which has made this measure an excellent "buy" signal at the lows of the three significant down markets since 1983.
- Since the late 1990 signal, the 10-week average has not been in net buying territory, but there haven’t been any bear markets either. The closest the average came to net insider buying was in early 1995 (indicated by the dashed arrow). This four-year low in net selling could not have been better timed, coming right at the end of an 18 month consolidating market.
- On the sell side, this non-normalized series is more difficult to interpret. Because the collective wisdom of all corporate insiders is thought to be a forward-looking stock market indicator, high levels of selling should theoretically precede stock market corrections. However, as option issuance and market capitalization have increased, the 10-week average and the weekly data have shown a strong tendency to drift upward. Fixed "sell" trigger levels that marked extremes ten years ago are commonplace today. Normalizing the data avoids this problem (see Part B of this study).
- The latest reading, through April 7, 1999, shows the 10-week average is now in a rising trend, but it is still 33% below the all-time high recorded in late May 1998. Also note that the week of 3/24/99 posted the second highest single-week reading of net aggregate insider selling ever ($2.8 billion).
While the dollar amount measures the magnitude of insiders' transactions, the net number of transactions measures the breadth of net sells/buys. Normally the two data series move together (e.g., when the dollar amount of net selling rises, the number of net "sells" also rises). But this is not always the case. Occasionally the weekly dollar amount of net selling surges, but the net number of sell transactions remains flat. This indicates that there were one or more very large transactions during that particular week. For instance:
…In July 1989, weekly dollar volume soared to $2.1 billion while net number of sells actually fell from the preceding week. This was the result of insider Carl Icahn liquidating $1.3 billion worth of Texaco shares, a company for which he had served as an officer.
…During a weekly reporting period in May 1995, two significant insiders at Duracell sold big blocks of shares, accounting for 90% of the soaring $1.5 billion volume that week. But the number of net insider sales that week was down from the levels of previous weeks.
…In March of 1998, Bill Gates and Paul Allen sold shares of Microsoft totaling over $1.6 billion during a two-week period, accounting for about half of the all-time record dollar volume of insider sales reported that week ($3.2 billion). But unlike the previous two cases, the net number of sells also hit a record high. What are insiders’ transactions revealing about the stock market when there are high levels of conviction (indicated by the record dollar amount) and broad consensus (the record number of net sells)?
…Historically it has meant that a market peak may soon be at hand.
The current 10-week moving average is also in a rising trend. In terms of the weekly number of net sells, the all-time high was nearly beaten in the week ending March 24th, when a net 406 individual corporate insiders were selling big blocks of their company’s stock (the previous record of 410 net sells, set in 1998, still stands).
…Because non-normalized data has a tendency to drift upward as market capitalization and the total number of traded issues increase, it is necessary to normalize the data if any comparisons are to be made between today and sixteen years ago. In Part B we normalize the dollar volume data in order to provide a better historical perspective.
PART B: THE NORMALIZED DATA
Chart 3 is similar to the first chart we presented except that the data is normalized as a percentage of total equity market capitalization. This allows more meaningful comparison over time and holds the data’s tendency to drift upward in check.
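For illustration, a minimal Python sketch of this normalization and of the 10-week smoothing might look like the following; it assumes the weekly net dollar figures and total market capitalization are already available, and the names are hypothetical.

```python
# Hypothetical sketch: weekly net selling expressed as a percentage of total
# equity market capitalization, then smoothed with a 10-week moving average.

def normalize(net_dollars_by_week, market_cap_by_week):
    """Weekly net selling as a percent of total market capitalization."""
    return [100.0 * d / cap
            for d, cap in zip(net_dollars_by_week, market_cap_by_week)]

def moving_average(series, window=10):
    """Trailing simple moving average; the first window-1 values are None."""
    out = [None] * (window - 1)
    for i in range(window - 1, len(series)):
        out.append(sum(series[i - window + 1:i + 1]) / window)
    return out
```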
- On the buy side, normalization does not significantly change the original conclusions. No amount of normalization will change net selling to net buying. Since late 1990, the 10-week average has not returned to net buying territory, but it still came fairly close in early 1995.
- However, the net selling peaks of the 1980s now stand out prominently. In fact, when normalized, the current reading for the 10-week average falls short of the peaks recorded in 1983, 1989, 1993 and 1997. But with the rapid increase in net selling so far in 1999, the most recent reading is approaching historical extremes. The normalized data still indicates that the recent high levels of insiders’ big-block selling may have negative implications for the stock market.
- Since 1983, upward surges in insiders’ net selling appear to precede periods of market weakness; however, the chart above shows that the results are not entirely consistent. In 1983, 1987, 1993 and late 1989 (the 1990 bear market), surges in net selling did foreshadow intermediate corrections that were quickly followed by a longer-term rallying market. But the three other peaks that occurred during the 1990s were soon followed by periods of market consolidation. The most recent set of sell signals occurred during Q2 of 1998, and provided a timely exit signal for the market declines that occurred the following quarter.
- The dashed horizontal lines on the chart at .07% (seven-hundredths of a percent) and .01% (one-hundredth of a percent) represent the points at which selling reached historically high and low extremes. Since 1983, the 10-week average has moved outside this range only about 14% of the time.
- The normalized data on the chart seems to provide strong evidence (at least visually) of a link between high levels of net selling (above the .07% line) and subsequent inferior market performance. On the other side of the coin, the few instances of insider net buying have demonstrated a record of signaling bear market lows. Even when net selling has merely fallen to historically low levels (below the dashed .01% line), it has been a good time to start buying stocks.
This study wouldn’t be complete without some quantitative evidence to support the visual evidence. In Part C we test the relationship between high and low levels of net selling and subsequent market performance.
PART C: TESTING THE DATA
The table below shows subsequent market performance for different levels of dollar volume of insider net selling. We show the price performance of the S&P 500 over 3, 6, 9 and 12 month time periods when the insider selling 10-week average reaches historical extremes on a normalized basis (refer to the previous chart; a code sketch of this test appears after the results below). As the dashed lines on the normalized 10-week average dollar chart indicate, extremes are signaled when net selling falls below one-hundredth of a percent of total market capitalization (bullish) and when net selling reaches seven-hundredths of a percent of total market capitalization (bearish). The 10-week average is now within 10% of the bearish line and in a rising trend. Only time will tell where the market goes from here, but the current high level of net selling gives rise to concern about what insiders are collectively revealing about the outlook for the stock market in the coming year.
- Performance subsequent to high levels of net selling was below “normal range” in all time frames. Average 3-month subsequent performance when in “bearish” range was a loss of 0.6% compared to 3.7% average gain when in “normal” range.
- At low net selling levels, subsequent market performance was better than the “normal range” in all time frames. Outperformance was greatest at the 3-month horizon, but remained a consistent 280-320 basis points over longer time horizons.
- This study covers 1983 to date, roughly 848 weeks. During this time, the normalized 10-week average dollar amount has spent a total of 74 weeks in the historically low selling (bullish) range, or 8.7% of the time. The average has spent 48 weeks in the historically high selling (bearish) range, representing 5.7% of the time. Following insiders’ buying and selling cues over this sixteen-year time span would have been profitable for the market timer or asset allocator.
- Recent periods of high net selling include the mid-1998 signals that occurred as the market was in decline. Not enough time has elapsed to evaluate the longer-term merits of these latest readings, but the preliminary data shows the mid-1998 signals were followed by below-average performance over the 3-month horizon, though not over the longer time frames. What occurs in the next 3-6 months will determine whether these latest signals prove as productive as those in the past.
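The forward-return test referenced above can be sketched, purely for illustration, along the following lines in Python; the threshold constants mirror the dashed lines on the normalized chart, a 13-week horizon is used to approximate three months, and all names are hypothetical.

```python
# Hypothetical sketch of the conditional forward-return test in Part C.

BULLISH_MAX = 0.01   # net selling below 0.01% of market cap (bullish extreme)
BEARISH_MIN = 0.07   # net selling above 0.07% of market cap (bearish extreme)

def forward_return(prices, i, weeks):
    """Percent change in the index from week i to week i + weeks, or None."""
    j = i + weeks
    if j >= len(prices):
        return None
    return 100.0 * (prices[j] / prices[i] - 1.0)

def average_forward_returns(norm_avg, prices, horizon_weeks=13):
    """Average forward index return after bullish, bearish and normal readings."""
    buckets = {"bullish": [], "bearish": [], "normal": []}
    for i, level in enumerate(norm_avg):
        if level is None:
            continue                       # 10-week average not yet defined
        r = forward_return(prices, i, horizon_weeks)
        if r is None:
            continue
        if level < BULLISH_MAX:
            buckets["bullish"].append(r)
        elif level > BEARISH_MIN:
            buckets["bearish"].append(r)
        else:
            buckets["normal"].append(r)
    return {k: (sum(v) / len(v) if v else None) for k, v in buckets.items()}
```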
PART D: CONCLUSIONS
- The 10-week average of dollar volume of net insider sells seems to work extremely well in identifying bear market bottoms. When net selling has hit historical lows, it has identified the market bottoms of 1984, 1987 and 1990 within several weeks.
- The 10-week dollar average also has a proven track record of signaling periods of impending market weakness when insider net selling reaches historically high levels (like now).
. . . Normalizing the data to reflect the growth of the stock market over time is necessary to identify the key selling triggers on a historical basis. Testing the data confirms the value of tracking trends of insiders’ aggregate buying/selling behavior. Using the historical extremes as action points to increase or decrease equity holdings has worked well.
- Net dollar volume and net number of transactions often move together, but occasionally an unusually large single transaction can swell dollar volume without causing the number of net sells to rise. When this occurs, the number of net sells becomes significant as a confirming indicator.
- Our measure of insider buys/sells indicates that since mid-1997, insiders have been selling their stock at historically high levels in both nominal and normalized terms. Based on the historical relationship between levels of net selling and subsequent market performance, insiders may be signaling that the road ahead for the stock market will be rocky in the coming year.
. . . However, part of the increase in insider net selling is due to recent changes in SEC restrictions on the length of holding periods for restricted shares. This has encouraged insiders to sell more freely than before. Additionally, reductions in the maximum capital gains rate may have resulted in increased selling by corporate insiders. At this point, it is difficult to tell how much this is contributing to current high levels of net selling.
Written May 1998
Charles H. Dow Award Co-Winner • May 2001
by Charles D. Kirkpatrick

About the Author | Charles D. Kirkpatrick
Charles Kirkpatrick, who holds the Chartered Market Technician (CMT) designation, is the president of Kirkpatrick & Company, Inc., and has been a featured speaker before such professional organizations as the New York Society of Security Analysts, Financial Analysts Federation, CMT Association, the Foundation for the Study of Cycles, and numerous colleges and universities. He is a former Board Member of the CMT Association, former editor of the Journal of Technical Analysis and former Board Member of the Technical Analysis Educational Foundation, responsible for the development of courses in technical analysis at major business schools.
Throughout his 45 years in the investment field, Charlie has received recognition from both the national media and his peers. He has been featured on Wall $treet Week, CNBC, and in the magazine Technical Analysis of Stocks and Commodities, has been quoted in such publications as The Wall Street Journal, BusinessWeek, Forbes, Futures magazine, Money magazine and The New York Times, and has written articles for Barron’s and the Market Technicians Journal. He is the only person to win the annual Charles H. Dow Award twice, for articles on technical analysis in 1993 and 2001. In 2008, he won the CMT Association’s Annual Award for “outstanding contributions to the field of technical analysis.”
In 1970 Mr. Kirkpatrick co-founded the Market Forecasting division of Lynch, Jones & Ryan and in 1978 started his own market forecasting and brokerage firm, Kirkpatrick & Company, Inc., which published an investment-strategy letter, provided computerized stock-selection methods to institutional portfolio managers, managed a hedge fund, and traded options on the PHLX and CBOE. While currently retired from the investment management, brokerage and trading businesses, he continues to publish his Market Strategist letter, calculate his award-winning stock-selection lists, write books and articles on trading and investing, and as an Adjunct Professor of Finance, teach technical analysis at Brandeis University International Business School.
A graduate of Phillips Exeter Academy, Harvard College (AB), and the Wharton School at the University of Pennsylvania (MBA), Mr. Kirkpatrick lives in Kittery, Maine.
INTRODUCTION
In the 1960s and 1970s, as the ability to use computers became more widespread, a number of experiments were performed on stock market and corporate data to determine the best variables for selecting stocks. These experiments were crude by today’s standards, but in their innocence, these analysts discovered many truths and dispelled many myths. For example, one experiment with Price/Earnings ratios (P/E) suggested, ironically, that contrary to beliefs held even today, the best P/E was a high one, not a low one – that stocks with high P/Es tended to outperform those with low P/Es[1]. Experiments like these allowed analysts to focus on the value of a number of variables that heretofore had been too complicated or time-consuming to pursue. At that time, when the random walk theory, beta theory, efficient market hypothesis (EMH) and capital asset pricing model (CAPM) were gaining in popularity, Robert A. Levy published a book[2] and an article in the Journal of Finance[3] based on his Ph.D. thesis at American University[4] that showed how well-performing stocks, i.e. those with relative price strength, continued to perform well and that poorly performing stocks likewise continued to perform poorly. Levy’s theory was not original. The theory of relative price strength had been around for a long time.[5] However, Levy, with the new aid of computer power, added some nuances and calculations that had not previously been used and found them to be very successful. Since they tended to refute the then-popular theory that stock price action was entirely random (the Efficient Market Hypothesis), his conclusions were subject to considerable criticism; his calculations and statistical evidence were severely condemned; and finally, his results were left in the dust of academic vitriol.[6] Though long forgotten now by most analysts, his theories nevertheless have been kept alive by a few. A model based largely on these theories, run in real time (‘live’) and published every week for 17-1/2 years, has shown his calculations to have been, and to continue to be, useful in selecting stocks with higher-than-market post-performance.
THE TEST
Most computerized experiments and stock market models are calculated using what is called “optimization.”[7] In the attempt to find variables that are important in determining the future of stock prices, most experiments use past data and adjust the variables and their parameters to find a ‘fit’ between those variables and stock post-performance. This is called “forced optimization.” Most discussion then centers around how closely the results fit the data, how sophisticated the statistical methods were, and why the results occurred as they did, forgetting that the results may have no usefulness in the future. Some computerized-trading model builders avoid forced optimization by splitting their data into several parts. They perform their experiments on one or more parts, and then test the results against the other
parts. However, the best and most convincing test of any theory is to see if it works by itself using completely unknown data. This is what this study accomplished weekly over 17-1/2 years.
In July 1982, to test variables of relative price strength and relative earnings growth, selection and deletion criteria were established, a performance measurement determined, and a stock list developed (“List 1”). Later, in 1999, a second list (“List 2”) was established using slightly different criteria. Each list was reported weekly in Kirkpatrick’s Institutional Market Strategist[8], and periodically performance results were also reported. As of December 31, 2000, List 1 had appreciated 5086.6% versus an S&P 500 gain of 1087.6% and a Value Line Geometric gain of 221.9% (see Chart I). List 2, during a very difficult and slightly declining stock market, appreciated 137.3% versus an S&P 500 gain of 7.41% and a Value Line Geometric loss of 9.99% (see Chart II). The second list also outperformed the original list, which gained 75.19% over the same two-year period. Most of the performance occurred during a generally rising stock market, but none of the results include dividends, which, though small in most cases, would have made the results even more impressive. Transaction costs were not included. Today, at radically discounted levels, commissions are an almost negligible cost except in high-turnover models.
SELECTION CRITERIA
The first and longest-running test list, List 1, included relative price strength, relative earnings growth, and a simple chart pattern as variables for stock selection. The selection criteria for List 2 were slightly different. Relative price strength and earnings growth were used, but instead of a chart pattern, relative price-to-sales ratio (“PSR”) was included to reduce the risk of loss.
The reason for the change in criteria between the first and second test lists was that, with the general market having risen since 1982 and the strong stocks having become so volatile, the danger existed that a severe correction would exert even more downward pressure on the list’s performance. For example, in the bear markets of the 1960s and 1970s, relative price strength initially performed well as a selection criterion, until the very end of the general market declines, when the strongest stocks tended to decline the sharpest and suffered disproportionately large losses.
To prevent such a loss in an individual stock, in List 1 a simple chart pattern was imposed as a ‘stop’ on negative price action to forcefully delete a stock early and prevent it from being caught in a severe decline. However, later, through tests of stock price patterns alone, no discernible advantage was gained.[9] Therefore, to avoid changing the original selection criteria for List 1 and thereby interrupting its long record of success, List 2 was begun using another approach. Rather than have a price stop to minimize loss, the danger of negative performance was minimized in the beginning by selecting only those stocks trading at low relative price to sales. Presumably these stocks were trading at bargain prices already. As it turned out, three additional advantages arose from this model: (1) portfolio volatility declined rapidly – portfolio beta was consistently below one, whereas List 1 often had a portfolio beta approaching two, (2) turnover declined from an average holding period of 22 weeks in List 1 to well over a year in List 2, and (3) the size of the portfolio was considerably smaller and more manageable – near 5 to 15 stocks in List 2 versus up to 80 in List 1.
PERFORMANCE MEASUREMENT
Each week, before making changes to a list, the average percentage gain or loss of each stock in the list was recorded. For example, say the list included only stock A and stock B. If stock A was up 5% and stock B up 1%, the list was recorded as having risen 3%, the mean of the two stock performances. These list performances were then accumulated each week over the test period. The equivalent in the real world would be for a portfolio manager to invest equally in the selected stocks one week, record the combined performance for the next week, and then readjust each stock as well as add new ones and delete old ones such that for the coming week the portfolio would again be equally weighted in each stock. Otherwise, the stronger stocks would accumulate over time into a larger relative position in the portfolio and have an unequal effect upon the portfolio’s total performance. Equal weighting of each stock each week was the best method to reliably measure the criteria used in selecting the stocks.
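A minimal Python sketch of this equal-weighted weekly measurement, offered only as an illustration of the arithmetic described above, follows; the names are hypothetical.

```python
# Hypothetical sketch of the weekly performance measurement: the list's weekly
# return is the simple average of its members' weekly returns, equivalent to
# rebalancing to equal weights every week.

def weekly_list_return(pct_changes):
    """pct_changes: each member stock's one-week percent change."""
    return sum(pct_changes) / len(pct_changes)

def cumulative_performance(weekly_returns):
    """Compound the equal-weighted weekly returns into a cumulative percent gain."""
    value = 1.0
    for r in weekly_returns:
        value *= 1.0 + r / 100.0
    return 100.0 * (value - 1.0)

# The example from the text: stock A up 5%, stock B up 1% -> the list rose 3%
assert weekly_list_return([5.0, 1.0]) == 3.0
```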
SPECIFIC SELECTION CRITERIA
Relative Price Strength
Most measures of relative strength weigh a stock’s performance against a market average or index such as the S&P 500. This is wrong. The addition of a market average only complicates the results. For example, market averages are capital-weighted; individual stocks are not. Furthermore, this kind of measurement makes it difficult to weigh one stock against another, difficult to tell when price strength is changing, difficult to determine comparative periods, and difficult to quantify for model building. The best calculation for a stock’s relative strength is to measure price performance equally against all other stocks over some specific time period. Until the arrival of computer power, this kind of calculation was very difficult and time-consuming. By the 1960s it was not.
Several methods of quantitatively weighing price performance have been proposed.[10] More recently, and since List 1 was begun, for example, Jegadeesh and Titman (1993) used six and twelve month returns held for six months during the period 1965-1989. Their results demonstrated a post– performance excess return of 12.01%. This evidence tends to confirm Levy’s earlier work. However, it was not available when the test model was begun. Instead, both List 1 and List 2 used a derivation of Levy’s original calculations.
Levy originally calculated the ratio of a stock’s 131-day moving average to its latest price. This ratio was calculated for all stocks. The total list of ratios was then sorted. Each stock was allocated a relative price strength percentile between 99 and 0 based on where its ratio fell in the spectrum of ratios: the 0 percentile for the highest ratio (the weakest stock) and the 99th percentile for the lowest ratio (the strongest stock).
To make the ratio easier to calculate and to understand, the test lists changed several aspects of Levy’s calculation but not its essence. Rather than using the ratio of the moving average to the current price, the inverse was used. The ratio of current price to the moving average made the high percentiles represent the highest relative strength. Thus the 99th percentile represented the strongest stock and the 0 percentile the weakest. Second, instead of 131 days of data in the moving average, the test lists used 26 weeks, approximately the same period (131 trading days is 26.2 weeks, not including holidays). In this manner, a large amount of data was not necessary (26 weekly data points per stock versus 131 daily points), yet the resulting ratios were equivalent and the effects on post-performance minimal. The closing price used each week was the Thursday close.
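For illustration, the relative price strength percentile described above might be computed roughly as follows in Python; the data structure and names are assumptions, not the author’s code.

```python
# Hypothetical sketch of the relative price strength percentile: the ratio of
# the latest Thursday close to its 26-week moving average, ranked across all
# stocks so that 99 marks the strongest stock and 0 the weakest.

def relative_strength_percentiles(weekly_closes):
    """weekly_closes: dict of symbol -> list of at least 26 weekly closes."""
    ratios = {}
    for symbol, closes in weekly_closes.items():
        ma26 = sum(closes[-26:]) / 26.0
        ratios[symbol] = closes[-1] / ma26      # inverse of Levy's original ratio
    ranked = sorted(ratios, key=ratios.get)     # weakest ratio first
    n = len(ranked)
    return {symbol: (int(99 * i / (n - 1)) if n > 1 else 99)
            for i, symbol in enumerate(ranked)}
```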
Relative Earnings Growth
Until this point, it would appear that the study was involved solely with technical analysis and price behavior. However, while technical analysis has its weak and strong points, a stock selection method must use all variables that appear to work. Relative earnings growth is one of them. To a certain extent, “earnings” are a manufactured statistic. They depend on many accounting tricks and are not always truthful measures of a company’s success or failure. Special charges are often later written off against earnings, and depreciation is recalculated, or taxes reassessed. Reported earnings, therefore, are often subject to controversy and exaggeration. No one can argue that a stock closed at a certain price (at least within some small bound), but analysts often disagree on exactly what a company’s actual earnings may be. This becomes even more complicated when earnings are estimated into the future.[11] However, earnings reports are watched, especially for surprises, and are acted upon by investors. Tests have shown that reported relative earnings growth has a positive correlation to the post-performance of a stock.[12] Part of this, of course, is because reported earnings include any earnings surprise.
To be as sensitive as possible without the effect of seasonality, Levy calculated earnings growth by taking the most recent five quarters of reported earnings and measuring the ratio of the latest four quarters’ total to the first four quarters’ total.[13] Thus three quarters overlapped, and the seasonal tendency of many quarterly reports was eliminated. A ratio greater than one showed that earnings were growing and by how much; a ratio less than one showed that earnings growth was negative and by how much. When companies reported losses for any consecutive four quarters, the ratio was not calculated. Growth was thus measured over a relatively short period of five quarters. This same calculation was used in determining the earnings growth criteria for both test lists. As with relative price strength, the ratio for each stock was ranked against the same ratio for all other stocks and a percentile ranking determined, whereby those stocks with the highest earnings growth were ranked in the highest percentiles, and vice versa for those with the lowest earnings growth.
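A hedged Python sketch of this earnings-growth ratio and its percentile ranking follows; the treatment of losses is a simplification of the rule described above, and all names are hypothetical.

```python
# Hypothetical sketch of the relative earnings growth ranking: five most
# recent quarters, latest four divided by the first four (three overlap),
# then percentile-ranked so the highest growth sits in the 99th percentile.

def earnings_growth_ratio(last_five_quarters):
    """last_five_quarters: reported quarterly earnings, oldest first.
    Returns None when either four-quarter total shows a loss (a simplification
    of the rule excluding companies with four consecutive quarters of losses)."""
    first_four = sum(last_five_quarters[:4])
    latest_four = sum(last_five_quarters[1:])
    if first_four <= 0 or latest_four <= 0:
        return None
    return latest_four / first_four             # > 1 means earnings are growing

def growth_percentiles(ratios_by_symbol):
    """Rank the valid ratios into percentiles (99 = fastest growth)."""
    valid = {s: r for s, r in ratios_by_symbol.items() if r is not None}
    ranked = sorted(valid, key=valid.get)       # slowest growth first
    n = len(ranked)
    return {s: (int(99 * i / (n - 1)) if n > 1 else 99)
            for i, s in enumerate(ranked)}
```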
Chart Pattern
As mentioned above, the test required some means to reduce the risk of an individual stock’s failure. List 1, the only list to use a chart pattern, was by its nature very volatile, and its selected stocks very high in price and valuation. This is only natural when stocks with high relative strength and high earnings growth are selected. To reduce the danger of an individual collapse, the use of a simple chart pattern was thought to be the best method at the time to eliminate stocks that began to decline severely, before they collapsed. Computerizing chart patterns, especially twenty years ago, was and still is a difficult problem.[14] The simplest method was to produce a simple point-and-figure chart, one that shows only price reversal points after a price move of predefined magnitude has occurred. To do this, only the magnitude of the price move was needed to determine the reversal point. As an example, many point-and-figure charts require a three-point reversal magnitude for a price reversal point. If a stock price rises from 50 to 56, then declines to 48, since the required three points up and down have been met, the reversal point was 56, the highest point at which the stock price had risen by at least 3 points and reversed by 3 points. This would be called an “upper reversal point” since it marked a top in prices. Had the stock only risen to 52 before declining to 48, no reversal point would have been recorded, since the stock had not risen from 50 by the required magnitude of 3 needed to establish a reversal point. Conversely, had the stock then declined to 48 and risen back to 55, the price of 48 would have been a “lower reversal point,” since the stock had declined by at least 3 into 48 and then risen more than the required 3 immediately afterward. This combined behavior would then have left us with a history of an upper reversal point at 56 and a lower reversal point at 48. In the chart formula, the last two upper and lower reversal points were recorded each week. When prices rose above two upper reversal points, the chart was said to be “advancing,” and when prices declined below two lower reversal points, the chart was said to be “declining.”
In addition, in the chart formula, a sliding scale of reversal magnitudes was established to minimize the effect of absolute price differences. For example, a 3-point reversal in a 100 dollar stock is less significant than a 3-point reversal in a 20 dollar stock. A sliding scale of reversal magnitudes equalized the requirements for a reversal among all stocks.
Rather than be concerned about the actual patterns of the reversal points, List 1 only used the reversal points themselves. Only those advancing stocks were considered for selection, and those stocks in the list that turned down below two lower reversal points were eliminated. This provided the ‘stop’ needed to protect the portfolio from extraordinary negative events.
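Purely as an illustration of the reversal-point rules described above, the following Python sketch uses a fixed 3-point reversal (the study itself used a sliding scale of magnitudes); the names and structure are assumptions, not the author’s chart formula.

```python
# Hypothetical sketch of the point-and-figure reversal-point logic.

def reversal_points(prices, magnitude=3.0):
    """Return (upper, lower) reversal points for one stock's price history."""
    upper, lower = [], []
    leg_start = extreme = prices[0]   # start and farthest point of the current leg
    direction = None
    for p in prices[1:]:
        if direction is None:
            if p != extreme:
                direction = "up" if p > extreme else "down"
                extreme = p
        elif direction == "up":
            if p > extreme:
                extreme = p
            elif extreme - p >= magnitude:            # the up leg has reversed...
                if extreme - leg_start >= magnitude:  # ...and spanned enough to count
                    upper.append(extreme)
                leg_start, extreme, direction = extreme, p, "down"
        else:                                         # direction == "down"
            if p < extreme:
                extreme = p
            elif p - extreme >= magnitude:
                if leg_start - extreme >= magnitude:
                    lower.append(extreme)
                leg_start, extreme, direction = extreme, p, "up"
    return upper, lower

def is_advancing(price, upper):
    """'Advancing' when the latest price is above the last two upper reversal points."""
    return len(upper) >= 2 and price > max(upper[-2:])

def is_declining(price, lower):
    """'Declining' when the latest price is below the last two lower reversal points."""
    return len(lower) >= 2 and price < min(lower[-2:])

# The example from the text: 50 -> 56 -> 48 -> 55 gives an upper reversal
# point at 56 and a lower reversal point at 48.
assert reversal_points([50, 56, 48, 55]) == ([56], [48])
```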
Relative Price/Sales Ratio (“PSR”)
Prices are well-known and easily accepted as valid. Annual sales of a company are also well-known and easily accepted as valid, and when combined with prices they are an excellent comparative measure of a stock’s value. The higher the price-to-sales ratio, the higher the valuation that investors have placed on the stock’s future, and also the higher the risk of failure. Lower PSRs suggest lower value placed on a stock’s future. Their advantage is that “a small improvement in profit margins can bring a lot to the bottom line, improving the firm’s future P/E. Low PSR stocks are held in low regard by Wall Street. Those with improving profit margins usually catch the Street by surprise.”[15] PSRs also include stocks with no earnings (and therefore no P/E). Many studies have shown the value of the PSR.[16] O’Shaughnessy (1998) argues that the PSR is the most reliable method of selecting stocks for long-term appreciation.[17] His method of using the PSR, however, requires that an arbitrary level be established, below which a stock is attractive. In List 2 the arbitrary level was abandoned in favor of a relative percentile. First, the ratio was calculated for each stock as the current weekly closing price divided by the last reported four quarters’ sales. Next, this ratio for all stocks was sorted and divided into percentiles such that the highest was in the 99th percentile and the lowest in the 0 percentile. This way, regardless of the general market level of valuation, a stock’s PSR could be measured against the PSR of all other stocks at the same time and in the same investment environment.
COMBINING CRITERIA INTO MODEL – THE PARAMETERS
Each week the entire list of available U.S. stocks (usually around 5,000) was screened for those stocks at or above the 90th percentile in both relative price strength and earnings growth. In List 1 an advancing chart pattern was also required. Any stock not already on the list that met these criteria was added to the list. When relative price strength declined to or below the 30th percentile, relative earnings growth declined to or below the 80th percentile, or the stock price pattern broke two previous lower reversal points, the stock was eliminated from the list. In List 2, the chart pattern was not used, but relative PSR was. The added requirement for inclusion in the list was a relative PSR at or below the 30th percentile. The deletion criteria in List 2 were the same as in List 1 except that they did not include the relative PSR, since a high level did not necessarily suggest that a stock was facing an impending decline. Additionally, the deletion threshold for relative earnings growth was reduced to the 50th percentile, since earlier experience had shown that a high threshold deleted stocks prematurely.
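A minimal Python sketch of the List 2 weekly screen described in this section follows; it assumes the three percentile ranks have already been computed relative to all stocks, and the names are hypothetical rather than the author’s.

```python
# Hypothetical sketch of the List 2 weekly screen.  Each stock carries three
# percentile ranks computed relative to all stocks: relative price strength
# ("rs"), relative earnings growth ("eg") and relative price-to-sales ("psr").

def update_list2(current_list, percentiles):
    """percentiles: dict of symbol -> {"rs": int, "eg": int, "psr": int}."""
    held = set(current_list)

    # Deletions: price strength at or below the 30th percentile, or earnings
    # growth at or below the 50th percentile (relative PSR is not a deletion test)
    for symbol in list(held):
        p = percentiles.get(symbol)
        if p is None or p["rs"] <= 30 or p["eg"] <= 50:
            held.discard(symbol)

    # Additions: price strength and earnings growth at or above the 90th
    # percentile, with a relative PSR at or below the 30th percentile
    for symbol, p in percentiles.items():
        if p["rs"] >= 90 and p["eg"] >= 90 and p["psr"] <= 30:
            held.add(symbol)

    return sorted(held)
```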
SPECIFIC RESULTS
Chart III shows the performance of List 1, the S&P 500 and the Value Line Geometric each year since the inception of the study in 1982. Chart IV shows the more recent total history for List 2 versus List 1, the S&P 500 and the Value Line Geometric. List 1, which began its weekly live trial in July 1982, gained a total of 5086.6% over the 17-1/2 years versus a 1087.6% gain in the S&P 500 and a 221.9% gain in the Value Line Geometric. This gain was 4.37 times the gain in the S&P and 16.11 times the performance of the Value Line Geometric. During that 17-1/2 year period List 1 had only three down years versus three for the S&P 500 and seven for the Value Line Geometric (see Table A below).
List 2, which began its weekly live trial in January 1999, has had only two years of history to measure. Nevertheless, the results so far have been impressive. Over the two years, the list gained 137.3% versus only a 7.41% gain in the S&P 500 and a 9.99% loss in the Value Line Geometric. It had no down years versus one for the S&P 500 and two for the Value Line, and as mentioned earlier, its beta and turnover were considerably lower than those of List 1.
CONCLUSION
The quantitative analysis of stock selection criteria has diverged in many directions since the relatively recent widespread use of the computer. Most analysis has centered on demonstrating the validity of one or more specific stock market theories and many have shown mediocre results. The method of testing these results has also fallen into the optimization trap whereby the “best fit” between data and performance was not tested with new data and especially with unknown future data.
This study took several variables that had been demonstrated to have value in stock selection and in one list, beginning in July 1982, tested the results “live” each week for 17-1/2 years. The test was done through simulating the performance of a hypothetical portfolio, thus adding an element of practicality not seen in most studies of stock selection, and used a combination of technical and fundamental factors without prejudice. These factors measured aspects of a company or its stock on a basis relative to all other stocks and were independent of general market averages except in the demonstration of performance. The results were exceptionally favorable for the methods used and demonstrated the usefulness of the variables employed. Relative price strength and relative reported earnings growth,
when calculated in the manner of this study, showed superior results when compared to market averages. Since the period over which the study was done was one of generally rising stock prices, the final test will be completed only after a major stock market decline. However, considering the long period over which the study was conducted without adjustment for market changes, the presumption is that the relative post-performance results of the methods used will continue to exceed average market returns.
FOOTNOTES
1. Avanian and Wubbels (1983)
2. Levy (1968b) – lengthy and doesn’t add much more than Levy (1967)
3. Levy (1967) – This article caused quite a stir in academia because it was the first major attempt to refute the efficient market hypothesis.
4. Levy (1966)
5. Bernard (1984), the founder of Value Line, as an example, had successfully utilized the concepts of relative price strength and relative earnings growth since the late 1920s. “dividing the stock’s latest 10-week average relative performance by its 52-week average relative price” is the price momentum factor used by Value Line. For a recent discussion of the merits of the Value Line system see Choi (2000).
6. The reaction to Levy’s (1967) article was swift. Michael Jensen (1967) of Harvard was the first to publish comments. Initially he criticized Levy’s methodology on the basis that the sample was too small and covered too short a period, had a selection bias, and contained other errors that would overstate the results. His view was that Levy’s claim that “the theory of random walks has been refuted” was a little too strong. Levy (1968c) then countered with another study including more stocks and a longer time period that produced even better results (31% versus the market’s 10% for 625 stocks from July 1, 1962 to November 25, 1966). Finally Jensen and Bennington (1970) did their own study, supposedly using Levy’s rules but including transaction costs and adjustments for risk, and using 1962 stocks from 1926 to 1966, and reported that Levy’s rules resulted in a risk-adjusted loss. We never hear of Levy’s relative strength work again.
7. See Murphy (1986) and Kaufman (1978)
8. Kirkpatrick (1978-2001)
9. Merrill (1977)
10. The entire concept of past price returns having an effect on future price returns has put academia in a quandary, since it tends to cast severe doubt on the efficient market hypothesis. Many different price return anomalies have been reported, some positive and some negative. Long-term and very short-term results tend to be consistently negative. Chopra, Lokonishok, and Ritter (1992), Cutler, Poterba and Summers (1988), De Bondt and Thaler (1985), and Fama and French (1986) show that for holding periods beyond 3 years, the return is negative. Over periods of a month or less, French and Roll (1986) and Lehmann (1990) found negative returns in individual stocks weekly and daily; Lo and MacKinlay (1990) found positive returns weekly in indices and portfolios but negative returns for individual stocks; and Rosenburg, Reid and Lanstein (1985) found negative reversals after a month. There seems, however, to be a window of about six to twelve months when returns are consistently positive. This was Levy’s hypothesis, and it has now been confirmed by Brush (1983, 1986) and Jegadeesh and Titman (1993). BARRA [see Buckley (1994)] has found the price momentum anomaly in a number of countries, including the US, Japan, the UK, Australia, and France. Explanations for these anomalies are varied but are best summed up in Chan, Jegadeesh and Lokonishok (1996, 1999).
11. The question of how accurate reported earnings are, and especially how accurate future earnings forecasts are, has been widely studied. Niederhoffer (1972) and Cragg and Malkiel (1968) suggest that reported earnings are better forecasters of future earnings than analysts’ forecasts. Indeed, Harris (1999) concludes that analyst forecasting accuracy is extremely poor, biased and inefficient. The inaccuracy is mostly the result of random error, and the performance of forecasts varies with both the company characteristics and the forecast itself. A whole series of studies has evolved around “earnings surprises,” those frequent events when reported earnings differ markedly from analysts’ expectations. La Porta (1996) has shown that superior results can be gained by exploiting these analyst errors because expectations are too extreme. Investors overweight the past and extrapolate too far into the future. Chan, Jegadeesh and Lokonishok (1996, 1999) speculate that the reason for the positive relative strength anomaly over six to twelve months is that it takes that long for the analysts to adjust. La Porta (1996) suggests that it takes several years.
12. Ramakrishnam and Thomas (1998)
13. Levy and Kripotos (1968a)
14. The best and most recent discussion about analyzing chart patterns is in Lo, Mamaysky, Wang, and Jegadeesh (2000).
15. Fisher (1996)
16. A number of financial ratios have been used and tested. The most common, of course, is the price-to-earnings ratio (PER). More recently the market-to-book ratio has become popular, and even more recently attention has returned to the price-to-sales ratio (PSR). Senchack and Martin (1987) had shown that low PSR stocks tended to outperform high PSR stocks but that low PER stocks dominated low PSR stocks on both an absolute and risk-adjusted basis. But recently Barbee (1995) showed in tests from 1979 to 1991 that price-to-sales and debt-to-equity had greater explanatory power for stock returns than did either market-to-book or market-to-equity. Liao (1995) also showed that low PSR stocks avoid the ambiguities of the CAPM approach and dominate high PSR stocks and the market.
17. O’Shaughnessy (1998) also argues for relative price strength as a selection criterion.
REFERENCES
- Avanian, Alice C. and Rolf E. Wubbels, 1983, Shaking up a cornerstone? Study raises questions on price–earnings ratio importance, Pensions & Investment Age v11n8, 21.
- Barbee, William C., Sandip Mukherji, and Gary A. Raines, 1996, Do Sales–Price and Debt–Equity Explain Stock Returns Better than Book– Market and Firm Size? Financial Analysts Journal v52n2, 56–60.
- Bernard, Arnold, 1984, How to Use the Value Line Investment Survey: A Subscriber’s Guide (Value Line, New York).
- Brush, John S., 1986, Eight relative strength models compared, Journal of Portfolio Management v13n1, 21–28.
- Brush, John S., 1983, The predictive power in relative strength & CAPM, Journal of Portfolio Management v9n4, 20–23.
- Buckley, Ian, 1994, The past is myself, Pensions Management v26n11, 93–95.
- Chan, Louis K. C., Narasimhan Jegadeesh, and Josef Lokonishok, 1999, The profitability of momentum strategies, Financial Analysts Journal v55n6, 80–90.
- Chan, Louis K. C., Narasimhan Jegadeesh, and Josef Lokonishok, 1996, Momentum strategies, Journal of Finance v51n5, 1681–1713.
- Choi, James J., 2000, The value line enigma: The sum of known parts? Journal of Financial & Quantitative Analysis v35n3, 485–498.
- Chopra, Navin, Josef Lokonishok, and Jay R. Ritter, 1992, Performance measurement methodology and the question of whether stocks overreact, Journal of Financial Economics v31, 235–268.
- Cragg, J. G., and Burton G. Malkiel, 1968, The consensus and accuracy of some predictions of the growth of corporate earnings, Journal of Finance v23n1, 67–84.
- Cutler, D. M., J. M. Poterba, and L.H. Summers, 1991, Speculative dynamics, Review of Economic Studies v58, 529–546.
- De Bondt, W. F. M., and R. H. Thaler, 1985, Does the stock market overreact?, Journal of Finance v40, 793–805.
- Fama, E. F., and K. R. French, 1986, Permanent and temporary components of stock prices, Journal of Political Economy v98, 246–274.
- Fisher, Kenneth L., 1996, PSRs revisited, Forbes v158n6, 225.
- French, Kenneth R., and Richard Roll, 1986, Stock return variances: The arrival of information and the reaction of traders, Journal of Financial Economics v17, 5–26.
- Harris, Richard D. F., (1999), The accuracy, bias and efficiency of analysts’ long run earnings growth forecasts, Journal of Business Finance & Accounting v26n5/6, 725–755.
- Jensen, Michael C. and George Bennington, 1970, Random walks and technical theories: Some additional evidence, Journal of Finance v25, 469–482.
- Jensen, Michael C., 1967, Random walks: Reality or Myth – Comment, Financial Analysts Journal, 77–85.
- Jegadeesh, Narasimhan, and Sheridan Titman, 1993, Returns to buying winners and selling losers: Implications for stock market efficiency, Journal of Finance v48n1, 65–91.
- Jegadeesh, Narasimhan, 1990, Evidence of predictable behavior of security returns, Journal of Finance v45n3, 881–898.
- Kaufman, Perry J., 1978, Commodity Trading Systems and Methods, John Wiley & Sons, New York.
- Kirkpatrick, Charles D., II, Kirkpatrick’s Market Strategist (Post Office Box 699, Chatham, MA 02633)
- La Porta, Rafael, 1996, Expectations and the cross-section of stock returns, Journal of Finance v51n5, 1715–1742.
- Lehmann, Bruce N., 1990, Fads, martingales, and market efficiency, Quarterly Journal of Economics 105, 1–28.
- Levy, Robert A. and Speros L. Kripotos, 1968, Earnings growth, P/E’s, and relative price strength, Financial Analysts Journal
- Levy, Robert A., 1968, The Relative Strength Concept of Common Stock Forecasting: An Evaluation of Selected Applications of Stock Market Timing Techniques, Trading Tactics, and Trend Analysis, Investors Intelligence, Larchmont, New York, 318p., illus.
- Levy, Robert A., 1968, Random Walks: Reality or Myth – Reply, Financial Analysts Journal, 129–132.
- Levy, Robert A., 1967, Relative strength as a criterion for investment selection, Journal of Finance V22, 595–610.
- Levy, Robert A., 1966, An Evaluation of Selected Applications of Stock Market Timing Techniques, Trading Tactics and Trend Analysis, Unpublished Ph.D. dissertation, The American University.
- Liao, Tung Liang, and others, 1995, Testing PSR filters with the stochastic dominance approach, Journal of Portfolio Management v21n3, 85–91.
- Lo, Andrew W., Harry Mamaysky, Jiang Wang, and Narasimhan Jegadeesh, 2000, Foundations of technical analysis: Computational algorithms, statistical inference, and empirical implementation/discussion, Journal of Finance v55n4, 1705–1770.
- Lo, Andrew W., and Craig A. MacKinlay, 1990, When are contrarian profits due to stock market overreaction? Review of Financial Studies v3, 175–206
- Merrill, Arthur A., 1977, Filtered Waves, Basic Theory: A Tool for Stock Market Analysts, Analysis Press, Chappaqua, New York.
- Murphy, John J., 1986, Technical Analysis of the Futures Markets, Institute of Finance, New York.
- Niederhoffer, Victor, and Patrick J. Regan, 1972, Earnings changes, analysts’ forecasts and stock prices, Financial Analysts Journal, 65–71.
- O’Shaughnessy, James P., 1998, What Works on Wall Street: A Guide to the Best–Performing Investment Strategies of All Time, McGraw–Hill, New York
- Poterba J. M., and L. H. Summers, 1988, Mean reversion in stock prices: Evidence and implications, Journal of Financial Economics v22, 27– 59.
- Ramakrishnam, Ram T. S. and Jacob K. Thomas, 1998, Valuation of permanent, transitory, and price irrelevant components of reported earnings, Journal of Accounting, Auditing, & Finance v13n3, 301–336.
- Rosenburg, B., K. Reid, and R. Lanstein, 1985, Persuasive evidence of market inefficiency, Journal of Portfolio Management v11, 9–16.
- Senchack, A. J., Jr. and John D. Martin, 1987, The relative performance of the PSR and PER investment strategies, Financial Analysts Journal v43n2, 46–56.
Charles H. Dow Award Co-Winner • May 2001
by Peter Eliades

About the Author | Peter Eliades
Peter Eliades is the President of StockMarket Cycles, a financial advisory entity that published a periodic market newsletter from 1975 until 2015. In 2001, Peter was honored by the CMT Association with the prestigious Charles Dow Award for excellence and creativity in Technical Analysis. Peter has been working for decades developing his own interpretation of J. M. Hurst’s theories of cycle price projections. In 2018, his long-term market analysis colleague, Larry Williams, introduced Peter to Steffen Scheuermann, a talented European programmer. Steffen filled in the missing link and in 2020 Peter and Steffen introduced the Eliades Cycle Price Projection application to the computer and financial worlds. It has been greeted with great acclaim.
In 1985, Peter earned Timer Digest’s “Timer of the Year” award and placed second in 1986 in a close race which wasn’t decided until the final trading day of the year. In 1989, Mark Hulbert (Hulbert Financial Digest) named Peter as the “Most Consistent Mutual Fund Switcher” based on his timing signals for the years 1985, 1986, 1987, and 1988. Mr. Eliades was a regular panelist on ABC Network’s weekly Sunday show, Business World, in the 1990s, and has made frequent guest appearances on FNN, CNBC, Wall Street Week, Larry King Live, and Nightly Business Report, as well as more recent appearances with Neil Cavuto on Fox News. He has been featured in some of the nation’s most prestigious publications including Barron’s, The Wall Street Journal, and Forbes among others, and in a cover story in Futures Magazine. He has authored several articles published in Barron’s.
Peter is a graduate of Harvard College and Boston University Law School. He passed the Massachusetts Bar before diverting his career into “show biz” in Manhattan and Los Angeles. It was in Hollywood that he initiated his stock market studies and began another career diversion into the world of technical market analysis.
I view the art of technical analysis and research as an exciting adventure. I have often wondered what “Sedge” Coppock was looking for when he invented the Coppock Curve. What was Edson Gould researching when he discovered the precepts for his “Sign of the Bull”? From my own research, I have learned that serendipity, the aptitude for making desirable discoveries by accident, can play a big part in making meaningful technical discoveries. One of the fantasies that every serious stock market technician has probably entertained is that there must be some kind of indicator that will signal us when a major market top is being formed. There are some effective indicators for identifying market bottoms, but because market tops tend to be more diffuse, often occurring at different times for different indexes, the search for an effective tool to identify major market tops has been, for the most part, a futile one.
In November 1992, I was struck by the apparent lack of volatility in the daily number of advancing and declining issues on the New York Exchange. Over a period of 21 trading days (the number of trading days in the average month), the highest single-day closing advance/decline ratio (simply the number of advancing stocks divided by the number of declining stocks on the New York Exchange) was 1.84 and the lowest was 0.71. At the time, that seemed a very small range for a full month of data, so I decided to research further. Rather than use the observed 1.84 and 0.71 limits as a precedent for further research, the range was arbitrarily widened somewhat to 0.65 and 1.95. The first search of the computer database attempted to find other time periods of 21 consecutive trading days when similar “churning” occurred, i.e. when the highest daily advance/decline ratio was below 1.95 and the lowest advance/decline ratio was above 0.65. That might give a clue as to whether the pattern was significant in any way. The initial research went back to 1966, when the Dow made its first move towards the 1000 level. The results were stunning. Between 1966 and November 1992, when the pattern first caught my attention, a period of almost 27 years, there were only three other periods when the conditions for the pattern were satisfied. Here are the dates when those conditions were fulfilled:
It appeared as if technical gold had been struck. Within an average period of less than a month, the pattern had preceded three of the most important stock market tops of the past several decades. Equally important, there were no other instances of the pattern over that 27-year period. Just how important were the turning points that were preceded by this pattern?
The first day of the 1966 pattern preceded a final market top on the Dow Jones Industrial Average on February 9th by 11 trading days. On an inflation-weighted Dow chart, the 1966 top lasted until 1995 as an all-time Dow high.
The first day of the 1968 pattern preceded a final market top on the Dow on December 3rd by 27 trading days. That Dow Jones Industrial Average top also corresponded with a major top on the Value Line Composite Index, an unweighted index more representative of the average investor’s portfolio. That index would go on to lose approximately 75% of its value over the next six years.
The first day of the 1972 pattern preceded a final market top on January 11th, 1973 by 23 trading days. That top led to one of the sharpest two-year Dow declines in history, almost 50 percent in less than 24 months. The high seen on January 11, 1973 would not be reached again until almost a decade later.
After reviewing those results, the feeling was that something very special had been discovered. During a period of almost 27 years, there were only three occurrences of the pattern and each occurrence led to a major market top within, at most, 27 trading days.
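As an illustration only, the 21-day churning screen described above might be coded along these lines in Python; the input is assumed to be a simple list of daily NYSE advance/decline ratios, and the names are hypothetical.

```python
# Hypothetical scan for the "churning" condition: 21 or more consecutive
# trading days in which every daily advance/decline ratio stays above 0.65
# and below 1.95.  ad_ratio is a plain list of daily NYSE advances/declines.

LOW, HIGH, WINDOW = 0.65, 1.95, 21

def churning_streaks(ad_ratio):
    """Return (start_index, length) for every streak of at least WINDOW days."""
    streaks, start = [], None
    for i, r in enumerate(ad_ratio + [None]):      # sentinel closes a final streak
        inside = r is not None and LOW < r < HIGH
        if inside and start is None:
            start = i
        elif not inside and start is not None:
            if i - start >= WINDOW:
                streaks.append((start, i - start))
            start = None
    return streaks
```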
Over three decades of market research have made it clear that any pattern that appears to have predictive potential should be researched as far back as is practicable. Research of the period from 1940 to 1966 uncovered a total of nine “churning” patterns when the above conditions were satisfied, namely, the highest daily advance/decline ratio over a 21-day period was below 1.95 while the lowest ratio over that period was above 0.65. With the exception of the period from June 1963 to March 1965, the results were impressive, though not as uniformly dramatic as the post-1965 results noted above. Here are all the similar periods noted from 1940 through 1965 where the 21-day (or longer) churning pattern occurred, followed by the number of days in the pattern.
For now, let’s discard the period between June 1963 and February 1965 and observe the average results for the remaining six periods. The Dow Jones Industrial Average, on average, advanced 1.7% from the close of the 21st day in the pattern to the highest subsequent intra-day high after the pattern emerged. On average, it took 10 trading days to reach that high, and the subsequent decline averaged 15.2%. It would be convenient to somehow eliminate the three instances of the pattern between June 1963 and February 1965. We could say that the June 1963 and the February 1965 instances barely qualified because they had the fewest consecutive days (21 and 22, respectively), and that the February through April 1964 instance was completely out of character with the other instances because it lasted more than double the number of consecutive days of any other, and eliminate those instances from our examples. But in a strict sense, that would not have been a true reflection of technical history. In any event, even those apparent instances of failure were followed by almost
immediate market declines. Those declines, however, were of a minor magnitude.
Overall, these results were deemed to be significant and impressive. If the research had ended there, there would have been sufficient evidence to identify the pattern as one that closely preceded market tops during the period from 1940-1973.
A market historian might note a remarkable commonality in all of the above periods. Almost without exception, each time the churning parameters were satisfied over a minimum of 21 trading days, the market was either at or very close to an all-time high or a multi-year high. There is nothing apparent in the definition of the two limits required of the advance/decline ratio (greater than 0.65 but less than 1.95) over a one-month period that would suggest such a result.
As noted initially, the characteristics of this pattern were first noticed in November 1992. The specific pattern which was unfolding then went on for 48 consecutive trading days from November 9th to December 17th, 1992. Between 1992 and 1998, four more instances of the pattern occurred. The updated record of instances of the pattern between 1972 and 1998 reads as follows:
By 1995, it became obvious that if the pattern was a signpost, a kind of footprint that preceded important market tops, the defining characteristics of the pattern would have to be refined. The purpose of the refinements would ideally be to arrive at a tool that was effective in identifying major market tops. At the same time, it was important to attempt to avoid the practice of curve fitting. Going back to the original three patterns that were discovered, dating between 1966 and 1972, I tried to identify characteristics that distinguished those three patterns that worked so very well from the patterns that were either apparent failures or patterns that marked only minor reversals.
One of the items that appeared significant was the length of the pattern before the consecutive streak is broken. Intuition would suggest that the longer the pattern, the greater its potential negative influence. History has proved otherwise. Once a pattern moves beyond 27-28 market days, it has far less chance of being significant. The 1992 pattern can be eliminated because it was far too long and it did not fit the profile of prior patterns, which saw the Dow going to either multi-year highs or all-time highs as the pattern reached 21 days in length. It emerged at a time when the Dow was more than five months beyond and almost 6% lower than its previous all-time or multi-year high. It just did not fit into the profile of prior churning patterns. The April-May 1995 pattern and the September 1995 pattern appeared at first glance to qualify as patterns that had led to intermediate or long-term tops in the past. As successful predictive patterns of the past were further examined, however, a signature that accompanied all the successful pattern predictions of major market tops began to become apparent. One important consideration was how the pattern ended. In other words, when the churning streak ended, did it end with a high ratio (above 1.95) or a low ratio (below 0.65)? The initial three patterns that were discovered from 1966 to 1972, and that worked so remarkably well in identifying major tops, all ended their consecutive streaks with low ratios. In fact, in each of those three instances, the end of the streak was conclusive. Either the two-day average advance/decline ratio or the three-day average advance/decline ratio following the end of the streak was below 0.75. In other words, after at least a full month without one advance/decline ratio below 0.65, there is a distinct and sharp change in the market’s personality. Not only is there a day with a ratio below 0.65 that breaks the consecutive streak, but for at least two or three days after the streak ends, there is an average ratio below 0.75.
Enough data had now been compiled to formulate a general rule. The pattern would be dubbed the “Sign of the Bear.” Three basic rules were required to identify a “Sign of the Bear” (a code sketch of these rules follows the list below).
- There must be a streak of 21-27 consecutive trading days where the daily advance/decline ratio remains above 0.65 but below 1.95.
- That consecutive streak must end with a downside break, i.e. with an advance/decline ratio below 0.65.
- The downside break in the streak must be confirmed with either a two day average advance/decline ratio or a three day average advance/decline ratio following the end of the streak being below 0.75.
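The three rules might be sketched in Python as follows, purely for illustration; the 2- and 3-day confirmation averages are taken starting with the day that breaks the streak (one reading of rule 3), and the names are hypothetical.

```python
# Hypothetical sketch of the three rules.  ad_ratio is a list of daily NYSE
# advance/decline ratios.

LOW, HIGH = 0.65, 1.95

def sign_of_the_bear_days(ad_ratio):
    """Return the indices of days on which a 'Sign of the Bear' is confirmed."""
    signals, streak = [], 0
    for i, r in enumerate(ad_ratio):
        if LOW < r < HIGH:
            streak += 1                      # rule 1: the churning streak builds
            continue
        # Rule 1: a streak of 21-27 qualifying days has just ended.
        # Rule 2: it ends with a downside break (a ratio below 0.65).
        if 21 <= streak <= 27 and r < LOW:
            two_day = sum(ad_ratio[i:i + 2]) / 2 if i + 2 <= len(ad_ratio) else None
            three_day = sum(ad_ratio[i:i + 3]) / 3 if i + 3 <= len(ad_ratio) else None
            # Rule 3: a 2-day or 3-day average advance/decline ratio below 0.75.
            if (two_day is not None and two_day < 0.75) or \
               (three_day is not None and three_day < 0.75):
                signals.append(i)
        streak = 0
    return signals
```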
Just around the time the basis for these rules had been formulated, the advance/decline data for the period prior to 1940 became available. The data were examined with trepidation and with great anticipation. Remember, after the initial discovery of the pattern, the follow-up research went only back to 1940, the limit of our database at the time. There was not one instance of the pattern from 1940 until December 1952. Looking at the newly acquired data, I was again stunned by the results. There was not one instance of the pattern in the decade of the 1930s, just as there had been none in the 1940s. Working backwards from December 1952, take a wild guess when the first appearance of the pattern occurred. The dates were July 19 and 20, 1929. That’s right! Just over six weeks before the most famous top of the 20th century, the pattern occurred and the three basic rules were met. A “Sign of the Bear” had appeared. It would not be seen again until December 1961, over 32 years later. Surely, this appearance of the pattern in a completely different time period would do away with any notion that there was “data mining” or curve fitting performed on the initial data that formed the basis of the research.
What is the rationale that explains why the patterns defining a “Sign of the Bear” should result in major market tops? I believe that searching for 21-day periods without one daily ratio less than 0.65 would obviously direct the computer to periods of market strength, periods where the market went for at least a full month without a big down day. At the same time, the computer is directed to periods of investor complacency – a full month without a meaningful day of selling. Now add the requirement that there also be no daily ratio higher than 1.95, and the computer is directed to periods of market strength and bullish sentiment, but not the kind of upside breadth (advance/decline ratios higher than 2-to-1) which is usually required to sustain a healthy market advance. Voila! It’s just the combination that a technician might look for at a market top. The final requirement is one that almost all technicians learn sooner or later: require confirmation of your pattern. Unless there is a sharp turnaround to the downside as required by rules 2 and 3, the pattern might be relatively innocuous. Once that confirmation occurs, history tells us the market is in trouble.
The final challenge was to test the theory in real time. There were several patterns in the 1990s, but until 1998, they all failed to meet the three requirements necessary for a “Sign of the Bear.” Finally, on April 6, 1998, the three requirements were satisfied for the first time in almost 26 years. As this paragraph is being written in 2000, we know that April 1998 proved to be not only a major top for the daily advance/decline line of the New York Stock Exchange but also the all-time high on the Value Line Composite Index (Geometric). That high has not been approached to this day, even though its sister index, the Value Line Composite Index (Arithmetic), has since gone to new all-time highs. It has long been our contention that the geometric Value Line is superior to the arithmetic one in giving a true picture of the average share of stock. Chart 2 shows the daily advance/decline line of the New York Stock Exchange with an arrow pointing to the April 1998 “Sign of the Bear” signal. Once again, in real time, the “Sign of the Bear” gave a virtually perfect signal for a change in the market’s personality.
On September 18, 2000, another “Sign of the Bear” signal was confirmed. It was only the second signal since December 1972. It came just five trading days from a new all-time high on the N.Y. Composite Index. There have now been two signals generated within 30 months of each other. The closest previous signals were the ones generated in January 1966 and December 1968, thirty-three months apart. There are not enough results to make a statistically informed judgment, but there is a suggestion from the prior signals that the “Sign of the Bear” is an indication of not merely a potential major top, but perhaps also an important secular change in the overall market from long-term bull to market underperformance for many years to come. It is difficult to understand how a simple pattern of only one to two months’ duration could predict the future course of the market for years to come, but examine the final chart below. The 1929 signal marked a Dow top that would not be exceeded for over a quarter of a century. The 1961 signal preceded a top and then the final run-up of only 35% before the 1966 top, which was not convincingly exceeded for over 16 years. The 1968 top, as was explained earlier, led to a decline of around 75% in the average share of stock or mutual fund, and the 1973 top led to one of the sharpest Dow declines of the 20th century, a top that would not be significantly exceeded until a decade later. How will history judge the latest two signals? The April 1998 signal has already marked an almost three-year top in the daily advance/decline line and the Value Line Composite Index (Geometric). Only history will tell us whether the September 2000 signal will mark a secular market top of historic duration. Based on the prior history of the “Sign of the Bear,” there appears to be an excellent chance it will.
Charles H. Dow Award Winner • May 2002
by Paul F. Desmond

About the Author | Paul F. Desmond
Paul F. Desmond served as the President of Lowry Research Corporation, the oldest continuously published advisory firm in the nation, until his passing in 2018. Paul joined Lowry in 1964 as Director of Research and advanced to President and owner in 1972. Over the course of nearly 50 years, he earned the distinction of being regarded in the industry as the Dean of Supply/Demand analysis. About 85% of the subscribers to the Lowry Analysis are professional investors, including some of the largest hedge funds and private investment counseling firms in the world.
Lowry Research, founded by L. M. Lowry in 1938, has been an enduring part of stock market analysis for more than 80 years because its analysis is a very basic study of the forces of Supply versus Demand, the starting point of all economic analysis. Lowry Research publishes two interactive websites: Lowry onDemand, covering all stocks registered for trading on the New York and NASDAQ Exchanges, as well as Lowry Global, covering 24 major equity markets throughout the Americas, Asia, and Europe.
From 1972 to 1990, Paul also served as the founder and Chairman of Lowry Management Corporation, an S.E.C.-registered Investment Adviser and portfolio manager overseeing approximately $400 million in client accounts. During that same period, he was also the founder and Chairman of Lowry Financial Services Corporation, a registered broker-dealer, subsequently acquired by Pacific Mutual Life Insurance Co.
Paul served as President of the CMT Association from 1997 to 1999. He was also a founder of the American Association of Professional Technical Analysts (AAPTA). In 2002, Paul was the recipient of the Charles H. Dow Award for his original ground-breaking research entitled “Identifying Bear Market Bottoms and New Bull Markets.” In 2009, he was honored as The Technical Analyst of the Year by The Technical Analyst Magazine of London. In addition, Lowry Research was honored as the Best Equity Research and Strategy for 2009, 2010, and 2012 by The Technical Analyst Magazine of London.
Paul has authored several widely read white papers on the analysis of market trends, as well as being featured in multiple books. He has been featured in a number of formal interviews in Barron’s Magazine, the Wall Street Journal, Money Magazine, Marketwatch, and a wide variety of other financial publications. He has also been a frequent guest on CNBC and Bloomberg Television.
Ask one hundred investors whether this is a bull market or a bear market, and you are likely to find their opinions split evenly down the middle. No one is really certain that the September 2001 low marked the end of the bear market and the start of a new bull market. But, this uncertainty is nothing new. As long as stock exchanges have existed, analysts and investors have always placed heavy emphasis on the difficult task of identifying the primary trend of the stock market. Everyone’s ideal market strategy is, at least in theory, to avoid the ravages of each bear market, and then to move aggressively into stocks after each important market bottom. To further maximize the benefits of a new bull market, time is of the essence. An investor should buy as close to the final low as possible. This is the ‘sweet spot’ for investors – the first few months of a new bull market in which so many stocks rise so dramatically. But, theory and reality, especially in the stock market, are often entirely different matters. To bring this theoretical investment strategy to reality, an investor would need a time-tested method of identifying major market bottoms – as opposed to minor market bottoms – and would have to apply this method quickly, to capture as much of the bull market as possible. Traditional methods of spotting major turning points in the market often leave a great deal to be desired. The financial news typically remains negative for months after a new bull market has begun. The economic indicators offer little help since, historically, the economy does not begin to improve until about six to nine months after the stock market has already turned up from its low. Even some widely accepted technical indicators, such as 200-day moving averages or long-term trendlines, can sometimes take several months to identify a major turning point in the market. To spot an important market bottom, almost as it is happening, requires a close examination of the forces of supply and demand – the buying and selling that takes place during the decline to the market low, as well as during the subsequent reversal point.
Important market bottoms are preceded by, and result from, important market declines. And, important market declines are, for the most part, a study in the extremes of human emotion. The intensity of investors’ emotions can be measured statistically through their purchases and sales. To clarify, as prices initially begin to weaken, investor psychology slowly shifts from complacency to concern, resulting in increased selling and an acceleration of the decline. As prices drop more quickly, and the news becomes more negative, the psychology shifts from concern to fear. Sooner or later, fear turns to panic, driving prices sharply lower, as investors strive to get out of the market at any price. It is this panic stage, driving prices down to extreme discounts – often well below book values – that sets the stage for the next bull market. Thus, if an investor had a method for identifying and measuring panic selling, at least half the job of spotting major market bottoms would be at hand.
Over the years, a number of market analysts have attempted to define panic selling (often referred to as a selling climax, or capitulation) in terms of extreme activity, such as unusually active volume, a massive number of declining stocks, or a large number of new lows. But, those definitions do not stand up under critical examination, because panic selling must be measured in terms of intensity, rather than just activity. To formulate our definition of panic selling, we reviewed the daily history of both the price changes and the volume of trading for every stock traded on the New York Stock Exchange over a period of 69 years, from 1933 to the present. We broke the volume of trading down into two parts – Upside (buyers) Volume and Downside (sellers) Volume. We also compiled the full and fractional dollars of price change for all NYSE-listed stocks that advanced each day (Points Gained), as well as the full and fractional dollars of price change for all NYSE-listed stocks that declined each day (Points Lost). These four daily totals – Upside Volume and Points Gained, Downside Volume and Points Lost – represent the basic components of Demand and Supply, and have been an integral part of the Lowry Analysis since 1938. (Note: an industrious statistician can compile these totals from the NYSE stock tables in each day’s Wall Street Journal.)
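As a minimal sketch of how these four totals could be compiled, assume per-issue closing data for a single day in the form of (net price change in dollars, share volume) pairs. The data layout and function name are illustrative assumptions, not Lowry’s actual procedure.

```python
def daily_totals(stocks):
    """stocks: iterable of (net_change_in_dollars, share_volume), one per NYSE issue.

    Returns (Upside Volume, Downside Volume, Points Gained, Points Lost) for the day,
    following the definitions in the text: advancing issues feed the upside totals,
    declining issues feed the downside totals, unchanged issues feed neither.
    """
    upside_volume = downside_volume = 0
    points_gained = points_lost = 0.0
    for net_change, volume in stocks:
        if net_change > 0:
            upside_volume += volume
            points_gained += net_change
        elif net_change < 0:
            downside_volume += volume
            points_lost += -net_change  # stored as a positive total
    return upside_volume, downside_volume, points_gained, points_lost
```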
In reviewing these numbers, we found that almost all periods of significant market decline in the past 69 years have contained at least one, and usually more than one, day of panic selling in which Downside Volume equaled 90.0% or more of the total of Upside Volume plus Downside Volume, and Points Lost equaled 90.0% or more of the total of Points Gained plus Points Lost. For example, April 3, 2001 qualified as a valid 90% Downside Day. To clarify, the following table was shown in Lowry’s Daily Market Trend Analysis Report of April 4, 2001:
The historical record shows that 90% Downside Days do not usually occur as a single incident on the bottom day of an important market decline, but typically occur on a number of occasions throughout a major decline, often spread apart by as much as thirty trading days. For example, there were seven such days during the 1962 decline, six during 1970, fourteen during the 1973-74 bear market, two before the bottom in 1987, seven throughout the 1990 decline, and three before the lows of 1998. These 90% Downside Days are a key part of an eventual market bottom, since they show that prices are being deeply discounted, perhaps far beyond rational valuations, and that the desire to sell is being exhausted.
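Expressed against the four daily totals described above, the 90% Downside Day test reduces to a pair of ratio checks. The sketch below is illustrative only, with a hypothetical function name.

```python
def is_90_percent_downside_day(up_vol, down_vol, pts_gained, pts_lost):
    """True if Downside Volume and Points Lost are each 90.0% or more of their totals."""
    total_vol = up_vol + down_vol
    total_pts = pts_gained + pts_lost
    if total_vol == 0 or total_pts == 0:
        return False
    return (down_vol / total_vol >= 0.90) and (pts_lost / total_pts >= 0.90)
```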
But, there is a second key ingredient to every major market bottom. It is essential to recognize that days of panic selling cannot, by themselves, produce a market reversal, any more than simply lowering the sale price on a house will suddenly produce an enthusiastic buyer. As the Law of Supply and Demand would emphasize, it takes strong Demand, not just a reduction in Supply, to cause prices to rise substantially. It does not matter how much prices are discounted; if investors are not attracted to buy, even at deeply depressed levels, sellers will eventually be forced to discount prices further still, until Demand is eventually rejuvenated. Thus, our 69-year record shows that declines containing two or more 90% Downside Days usually persist, on a trend basis, until investors eventually come rushing back in to snap up what they perceive to be the bargains of the decade and, in the process, produce a 90% Upside Day (in which Points Gained equal 90.0% or more of the sum of Points Gained plus Points Lost, and on which Upside Volume equals 90.0% or more of the sum of Upside plus Downside Volume). These two events – panic selling (one or more 90% Downside Days) and panic buying (a 90% Upside Day, or on rare occasions, two back-to-back 80% Upside Days) – produce very powerful probabilities that a major trend reversal has begun, and that the market’s Sweet Spot is ready to be savored.
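A rough sketch of this reversal logic, assuming the input is a chronological list of (Upside Volume, Downside Volume, Points Gained, Points Lost) tuples, might scan for one or more 90% Downside Days followed by either a single 90% Upside Day or back-to-back 80% Upside Days. The names, the application of the 80% threshold to both components of each day, and the mechanical sequencing are assumptions for illustration; the actual appraisal involves judgment beyond this screen.

```python
def day_strength(up_vol, down_vol, pts_gained, pts_lost):
    """Return (upside share of Volume, upside share of Points) for one day.

    Assumes nonzero totals, which holds for normal NYSE trading days.
    """
    return up_vol / (up_vol + down_vol), pts_gained / (pts_gained + pts_lost)

def reversal_signals(days):
    """Return indexes of days that complete the 90% Down / 90% Up combination."""
    signals = []
    downside_seen = False
    for i, day in enumerate(days):
        vol_share, pts_share = day_strength(*day)
        if vol_share <= 0.10 and pts_share <= 0.10:
            downside_seen = True               # 90% Downside Day
        elif downside_seen:
            if vol_share >= 0.90 and pts_share >= 0.90:
                signals.append(i)              # single 90% Upside Day
                downside_seen = False
            elif i > 0:
                prev_vol, prev_pts = day_strength(*days[i - 1])
                if (min(vol_share, pts_share) >= 0.80
                        and min(prev_vol, prev_pts) >= 0.80):
                    signals.append(i)          # back-to-back 80% Upside Days
                    downside_seen = False
    return signals
```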
Not all of these combination patterns – 90% Down and 90% Up – have occurred at major market bottoms. But, by observing the occurrence of 90% Days, investors have (1) been able to avoid buying too soon in a rapidly declining market, and (2) been able to identify many major turning points in their very early stages – usually far faster than with other forms of fundamental or technical trend analysis. Before reviewing the historical record, a number of general observations regarding 90% Days might help to clarify some of the finer appraisal points associated with this very valuable reversal indicator:
- A single, isolated 90% Downside Day does not, by itself, have any long-term trend implications, since such days often occur at the end of short-term corrections. But, because they show that investors are in a mood to panic, even an isolated 90% Downside Day should be viewed as an important warning that more could follow.
- It usually takes time, and significantly lower prices, for investor psychology to reach the panic stage. Therefore, a 90% Downside Day that occurs quickly after a market high is most commonly associated with a short-term market correction, although there are some notable exceptions in the record. This is also true for a single 90% Downside Day (not part of a series) that is triggered by a surprise news announcement.
- Market declines containing two or more 90% Downside Days often generate a series of additional 90% Downside Days, often spread apart by as much as 30 trading days. Therefore, it should not be assumed that an investor can successfully ride out such a decline without taking defensive measures.
- Impressive, big-volume “snap-back” rallies lasting from two to seven days commonly follow quickly after 90% Downside Days, and can be very advantageous for nimble traders. But, as a general rule, longer-term investors should not be in a hurry to buy back into a market containing multiple 90% Downside Days, and should probably view snap-back rallies as opportunities to move to a more defensive position.
- On occasion, back-to-back 80% Upside Days (such as August 1 and August 2, 1996) have occurred instead of a single 90% Upside Day to signal the completion of the major reversal pattern. Back-to-back 80% Upside Days are relatively rare except for these reversals from a major market low.
- In approximately half the cases in the past 69 years, the 90% Upside Day, or the back-to-back 80% Upside Days, which signaled a major market reversal, occurred within five trading days or less of the market low. There are, however, a few notable exceptions, such as January 2, 1975 or August 2, 1996. As a general rule, the longer it takes for buyers to enthusiastically rush in after the market low, the more investors should look for other confirmatory evidence of a market reversal.
- Investors should be wary of upside days on which only one component (Upside Volume or Points Gained) reaches the 90.0% or more level, while the other component falls short of the 90% level. Such rallies are often short-lived.
- Back-to-back 90% Upside Days (such as May 31 and June 1, 1988) are a relatively rare development, and have usually been registered near the beginning of important intermediate and longer term trend rallies.
A detailed Appendix is attached, showing each 90% Day (or back-to-back 80% Upside Days) over the past 40 years, since January 1, 1960. But, several examples may make it easier to visualize the concepts presented here. The charts to follow show the Dow Jones Industrial Average in the months before and after a number of major market bottoms. An oscillator of the intensity of each day’s trading, in terms of both Price and Volume, is also shown on each chart (for simplicity’s sake, the Price and Volume percentages have been combined into a single indicator). The 90% Days, both Downside and Upside, are highlighted with an arrow. The back-to-back 80% Upside Days are highlighted with a dot.
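The text does not specify exactly how the Price and Volume percentages were combined for the charts’ oscillator. One simple assumption is to average the upside shares of Volume and Points into a 0-100 reading, so that values at or above 90 correspond to 90% Upside Days and values at or below 10 to 90% Downside Days; the sketch below reflects that assumption only.

```python
def intensity_oscillator(up_vol, down_vol, pts_gained, pts_lost):
    """Average of the upside percentages of Volume and Points (0-100 scale).

    This combination rule is an assumption for illustration, not necessarily
    the calculation used in the published charts.
    """
    vol_pct = 100.0 * up_vol / (up_vol + down_vol)
    pts_pct = 100.0 * pts_gained / (pts_gained + pts_lost)
    return (vol_pct + pts_pct) / 2.0
```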
During 1962, seven 90% Downside Days were recorded during May and June, the last one occurring two days before the final low. The 90% Upside Day was recorded on June 28, just three days after the low in the Dow Jones Industrial Average.
Five 90% Downside Days were recorded during the final months of the 1969-1970 bear market, the last one occurring one day before the low. The 90% Upside Day occurred on May 27, one day after the market low.
The final months of the 1973-1974 bear market contained four 90% Downside Days (a total of 14 occurred throughout 1973 and 1974), the last occurring on December 2, four days before the final low in the Dow Jones Industrial Average. Back-to-back 80% Upside Days occurred on December 31, 1974 and January 2, 1975 – an unusually long sixteen days after the 1974 market low. Another 90% Upside Day, a superfluous confirmation of the new bull market, occurred on January 27, 1975, thirty-three days after the bottom day.
Three 90% Downside Days were recorded during the final months of the 1980 decline. The 90% Upside Day occurred on March 28, one day after the market low. Another superfluous 90% Upside Day occurred on April 22, after a successful test of the lows.
In 1987, 90% Downside Days occurred on October 16 and on “Black Monday,” October 19. The 90% Upside Day occurred two days later, on October 21. Then, like aftershocks following a major earthquake, two more 90% Downside Days occurred on the first successful test of the lows in late October, followed by a 90% Upside Day on October 29. The aftershocks continued in December and January, each followed by an equivalent 90% Upside reversal.
Three 90% Downside Days were recorded during July and August 1990. As a demonstration that the record is not perfect, a 90% Upside Day was recorded on August 27. The Dow Jones Industrial Average moved sideways for two weeks before dropping to new lows.
Two more 90% Downside Days were recorded during September and October before back-to-back 80% Upside Days were recorded on Friday, November 9 and Monday, November 12 – twenty days after the market low.
During 1998, three 90% Downside Days were registered during August. The 90% Upside Day occurred just five trading days after the market low, on September 8. Another superfluous 90% Upside Day was registered on October 15, five days after a successful test of the September lows.
This review of 90% Days would not be complete without bringing the record up to date. And, the recent history may hold a particularly important message for investors: There were no 90% Downside Days recorded during 1999 or 2000. However, the sharp drop in the Dow Jones Industrial Average during the early months of 2001 generated two 90% Downside Days, on March 12 and April 3. But, during the ensuing rally, investor buying enthusiasm was not dynamic enough to generate a 90% Upside Day, leaving the impression that the final lows had not been seen. After just six weeks of rally into the May 2001 peak, the market began to weaken again, eventually plunging to a three-year low in the midst of the September 2001 tragedy. But, as strange as it may seem, the selling during that decline never reached the panic proportions found near almost all major market bottoms in the past 69 years. Not even a single 90% Downside Day was recorded from May through September. Thus, the probabilities drawn from past experience suggested that stock prices had not been discounted enough to attract broad, sustained buying interest. In short, the final market bottom had not been seen in September 2001. And, the highly selective rally that ensued from the September 2001 low through early January 2002 was, once again, not strong enough to produce a 90% Upside Day, thus adding to the evidence that the final low for the Dow Jones Industrial Average has not yet been reached, and that a period of investor panic, generating a series of 90% Downside Days, may still be ahead.
It is important to recognize that the pattern of 90% Days is not a new, untried, back-record discovery. The original research was conducted by the Lowry staff twenty-seven years ago, in early 1975. The findings were first reported to the investment community in 1982 at a Market Technicians Association Seminar. Since that time, the history of 90% Days has been recorded day by day, and has proven repeatedly to be a very valuable tool in identifying the extremes of human psychology that occur near major market bottoms. Obviously, no prudent investment program should be based solely on a single indicator. Other measurements of price, volume, breadth, and momentum are needed to monitor the strength of buying versus selling on a continuous daily basis. But, we believe the 90% indicator, as outlined above, will be an enduring, important part of stock market analysis, since it, like the other facets of the Lowry Analysis, is derived directly from the Law of Supply and Demand – the foundation of all macro-economic analysis.
APPENDIX