JOURNAL OF
TECHNICAL ANALYSIS

Issue 54, Summer/Fall 2000

Editorial Board

David L. Upshaw, CMT, CFA

Jeffrey Morton, CMT, MD

Connie Brown, MFTA

Founder

John A. Carder, CMT

Ann F. Cody, CFA

Robert Peirce

Charles D. Kirkpatrick

John R. McGinley, CMT

Cornelius Luca

Theodore E. Loud, CMT

Michael J. Moody, CMT

Dorsey, Wright & Associates

Richard C. Orr, Ph.D.

Ken Tower, CMT

Chief Executive Officer, Quantitative Analysis Service

J. Adrian Trezise, M. App. Sc. (II)

Barbara I. Gomperts

CMT Association, Inc.
25 Broadway, Suite 10-036, New York, New York 10004
www.cmtassociation.org

Published by Chartered Market Technician Association, LLC

ISSN (Print)

ISSN (Online)

The Journal of Technical Analysis is published by the Chartered Market Technicians Association, LLC, 25 Broadway, Suite 10-036, New York, NY 10004. Its purpose is to promote the investigation and analysis of the price and volume activities of the world’s financial markets. The Journal of Technical Analysis is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, and several other countries in Europe and Asia. The Journal of Technical Analysis is copyrighted by the CMT Association and registered with the Library of Congress. All rights are reserved.

Letter from the Editor

by Henry O. “Hank” Pruden, PhD

Address to MTA 25th Anniversary Seminar – May 2000 “Living Legends” Panel

Robert J. Farrell

[Editor’s Note: At the 25th Anniversary Seminar in May 2000 in Atlanta, Georgia, Bob Farrell was a member of a panel called The Living Legends: A tribute to, and remarks by, the eight winners of the MTA Annual Award. The winners were: Art Merrill, Hiroshi Okamoto, Ralph Acampora, Bob Farrell, Don Worden, Dick Arms, Alan Shaw, and John Brooks, and all were in attendance. The panel was hosted by your editor, Henry Pruden. The following is the text from Bob Farrell’s presentation:]

I appreciate being included in this 25th anniversary year for MTA Seminars. I also appreciate being one of the living recipients of the annual award. I remember participating in the first seminar as part of what was called the 1-2-3 panel for the first time. Institutional Investor magazine had included a market timing category in its annual All-American Research Team poll and Don Hahn, Stan Berge and I were those chosen. That was an indication of greater institutional recognition of market analysis and timing. Before the great Bear Market of 1972-74, technical analysis was mostly regarded with suspicion by professionals. Institutional portfolio managers generally denigrated its importance even though in every meeting I had with them, they all carried chart books. The big breakthrough came, however, after so many of them got hurt in the 1972-74 Bear Market. They started asking how we could have anticipated the collapse of the nifty-fifty and most other stocks. They then began to notice that many market analysts and technicians had issued warnings about the coming debacle. From then on, they started paying more attention. But just as they did not care about technical timing at the top of the bull market in the late 1960s, by the mid-1970s they wanted to hear more about how to avoid the next bear market. In fact, the Financial Analysts Federation asked me to speak at their annual conference in New York in 1975 on using technical tools to avoid the next bear market. What I chose to speak about was how to use market timing tools to help identify where to be invested for the coming long-term bull market. It seemed clear to me that bear markets of the 1974 intensity did not come along often and set the stage for new long bull runs. They wanted me to talk about the past instead of the future.

When I chose to be a market analyst instead of a security analyst in the early 1960s, I soon realized that what I needed as a goal was professional recognition. I also realized that it could only come from institutions as their dominance was growing in the market. But I knew most portfolio managers’ eyes glazed over when I spoke of technical indicators or they were outright hostile to technicians. So, I came up with a plan. I incorporated more long-term trend and cycle work in my analysis so portfolio managers could look at my analysis as something beyond short-term trading. I also realized I could get their attention by giving them fundamental reasons for the conclusions I had arrived at using market indicators. Then I figured out that if I wanted to have impact, effective communication was everything. Of course, I had to be right a good percentage of the time and make sense, but the ability to write and speak in a common sense style without arrogance was crucial to getting their attention.

I also realized, as I am sure many of you have figured out, that most professional money managers have strong views that you are not going to change in a single meeting or with a single report. When I got a conviction about a sector or a market change, I knew it had to offer more than a conclusion or opinion. We had to supply information and present it logically to prove a point. Today, there is more information available more quickly than ever before but, interestingly, results of most managers are still worse than a passive index. Most want and need to be told which information is important. One of the things I capitalized on was the idea that I had information not available elsewhere, i.e., Merrill Lynch internal transaction figures. We, in fact, applied the term sentiment analysis to our figures back in the mid 1960s and used them to advantage as contrary indicators. Even though they were only one tool, they gave us an edge in supplying unique information to clients. Today, of course, many firms have such data and it is less unique.

I don’t believe in us versus them when it comes to technical analysis and fundamental analysis. The goal is to come up with profitable ideas, not whose tools are best. Nevertheless, I had one chance to turn the tables on fundamental security analysts which I enjoyed immensely. When I went to Columbia Business School in 1955 to get a Master’s in Investment Finance, I had both Ben Graham and David Dodd as professors. As you know, they were the original value investors who wrote the bible of fundamental analysis called Security Analysis. The book was published in 1934, and in 1984 Columbia held a 50th anniversary seminar to which I was invited as a speaker. When the Dean first invited me, I asked him incredulously, “Do you know what I do?” Even though he understood that most technical analysis was poles apart from the fundamental value training of Graham & Dodd, he said, “Just tell us how they influenced you.” I was the last speaker on the all-day program which included Warren Buffett, Mario Gabelli and others, and I felt very intimidated. But I decided to try a different approach and gave a speech entitled, Why Ben Graham Was A Closet Technician. Surprisingly, it was well received. I cited many references he made to the characteristics of a market top and his references to measures of speculation.

The fact that I was rated number one in 16 of the 17 years I competed in the Institutional Investor All-Star Research poll as Chief Market Analyst was not because I was more right than anybody else. I did have a good platform at Merrill Lynch but not everybody at Merrill was ranked #1 either. I think it was my ability to communicate what was happening or changing in the markets with an historical perspective in a form that mostly fundamental clients could understand. I never talked down to them and always had a sector opinion that I emphasized where my conviction level was high. I thought they usually took away something useful from my presentation even if they disagreed with some of the general conclusions.

As a result of the integration of fundamental reasoning to back up technical conclusions, I became less regarded as a technician and more as a market strategist who used historical precedent and technical tools. I have never liked the term technician because it is too limiting and am very much in favor of finding another way to describe what we do. We study so many things such as price trends, momentum, money flows, cycles and waves, investor behavior and sentiment, supply-demand changes, volume relationships, insider activity, monetary policy and historical precedent. We have a broad field of study that has grown more inclusive with time and the computer age. It is just not adequately summed up in the term technician. At Merrill Lynch, we use the broader term of market analyst to avoid the limiting label of technician. Despite all the attempts at upgrading and professionalizing our craft by our association, we still have the press calling technicians sorcerers, elves, entrail readers and other denigrating terms. We have come a long way, but we have not shaken the negative image of the past that goes with the term technician, particularly with the press. You may disagree or even not care, but experience tells me to emphasize our broader range of skills.

I am impressed with the advanced techniques being used to analyze the market data and the progress made in working with the Financial Analysts Federation and the academic community. There is much more substance in our craft as a result of your efforts. Nevertheless, the world at large needs to be educated. Investor’s Business Daily does an excellent job of explaining how to use technical tools and integrate technical and fundamental information on an ongoing, real-time basis. We should use this model as an organization and have our members publish regular educational articles in the mainstream press or on a net website. We have created excellent professional credentials over the years. Now we need to market our profession – if not as technicians, perhaps as market behavioral strategists or market timing and behavioral strategists. You deserve recognition for your broader range of skills as well as your ability to provide profitable market and stock conclusions.

Thank you for inviting me.


Exploiting Volatility to Achieve a Trading Edge

by Jeffrey Morton, CMT, MD

About the Author | Jeffrey Morton, CMT, MD

Bio Coming

Purpose

This study was designed to evaluate the theoretical returns for a simple non-directional option strategy initiated after a sudden and significant volatility implosion of an underlying stock.

Methods and Materials

The 30 Dow Jones Industrial stocks from November 1, 1993, through May 30, 1998, were chosen for this study. Delta-neutral/gamma-positive straddle positions were initiated on the opening price of the stock after the near-term historical volatility of the stock had significantly imploded relative to its longer-term historical volatility. Any signals generated in the same stock before the 6-week termination date of a prior trade were ignored. On the date of calculation, the options prices were determined with the actual implied volatility using the Black-Scholes model, assuming moderate slippage. All trades were equally weighted. The value of the options positions was calculated based on the closing stock price at the 2-, 4-, and 6-week periods, respectively. Two trading systems were evaluated. In the first system (time-based system), time was the sole criterion used to determine when the option positions would be closed out. In the second trading system (money management system), simple money management rules were added to reduce draw-downs and to “lock in” profits in profitable trades. Given the wide variability of brokerage fees, the results are presented without commission costs deducted.

Results

A total of 280 trades were generated between November 1, 1993, and May 30, 1998. For the time-based trading system (trading system 1), the 2-week, 4-week, and 6-week cumulative returns were -191.9%, +334.7%, and -84.3%, and the average returns per trade were -0.69%, +1.20%, and -0.30%, respectively. For the money management trading system (trading system 2), the 4-week and 6-week cumulative returns were +993.4% and +1,188.6%, and the average returns per trade were +3.55% and +4.25%, respectively. The use of a simple money management system significantly reduced the draw-downs of the system.

Conclusions

The simple time-based volatility trading strategy produced a positive return holding the options for four weeks. However, this simple straddle-based options strategy had significant draw-downs that preclude it as a viable trading strategy without modifications. The addition of some very simple money management rules significantly improved the returns while simultaneously decreasing the draw-downs. This volatility-based, market-neutral, delta-neutral (gamma-positive) trading strategy yielded a very substantial positive return across a large number of large-cap stocks and across a broad five-year period. These results demonstrate the potential positive returns that can be obtained from a market-neutral/delta-neutral strategy. The benefit of a market-neutral strategy as demonstrated here is of significant importance to institutional portfolio managers in search of non-correlated asset classes.

INTRODUCTION

For options-based trading, the price action of any freely-traded asset (e.g., stocks, futures, index futures, etc.) can be grouped into three generic categories (however defined by the trader): (a) bullish price action; (b) bearish price action; (c) congestion/trading-range price action. Specific options-based strategies can be implemented which result in profits if any two out of the three outcomes unfold. For example, the purchase of both call and put options on the same underlying asset for the same strike price and same expiration date is termed a “straddle” position (e.g., buying XYZ $100 strike March 1999 call and put options = XYZ $100 March 1999 straddle). This straddle position can be profitable if either (a) or (b) quickly occurs with significant magnitude (i.e., price volatility) prior to option expiration. In this sense, a straddle trade is non-directional since it can profit in both bull and bear moves.
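
To make the straddle’s payoff profile concrete, here is a minimal Python sketch of the position’s value at expiration; the ticker, strike, premiums, and prices are hypothetical and are not taken from the study.

def straddle_payoff_at_expiration(stock_price, strike, call_premium, put_premium):
    """Profit or loss of a long straddle held to expiration."""
    call_value = max(stock_price - strike, 0.0)
    put_value = max(strike - stock_price, 0.0)
    return call_value + put_value - (call_premium + put_premium)

# Hypothetical XYZ $100 straddle bought for a combined $8.00 premium:
for price in (80, 90, 100, 110, 120):
    print(price, straddle_payoff_at_expiration(price, 100.0, 4.0, 4.0))
# Large moves in either direction are profitable; a sideways market loses the premium.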

Price volatility can be described by several common technical indicators including ADX, average true range, standard deviation, and statistical volatility (also called historical volatility). Volatility has been observed to be “mean-reverting.” Periods of abnormally high or low short-term price volatility are followed by price volatility that is closer to the long-term price volatility of the underlying asset.(1,3) A short-term drop in price volatility (volatility implosion) can be reliably expected to be followed by a sudden volatility increase (volatility explosion). Connors et al. have shown that multiple days of short-term volatility implosion are a predictor of a strong price move.(1,2)

The volatility implosion does not predict the direction of the impending price move, only that there is a high probability that the underlying asset is going to move away from its current price by a significant amount. In addition, the volatility implosion does not predict when (how quickly) the explosive price move will develop. What we can predict with a high degree of probability is the direction in which the price of the stock, commodity, or market is not going to move: it most likely will not move sideways indefinitely. Knowing this, one can devise a trading strategy that is able to profit, or at least not lose money, if the stock moves quickly higher or lower, such as the straddle strategy described above.

In the option straddle strategy described above (e.g., XYZ $100 March 1999 straddle), as the price of the underlying asset moves away from the option’s strike price in either direction, the option that is gaining in value will increase at a greater rate than the opposing option that is losing value. The position is said to be gamma positive in both directions. The straddle will lose if the price of the asset stays at or near the strike price of the options, i.e., if the stock moves sideways. The straddle position deteriorates because of a continued decrease in the volatility of the underlying asset, plus the time-decay value of the options as they approach expiration.
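
The gamma-positive behavior before expiration can be illustrated with a small Python sketch using the Black-Scholes model that the study relies on; the strike, rate, volatility, and time to expiration below are hypothetical, and the standard normal CDF is built from the standard library’s error function.

import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, T, r, sigma):
    """European call and put values under Black-Scholes (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    put = K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put

# Hypothetical at-the-money straddle: $100 strike, 90 days out, 5% rate, 30% volatility.
K, T, r, sigma = 100.0, 90 / 365, 0.05, 0.30
base = sum(black_scholes(100.0, K, T, r, sigma))
for S in (90.0, 95.0, 100.0, 105.0, 110.0):
    value = sum(black_scholes(S, K, T, r, sigma))
    print(f"stock {S:6.1f}  straddle value {value:6.2f}  change {value - base:+6.2f}")
# The straddle value rises as the stock moves away from the strike in either direction.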

This study was designed to explore the potential investment returns that could be obtained using the basic option straddle strategy. At PRISM Trading Advisors, Inc., this strategy has been successfully implemented to generate superior returns at lower risk than traditional investment portfolio benchmarks.

METHODS AND MATERIALS

System 1 (Time-Based Strategy)

To test the robustness of this trading strategy, the Dow 30 Industrial stocks from November 1, 1993, through May 31, 1998, were chosen for this study. They were chosen because they are a well-known group of stocks that have been designed to represent the market at large. Volatility is defined by the statistical price volatility formula: s.v. = s.d.{log(c/c[1]), n} × √365. Statistical (or historical) price volatility can be descriptively defined as the standard deviation of the day-to-day price change using a log-normal distribution, stated as an annualized percentage. Detailed information on statistical volatility is available from the references.(1,2,3)
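
As a minimal illustration of the statistical volatility formula above (annualized by the square root of 365, as stated), a simple Python function might look like the following; it assumes a plain list of daily closing prices.

import math

def statistical_volatility(closes, n):
    """n-day statistical (historical) volatility: the sample standard deviation
    of the log of day-to-day price changes over the last n days, annualized by
    the square root of 365."""
    log_changes = [math.log(c / p) for p, c in zip(closes[-(n + 1):-1], closes[-n:])]
    mean = sum(log_changes) / n
    variance = sum((x - mean) ** 2 for x in log_changes) / (n - 1)
    return math.sqrt(variance) * math.sqrt(365)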

  • Rule 1: The 6-day s.v. is 50% or less of the 90-day s.v.
  • Rule 2: The 10-day s.v. is 50% or less of the 90-day s.v.
  • Rule 3: Both Rule 1 and Rule 2 must be satisfied to initiate the trade.

Thus, in this study, a volatility implosion was defined as occurring when the 6-day and 10-day historical volatilities were 50% or less of the 90-day historical volatility. When this condition was met, a signal to initiate a straddle position was taken the following trading day. The Black-Scholes model was used to calculate the options prices that were used to establish the straddle positions. The opening price of the stock, the actual implied volatility, and the yield of the 90-day U.S. Treasury Bill were used to calculate the price of the options. The professional software package OpVue 5 version 1.12 (OpVue Systems International) was used to calculate the prices of the options assuming a moderate amount of slippage. For the purposes of this analysis, it was assumed that each trade was equally weighted and that an equal dollar amount was invested in each trade. Based on the closing stock price, the value of the option straddle positions was then calculated using the same method described above after 2 weeks, 4 weeks, and 6 weeks, respectively. Any trading signals generated in a stock with a currently open option straddle position before the end of the 6-week open trade period were ignored. To minimize the effect of time decay and volatility, options with greater than 75 days to expiration were used to establish the straddle positions. The positions were closed out at the end of the 6-week time period with more than 30 days left until expiration. To further minimize the effect of volatility, options were purchased “at or near the money.” Given the current large variability of brokerage fees, the results were calculated without deducting commission costs.
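
A minimal sketch of the entry screen described by Rules 1-3, reusing the statistical_volatility helper from the earlier sketch, might look like the following; the data layout and function names are assumptions, and option pricing and position sizing are left out.

def volatility_implosion(closes):
    """True when both the 6-day and 10-day statistical volatilities are 50% or
    less of the 90-day statistical volatility (Rules 1-3)."""
    if len(closes) < 91:
        return False
    sv_90 = statistical_volatility(closes, 90)
    if sv_90 == 0.0:
        return False
    return (statistical_volatility(closes, 6) <= 0.5 * sv_90 and
            statistical_volatility(closes, 10) <= 0.5 * sv_90)

def scan_for_signals(price_history_by_symbol, open_positions):
    """Return symbols whose straddle would be opened at the next day's open,
    skipping any stock that still has a straddle inside its 6-week holding window."""
    return [symbol for symbol, closes in price_history_by_symbol.items()
            if symbol not in open_positions and volatility_implosion(closes)]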

System 2 (Money Management Strategy)

A second trading strategy was explored. It was identical to the first trading strategy except that a set of simple money management rules was added. The rules, sketched in code after the list below, were designed to 1) cut losses short, 2) allow profits to run, and 3) lock in profits.

  • Rule #1: A position was closed immediately if a 10% loss occurred.
  • Rule #2: If a 5% profit (or greater) was generated, a trailing stop at one-half (50%) of the maximum open profit achieved by the position was placed, and the position was closed if that trailing stop was violated.
  • Rule #3: If neither rule #1 nor rule #2 was triggered, the position was closed out after either 4 weeks or 6 weeks.
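
A minimal sketch of these three exit rules, expressed as a single check applied to an open straddle position, might look like the following; the function name and inputs are assumptions.

def should_exit(entry_cost, current_value, max_open_value, weeks_held, max_weeks=6):
    """Apply the money management exit rules to an open straddle position.
    Returns the reason for exiting, or None to keep the position open."""
    open_profit = (current_value - entry_cost) / entry_cost
    max_open_profit = (max_open_value - entry_cost) / entry_cost

    # Rule #1: close immediately on a 10% loss.
    if open_profit <= -0.10:
        return "stop loss"
    # Rule #2: once open profit has exceeded 5%, close if half of the maximum
    # open profit achieved is given back (the 50% trailing stop).
    if max_open_profit >= 0.05 and open_profit <= 0.5 * max_open_profit:
        return "trailing stop"
    # Rule #3: otherwise close at the end of the holding period (4 or 6 weeks).
    if weeks_held >= max_weeks:
        return "time exit"
    return None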

RESULTS

System 1 (Time-Based Strategy)

A total of 280 trades were generated between November 1, 1993, and May 30, 1998. Numerous parameters of the 280 trades were analyzed. The results are summarized in Table 1. The 2-week, 4-week, and 6-week cumulative returns were -191.9%, +334.7%, and -84.3%, respectively, and are shown in Figure 1. The return of the DJIA over the same time period was +241.8% (3,680.59 to 8,899.95). The maximum draw-downs for the 2-week, 4-week, and 6-week series were -424.3% (November 12, 1993 – April 28, 1995), -450.8% (November 8, 1993 – May 17, 1995), and -763.3% (December 6, 1993 – May 19, 1995). The maximum draw-ups for the 2-week, 4-week, and 6-week series were +373.9% (April 7, 1995 – July 1, 1997), +933.2% (April 18, 1995 – November 11, 1997), and +948.2% (April 7, 1995 – November 17, 1997).

System 2 (Money Management Strategy)

A total of 280 trades were generated between November 1, 1993, and May 30, 1998. Numerous parameters of the 280 trades were analyzed. The results are summarized in Table 2. The 4-week and 6-week cumulative returns were +993.4% and +1,188.6%, respectively, and are shown in Figure 2. The return of the DJIA over the same time period was +241.8% (3,680.59 to 8,899.95). The maximum draw-downs for the 4-week and 6-week series were -188.1% (August 5, 1994 – February 23, 1995) and -246.2% (August 5, 1994 – February 23, 1995). The maximum draw-ups for the 4-week and 6-week series were +641.4% (September 20, 1996 – October 20, 1997) and +704.1% (September 20, 1996 – October 20, 1997).

DISCUSSION

It has been observed that short-term volatility has a tendency to revert back to its longer-term mean.(1,3) Connors et al.(1) have published the Connors-Hayward Historical Volatility System and showed that when the ratio of the 10-day to the 100-day historical volatility was 0.5 or less, there was a tendency for strong stock price moves to follow.

In this study, PRISM Trading Advisors, Inc., has confirmed the phenomenon of volatility mean reversion by presenting the first large-scale options-based analysis conducted while maintaining a strict market-neutral/delta-neutral (gamma-positive) trading program. We have shown that a significant price move occurs 75% of the time following a short-term volatility implosion (as defined in the Methods and Materials section).

For this analysis we chose a relatively straightforward strategy: to purchase a straddle. A straddle is the proper balance of put and call options that produces a trade with no directional bias. A straddle is said to be “delta neutral” and will generate the same profit whether the underlying asset’s price moves higher or lower. As the asset price moves away from its initial price, one option will increase in value while the opposing option will decrease in value. A profit is generated because the option that is increasing in value will do so at a faster rate than the opposing option is decreasing in value. The straddle is said to be “gamma positive” in both directions.

This option strategy has a defined maximum risk that is known at the initiation of the trade. This maximum risk of loss is limited to the initial purchase cost of the straddle (the premium costs of both put and call options). There is no margin call with this straddle strategy. There is an additional way that this strategy can profit. Since the options are purchased at a time when there has been an acute, rapid decrease in volatility, one should theoretically be purchasing “undervalued” options. As the price of the asset subsequently experiences a sharp price move, there will be an associated increase in volatility, which will increase the value of all the options that make up the straddle position. The side of the straddle that is increasing in value will increase at an even faster rate, while the opposite side of the straddle will decrease in value at a slower rate. So as not to further complicate the analysis, the exit strategy for the first system (time-based strategy) in this study was even more basic, using a time-stop exit criterion.

Prior to the study, it was our impression that the 4-week time period would be the optimal of the three, and this is what was seen. The 4-week exit produced a positive return over the study period (+334.7%). However, the 2-week time-stop frequently did not allow sufficient time for the anticipated price move. Note that in Figure 1, the 2-week maximum open-profit draw-up was significantly less than the draw-ups for both the 4-week and 6-week time-stops (+373.9% vs. +933.2% and +948.2%, respectively). The 6-week strategy was too long: compared to the 2-week and 4-week strategies, it allowed a substantially greater maximum draw-down because the adverse effects of time decay, volatility, and price regression back toward the stock’s initial starting price eroded the value of the straddle position. All other aspects of the trades of the three exit strategies were similar. There were no significant differences in the percentage of winning/losing trades or the number of consecutive winning or losing trades.

A second system using a simple set of money management rules was tested (money management system). These rules were designed to close out non-performing trades early, before they could turn into large losses, and to keep performing positions open as long as they continued to generate profits. These goals were accomplished by closing out any position if its value decreased to 90% of its initial value (a 10% loss). A position with open profits had a 50% trailing stop of the maximum open profit achieved by the position any time open profits exceeded 5%. If neither of these two conditions occurred, the position was closed out at the end of six weeks.

As predicted, the 6-week money management strategy produced both a greater total return (+1,188.6% versus +993.4%) and a slightly greater maximum draw-down than the 4-week money management strategy. By closing positions when a loss of 10% had occurred, we were able to significantly decrease the amount of losses incurred. This is evidenced by the maximum draw-down for the 6-week positions decreasing significantly from -763.3% with no money management to -246.2% after implementing the above money management rules. The total return also improved markedly, increasing from -84.3% to +1,188.6%.

While the first trading system (time-based strategy) demonstrated that this trading strategy with a 4-week time-stop exit produced a positive return, it is not sufficient as a stand-alone system for real-time trading. It does, however, indicate that this strategy can be used as the foundation to design a viable trading system that can capture the majority of the gains while simultaneously eliminating the majority of the losses. There is an almost infinite number of possibilities one could explore to achieve this goal.

The second method, and the one explored in this paper, was the application of a simple set of money management rules. As discussed above, this dramatically improved the overall returns while simultaneously decreasing the draw-downs experienced in the first strategy (time-based strategy). Other possibilities include the addition of a second entry filter, such as a momentum indicator like the RSI, ROC, or MACD. One could design a more sophisticated exit strategy, such as exiting the position if the stock price exceeds a predetermined price objective as defined by price channels, parabolic functions, etc. An additional possibility would be to re-establish a non-directional options position at a predetermined price objective, thereby “locking in” all the profits generated up to that point. The myriad options-based strategies available to adjust back to a delta-neutral position based on technical indicators and predetermined price objectives are beyond the scope of this paper.

Although both systems had positive expectations based on 280 trades, there are several limitations of the study design. Although moderate slippage was used in all the calculations, the robustness of this study might have been improved if access to real-time stock option bid-ask prices had been available for all of the trades investigated. Unfortunately, such a large, detailed database is not readily available. Given that the real-time bid-ask prices were not available, the use of the Black-Scholes formula with the known historical inputs (stock price, implied volatility, 90-day T-Bill yield) is an acceptable alternative, since any differences between actual and theoretical option prices are thereby applied systematically throughout the time period used in the study.

The current study revealed that a simple straddle options-based strategy designed to exploit a sudden implosion of a stock’s volatility, with time as the only exit criterion, produced draw-downs that preclude it as a viable trading strategy in its own right. However, this simple strategy had a positive expectation of generating superior returns, and therefore can be used as the basis to develop trading strategies capable of producing superior returns without the need to correctly predict the direction of a given stock, commodity, or market being traded. The addition of some simple money management rules dramatically improved the overall returns while simultaneously decreasing the excessive draw-downs that plagued the original trading strategy, thereby transforming it into an applicable trading system for everyday use. This volatility-based, delta-neutral strategy is also independent of market direction. A market-neutral strategy and portfolio may be considered a separate asset class by portfolio managers in the efficient allocation of their clients’ investment portfolios to boost returns while simultaneously decreasing their clients’ risk exposure.

In conclusion, this is the first large-scale trading research study shared with the trading public that clearly demonstrates how the phenomenon of price volatility mean-reversion can be exploited by using an options-based, delta-neutral approach. Price, time, and volatility factors used in options-based strategies to further maximize positive expectancy represent active areas of real-time trading research at PRISM Trading Advisors, Inc. These results will be the subject of future articles.

REFERENCES

  1. Connors, L.A., and Hayward, B.E., Investment Secrets of a Hedge Fund Manager, Probus Publishing, 1995.
  2. Connors, L.A., Professional Traders Journal, Oceanview Financial Research, Malibu, CA, March 1996, Volume 1, Issue 1.
  3. Natenberg, S., Option Volatility and Pricing: Advanced Trading Strategies and Techniques, McGraw-Hill, 1994.


Mechanical Trading System vs. The S&P 100 Index

by Art Ruszkowski, CMT, M.Sc.

About the Author | Art Ruszkowski, CMT, M.Sc.

Art Ruszkowski combines his strong scientific background with knowledge and practice of technical analysis, specializing in quantitative analysis and mechanical trading system design and testing. He is currently a partner in a private investment fund, where he is responsible for the development of models, studies, portfolio selections and money management strategies.

Art is a member of the MTA and the CSTA.

PREFACE

In their quest to outperform the Index, equity fund managers must solve a four-piece puzzle: which stocks to buy, when to buy them, when to sell them, and how much capital to allocate to each stock. The performance of different fund managers varies greatly. Some are able to outperform the Index, and others cannot. This paper investigates whether technical analysis in its most simplistic form, along with simple money management, can be used to outperform the Index.

THE FOUR-WEEK RULE

Most market technicians will agree that the simplest technical market analysis rule is the Four-Week Rule. The Four-Week Rule (4WR) was originally developed for application to futures markets by Richard Donchian, and can be expressed as follows:

Cover shorts and go long when the price exceeds the highs of the four preceding full calendar weeks and conversely liquidate longs and go short when the price falls below the lows of the four preceding full calendar weeks.

The rationale behind this rule is that the four-week or 20-day trading cycle is a dominant cycle that influences all markets.
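
As a point of reference, the original stop-and-reverse form of the rule can be sketched in Python as follows; the function name and inputs are assumptions, with the four preceding full calendar weeks approximated by the high/low history supplied by the caller.

def four_week_rule(prior_highs, prior_lows, price, position):
    """Original Four-Week Rule: cover shorts and go long when the price exceeds
    the highs of the four preceding full calendar weeks; liquidate longs and go
    short when the price falls below the lows of those weeks; otherwise hold."""
    if price > max(prior_highs):
        return "long"
    if price < min(prior_lows):
        return "short"
    return position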

For the purpose of further discussion, let’s modify the Four-Week Rule system as follows:

Buy if the price exceeds the highs of the four preceding full calendar weeks and liquidate open positions when the price falls below the lows of the four preceding full calendar weeks.

With this modification, the 4WR – (no shorts) system can be easily applied by many equity fund managers because very few of them can go short.

Let us formally define our modified mechanical system:

System Code: NS-20BS-EQ (No Shorts, 20 days for Buy and Sell Rules, Equally Allocate Capital)

  1. Money management rule – Equal allocation rule
    Invest $100,000 of capital in the one hundred S&P 100 Index stocks, allocating an equal amount of money to each stock ($1,000).
  2. Technical analysis rule – Buy
    Buy a stock if its closing price is higher than the high of the last 20 trading days.
  3. Technical analysis rule – Sell
    Sell a stock if its closing price is lower than the low of the last 20 trading days.
  4. Money management rule – Redistribute profits equally
    If the profit from the sale of a stock is greater than the initial allocation of capital to that stock, then that profit is distributed equally among all stocks that are in a potential Buy position.
  5. Money management rule – Earn interest on cash
    All cash on hand earns fixed-rate interest at 5% per annum.
  6. Money management rule – Transaction costs
    A fixed transaction cost of $50 is applied to each transaction (this cost represents a fair average of commissions and slippage).

Custom computer software was designed and created to test this system over the time frame from January 1, 1984 to January 1, 1989. During this time frame, the following performance statistics were calculated for the NS-20BS-EQ system and were compared to the performance statistics of the Index (S&P 100) with a “Buy-and-Hold Strategy.”
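
Since the original custom software is not reproduced here, the following is a simplified Python sketch of the NS-20BS-EQ rules; it assumes daily high/low/close series per stock, allows fractional shares, approximates Rule 5 with daily compounding over roughly 252 trading days per year, and omits Rule 4 (profit redistribution) for brevity.

def run_ns_xbs_eq(bars_by_symbol, x=20, capital=100_000.0, commission=50.0,
                  cash_rate=0.05):
    """Simplified simulation of NS-20BS-EQ: equal initial allocation (Rule 1),
    buy on a close above the prior x-day high (Rule 2), sell on a close below
    the prior x-day low (Rule 3), interest on idle cash (Rule 5), and a fixed
    cost per transaction (Rule 6)."""
    symbols = list(bars_by_symbol)
    n_days = len(next(iter(bars_by_symbol.values()))["close"])
    cash = {s: capital / len(symbols) for s in symbols}    # Rule 1
    shares = {s: 0.0 for s in symbols}

    for t in range(x, n_days):
        for s in symbols:
            bar = bars_by_symbol[s]
            close = bar["close"][t]
            prior_high = max(bar["high"][t - x:t])
            prior_low = min(bar["low"][t - x:t])
            if shares[s] == 0.0 and close > prior_high:    # Rule 2: buy breakout
                shares[s] = (cash[s] - commission) / close
                cash[s] = 0.0
            elif shares[s] > 0.0 and close < prior_low:    # Rule 3: sell breakdown
                cash[s] = shares[s] * close - commission
                shares[s] = 0.0
            if shares[s] == 0.0:                           # Rule 5: interest on cash
                cash[s] *= 1.0 + cash_rate / 252
    final_close = {s: bars_by_symbol[s]["close"][-1] for s in symbols}
    return sum(cash[s] + shares[s] * final_close[s] for s in symbols)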

Performance Statistics Measured

The following performance statistics were measured for each case (definitions are included in Appendix 1):

  • Average Annual Compounded Return (R)
  • Sharpe Ratio (SR)
  • Return Retracement Ratio (RRR)
  • Maximum Loss (ML)

It is clear that the above system underperforms the “Buy-and-Hold Strategy” of the S&P 100 Index.

There are several ways to improve the performance of the system by modifying system parameters. The most natural change is to search for better performance by modifying the number of days used for the Buy and Sell rules. The performance of the NS-xBS-EQ system, where x is the number of days for the Buy and Sell rules, was tested for x between 10 days and 90 days. Results of the test are provided in Appendix 2.1.

Testing proved that the best performing system was the one with 50 days used for the Buy and Sell rules.

Still, the performance of the above system is not very impressive, so let’s consider further research. Let’s modify Rule 1 from the system definition and replace it with the following rule:

1A. Money management rule – Proportional allocation rule

Invest $100,000 of capital in the one hundred S&P 100 Index stocks, allocating money to each stock according to its percentage participation in the index at the starting date of the testing period (January 1, 1984).

So we consider the new system:

System Code: NS-xBS-P (No Shorts, x Days for Buy and Sell Rules, Proportionally Allocate Capital)

The new system consists of Rule 1A and Rules 2-6.
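
A minimal sketch of Rule 1A’s proportional allocation might look like the following; the symbols and capitalizations are hypothetical.

def proportional_allocation(capital, market_caps):
    """Rule 1A: allocate capital to each stock in proportion to its share of
    total index capitalization at the start of the testing period."""
    total = sum(market_caps.values())
    return {symbol: capital * cap / total for symbol, cap in market_caps.items()}

# Hypothetical three-stock illustration with $100,000 of capital:
print(proportional_allocation(100_000.0, {"AAA": 50e9, "BBB": 30e9, "CCC": 20e9}))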

The performance of the NS-xBS-P system, where x is the number of days for the Buy and Sell rules, was tested for x between 10 days and 90 days. Results of the test are provided in Appendix 2.2.

Testing proved that the best-performing system was one with 50 days used for Buy and Sell rules.

The last system outperforms the S&P 100 “Buy-and-Hold Strategy,” but let’s consider further research. So far, modifications were limited to systems with different numbers of days for the Buy and Sell rules and to different initial allocations of the capital – equal and proportional. Let’s consider the following hybrid of the original NS-20BS-EQ system, created by replacing Rule 3 with the following new rule:

3B. Money management Stop Loss Rule – Sell losing positions

Sell a stock if it is losing more than y% of its buy price, where y is a system parameter

Let’s name this system:

System Code: NS-xB-P-Sy (No Shorts, x Days for Buy Rule, Proportionally Allocate Capital, Sell When Drops y%)
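
The substitution of Rule 3B amounts to one changed sell condition in the simulation sketched earlier; a minimal version, with y expressed as a percentage, might be:

def stop_loss_triggered(buy_price, close, y):
    """Rule 3B: sell when the close has fallen more than y% below the buy price."""
    return close < buy_price * (1.0 - y / 100.0)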

Only systems with proportionally allocated capital are analyzed, due to the fact that they perform better than equally allocated ones in the considered period of time. The performance of the NS-xB-P-Sy system, where x is the number of days for the Buy rule, was tested for x between 10 days and 90 days and for y between 10% and 70%. Results of the test are provided in Appendix 3.

Testing of the NS-xB-P-Sy systems proved that the best-performing system was the one with 50 days used for the Buy rule and a 25% money management stop loss.

The last system, which is the result of several cycles of modifications to the initial 4WR, outperforms the S&P 100 “Buy-and-Hold Strategy” by a good margin. To find out how time-stable the above system was, a blind test was conducted.

Blind Test Results: System (NS-50B-P-S25)

The system was tested in a new time period, from January 1, 1989 to January 1, 1994. Here is the comparison of performance statistics between the Index and the NS-50B-P-S25 system over that time frame.

Comparing these values, we see that the system still outperformed the Index with respect to the Average Annual Compounded Return (by 40%) and the Return Retracement Ratio, but marginally under-performed on the other two statistics. This can be explained by comparing the monthly DMI readings in the time periods 1984-1989 and 1989-1994. In the first time period, the standard 14-month Directional Movement Index (DMI) was well above 25 (a strong trending market); however, in the second time period, the DMI was only marginally above 25 (a weak trending market). In such a time period, a trend-following system does not display as impressive results.

GRAPHS

The following graph presents the performance of one of the optimum combinations of parameters (50 days and 25%) between January 1, 1984 and January 1, 1997.

Some may argue that the good performance of the NS-50B-P-S25 system is the result of a continuous bull market between 1984 and 1997. Under such conditions, buying low and not selling unless the stock loses a significant percentage of its value will always result in a winning strategy. However, such a system would perform very poorly in a bear market.

To investigate this claim, let’s test the performance of the NS-50B-P-S25 system over the time period from August 25, 1987 to August 25, 1992. August 25, 1987 was chosen as the new start date because it is the high of the S&P 100 before the 1987 “crash.” The next graph presents the performance of this test.

CONCLUSION

It is clear that when applying simple, proven rules of technical analysis (like the 4WR), any modifications to the rule (in this case, removal of the short-selling option) can significantly affect the profitability of a system. It was also demonstrated that mechanical trading systems can be transformed by parameter and rule modifications so that their performance improves.

One very interesting observation worth further study is the fact that the large difference in performance resulted from the amount of money allocated to each stock. This is due to the fact that the S&P 100 Index is a capitalization-weighted index of 100 stocks. The component stocks are weighted according to the total market value of their outstanding shares. The impact of a component’s price change is proportional to the stock’s total market value, which is the share price times the number of shares outstanding. In other words, the S&P 100 Index can be considered a relative-strength-based index. An index-based capital allocation system (NS-50B-P-S25) performs best and gives an objective measure of the validity of its trading rules as well as its money management rules when its performance is compared to the performance of the Index. Systems with sound trading and money management rules, as well as capital allocation based on relative strength, should, in general, outperform both the index and equally-allocated systems.

It is worth observing that the proportionally-allocated system outperformed the equally-allocated system and the Index during the tested time periods regardless of whether large caps outperformed small caps or vice versa during those periods.

BIBLIOGRAPHY

  1. John J. Murphy, Technical Analysis of the Futures Markets, New York Institute of Finance, 1986.
  2. Jack D. Schwager, Schwager on Futures – Technical Analysis, John Wiley & Sons, Inc., 1996.
  3. Carla Cavaletti, “Trading Style Wars,” Futures, July 1997.

APPENDIX 1

Glossary of Terms:

Mechanical Trading System: A set of rules that can be used to generate trade signals, with trading performed according to the rules of the mechanical system. The primary benefits of mechanical trading systems are the elimination of emotions from trading, consistency of approach, and risk management. Mechanical trading systems can be classified as Trend-Following (initiating a position with the trend) and Counter-Trend (initiating a position in the opposite direction to the trend). Trend-following systems can be divided into fast and slow. A fast, more sensitive system responds quickly to signs of trend reversal and will tend to maximize profit on valid signals, but it also generates far more false signals. A good trend-following system should not be too fast or too slow.[2]

Trading according to signals generated by a mechanical trading system is called systematic trading, which is the opposite of discretionary trading. Discretionary traders claim that emotions, which are excluded from systems trading, offer an edge. On the contrary, systematic traders favor backtesting, analyzing patterns and eliminating emotions. According to Barclay Trading Group Ltd., over the last ten years systematic traders have yielded higher annual returns than discretionary traders six times.[3]

Optimization of the trading system: The process of finding the best-performing parameter set for a given system. The underlying premise of optimization is that the parameter set must work not only in its initial time frame but in any time frame. Almost any mechanical system can be optimized in a way that will show positive results in a given period of time.[2]

Parameter: A value that can be freely assigned in the trading system in order to vary the timing of signals.[2]

Parameter Set: Any combination of parameter values.[2]

Parameter Stability: The goal of optimization is to find broad regions of parameter values with good system performance, instead of a single parameter set that may represent an isolated set of market conditions.[2]

Time Stability: In the case of positive performance of the mechanical system in a specific time frame, it should be analyzed in different time frames to make sure the good performance is not dependent only on the initial time frame.[2]

Blind Simulation: This is the test of an optimized parameter set in a different time frame to see if the good results reoccur.

Average Parameter Set Performance: The complete universe of parameter sets is defined before any simulation. Simulations are then run for all the selected parameter sets, and the average of these is used as an indication of the system’s potential performance.[2]

Average Annual Compounded Return: R = exp((ln(E) – ln(S)) / N) – 1
S – starting equity
E – ending equity
N – number of years [2]

Return Retracement Ratio: RRR = R/AMR
R – average annual compounded return
AMR – average maximum retracement for each data point. Using drawdowns (the worst at each given point in time) to measure risk, the risk component of RRR (AMR) comes closer to describing risk than standard deviation.

AMR = (1/n) × Σ(i=1 to n) MRi

MRi = max(MRPPi, MRSLi)

MRPPi = (PEi – Ei) / PEi
MRSLi = (Ei – MEi) / Ei-1
Ei – equity at the end of month i,
PEi – peak equity on or prior to month i,
Ei-1 – equity at the end of the month prior to month i,
MEi – minimum equity on or subsequent to month i.

RRR represents a better return/risk measure than the Sharpe ratio.[2]

Sharpe Ratio: SR = E/sdv
E – expected return
sdv – standard deviation of returns.[2]

Expected Net Profit Per Trade: ENPPT = P × AP – L × AL
P – percent of total trades that are profitable
L – percent of total trades that are at a net loss
AP – average net profit of profitable trades
AL – average net loss of losing trades.[2]

Maximum Loss: ML = max(MRSLi), i ≤ n; this represents the worst-case possibility.[2]

Trade-Based Profit/Loss Ratio: TBPLR = (P × AP) / (L × AL)[2]
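
As a minimal illustration of the formulas above, the following Python sketch computes the Average Annual Compounded Return, the Return Retracement Ratio, and the Maximum Loss from a list of month-end equity values; the Sharpe Ratio, ENPPT, and TBPLR are omitted for brevity, and the function name and inputs are assumptions.

import math

def performance_stats(equity, years):
    """Performance statistics from month-end equity values, following the
    formulas above; equity[0] is the starting equity S, equity[-1] is E."""
    S, E, n = equity[0], equity[-1], len(equity) - 1

    # Average annual compounded return: R = exp((ln(E) - ln(S)) / N) - 1
    R = math.exp((math.log(E) - math.log(S)) / years) - 1

    retracements, max_loss = [], 0.0
    for i in range(1, n + 1):
        peak_before = max(equity[:i + 1])                  # PEi
        min_after = min(equity[i:])                        # MEi
        mrpp = (peak_before - equity[i]) / peak_before     # retracement from prior peak
        mrsl = (equity[i] - min_after) / equity[i - 1]     # retracement to subsequent low
        retracements.append(max(mrpp, mrsl))               # MRi
        max_loss = max(max_loss, mrsl)                     # ML = max(MRSLi)
    amr = sum(retracements) / n                            # AMR

    return {"R": R, "RRR": R / amr if amr else float("inf"), "ML": max_loss}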

APPENDIX 2.1

Results for NS-xBS-EQ where x represents the number of days used in the Buy and Sell rules, x >= 10 and x <= 90

To find the optimal number of days used in the Buy and Sell rules, follow this procedure:

  1. In each column, select the five best-performing results according to the column definition (for example, in the case of the average annual compounded return, select the five highest numbers, but in the case of maximum loss, the five lowest numbers). Mark the results in bold typeface.
  2. Find the rows that are marked in every column, and mark those rows in italic typeface.
  3. From the selected rows, choose the one with the optimal results.

The optimal system is the one with 50 days used for the Buy and Sell rules.

APPENDIX 2.2

Results for NS-xBS-P where x represents the number of days used in the Buy and Sell rules, x >= 10 and x <= 90

The optimal system is the one with 50 days used for the Buy and Sell rules.

APPENDIX 3

To find the best-performing system, we follow this procedure:

  1. For each column in the first table, select the five best-performing rows.
  2. Select the best-performing rows from each column in the second table only if the row was also selected in the first table.
  3. Repeat step two for each subsequent table.
  4. Select the optimal cell from the cells that remain.

This table shows the Sharpe Ratio in %. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule.

This table shows the Return Retracement Ratio. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule.

This table shows the Expected Net Profit Per Trade (ENPPT) in $. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule.

This table shows the Maximum Loss (ML) in %. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule.

This table shows the Trade-Based Profit/Loss Ratio (TBPLR). The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule.

The best-performing system is the one with 50 days used for the Buy rule and a 25% Sell Losing Positions rule.


Science is Revealing the Mechanism of the Wave Principle

by Robert Prechter, CMT

About the Author | Robert Prechter, CMT

Robert Prechter is CEO of Elliott Wave International, an independent market analysis firm. He is also Founder of Socionomics Institute.

Robert has written 20 books on finance, beginning with Elliott Wave Principle in 1978, which predicted a 1920s-style stock market boom. His 2002 title, Conquer the Crash, predicted the 2006-2011 stock and property crises. In The Socionomic Theory of Finance (2016), Prechter detailed a paradigm for financial markets that differs in every substantive way from the paradigm borrowed from economics. In July 2007, The Journal of Behavioral Finance published “The Financial/Economic Dichotomy: A Socionomic Perspective,” by Prechter and his colleague, Dr. Wayne Parker. In 2012, Sage Open published Prechter et al.’s “Social Mood, Stock Market Performance, and U.S. Presidential Elections: A Socionomic Perspective on Voting Results,” which became the third-most-downloaded paper on the Social Science Research Network (SSRN) website in 2012.

Prechter has made presentations on socionomic theory to Oxford, Cambridge, the London School of Economics, MIT, Georgia Tech, SUNY and academic conferences.

His most recently completed project is an investigation into the true authors behind pseudonyms used in Elizabethan-era England, posted at www.oxfordsvoices.com.

Robert attended Yale University on a full scholarship and received a B.A. in psychology.

It is one thing to say that the Wave Principle makes sense in the context of nature and its growth forms. It is another to postulate a hypothesis about its mechanism. The biological and behavioral sciences have produced enough relevant work to  make a case that unconscious paleomentational processes produce a herding impulse with Fibonacci-related tendencies in both individuals and collectives. Man’s unconscious mind, in conjunction with others, is thus disposed toward producing a pattern having the properties of the Wave Principle.

THE PALEOMENTATIONAL HERDING IMPULSE

Over a lifetime of work, Paul MacLean, former head of the Laboratory for Brain Evolution at the National Institute of Mental Health, has developed a mass of evidence supporting the concept of a “triune” brain, i.e., one that is divided into three basic parts. The primitive brain stem, called the basal ganglia, which we share with animal forms as low as reptiles, controls impulses essential to survival. The limbic system, which we share with mammals, controls emotions. The neocortex, which is significantly developed only in humans, is the seat of reason. Thus, we actually have three connected minds: primal, emotional and rational. Figure 1, from MacLean’s book, The Triune Brain in Evolution,[1] roughly shows their physical locations.

The neocortex is involved in the preservation of the individual by processing ideas using reason. It derives its information from the external world, and its convictions are malleable thereby. In contrast, the styles of mentation outside the cerebral cortex are unreasoning, impulsive and very rigid. The “thinking” done by the brain stem and limbic system is primitive and pre-rational, exactly as in animals that rely upon them.

The basal ganglia control brain functions that are often termed instinctive: the desire for security, the reaction to fear, the desire to acquire, the desire for pleasure, fighting, fleeing, territorialism, migration, hoarding, grooming, choosing a mate, breeding, the establishment of social hierarchy and the selection of leaders. More pertinent to our discussion, this bunch of nerves also controls coordinated behavior such as flocking, schooling and herding. All these brain functions insure lifesaving or life-enhancing action under most circumstances and are fundamental to animal motivation. Due to our evolutionary background, they are integral to human motivation as well. In effect, then, portions of the brain are “hardwired for certain emotional and physical patterns of reaction”[2] to insure survival of the species. Presumably, herding behavior, which derives from the same primitive portion of the brain, is similarly hardwired and impulsive. As one of its primitive tools of survival, then, emotional impulses from the limbic system impel a desire among individuals to seek signals from others in matters of knowledge and behavior, and therefore to align their feelings and convictions with those of the group.

There is not only a physical distinction between the neocortex and the primitive brain but a functional dissociation between them. The intellect of the neocortex and the emotional mentation of the limbic system are so independent that “the limbic system has the capacity to generate out-of-context, affective feelings of conviction that we attach to our beliefs regardless of whether they are true or false.”[3] Feelings of certainty can be so overwhelming that they stand fast in the face of logic and contradiction. They can attach themselves to a political doctrine, a social plan, the verity of a religion, the surety of winning on the next spin of the roulette wheel, the presumed path of a financial market or any other idea.[4] This tendency is so powerful that Robert Thatcher, a neuroscientist at the University of South Florida College of Medicine in Tampa, says, “The limbic system is where we live, and the cortex is basically a slave to that.”[5] While this may be an overstatement, a soft version of that depiction, which appears to be a minimum statement of the facts, is that most people live in the limbic system with respect to fields of knowledge and activity about which they lack either expertise or wisdom.

This tendency is marked in financial markets, where most people feel lost and buffeted by forces that they cannot control or foresee. In the 1920s, Cambridge economist A.C. Pigou connected cooperative social dynamics to booms and depressions.[6] His idea is that individuals routinely correct their own errors of thought when operating alone but abdicate their responsibility to do so in matters that have strong social agreement, regardless of the egregiousness of the ideational error. In Pigou’s words,

Apart altogether from the financial ties by which different businessmen are bound together, there exists among them a certain measure of psychological interdependence. A change of tone in one part of the business world diffuses itself, in a quite unreasoning manner, over other and wholly disconnected parts.[7]

“Wall Street” certainly shares aspects of a crowd, and there is abundant evidence that herding behavior exists among stock market participants. Myriad measures of market optimism and pessimism[8] show that in the aggregate, such sentiments among both the public and financial professionals wax and wane concurrently with the trend and level of the market. This tendency is not simply fairly common; it is ubiquitous. Most people get virtually all of their ideas about financial markets from other people, through newspapers, television, tipsters and analysts, without checking a thing. They think, “Who am I to check? These other people are supposed to be experts.” The unconscious mind says: You have too little basis upon which to exercise reason; your only alternative is to assume that the herd knows where it is going.

In 1987, three researchers from the University of Arizona and Indiana University conducted 60 laboratory market simulations using as few as a dozen volunteers, typically economics students but also, in some experiments, professional businessmen. Despite giving all the participants the same perfect knowledge of coming dividend prospects and then an actual declared dividend at the end of the simulated trading day, which could vary more or less randomly but which would average a certain amount, the subjects in these experiments repeatedly created a boom-and-bust market profile. The extremity of that profile was a function of the participants’ lack of experience in the speculative arena. Head research economist Vernon L. Smith came to this conclusion: “We find that inexperienced traders never trade consistently near fundamental value, and most commonly generate a boom followed by a crash….” Groups that have experienced one crash “continue to bubble and crash, but at reduced volume. Groups brought back for a third trading session tend to trade near fundamental dividend value.” In the real world, “these bubbles and crashes would be a lot less likely if the same traders were in the market all the time,” but novices are always entering the market.[9]

While these experiments were conducted as if participants could actually possess true knowledge of coming events and so-called fundamental value, no such knowledge is available in the real world. The fact that participants create a boom-bust pattern anyway is overwhelming evidence of the power of the herding impulse.

It is not only novices who fall in line. It is a lesser-known fact that the vast majority of professionals herd just like the naïve majority. Figure 2 shows the percentage of cash held at institutions as it relates to the level of the S&P 500 Composite Index. As you can see, the two data series move roughly together, showing that professional fund managers herd right along with the market just as the public does.

Apparent expressions of cold reason by professionals follow herding patterns as well. Finance professor Robert Olsen recently conducted a study of 4,000 corporate earnings estimates by company analysts and reached this conclusion:

Experts’ earnings predictions exhibit positive bias and disappointing accuracy. These shortcomings are usually attributed to some combination of incomplete knowledge, incompetence, and/or misrepresentation. This article suggests that the human desire for consensus leads to herding behavior among earnings forecasters.[10]

Olsen’s study shows that the more analysts are wrong, which is another source of stress, the more their herding behavior increases.[11]

How can seemingly rational professionals be so utterly seduced by the opinion of their peers that they will not only hold, but change opinions collectively? Recall that the neocortex is to a significant degree functionally disassociated from the limbic system. This means not only that feelings of conviction may attach to utterly contradictory ideas in different people, but that they can do so in the same person at different times. In other words, the same brain can support opposite views with equally intense emotion, depending upon the demands of survival perceived by the limbic system. This fact relates directly to the behavior of financial market participants, who can be flushed with confidence one day and in a state of utter panic the next. As Yale economist Robert Shiller puts it, “You would think enlightened people would not have firm opinions” about markets, “but they do, and it changes all the time.”[12] Throughout the herding process, whether the markets are real or simulated, and whether the participants are novices or professionals, the general conviction of the rightness of stock valuation at each price level is powerful, emotional and impervious to argument.

Falling into line with others for self-preservation involves not only the pursuit of positive values but also the avoidance of negative values, in which case the reinforcing emotions are even stronger. Reptiles and birds harass strangers. A flock of poultry will peck to death any individual bird that has wounds or blemishes. Likewise, humans can be a threat to each other if there are perceived differences between them. It is an advantage to survival, then, to avoid rejection by revealing your sameness. D.C. Gajdusek researched a long-hidden Stone Age tribe that had never seen Western people and soon noticed that they mimicked his behavior; whenever he scratched his head or put his hand on his hip, the whole tribe did the same thing.[13] Says MacLean, “It has been suggested that such imitation may have some protective value by signifying, ‘I am like you.’” He adds, “This form of behavior is phylogenetically deeply ingrained.”[14]

The limbic system bluntly assumes that all expressions of “I am not like you” are infused with danger. Thus, herding and mimicking are preservative behavior. They are powerful because they are impelled, regardless of reasoning, by a primitive system of mentation that, however uninformed, is trying to save your life.

As with so many useful paleomentational tools, herding behavior is counterproductive with respect to success in the world of modern financial speculation. If a financial market is soaring or crashing, the limbic system senses an opportunity or a threat and orders you to join the herd so that your chances for success or survival will improve. The limbic system produces emotions that support those impulses, including hope, euphoria, cautiousness and panic. The actions thus impelled lead one inevitably to the opposite of survival and success, which is why the vast majority of people lose when they speculate.[15] In a great number of situations, hoping and herding can contribute to your well-being. Not in financial markets. In many cases, panicking and fleeing when others do cuts your risk. Not in financial markets. The important point with respect to this aspect of financial markets is that for many people, repeated failure does little to deter the behavior. If repeated loss and agony cannot overcome the limbic system’s impulses, then it certainly must have free rein in comparatively benign social settings.

Regardless of their inappropriateness to financial markets, these impulses are not irrational because they have a purpose, no matter how ill-applied in modern life. Yet neither are they rational, as they are within men’s unconscious minds, i.e., their basal ganglia and limbic system, which are equipped to operate without and to override the conscious input of reason. These impulses, then, serve rational general goals but are irrationally applied to too many specific situations.

PHI IN THE UNCONSCIOUS MENTATIONAL PATTERNS OF INDIVIDUALS AND GROUPS

At this point, we have identified unconscious, impulsive mental processes in individual human beings that are involved in governing behavior with respect to one’s fellows in a social setting. Is it logical to expect such impulses to be patterned? When the unconscious mind operates, it could hardly do so randomly, as that would mean no thought at all. It must operate in patterns peculiar to it. Indeed, the limbic systems of individuals produce the same patterns of behavior over and over when those individuals are in groups. The interesting observation is how the behavior is patterned. When we investigate statistical and scientific material on the subject, rare as it is, we find that our Fibonacci-structured neurons and microtubules (see “Science is Validating the Concept of the Wave Principle”) participate in Fibonacci patterns of mentation.

Perhaps the most rigorous work in this area has been performed by psychologists in a series of studies on choice. G.A. Kelly proposed in 1955 that every person evaluates the world around him using a system of bipolar constructs.[16] When judging others, for instance, one end of each pole represents a maximum positive trait and the other a maximum negative trait, such as honest/dishonest, strong/weak, etc. Kelly had assumed that average responses in value-neutral situations would be 0.50. He was wrong. Experiments show a human bent toward favor or optimism that results in a response ratio in value-neutral situations of 0.62, which is phi. Numerous binary-choice experiments have reproduced this finding, regardless of the type of constructs or the age, nationality or background of the subjects. To name just a few, the ratio of 62/38 results when choosing “and” over “but” to link character traits, when evaluating factors in the work environment, and in the frequency of cooperative choices in the prisoner’s dilemma.[17]

Psychologist Vladimir Lefebvre of the School of Social Sciences at the University of California in Irvine and Jack Adams-Webber of Brock University corroborate these findings. When Lefebvre asks subjects to choose between two options about which they have no strong feelings and/or little knowledge, answers tend to divide into Fibonacci proportion: 62% to 38%. When he asks subjects to sort indistinguishable objects into two piles, they tend to divide them into a 62/38 ratio. When subjects are asked to judge the “lightness” of gray paper against solid white and solid black, they persistently mark it either 62% or 38% light,[18] favoring the former. (See Figure 3.) When Adams-Webber asks subjects to evaluate their friends and acquaintances in terms of bipolar attributes, they choose the positive pole 62% of the time on average.[19] When he asks a subject to decide how many of his own attributes another shares, the average commonality assigned is 0.625.[20] When subjects are given scenarios that require a moral action and asked what percentage of people would take good actions vs. bad actions, their answers average 62%.[21] “When people say they feel 50/50 on a subject,” Lefebvre says, “chances are it’s more like 62/38.”[22]

Lefebvre concludes from these findings, “We may suppose that in a human being, there is a special algorithm for working with codes independent of particular objects.”[23] This language fits MacLean’s conclusion and LeDoux’s confirmation that the limbic system can produce emotions and attitudes that are independent of objective referents in the cortex. If these statistics reveal something about human thought, they suggest that in many, perhaps all, individual humans, and certainly in an aggregate average, opinion is predisposed to a 62/38 inclination. With respect to each individual decision, the availability of pertinent data, the influence of prior experiences and/or learned biases can modify that ratio in any given instance. However, phi is what the mind starts with. It defaults to phi whenever parameters are unclear or information insufficient for an utterly objective assessment.

This is important data because it shows a Fibonacci decision-based mentation tendency in individuals. If individual decision-making reflects phi, then it is less of a leap to accept that the Wave Principle, which also reflects phi, is one of its products. To narrow that step even further, we must be satisfied that phi appears in group mentation in the real world. Does Fibonacci-patterned decision-making mentation in individuals result in a Fibonacci-patterned decision-making mentation in collectives? Data from the 1930s and the 1990s suggests that it does.

Lefebvre and Adams-Webber’s experiments show unequivocally that the more individuals’ decisions are summed, the smaller is the variance from phi. In other words, while individuals may vary somewhat in the phi-based bias of their bipolar decision-making, a large sum of such decisions reflects phi quite precisely. In a real-world social context, Lefebvre notes by example that the median voting margin in California ballot initiatives over 100 years is 62%. The same ratio holds true in a study of all referenda in America over a decade[24] as well as referenda in Switzerland from 1886 to 1978.[25]

In the early 1930s, before any such experiments were conducted or models proposed, stock market analyst Robert Rhea undertook a statistical study of bull and bear markets from 1896 to 1932. He knew nothing of Fibonacci, as his work in financial markets predated R.N. Elliott’s discovery of the Fibonacci connection by eight years. Thankfully, he published the results despite, as he put it, seeing no immediate practical value for the data. Here is his summary:

Bull markets were in progress 8143 days, while the remaining 4972 days were in bear markets. The relationship between these figures tends to show that bear markets run 61.1 percent of the time required for bull periods…. The bull market[‘s]…net advance was 46.40 points. [It] was staged in four primary swings of 14.44, 17.33, 18.97 and 24.48 points respectively. The sum of these advances is 75.22. If the net advance, 46.40, is divided into the sum of advances, 75.22, the result is 1.621. The total of secondary reactions retraced 62.1 percent of the net advance.[26]

To generalize his findings, the stock market on average advances by 1s and retreats by .618s, in both price and time.
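
Rhea’s percentages can be recomputed directly from the figures he quotes. The following is a quick check, written in Python purely for the reader’s convenience; it adds nothing beyond the arithmetic already stated above.

    # Recomputing Rhea's ratios from the figures quoted above
    bull_days, bear_days = 8143, 4972
    print(bear_days / bull_days)                         # 0.611 -> bear markets ran 61.1% of bull-market time

    advances = [14.44, 17.33, 18.97, 24.48]              # the four primary swings
    net_advance = 46.40
    print(sum(advances))                                 # 75.22
    print(sum(advances) / net_advance)                   # 1.621
    print((sum(advances) - net_advance) / net_advance)   # 0.621 -> secondary reactions retraced 62.1% of the net advance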

Lefebvre and others’ work showing that people have a natural tendency to make choices that are 61.8% optimistic and 38.2% pessimistic directly reflects Robert Rhea’s data indicating that bull markets tend both to move prices and to endure 62% relative to bear markets’ 38%. Bull markets and bear markets are the quintessential expressions of optimism and pessimism in an overall net-neutral environment for judgment. Moreover, they are created by a very large number of people, whose individual differences in decision-making style cancel each other out to leave a picture of pure Fibonacci expression, the same result produced in the aggregate in bipolar decision-making experiments. As rational cogitation would never produce such mathematical consistency, this picture must come from another source, which is likely the impulsive paleomentation of the limbic system, the part of the brain that induces herding.

While Rhea’s data need to be confirmed by more statistical studies, prospects for their confirmation appear bright. For example, in their 1996 study on log-periodic structures in stock market data, Sornette and Johansen investigate successive oscillation periods around the time of the 1987 crash and find that each period (tn) equals a value (λ) raised to the power of the period’s place in the sequence (n), so that tn = λ^n. They then state outright the significance of the Fibonacci ratio that they find for λ:

The “Elliott wave” technique…describes the time series of a stock price as made of different “waves.” These different waves are in relation with each other through the Fibonacci series, [whose numbers] converge to a constant (the so-called golden mean, 1.618), implying an approximate geometrical series of time scales in the underlying waves. [This idea is] compatible with our above estimate for the ratio λ ≈ 1.5-1.7.[27]
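
To make the geometric scaling concrete, the relation tn = λ^n implies that each successive oscillation period is λ times the previous one. A minimal numerical illustration follows, with λ set to the golden mean purely for the sake of the example; Sornette and Johansen estimate λ from the data, at roughly 1.5-1.7.

    # Successive oscillation periods under t_n = lambda**n, with lambda set to the
    # golden mean for illustration only.
    LAMBDA = (1 + 5 ** 0.5) / 2                     # 1.618...
    periods = [LAMBDA ** n for n in range(1, 6)]
    ratios = [later / earlier for earlier, later in zip(periods, periods[1:])]
    print(ratios)                                   # each ratio equals lambda, about 1.618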

This phenomenon of time is the same as the one that R.N. Elliott described for price swings in the 1930-1939 period recounted in Chapter 5 of The Wave Principle of Human Social Behavior. In the past three years, modern researchers have conducted experiments that further demonstrate Elliott’s observation that phi and the stock market are connected. The October 1997 New Scientist reports on a study that concludes that the stock market’s Hurst exponent,[28] which characterizes its fractal dimension, is 0.65.[29] This number is quite close to the Fibonacci ratio. However, since that time, the figure for financial auction-market activity has gotten even closer. Europhysics Letters has just published the results of a market simulation study by European physicists Caldarelli, Marsili and Zhang. Although the simulation involves only a dozen or so subjects at a time trading a supposed currency relationship, the resulting price fluctuations mimic those in the stock market. Upon measuring the fractal persistence of those patterns, the authors come to this conclusion:

The scaling behavior of the price “returns”…is very similar to that observed in a real economy. These distributions [of price differences] satisfy the scaling hypothesis…with an exponent of H = 0.62.[30]

The Hurst exponent of this group dynamic, then, is 0.62. Although the authors do not mention the fact, this is the Fibonacci ratio. Recall that the fractal dimension of our neurons is phi. These two studies show that the fractal dimension of the stock market is related to phi. The stock market, then, has the same fractal dimensional factor as our neurons, and both of them are the Fibonacci ratio. This is powerful evidence that our neurophysiology is compatible with, and therefore intimately involved in, the generation of the Wave Principle.
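
For readers who wish to experiment, a Hurst exponent can be approximated from a return series by classical rescaled-range (R/S) analysis. The sketch below is illustrative only: the cited studies do not publish their estimation code, and R/S analysis is merely one common estimation method.

    import numpy as np

    def hurst_rs(increments, min_window=8):
        """Estimate the Hurst exponent of a series of increments (e.g., log returns)
        by rescaled-range (R/S) analysis."""
        x = np.asarray(increments, dtype=float)
        n = len(x)
        window_sizes, rs_values = [], []
        size = min_window
        while size <= n // 2:
            rs_per_segment = []
            for start in range(0, n - size + 1, size):
                segment = x[start:start + size]
                z = np.cumsum(segment - segment.mean())   # cumulative deviation from the mean
                r = z.max() - z.min()                     # range of the cumulative deviations
                s = segment.std()                         # standard deviation of the segment
                if s > 0:
                    rs_per_segment.append(r / s)
            if rs_per_segment:
                window_sizes.append(size)
                rs_values.append(np.mean(rs_per_segment))
            size *= 2
        # H is the slope of log(mean R/S) against log(window size)
        slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
        return slope

    # Typical usage: hurst_rs(np.diff(np.log(closing_prices)))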

Lefebvre explains why scientists are finding phi in every aspect of both average individual mentation and collective mentation:

The golden section results from the iterative process. …Such a process must appear [in mentation] when two conditions are satisfied: (a) alternatives are polarized, that is, one alternative plays the role of the positive pole and the other one that of the negative pole; and (b) there is no criterion for the utilitarian preference of one alternative over the other.[31]

This description fits people’s mental struggle with the stock market, it fits people’s participation in social life in general, and it fits the Wave Principle.

It is particularly intriguing that the study by Caldarelli et al. purposely excludes all external input of news or “fundamentals.” In other words, it purely records “all the infighting and ingenuity of the players in trying to outguess the others.”[32] As Lefebvre’s work anticipates, subjects in such a nonobjective environment should default to phi, which Elliott’s model and the latest studies show is exactly the number to which they default in real-world financial markets.

CONCLUSION

R.N. Elliott discovered, before any of the above was known, that the form of mankind’s evaluation of his own productive enterprise, i.e., the stock market, has Fibonacci properties. These studies and statistics say that the mechanism that generates the Wave Principle, man’s unconscious mind, has countless Fibonacci-related properties. These findings are compatible with Elliott’s hypothesis.

NOTES

  1. MacLean, P. (1990). The triune brain in evolution: role in paleocerebral functions. New York: Plenum Press.
  2. Scuoteguazza, H. (1997, September/October). “Handling emotional intelligence.” The Objective American.
  3. MacLean, P. (1990). The triune brain in evolution, p. 17.
  4. Chapters 15 through 19 of The Wave Principle of Human Social Behavior explore this point further.
  5. Wright, K. (1997, October). “Babies, bonds and brains.” Discover, p. 78.
  6. Pigou, A.C. (1927). Industrial fluctuations. London: F. Cass.
  7. Pigou, A.C. (1920). The economics of welfare. London: F. Cass.
  8. Among others, such measures include put and call volume ratios, cash holdings by institutions, index futures premiums, the activity of margined investors, and reports of market opinion from brokers, traders, newsletter writers and investors.
  9. Bishop, J.E. (1987, November 17). “Stock market experiment suggests inevitability of booms and busts.” The Wall Street Journal.
  10. Olsen, R. (1996, July/August). “Implications of herding behavior” Financial Analysts Journal, pp. 37-41.
  11. Just about any source of stress can induce a herding response. MacLean humorously references the tendency of governments and universities to respond to tension by forming ad hoc committees.
  12. Passell, P. (1989, August 25). “Dow and reason: distant cousins?” The New York Times.
  13. Gajdusek, D.C. (1970). “Physiological and psychological characteristics of stone age man.” Symposium on Biological Bases of Human Behavior, Eng. Sci. 33, pp. 26-33, 56-62.
  14. MacLean, P. (1990). The triune brain in evolution.
  15. There is a myth, held by nearly all people outside of back-office employees of brokerage firms and the IRS, that many people do well in financial speculation. Actually, almost everyone loses at the game eventually. The head of a futures brokerage firm once confided to me that never in the firm’s history had customers in the aggregate had a winning year. Even in the stock market, when the public or even most professionals win, it is a temporary, albeit sometimes prolonged, phenomenon. The next big bear market usually wipes them out if they live long enough, and if they do not, it wipes out their successors. This is true regardless of today’s accepted wisdom that the stock market always goes to new highs eventually and that today’s investors are “wise.” Aside from the fact that the “new highs forever” conviction is false (Where was the Roman stock market during the Dark Ages?), what counts is when people act, and that is what ruins them.
  16. Kelly, G.A. (1955). The psychology of personal constructs, Vols. 1 and 2.
  17. Osgood, C.E., and M.M. Richards (1973). Language, 49, pp. 380-412; Shalit, B. (1960). British Journal of Psychology, 71, pp. 39-42; Rapoport, A. and A.M. Chammah (1965). Prisoner’s dilemma. University of Michigan Press.
  18. Poulton, E.C., Simmonds, D.C.V. and Warren, R.M. (1968). “Response bias in very first judgments of the reflectance of grays: numerical versus linear estimates.” Perception and Psychophysics, Vol. 3, pp. 112-114.
  19. Adams-Webber, J. and Benjafield, J. (1973). “The relation between lexical marking and rating extremity in interpersonal judgment.” Canadian Journal of Behavioral Science, Vol. 5, pp. 234-241.
  20. Adams-Webber, J. (1997, Winter). “Self-reflexion in evaluating others.” American Journal of Psychology, Vol. 110, No. 4, pp. 527- 541.
  21. McGraw, K.M. (1985). “Subjective probabilities and moral judgments.” Journal of Experimental and Biological Structures, #10, pp. 501-518.
  22. Washburn, J. (1993, March 31). “The human equation.” The Los Angeles Times.
  23. Lefebvre, V.A. (1987, October). “The fundamental structures of human reflexion.” The Journal of Social Biological Structure, Vol. 10, pp. 129-175.
  24. Lefebvre, V.A. (1992). A psychological theory of bipolarity and reflexivity. Lewinston, NY: The Edwin Mellen Press. And Lefebvre, V.A. (1997). The cosmic subject. Moscow: Russian Academy of Sciences Institute of Psychology Press.
  25. Butler, D. and Ranney, A. (1978). Referendums. Washington, D.C.: American Enterprise Institute for Public Policy Research.
  26. Rhea, R. (1934). The story of the averages: a retrospective study of the forecasting value of Dow’s theory as applied to the daily movements of the Dow-Jones industrial & railroad stock averages. Republished January 1990. Omnigraphi. (See discussion in Chapter 4 of Elliott Wave Principle by Frost and Prechter.)
  27. Sornette, D., Johansen, A., and Bouchaud, J.P. (1996). “Stock market crashes, precursors and replicas.” Journal de Physique I France 6, No.1, pp. 167-175.
  28. The Hurst exponent (H), named for its developer, Harold Edwin Hurst [ref: Hurst, H.E., et al. (1951). Long term storage: an experimental study] is related to the fractal, or Hausdorff, dimension (D) by the following formula, where E is the embedding Euclidean dimension (2 in the case of a plane, 3 in the case of a space): D = E – H. It may also be stated as D = E + 1 – H if E is the generating Euclidean dimension (1 in the case of a line, 2 in the case of a plane). Thus, if the Hurst exponent of a line graph is .38, or Φ^-2, then the fractal dimension is 1.62, or Φ; if the Hurst exponent is .62, or Φ^-1, then the fractal dimension is 1.38, or 1 + Φ^-2. [source: Schroeder, M. (1991). Fractals, chaos, power laws: minutes from an infinite paradise. New York: W.H. Freeman & Co.] Thus, if H is related to Φ, so is D.
  29. Brooks, M. (1997, October 18). “Boom to bust.” New Scientist.
  30. Caldarelli, G., et al. (1997). “A prototype model of stock exchange.” Europhysics Letters, 40 (5), pp. 479-484.
  31. Lefebvre, V.A. (1998, August 18-20). “Sketch of reflexive game theory,” from the proceedings of The Workshop on Multi-Reflexive Models of Agent Behavior conducted by the Army Research Laboratory.
  32. Caldarelli, G., et al. (1997, December 1). “A prototype model of stock exchange.” Europhysics Letters, 40 (5), pp. 479-484.

Testing the Efficacy of the New High/New Low Index Using Proprietary Data

by Richard T. Williams, CMT, CFA

About the Author | Richard T. Williams, CMT, CFA

Richard Williams is a Senior Vice President and Fundamental/Technical analyst for Jefferies & Company. He specializes in enterprise software and e-commerce infrastructure stocks. During 1999, his stock and convertible (equity substitute) recommendations returned in excess of 360%, making him the top performing software analyst based on Bloomberg data.
Prior to joining Jefferies in 1997, Mr. Williams was an institutional salesman at Kidder Peabody/Paine Webber from 1992-97 and a convertible and warrant sales trader from 1988-92.

Mr. Williams received his MBA in Finance from NYU’s Stern School of Business in 1991. He received a B.A. in Government and Computer Science from Dartmouth College. He is a Chartered Financial Analyst and a Chartered Market Technician. Mr. Williams is a member of the Association for Investment Management & Research and the Market Technicians Association. He was recently voted runner-up for the Charles Dow award, for contributions to the body of technical knowledge. He has published several articles in the MTA Journal, been a frequent Radio Wall Street guest and is regularly quoted in the foreign/domestic press and magazines.

INTRODUCTION

Based on comments by fellow MTA members shortly after submission of my 1999 CMT paper, Testing the Efficacy of New High/New Low Data, I began to ponder how I might explore further the possibilities of using new high/new low data as a stock market indicator. The NYSE new highs and new lows have been used in technical analysis and by market watchers for many years. The theory is that stocks reaching new 52-week highs or lows represent significant events relative to the market and its sectors. Studying the actual prices underlying the new high/low data might suggest intriguing new ways to explore different applications of indicators like the 10-Day High/Low Index (referred to hereinafter as the TDHLI) for predicting market action.

There are a number of ways to use new high and new low data. In order to test the efficacy of new high/new low indicators using proprietary data, we will employ the same 10-day moving average of the percentage of new highs over new highs plus new lows that I employed last year to test publicly available new high/new low data. The traditional rules, by way of review, were introduced to me by Jack Redegeld, the head of technical research at Scudder, Stevens & Clark in 1986. A definition of terms from my 1999 CMT paper follows below:

What differentiates the TDHLI from other indicators that use new high and new low data is that it tracks the oscillation from 0 to 1 of the net new highs (new highs/(new highs + new lows)) and then uses a 10-day simple moving average to smooth the results. The TDHLI signals a buy when the indicator rises above 0.3 (or 30% of the range) and a sell when it falls below 0.7 (or 70% of the range). The 70/30 filter comes from Jack Redegeld’s work over the years. A.W. Cohen published an approach in 1968 using rules similar to Jack Redegeld’s application of the TDHLI during his years at Scudder (1961-1989):

The extreme percentages on this chart are above 90% (occasionally above 80%) and below 10% (occasionally below 20%). Intermediate down moves and bear markets usually end when this percentage is below the 10% level. The best time to go long is when the percentage is below the 10% level and turns up. This is a bull alert signal. Short positions should be covered and long positions taken. A rise in the percentage above a previous top or above the 50% level is a bull confirmed signal. The best time to sell short is when this percentage is above the 90% level and turns down. This is a bear alert signal. Long positions should be closed out and short positions established. A drop in the percentage below a previous bottom or below the 50% signals a bear confirmed market.
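
As an illustration, a minimal sketch of the TDHLI computation and the traditional 30/70 crossing rules might look like the following. Treating a day with no new highs and no new lows as a neutral 0.5 reading is an assumption of the sketch, not something specified in the rules above.

    def tdhli(new_highs, new_lows, window=10):
        """10-day simple moving average of new highs / (new highs + new lows)."""
        ratios = []
        for nh, nl in zip(new_highs, new_lows):
            total = nh + nl
            ratios.append(nh / total if total else 0.5)   # assumption: a 0/0 day reads as neutral
        return [sum(ratios[i - window + 1:i + 1]) / window
                for i in range(window - 1, len(ratios))]

    def traditional_signals(indicator, buy_level=0.3, sell_level=0.7):
        """Buy when the TDHLI rises up through 0.3; sell when it falls down through 0.7."""
        signals = [None]
        for prev, curr in zip(indicator, indicator[1:]):
            if prev <= buy_level < curr:
                signals.append("buy")
            elif prev >= sell_level > curr:
                signals.append("sell")
            else:
                signals.append(None)
        return signals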

In my CMT paper, I suggested a dynamic filtering method to improve the performance of the TDHLI. The technique is simply to apply a percentage filter, based on two standard deviations from the mean of the data, to the TDHLI. The net effect of the dynamic filter method is to capture a greater portion of the market’s move, but at the cost of higher transaction costs and more frequent signals (some of which will be false signals that incur losses – further increasing the cost of doing business). In my prior study, the data showed substantial performance gains from employing the dynamic filtering method on new high/low data taken from publicly available sources and applied to the TDHLI.

The purpose of this paper is to evaluate the efficacy of the TDHLI using four years of historical prices to calculate new high/low data on the largest 5000 stocks on the NYSE, Amex and NASDAQ from January 31, 1996 to December 31, 1999. The conclusions of my earlier paper were that the traditional rules suggested by both Messrs. Cohen and Redegeld did not perform particularly well. The dynamic rules that I suggested improved performance and flexibility, but that study did little to explore the impact of factors such as market cap on the performance and predictive power of indicators like the TDHLI. The aim of the current study is to explore the merits of calculating new high/low data and parsing it by market cap to provide a deeper look into the technicals of the stock market.

The attempt at compiling the relatively large database of stock prices necessary to generate new high/low data nearly undid my project to submit for the 2000 Charles Dow Award. Despite my best efforts over several months and a surprisingly ineffectual collection of state-of-the-art PC hardware, the entire enterprise nearly collapsed from the strain on the memory and processing speed available today. The final, successful effort required four PCs with Pentium 500-600 megahertz processors and a combined 704 megabytes of RAM running in parallel on small subsets of data that had to be reorganized and recompiled after the initial run. To say that the additional capabilities made possible by using proprietary data come at a cost is an understatement. Still, the benefits of flexibility may yet prove to be worth the ultimate effort.

My suggested approach, as before, uses percentage filters held to a tolerance of two standard deviations from the mean for past signals. Filter percentages varied as one might expect with the market cap and volatility of the stocks. When the value of the indicator changes, for example, by 20% from its most recent high or low, a buy or sell is signaled. The percentage hurdle was derived by taking the percent move that correctly captured 95% (or 2 standard deviations from the mean) of the historic index moves. The mid cap data required a 21% filter for the period studied while small caps needed a 25% band to meet the criterion. Totals for all the new high/low data, perhaps as a result of the time period utilized and the relative numbers of small cap stocks in the data, required a 23% filter. The benefit of dynamic rules appears to be supported by my experience with both studies conducted to date.
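
One way to implement the dynamic rule described here is as a simple swing filter on the TDHLI: a buy once the indicator recovers by the filter percentage from its most recent trough, and a sell once it gives back the filter percentage from its most recent peak. Whether the percentage is applied to the indicator’s value, as in this sketch, or to its range is an interpretive assumption; the filter percentage itself is the two-standard-deviation hurdle described above.

    def dynamic_filter_signals(indicator, filter_pct):
        """Swing-filter reading of the TDHLI: buy after a rise of filter_pct from the
        most recent trough, sell after a decline of filter_pct from the most recent peak."""
        signals = []
        mode = "flat"                      # "flat" until a buy, then "long" until a sell
        extreme = indicator[0]             # running trough (when flat) or peak (when long)
        for value in indicator:
            if mode == "flat":
                extreme = min(extreme, value)
                if value >= extreme * (1 + filter_pct):
                    mode, extreme = "long", value
                    signals.append("buy")
                    continue
            else:
                extreme = max(extreme, value)
                if value <= extreme * (1 - filter_pct):
                    mode, extreme = "flat", value
                    signals.append("sell")
                    continue
            signals.append(None)
        return signals

    # Example: dynamic_filter_signals(indicator, 0.23) for the 23% aggregate filter.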

METHODOLOGY

New highs are defined as stocks reaching a new high over the previous year of daily prices. New lows, conversely, are stocks falling below the lowest price of the prior year. Stocks were selected based on the 5000 largest market capitalizations on the three main US exchanges and may represent biased data to the extent that prior performance influenced the market caps at the time the data was sorted (the end date). Spreadsheets were then constructed to reflect new highs/lows from yearly databases of daily closing prices.
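
The counting step can be sketched as follows, assuming aligned daily closing prices per stock and a 252-trading-day year. The study speaks of “daily prices”; using closing prices rather than intraday highs and lows is an assumption made here for simplicity.

    def count_new_highs_lows(closes_by_stock, lookback=252):
        """For each day, count how many stocks closed at a new high or a new low
        relative to roughly the prior year of daily closes.
        closes_by_stock: dict of ticker -> list of daily closes, aligned by date."""
        n_days = len(next(iter(closes_by_stock.values())))
        new_highs = [0] * n_days
        new_lows = [0] * n_days
        for closes in closes_by_stock.values():
            for t in range(lookback, n_days):
                window = closes[t - lookback:t]
                if closes[t] > max(window):
                    new_highs[t] += 1
                elif closes[t] < min(window):
                    new_lows[t] += 1
        return new_highs, new_lows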

The tables below show buys and sells in the first column, the trade date in the second, the index value in the third, the return for each trade in decimal format next and the final column shows cumulative results (1.04 = 4% gain, 0.99 = 1% loss) for all trades in sequence. At the bottom of each table the total index and indicator returns can be found.

OBSERVATIONS

After evaluating the TDHLI performance characteristics based on proprietary new high/low data, the first conclusion was that, as my previous study showed, the traditional buy/sell rules did not work effectively. Another conclusion was that the TDHLI using dynamic rules performed better than the traditional rules, but did not perform as well in the current period as it did in the earlier study. The slippage between trades based on the TDHLI was partially responsible for a return deficit compared to the S&P 500 over the evaluation period. In strongly trending markets, any trading activity tends to negatively impact overall performance. The S&P 500 was interrupted in its upward march by only two corrections, one of 22.5% and the other of 13%. It is interesting to note that using aggregate data, the TDHLI lost only 5.16% during the 22.5% October 1998 decline in the S&P 500. Similarly, during the pullback last fall, the TDHLI fell 7.55% versus 13% for the S&P 500. In fact, the most significant slippage in relative performance occurred around periods of high performance: by either starting late or finishing prematurely during a significant market move, the TDHLI underperformed the averages.

Loss management was a significant issue for the TDHLI over each of the subdivisions of the data. Relatively large losses were incurred as the TDHLI moved to a sell, but the market reacted more sharply. Given the significant bifurcation of the markets during the last four years, an increasing number of stocks are lagging the performance of the averages. Put another way, fewer and fewer stocks are leading the way higher and the volatility of the averages is increasing. This effect has been documented in the media. This stratification implies that the timing value of new highs/lows will be diminished for the time being. Due to the magnitude of the data management task, a more comprehensive study was not possible using PC based computing.

Still, the predictive power of new high/low data remains formidable. In each case, the periods of time excluded by the TDHLI underperformed substantially in each market cap segment and across all the data. For large caps, the excluded returns were 12.37% vs. 92.06% for the TDHLI. For mid cap data, the excluded returns were slightly negative compared to nearly triple-digit positive returns for the TDHLI. The small cap segment mirrored this result, but the TDHLI provided a less robust performance, in line with the mid cap index, MID. Only in the aggregate data did the excluded returns amount to much, with a 37.02% gain versus 67.09% for the TDHLI and 118.92% for the S&P 500. In spite of the obvious applicability of the Wall Street maxim that it is time in the market, not market timing, that yields the best results, the risk as measured by standard deviation suggests that the TDHLI was exposed to less risk over the period. Granted that market volatility recently has been extraordinary, the TDHLI standard deviations for each market cap and for aggregate data remained in single digits (8.3%, 9.5%, 9.6% and 6.9% for the large, mid, small and total cap groups respectively) while the market indices ranged between 630% for Nasdaq and 23% for the small cap SML index.

The dynamic TDHLI signaled trades more often than the traditional rules, which is consistent with prior results. The number of total trades (and losing trades) for each segment were 20 (7) for large caps, 20 (7) for mid caps, 17 (7) for small caps and 18 (5) for aggregate data. The magnitude of losses averaged about 25% of gains. Looking beyond the market’s extraordinary performance, the TDHLI provided fairly respectable results.

Performance of the dynamic filter method for the TDHLI was respectable compared to the traditional rules, but less robust against the averages. The large cap dynamic indicator returned 92.06% vs. 27.68% for traditional rules and 131.03% for the SPX. The mid cap indicator was up 97.61% vs. traditional rules with 64.50% and MID at 90.92%. The small cap results were 65.57% vs. traditional methods with 33.20% and SML at 58.98%. The aggregate return was 67.29% vs. traditional rules with 46.73% and SPX at 118.92%. The essential difference was that by tracking the relative movements of the TDHLI and signaling reversals of intermediate magnitude, significant moves in the market were captured by the indicator. On the other hand, the TDHLI in the current period tended to prematurely exit the market while meaningful returns remained. One interpretation of this result is that a divergence between a few stocks with high returns and the majority of stocks with much less robust results over the period has created a distortion in market returns that is not fully captured by the new high/low data.

CONCLUSION

The traditional TDHLI, and in the current period the dynamic TDHLI as well, failed to keep pace with the market despite posting strong results. The use of proprietary data made the TDHLI considerably more flexible and provided new and interesting dimensions to the indicator. While it provided useful buy and sell signals under most conditions, the TDHLI, even with the added functionality of proprietary data, did not perform well enough or robustly enough to be considered an effective indicator solely on its own. Its ability to predict market direction, to provide reasonably competitive performance (particularly considering the results of my prior study) and to track the market along the lines of market capitalization may prove over time to make the TDHLI a worthwhile addition to the art of technical analysis.

ATTRIBUTION

  • Joseph Redegeld, The Ten Day New High/New Low Index, 1986.
  • A.W. Cohen, Three-Point Reversal Method of Point & Figure Stock Market Trading, 8th Edition, 1984, Chartcraft, Inc., pg 91.
  • Data was provided by FactSet, Inc.

 


Birth of a Candlestick

by Jonathan T. Lin, CMT

About the Author | Jonathan T. Lin, CMT

Jonathan Lin has been with Salomon Smith Barney since 1994. In his capacity as a technical research analyst there, he contributes to the weekly publications Market Interpretation and Global Technical Market Overview. Prior to joining Salomon, he was a technology specialist at Price Waterhouse for one year, and spent six years at Merrill Lynch as a senior programmer/analyst.

Jonathan has an MBA in management information systems from Pace University, Lubin Graduate School of Business, and a BE in electrical engineering & computer science from Stevens Institute of Technology.

INTRODUCTION

Basics of Candlestick Charting Techniques

Candlestick charts are the most popular and the oldest form of technical analysis in Japan, dating back almost 300 years. They are constructed very much like the open-high-low-close bar charts that most of us use every day, but with one difference. A “real body,” a box instead of a line as in a bar chart, is drawn between the opening and closing prices. The box is colored black when the closing price is lower than the opening price and white when the close is higher than the open. With the colors of the real bodies adding a new dimension to the charts, one can spot changes in market sentiment at a glance – bullish when the bodies are white, and bearish when black. The lines extending to the high and to the low remain intact. The part of the line between the real body and the high is called the “upper shadow,” while the part between the real body and the low is termed the “lower shadow.”
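
In code, the construction just described reduces to a few comparisons. The sketch below is merely illustrative; the “doji” label for a bar whose open equals its close is an addition for completeness, not part of the description above.

    def candle_anatomy(open_, high, low, close):
        """Break an open-high-low-close bar into candlestick components."""
        if close > open_:
            color = "white"
        elif close < open_:
            color = "black"
        else:
            color = "doji"                         # open equals close (an added case)
        body_top, body_bottom = max(open_, close), min(open_, close)
        return {
            "color": color,
            "real_body": body_top - body_bottom,   # box between open and close
            "upper_shadow": high - body_top,       # line from the body up to the high
            "lower_shadow": body_bottom - low,     # line from the body down to the low
        }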

The strength of candlestick charting comes from the fact that it adds an array of patterns to the technical “toolbox,” without taking anything away. Chart readers can draw trendlines, apply computer indicators, and find formations such as ascending triangles and head-and-shoulders on candlestick charts as easily as they can on bar charts. Let us now examine some candlestick patterns, their implications, and the rationale behind them.

A bearish engulfing pattern is formed when a black real body engulfs the prior day’s white real body. As the name implies, it is a signal for a top reversal of the preceding uptrend. The rationale behind a bearish engulfing pattern is straightforward. A white candlestick is normal within an uptrend as the bulls continue to enjoy their success. An engulfing black candlestick the next day means that the second day’s open is higher than the first day’s close, signaling possible continuation of the rally. However, the bulls of the first day turn into losers as the price closes lower than the first day’s open. Such a shift in sentiment should be alarming to the bulls and signals a possible top.

A morning star is comprised of three candles: a long black candlestick, followed by a small real body that gaps under that black candlestick, followed by a long white real body. The first black candlestick is normal within a downtrend. The subsequent small real body, whose close is not far from its open, is the first warning sign, as the bears were not able to move the price much lower as they had on the first day. The third day marks a comeback by the bulls, completing the “morning star,” a bottom reversal pattern.
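
Both patterns can be written down as comparisons of consecutive open and close prices. The sketch below is one reasonable formalization; the “small body” threshold and the requirement that the third candle close well into the first candle’s body are illustrative choices, since the classical definitions leave such details to judgment.

    def bearish_engulfing(day1, day2):
        """day1, day2: (open, close) for two consecutive days within an uptrend.
        A black real body on day two engulfs the prior white real body."""
        o1, c1 = day1
        o2, c2 = day2
        return c1 > o1 and c2 < o2 and o2 > c1 and c2 < o1

    def morning_star(day1, day2, day3, small_body=0.3):
        """Three (open, close) pairs: a long black candle, a small real body gapping
        below it, then a white candle recovering well into the first candle's body."""
        o1, c1 = day1
        o2, c2 = day2
        o3, c3 = day3
        black_first = c1 < o1
        gaps_down = max(o2, c2) < c1                          # small body gaps under day one's body
        small = abs(c2 - o2) <= small_body * abs(c1 - o1)     # illustrative "small" threshold
        white_third = c3 > o3 and c3 > (o1 + c1) / 2          # closes above day one's body midpoint
        return black_first and gaps_down and small and white_third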

There is really no need for me to cover too many candlestick patterns here. The examples are given merely to illustrate the fact that most candlestick patterns are nothing more than collections of up to three sets of open-high-low-close prices and their positions relative to one another.

Importance of Size and Locations of Candle Patterns

The other important points to keep in mind are the sizes of the candlesticks in the patterns and the locations of the patterns within the recent trading range of the price. Let us consider a stock that trades around $60. Scenario One: On the first day, it opened at $60 and closed at $63.75. On the next day, it opened at $64 and closed at $58.75. Those two days’ trading constitutes a bearish engulfing pattern. This pattern should be considered meaningful, since a roughly $4 run-up followed by a $5+ pullback has considerable impact on a $60 stock. Scenario Two: The same stock opened at $60 and closed at $61 on the first day, and opened at $61.25 and closed at $59.875 the next day. Those two days’ trading did indeed still constitute a bearish engulfing pattern, but the effect of the pattern would not be considered as meaningful. A $1 fluctuation for a $60 stock is pretty much a non-event. It should be evident that the sizes of the candlesticks within a pattern matter as much as the pattern itself.

We should now look at the importance of the location of the pattern relative to its trading range. Let us assume that the aforementioned $60 stock has been trading between $45 and $55 for five months, broke out to a new high, and ended the last two trading days as described in Scenario One. One could then speculate that support at $55 will probably be tested in the near future. If the stock, however, has been trading between $58 and $64 for five months, tested support at $58, rallied up to $63.75, and just slipped back to $59.875, the bearish engulfing pattern described in Scenario One should have little meaning. The validity of the pattern is limited here, since there was not much of an uptrend preceding it and the downside risk to $58 is only $1.875 away.

From these comparisons, it should be obvious that the usefulness of a candlestick pattern relies on: 1) the sizes of the candlestick components – real bodies, upper and lower shadows; 2) the relative positions of the candlesticks to one another – gapping from one another, overlapping – after all, that is how patterns are defined; 3) the pattern’s location relative to the previous periods’ trading range; and 4) the size of the trading range itself. In order to find an effective candlestick pattern, these points should all be considered integral parts of the pattern definition.

[Author’s Note: Although volume and open interest accompanying the candlestick patterns could be used as confirmation, the author has decided not to include either as part of the pattern definition for two reasons: 1) A stock can rally or decline without increasing volume. Volume tends to be more evident around structural breakouts and breakdowns, but not always so around early reversal points. This is especially true for thinly traded stocks, where the price can move either way quickly without much volume. Depending on volume as confirmation is impractical at times. 2) The pattern the author plans to find should be rather universal, just like the other candlestick patterns. Shooting stars are not unique to crude oil futures; nor is the bearish engulfing pattern designed only for Microsoft stock. Since some market data, such as spot currencies and interest rates, contain neither volume nor open interest information, a candlestick pattern with volume or open interest as an integral part of it will not be universally useful.]

Basics of Genetic Algorithm – “Survival of the Fittest”

“Survival of the Fittest.” Darwin’s theory of evolution remains one of the scientific theories that have had the most profound impact on humankind to date. In his theory, Darwin proposed that species evolve through natural selection; that is, a species’ chance of existence and its ability to procreate depend on its ability to adapt to its natural habitat. Since only the organisms fit for their natural environment survive, only the genes they carry survive. During the reproductive process, the next generation of organisms is created from genes drawn from the surviving, and hopefully superior, gene pool. The new generation of organisms should, in theory, be even more adaptive to its environment. As some of the offspring produced will certainly be more adaptive to the environment than their peers and therefore survive, the natural selection process repeats itself, and again only “superior” genes will be left in the gene pool. As the process is repeated generation after generation, nature preserves only the “fittest” genes and disposes of the “inferior” ones. It should be noted that during the process, the genes sometimes experience a certain degree of mutation that might create combinations of genes never seen in previous generations. Mutation actually brings about a more diversified pool of genes, perhaps creating even more adaptive organisms than otherwise possible. At times in nature, an array of species evolves from the same origin, with each of them as fit for its habitat, in its own right, as the others.

So, what is a “genetic algorithm”? In plain English, a genetic algorithm is a computer program’s way of finding solutions to a problem by eliminating poor ones and improving on the better ones, mimicking what nature does. In constructing a genetic algorithm, one would start out by defining the problem to be solved in order to decide on the evaluation procedure, imitating the process of natural selection. Try a few possible solutions to the problem. Rank them based on their performance after applying the evaluation process. Keep only the top few solutions and let them reproduce, or mix elements of the top solutions to come up with new ones. The new solutions are then, in turn, evaluated. After a few iterations, or generations, the best solutions will prevail.

For a better explanation, we should now try a fun, practical exercise. Let us consider the process of finding that perfect recipe for a margarita. We start out with six randomly mixed glasses, record the proportion between tequila and lime juice in each, and taste them. The evaluation process here is your friends’ reactions. They agree that two glasses are okay, three are so-so, and one yields the response, “My dog is a better bartender than you.” You then mix five more glasses using proportions somewhere between those found in the two okay glasses, and throw in one wildcard, a randomly mixed sample. Now let your friends pick the two best ones again. By the time you have repeated this process 20 times and gotten your friends stone-drunk, you will have yourself two nice glasses of margarita. Most importantly, those two glasses should have similar proportions of lime juice and tequila; that is, the solution to this problem has converged.

Let us now review our margarita experiment. The goal was to find the best tasting margaritas, and therefore the way to evaluate them was to taste them and rate them. Assuming that you have an objective way to total your friends’ opinions of the samples, the ones that survived by this “somewhat natural selection” – the okay glasses – will get to “reproduce.” The wildcard thrown in represents the mutated one. Just like mutations’ importance in nature as a way of injecting new genes into the gene pool and bringing about a more diverse array of species, artificial mutations are very important in opening up more possibilities in the range of solutions for the problem we intend to solve.

In a more involved problem with more variables, the procedure should be repeated many more than 20 times for any halfway decent solutions to prevail. The more iterations the program performs, the more optimal the solution should be. In fact, if the surviving solutions do not resemble each other at all, they are probably far from optimal and require more time to evolve. What we are searching for here are “sharks.” Sharks, one of the fastest species in the water, have been swimming in the ocean for millions of years. For generations, the faster ones survived by getting to their food faster in a feeding frenzy. Only the “fast” genes are left after all these years. As sharks’ efficient hydrodynamic lines became “perfect,” all of them started to “look alike.” (At least to most of us.)

End Note: To explain the concept of genetic algorithm in an academic manner, I will turn to the principles set forth by John Holland, who pioneered genetic algorithms in 1975:

  1. Evolution operates on encodings of biological entities, rather than on the entities themselves.
  2. Nature tends to make more descendants of chromosomes that are more fit.
  3. Variation is introduced when reproduction occurs.
  4. Nature has no memory. [Author: Evolution has no intelligence built in. Nature does not learn from previous results and failures; it just selects and reproduces.]

Now the steps of the algorithm:

  1. Reproduction occurs.
  2. Possible modification of children occurs.
  3. The children undergo evaluation by the user-supplied evaluation function.
  4. Room is made for the children by discarding members of the population of chromosomes. (Most likely the weakest population members.)
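
Put together, those steps amount to a short loop. The skeleton below is a generic sketch under the assumption of a fixed population size with the weakest members discarded each generation; random_solution, crossover, mutate and evaluate are placeholders to be supplied by the problem at hand.

    import random

    def genetic_search(random_solution, crossover, mutate, evaluate,
                       population_size=8, survivors=2, generations=100):
        """Generic genetic-algorithm loop: evaluate, keep the fittest,
        breed replacements (with possible modification), repeat."""
        population = [random_solution() for _ in range(population_size)]
        for _ in range(generations):
            population.sort(key=evaluate, reverse=True)        # rank by fitness
            parents = population[:survivors]                   # discard the weakest members
            children = []
            while len(children) < population_size - survivors:
                child = crossover(*random.sample(parents, 2))  # reproduction
                children.append(mutate(child))                 # possible modification
            population = parents + children
        return max(population, key=evaluate)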

Definition of the Project’s Intent: Applying Genetic Algorithm to Identify Useful Candlestick Reversal Patterns

Let me now define the problem I intend to solve in this study. I intend to find one candlestick pattern that has good predictability in spotting near-term gains in future price by using the bond futures contract as my testing environment. Since I have cited sharks as the marvelous products of natural selection, I would like to call the organisms that should evolve through my study “candlesharks.”

The candlesharks will have genes that tell them when to signal potential gains, or “eat,” if one would parallel them to the real sharks. The first-generation candlesharks should be pretty “dumb” and not really know when or when not to “eat” – maybe some are eating all the time while some simply do not move at all. As some of them overate or starved to death, the smarter ones knew how to “eat right,” correctly spotting the potential for profit. As only the smart ones survive, they begin to preserve only the smart genes. As these smarter ones mate and reproduce, some of the next generation may contain, by chance, even better combinations of genes, resulting in even smarter candlesharks. As the process continues, the candlesharks should evolve to be pretty smart eaters. Once in a while, some genes will mutate, creating candlesharks unlike their parents. Whether the newly injected genes will be included in the gene pool will depend on the success of these mutated creatures in adapting to their environment.

One thing that should be pointed out is that the late-generation candlesharks will probably swim better in the “bond futures pool” than in a “spot gold pool” or “equity market pool,” which they have never been in before. “Survival of the fittest” is more like “survival of the curve-fittest” here. It should be understood that the candlestick pattern found here has evolved within the “bond” environment and thus is best fit for it. If thrown into the “spot gold pool,” these candlesharks might “die” like the dinosaurs did when the cold wind blew as the Ice Age hit them so unexpectedly, as one of the many theories goes. As in many cases in life, the finer a design with one purpose in mind, whether natural or artificial, the less adaptive it will be when used for other purposes. For example, a 16-gauge wire stripper, while great with 16-gauge wires, will probably do a lousy job stripping 12-gauge wires even when compared to a basic pair of scissors, which can strip any wire, though slowly.

BUILDING A SUITABLE ENVIRONMENT FOR EVOLUTION

Defining the Genes

As described in the previous sections, there are at least four major factors that decide the significance of an occurrence of a particular candlestick pattern: the size of the recent trading range, the current position within that range, the relative positions of the candlesticks to each other, and the sizes of those candlesticks. To survive in the “bond futures pool,” a candleshark must have genes that let it distinguish variations in these environmental parameters. I have therefore designed a candleshark to possess the genes listed in Exhibit 1. The genes within Chromosome C-2, for example, tell the candleshark the range within which the position and size of the candlestick of two days ago must fall, in combination with the other chromosomes’ parameters, before it gives a bullish signal. In fact, think of all these genes as simply tandem series of on-off switches that a candleshark uses to decide whether or not to give a bullish signal.

All the genes defined here come in pairs. The first, with the suffix “-m,” tells the candleshark the minimum of the range in question. The second, with the suffix “+,” signifies the width of the range. Here is one example: if RB-m of C2 is -24 and RB+ of C2 is 16, the candleshark will only give a bullish signal when the candlestick of two trading days ago has a black real body sized between 8 ticks (-24 + 16 = -8) and 24 ticks, that is, when the contract closed between 1/4 and 3/4 of a point lower than it opened. Please keep in mind that C2 contributes only one part of the decision-making process for the candleshark. Even if the real body of two days ago fits the criteria, the other criteria have to be met as well before the candleshark will actually give a bullish signal.
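
The decoding of a “-m”/“+” gene pair into a usable test can be sketched as follows. The helper names are hypothetical, and the 1/32-point tick follows the bond-futures convention implicit in the text.

    def gene_range(minimum, width):
        """Decode a '-m'/'+' gene pair into the interval it allows."""
        return minimum, minimum + width

    def real_body_ok(open_, close, minimum, width, tick=1 / 32.0):
        """Check whether a day's real body, measured in ticks (negative for a black
        candle), falls inside the range encoded by one chromosome's RB genes."""
        body_ticks = round((close - open_) / tick)
        low, high = gene_range(minimum, width)
        return low <= body_ticks <= high

    # The example above: RB-m = -24 and RB+ = 16 admit black bodies of 8 to 24 ticks.
    assert real_body_ok(100.0, 100.0 - 12 / 32.0, minimum=-24, width=16)       # 12-tick black body: inside
    assert not real_body_ok(100.0, 100.0 - 30 / 32.0, minimum=-24, width=16)   # 30-tick black body: outside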

The number of digits needed within each gene can be calculated. The size of a real body for a bond contract cannot be larger than 96 ticks, since the daily trading limit is set at three points. A seven-bit binary number can represent 128 values (0 through 127) and is therefore needed for a gene like RB-m of C2. The 40-day trading range could be no larger than 96 ticks times 40, or 3,840 ticks (quite apart from the fact that it seems very unlikely that the bond contract would go up, or down, three points day after day for 40 days). A 12-bit number, capable of representing 4,096 values (0 through 4,095), is needed here.

As the reader might have noticed after referencing Exhibit 1, a number of trading ranges are used. As previously mentioned, the size of the recent trading range and the candle pattern’s position within that range are crucial elements that might make or break the pattern. How does one define “recent,” though? It seemed obvious to me that a multitude of day ranges is needed here, much like the popular practice of using a number of moving averages to assess the crosscurrents of shorter- and longer-term trends. I have chosen to include 5-day, 10-day, 20-day and 40-day trading ranges, which are more or less one, two, four, and eight trading weeks, in my study. The advantage of using a multitude of day ranges can be demonstrated with two examples. Let us say a morning star, a bullish candlestick reversal pattern, was found when the price was near the bottom of both the five-day trading range and the 40-day trading range, as it would be if the price were just making a new reaction low. A morning star at this level is probably less useful, since the pattern is more indicative of a short-term bounce. While this morning star, a three-day pattern, could be signaling a possible turn of the trend of the last five days, it is unconvincing that such a short pattern could signal an end to a trend that has lasted at least 40 days.

Let us now say that the same morning star was found near the bottom of the five-day trading range, but near the top of the 40-day trading range, as the price of a stock would be if it experienced a short-term pullback after breaking out of a longer-term trading range. This morning star could now be a signal for the investor to go long the stock. The morning stars in both examples could be of the same size, both found near the bottom of the five-day trading range, but would have significantly different implications just because they appear at different points of the 40-day trading range. It should be clear now that the inclusion of multiple trading ranges in the decision-making process can be very beneficial.

Since each gene is a binary number, a string of 0’s and 1’s, each will have an MSB (most significant bit, the leftmost digit, like the 3 in 39854) and an LSB (least significant bit, the rightmost digit). Since the MSB obviously has more impact on a candleshark’s behavior, the common state of the more significant bits among the candlesharks is what we should closely examine after the program has let the candlesharks breed for a while. The candlesharks’ main features should be similar after a while. That is, if we do have a nice batch of candlesharks to harvest, they should all have the same sets of 0’s and 1’s among the more significant bits within each gene. The 0’s and 1’s among the trailing bits are less significant by comparison, much like the curvature of an athlete’s forehead has less to do with his speed than his torso structure does. The trailing bits are in the genes and do make a difference, but are basically not significant enough for us to worry about.

When we are ready to harvest our candleshark catches, as the performance improvement from one generation to the next has decreased to a very small level, we can reverse-engineer the genes to find the criteria that trigger their bullish signals. For instance, if all eight candlesharks have “-00011” as the first five digits of RB-m of C2 (meaning RB-m is between -00011000 and -00011111, or -24 and -31) and “00000” as the first five digits of RB+ of C2 (meaning RB+ is between 0000000 and 0000011, or 0 and 3), these candlesharks would only give bullish signals when the real body of two days ago is black and of a size between -31 and -21 (the minimum = -31 + 0 = -31; the maximum = -24 + 3 = -21).
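The arithmetic of that reverse-engineering step can be reproduced with a short sketch; the helper below is hypothetical, but it returns the same intervals as the example above when fed the leading bits quoted there.

    def range_from_prefix(prefix: str, total_bits: int, negative: bool = False):
        """Given the leading bits of a gene, return the (min, max) decimal values
        the gene can take once the trailing bits are filled in."""
        free = total_bits - len(prefix)
        low = int(prefix + "0" * free, 2)
        high = int(prefix + "1" * free, 2)
        return (-high, -low) if negative else (low, high)

    print(range_from_prefix("00011", 8, negative=True))   # (-31, -24)
    print(range_from_prefix("00000", 7))                  # (0, 3)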

Defining the Evaluation Process

I have weighed several methods to evaluate the performance of each of the organisms. First of all, a suitable method has to be of a short-term nature, since the pattern in question is formed in only three days. It is very unlikely that, for instance, a morning star completed twenty days ago has much influence on the current price. Secondly, the upward price movement that comes after the pattern has to be greater than the downward movement, at least on average. It should be realized that no matter how useful a candlestick pattern, or any technical tool, may be, there will be times when it has no predictive ability, or even gives a downright wrong signal. It therefore seems essential to include the total downward moves in the evaluation process. That is, we would like to find a pattern that is not only right, and right by enough, most of the time, but one that will not take anyone to the cleaners when it is wrong.

What I have decided on is the average of the maximum potential gain less the maximum potential loss. First, we find the maximum potential gain by totaling, over all signals, the difference between the highest of the high prices reached by the contract within the next five trading days and the closing price on the day the pattern gave the signal. We then find the maximum potential loss by totaling, over all signals, the difference between that closing price and the lowest of the low prices reached by the contract within the next five trading days. The total maximum loss is subtracted from the total maximum gain, and the result is then divided by the number of signals generated. This figure is the average of the maximum potential gain less the maximum potential loss that I am looking for.
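A sketch of this evaluation measure in Python (the actual program was a Visual Basic macro); I assume here that the five-day window starts on the day after the signal and that signals too close to the end of the data are skipped, and all names are my own.

    def evaluate(signal_indices, closes, highs, lows, horizon=5):
        """Average of the maximum potential gain less the maximum potential loss
        over the `horizon` trading days following each signal."""
        total_gain, total_loss, counted = 0.0, 0.0, 0
        for i in signal_indices:
            if i + horizon >= len(closes):
                continue                        # not enough forward data; skip
            entry = closes[i]
            window_high = max(highs[i + 1 : i + 1 + horizon])
            window_low = min(lows[i + 1 : i + 1 + horizon])
            total_gain += window_high - entry   # maximum potential gain
            total_loss += entry - window_low    # maximum potential loss
            counted += 1
        return (total_gain - total_loss) / counted if counted else 0.0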

Defining the Reproductive Process

We desire enough permutations of the parents’ genes to create diversity among the offspring. Yet too many offspring in each generation would greatly increase the processing time needed to evaluate them all. After a fair amount of contemplation, I believe that eight offspring from two parents is adequate. A trial run shows that the evaluation of eight organisms with 1,500 days’ worth of data requires roughly four minutes of processing time on my personal computer. That equals 1,080 generations after three straight days of processing. Since a large number of generations might be required for effective evolution, eight organisms per generation would have to do. Besides, allowing only two out of eight offspring to survive is a stringent enough elimination process; many large mammals have fewer offspring in their lifetimes.

The second crucial element of the reproduction is the introduction of variation. Under the principles set forth by John Holland, variation is introduced in two ways: binary mutation, the replacement of bits on a chromosome with randomly generated bits, and one-point crossover, the swapping of genetic material between the children at a randomly selected point. I have favored a higher level of mutation, since it introduces more diverse gene sequences into the gene pool more quickly. Both the binary mutation rate and the one-point crossover rate have been set to 0.1, or 10% of the time.
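A minimal sketch of the reproductive step under those settings, assuming each chromosome is a flat string of bits and that binary mutation is applied bit by bit; the 10% rates come from the text, while everything else is my own illustration.

    import random

    CROSSOVER_RATE = 0.10   # chance of a one-point crossover per pair of children
    MUTATION_RATE = 0.10    # chance of replacing each bit with a random bit

    def breed(parent_a: str, parent_b: str, n_children: int = 8):
        """Produce n_children bit strings from two parent chromosomes."""
        children = []
        for _ in range(n_children // 2):
            a, b = parent_a, parent_b
            if random.random() < CROSSOVER_RATE:
                point = random.randrange(1, len(a))
                a, b = a[:point] + b[point:], b[:point] + a[point:]
            for child in (a, b):
                mutated = "".join(
                    random.choice("01") if random.random() < MUTATION_RATE else bit
                    for bit in child
                )
                children.append(mutated)
        return children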

EVALUATION OF RESULTS

Preliminary Results and Modifications

As some readers might have suspected, what I thought was a wonderful study had a pretty rocky start. The final version of the gene definition, as the reader knows it, is actually the third revision. A large number of genes slows down the processing time dramatically, and defining too many trading ranges also proved to be a waste of time.

Choosing the right gene pool to start with, much to my surprise, was in fact quite tricky. I first wrote a small routine, using Visual Basic macros in Excel, to randomly generate eight candlesharks. These candlesharks did indeed reproduce, and the evolution program, also written in Visual Basic, performed its duty and evaluated each one of them. After testing the programs and becoming convinced of their ability to perform their functions, I let the program run overnight, evolving 50 generations. What I found the next morning were eight candlesharks that did absolutely nothing. None of them gave any signal. As they all performed equally poorly, the two selected to reproduce were basically chosen arbitrarily. What I had come up with were the equivalents of species that would have gone extinct in nature.

The next logical step was to use two organisms that would give me signals all the time and to write a small routine to generate six offspring from them. The eight of them would then be my starting point. This proved to be much more effective. Since the first batch of candlesharks gave me more signals than I needed, their offspring could only give an equal number of signals or fewer. As some of the children became more selective, their performance did improve. After viewing the printed results of the first seven or so generations, I was glad to see the gradual, yet steady, improvement in the ability to find bullish patterns. I again let them grow overnight.

What I found the next morning was a collection of candlesharks that gave either one or two signals. Each of the signals was wonderful, pointing to large gains without much risk. As it turned out, a few of them were simply pointing to the same date. These candlesharks had basically curve-fitted themselves just to pick out the day followed by the best five-day gain in the data series. They were again useless.

It then occurred to me that I had to set a minimum number of signals per organism that a candleshark would have to meet before the program would consider it as a breeding candidate. Since I was using 1,500 days, or roughly six years, worth of data, a minimum of 20 signals, or a little over three a year, seemed sufficient.
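Building on the evaluate sketch shown earlier, the minimum-signal rule might look something like this; treating overly selective organisms as unfit by giving them an impossibly poor score is my own device, not necessarily how the original macro handled it.

    MIN_SIGNALS = 20   # a little over three signals a year across ~1,500 trading days

    def fitness(signal_indices, closes, highs, lows):
        """Score used to pick breeders: organisms giving fewer than MIN_SIGNALS
        signals are rejected outright; the rest get the gain-less-loss average."""
        if len(signal_indices) < MIN_SIGNALS:
            return float("-inf")
        return evaluate(signal_indices, closes, highs, lows)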

I am glad to report that things started to look up afterward. I also decided that running the program in smaller batches of iterations could be beneficial. I began running only five generations at a time so that I could observe the progress and make any necessary adjustments. It was not until most things had been fine-tuned that I began my overnight number-crunching again.

Finding Useful Gene Sequences

I first wrote the program with the intention of letting the candlesharks evolve, hoping to see them converge into a batch of similarly shaped creatures. In other words, I was looking for a group of consistent performers. It so happened that, in the evolution program, I had coded a line to print out the gene pool with the evaluation results after every generation. This feature was first put in place as a way to monitor the time needed for each generation and to ensure that the reproduction process was performed correctly. As it turned out, this feature had an extra benefit.

Looking through pages and pages of printouts, I every so often spotted one candleshark with excellent performance. Since a genetic algorithm is based on the principle that nature has no memory, this candleshark’s excellent gene sequence could not be preserved. The only traces of the sequence are in its children, which might not be as good predictors as it was. This occurs in nature as well. Einstein’s being an incredible physicist did not imply that his children would be, too; even though some of his genes live on in his children, none of them might be as great as he was. A number of Johann Sebastian Bach’s sons were outstanding composers, but none as great as he was.

With the printouts in my hands, though, I could reconstruct that one unique gene sequence. I could examine the organism by itself and let it reproduce several times to see if it could be improved upon. Better yet, I could find two great-performing candlesharks, which did not have to be of the same generation, and let them mate, something impossible in nature. Imagine the possibility of seeing the children of J.S. Bach and Clara Schumann if they had ever married, or of turning Da Vinci into a female to mate with Picasso. One might hesitate to do so in nature, but I face no moral dilemma moving a few 0’s and 1’s around.

Evaluating Gene Sequences for Useful Patterns

Running the program from different starting points yielded varying results. One of the runs that intrigued me the most ended with at least five or six organisms out of the last three generations showing similar, if not identical, performance numbers. Upon more careful inspection of the signals they generated, and after referencing candlestick charts for the bond futures contract, one pattern seemed prominent. (For a sample of the signal results, see Exhibit 6.) The first day of the pattern shows a large black candlestick, usually with modestly sized upper and lower shadows. The second candlestick is a slightly smaller black one with more pronounced upper and lower shadows. The last candlestick has a small real body, usually white or at times a doji, with upper and lower shadows of considerable size as well. Very importantly, the three real bodies rarely, if ever, overlap one another.
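Since the exact evolved thresholds live in the gene printouts rather than in the text, the following is only a rough, hand-tuned screen for the shape described above; every numeric cutoff is a guess of mine, the shadow-size conditions are omitted for brevity, and sizes are in ticks.

    def real_body(o, c):
        return abs(c - o)

    def is_black(o, c):
        return c < o

    def matches_pattern(candles):
        """candles: three (open, high, low, close) tuples, oldest first."""
        (o1, h1, l1, c1), (o2, h2, l2, c2), (o3, h3, l3, c3) = candles
        first_large_black = is_black(o1, c1) and real_body(o1, c1) >= 16          # guessed cutoff
        second_smaller_black = is_black(o2, c2) and real_body(o2, c2) < real_body(o1, c1)
        third_small_body = real_body(o3, c3) <= 4                                  # guessed cutoff
        # the three real bodies should not overlap one another
        bodies = [sorted((o, c)) for o, c in ((o1, c1), (o2, c2), (o3, c3))]
        no_overlap = all(
            bodies[i][1] <= bodies[j][0] or bodies[j][1] <= bodies[i][0]
            for i in range(3) for j in range(i + 1, 3)
        )
        return first_large_black and second_smaller_black and third_small_body and no_overlap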

Let us examine a few of these patterns. All the charts included here are those of the bond perpetual futures contract, a weighted moving average of the prices of the current contract and those of the forward contracts. The perpetual contract, as opposed to the continuous contract, is ideal for the long-term study of futures since it includes more than one active contract and eliminates the price gaps around contract-switching dates that have plagued continuous contracts.
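The text does not spell out the weighting scheme behind the perpetual series, so the sketch below shows one common convention, weighting the two nearest contracts by how far their expiries lie from a fixed target maturity; the 90-day target and the interpolation itself are assumptions, not necessarily the method used to build the charts in this study.

    def perpetual_price(near_price, far_price, days_to_near, days_to_far, target_days=90):
        """Constant-maturity style blend of the two nearest contracts."""
        span = days_to_far - days_to_near
        w_near = (days_to_far - target_days) / span
        w_far = 1.0 - w_near
        return w_near * near_price + w_far * far_price

    # Example: near contract expires in 60 days at 101.50, far in 150 days at 100.75.
    print(perpetual_price(101.50, 100.75, 60, 150))   # weights of 2/3 and 1/3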

As one can see by examining the sample charts, the last candlestick within the pattern is always near the bottom of the five-day trading range. This observation follows from the fact that the two preceding days are, by definition of this pattern, both down days. I hope that some readers will find this pattern useful in making their trading and investment decisions. I myself now have another tool in my “technical tool belt,” and I plan on constantly re-evaluating the validity of this pattern by looking for it in the bond contract’s future price activity.

FUTURE POSSIBILITIES 

While this study produced some interesting results, the possibilities are endless. This study was set up to find candlestick patterns that would identify possible up moves in bond prices over the next five days. The next study could be set up to find a complete entry/exit trading system with both long and short positions. A different version of the program could be used to find an optimal combination of technical indicators and parameters for a particular security. In fact, the use of genetic algorithms as a way to optimize neural networks is already widespread.

Looking not so far out, one could even use the existing program to find other candlestick patterns. By simply changing the gene pool one starts with, organisms very different from the one we found might evolve. Much like different species in nature, each of which excels in its own habitat and in its own way, different candlestick patterns may prove effective in different situations. Change the gene pool a little and let the computer go at it. One day, your machine might find something to surprise you.

As computers become faster and faster, one day I should be able to include a much larger number of organisms. Using a multitude of selection/evaluation criteria, several species might evolve at the same time, just like an ecosystem in nature. I might find, after numerous iterations, one pattern particularly good for a five-day forecast while another is found to have excellent one-day forecasting ability. The possibilities are endless.

EXHIBITS

BIBLIOGRAPHY

  • Nison, Steve. Japanese Candlestick Charting Techniques: A Contemporary Guide to the Ancient Investment Technique of the Far East. New York, N.Y.: New York Institute of Finance, 1991.
  • Deboeck, Guido J., Editor. Trading on the Edge: Neural, Genetic and Fuzzy Systems for Chaotic Financial Markets. New York, N.Y.: John Wiley & Sons, Inc., 1994. Chapter 8, “Genetic Algorithm and Financial Applications,” by Laurence Davis.
  • Note: All gene definitions as described in Exhibit V-1, and the Excel macro program Evolution, written in Visual Basic, are original work, and therefore no further reference is given. The author drew on his knowledge and experience as an electrical engineering and computer science major during his undergraduate years, and seven years as a programmer/analyst, to develop the program.