JOURNAL OF
TECHNICAL ANALYSIS
Issue 74, Fall 2024

Editorial Board
Bruce Greig, CFA, CMT, CIPM, CAIA
Director of Research, Q3 Asset Management
Chris Kayser, CMT
Cybercriminologist, Founder, President & CEO
Matt Lutey, PhD
Assistant Professor of Finance
Paul Wankmueller, CMT
Investment Solutions Specialist
Jerome Hartl, CMT
Vice President, Investments, Wedbush Securities
Eric Grasinger, CFA, CMT
Managing Director, Portfolio Manager, Glenmede

CMT Association, Inc.
25 Broadway, Suite 10-036, New York, New York 10004
www.cmtassociation.org
Published by Chartered Market Technician Association, LLC
ISSN (Print)
ISSN (Online)
The Journal of Technical Analysis is published by the Chartered Market Technicians Association, LLC, 25 Broadway, Suite 10-036, New York, NY 10004. Its purpose is to promote the investigation and analysis of the price and volume activities of the world’s financial markets. The Journal of Technical Analysis is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, and several other countries in Europe and Asia. The Journal of Technical Analysis is copyrighted by the CMT Association and registered with the Library of Congress. All rights are reserved.
Letter from the Editor
by Sergio Santamaria, CMT, CFA

Welcome to the 74th issue of the Journal of Technical Analysis (JoTA). After more than four decades and approximately 500 peer-reviewed articles, the JoTA continues to serve as the leading publication for advancing the field of technical analysis. Drawing on multidisciplinary perspectives, from psychology to advanced statistics, the journal remains essential for not only CMT Association members but also academics and investment professionals worldwide who seek a deeper understanding of market behavior.
This edition presents four original contributions, including the 2024 Dow Award-winning paper, each of which sheds new light on critical technical analysis tools and methodologies.
First, in the “Bullish at the Bottom: A Statistical Study of the Bullish Percent Index” manuscript, Jonathan Burson applies rigorous statistical analysis to the Bullish Percent Index (BPI), a market breadth indicator with a long-standing reputation. While often praised in financial literature, the BPI’s merits have received little scholarly attention until now. The findings suggest that the BPI is particularly effective in identifying market troughs, with its strongest signals occurring when the indicator falls below 30, making it a valuable tool for long-entry decisions. However, its performance at market peaks appears less reliable, offering little in terms of timely exit strategies.
Second, we are pleased to feature the winner of the 2024 Charles H Dow Award, “The Ripple Effect of Daily New Lows”. This work by Ralph Vince and Larry Williams highlights the often-overlooked significance of Daily New Lows in predicting market trends. The authors argue convincingly that Daily New Lows, though underutilized, are highly reliable for detecting the onset and progression of bear markets, as well as signaling capitulation phases. Their research makes a compelling case for the inclusion of Daily New Lows in quantitative models, urging market participants to pay closer attention to this neglected metric.
In the third paper, “A Measure of Market Incertitude”, Jeff McDowell introduces a novel approach to assessing market strength by exploring whether today’s market resembles yesterday’s. Building on traditional breadth and diffusion measures, the paper proposes a more granular metric that evolves into an indicator and oscillator, providing analysts with additional tools for detecting shifts in market character. The paper assesses the usefulness of these signal cases in identifying potential trend exhaustion and correction phases.
Finally, in “Frequency of Structures, Length, and Depth of Waves Observed in a Range of Markets using the Elliott Wave Theory”, Lara Iriarte conducts an extensive study spanning eight markets over 304 years, generating over 8,000 data points. Her analysis challenges several commonly accepted assumptions within Elliott Wave Theory, such as wave length and depth ratios. With detailed probability tables and confidence intervals for various wave structures, this research offers a more precise foundation for Elliott Wave analysts to improve the accuracy of their forecasts.
As usual, I would like to take this opportunity to thank the authors for their valuable contributions and for sharing their insights with the broader investment community. Special thanks go to my editorial board colleagues for their dedication in upholding the highest standards of peer review, and to the CMT Association staff, particularly to Alayna Scott, for her exceptional support in producing this journal. Furthermore, our new CEO Tyler Wood and our committed board of directors provide critical support to this publication.
If you are interested in contributing to future issues of the JoTA, with its readership of approximately 5,000 CMT members across 137 countries, please feel free to contact me. We welcome innovative ideas that can further enrich the field of technical analysis.
Sincerely,
Sergio Santamaria, CMT, CFA
Bullish at the Bottom: A Statistical Study of the Bullish Percent Index
by Jonathan Burson, Ph.D.

About the Author | Jonathan Burson, Ph.D.
Jonathan Burson is an Assistant Professor of Finance in the Robert W. Plaster School of Business at Cedarville University. He holds a Ph.D. in Business (Finance) from Auburn University. He also has an M.B.A. in International Business, a Master’s in Military Operational Art & Science, and a B.A. in Computer Information Systems. He is the faculty advisor for the Student-Managed Investment Fund in which the students oversee a portfolio of stocks, actively engaging in investment and risk-management decisions. His scholarly interests span behavioral finance, investments, real estate, and technical analysis, and he has published multiple articles in those areas. Prior to joining the faculty at Cedarville, Jonathan spent 15 years in the United States Air Force.
Abstract
The Bullish Percent Index (BPI), a market breadth indicator that has existed since the mid-1950s, is extolled for its value as a technical analysis tool in several books, magazine articles, and internet sources. Despite its history and the availability of resources on it, there is a lack of scholarly analysis on the merits of the BPI. This study applies statistical analysis to the BPI to determine its timeliness, accuracy, and profitability as a technical analysis tool. The findings indicate that the BPI is more effective at identifying market troughs than peaks. Specifically, the results show that the index can be used successfully as a long entry indicator when the BPI indicator is below 30. However, at the high end of the scale, the BPI fails to provide timely, accurate, or profitable exit signals.
Introduction
The Bullish Percent Index (BPI) has been lauded as the “greatest market indicator,”1 “the absolute best market indicator,”2 and “the most important indicator.”3 These are high praises of a single indicator that require evidence to support such claims. There are several means to measure the quality of any technical analysis indicator, but as a general statement, every useful indicator provides timely, accurate, and profitable trading and investing signals.
This paper uses statistical analysis to evaluate the BPI in light of these criteria. The timeliness of a technical indicator is related to how quickly an indicator provides a usable signal at an opportune time to get into or out of a trade. Accuracy is associated with how closely an indicator gives a signal to the actual start and/or end of a move. Accuracy is also measured by how often the indicator gives a false signal or fails to give a signal when it should. Profitability is determined by the consistency with which an indicator generates an economic alpha above some baseline or expected return. If the BPI is truly timely, accurate, and profitable, it may live up to its reputation as a great market indicator.
The rest of this paper is divided into six major sections. First, I provide some background on the BPI along with a short literature review of relevant books, articles, and media. Next, I describe the data used in this paper, followed by the methodology used to analyze the data. The results of the analysis are explained in the next section. In the last two sections, I summarize my findings and give my conclusions along with some suggestions for future research.
Background and Literature Review
Background and History
The history of the Bullish Percent Index begins in the mid-1940s with Earnest Staby’s search for “a soulless barometer.”4 Although Staby was unable to develop such an indicator, he laid the groundwork for Abe W. Cohen of ChartCraft to eventually create the Bullish Percent Index in 1955.5 Earl Blumenthal and Mike Burke continued to improve the BPI in the 1970s and 1980s.6
As a market breadth indicator, the BPI is designed to gauge market risk levels on a scale of zero to one hundred. In general, when the BPI is low—below 30 is the traditional level—investors should take a long position because market risk is low, or oversold. In contrast, when the BPI is high—above 70—investors should look to exit long positions because the risk is high or overbought.7 While the actual employment of the BPI within a trading or investing system is more nuanced than what I describe here, this simplified explanation is sufficient for the purposes of this paper.
The BPI is derived from the Point and Figure (PnF) charts of all stocks in a particular index. While the BPI is based on the PnF charts of index components, the BPI itself can be plotted in either a PnF chart or a traditional time-based chart. Traditionally analysts have utilized the BPI PnF chart as part of their decision-making process, but more recently, practitioners have employed additional technical analysis indicators such as moving averages, oscillators, and candlestick patterns on time-based BPI charts. Both methods present advantages and disadvantages. Further details on how the BPI is generated can be found in Thomas Dorsey’s book, Point and Figure Charting: The Essential Application for Forecasting and Tracking Market Prices. Dorsey’s work represents the preeminent text on the background and uses of the BPI.
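To make the mechanics concrete, the minimal sketch below computes a single day's bullish percent reading from per-stock Point and Figure signal flags. The DataFrame layout and the `pnf_buy_signal` column are illustrative assumptions, not part of Dorsey's or ChartCraft's published implementation; determining each stock's PnF buy/sell status is assumed to have been done upstream.

```python
import pandas as pd

def bullish_percent(signals: pd.DataFrame) -> float:
    """Bullish Percent reading for one day.

    `signals` is assumed to hold one row per index constituent with a
    boolean column `pnf_buy_signal` that is True when the stock's Point
    and Figure chart is currently on a buy signal.
    """
    return 100.0 * signals["pnf_buy_signal"].mean()

# Hypothetical example: five constituents, three currently on PnF buy signals.
today = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD", "EEE"],
    "pnf_buy_signal": [True, True, False, True, False],
})
print(bullish_percent(today))  # 60.0
```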
Books
In addition to Point and Figure Charting, Dorsey discusses the BPI in Tom Dorsey’s Trading Tips: A Playbook for Stock Market Success. An entire chapter is dedicated to bullish percent in Jeremy du Plessis’s book The Definitive Guide to Point and Figure. Du Plessis details how to use the BPI in both PnF form and in line chart form.8 He also covers bullish percent usage in 21st Century Point and Figure. Dorsey’s and du Plessis’s works compose the foundational resources of the BPI.
Du Plessis and Dorsey use different indexes as the basis for BPI in their books. Du Plessis focuses on European stocks such as the Financial Times Stock Exchange (FTSE) All Share Index (ASX) in the United Kingdom and the Deutscher Aktienindex (DAX) in Germany as examples of BPI in his books. While Dorsey’s aforementioned praise of the BPI arose in reference to the New York Stock Exchange (NYSE) Bullish Percent Index (the BPNYA), he also uses the over-the-counter (OTC) BPI as one of his primary market indicators. In addition, Dorsey discusses several secondary market indicators including the Bullish Percent Indexes of the S&P 500 (known as the BPSPX), the Nasdaq composite (the BPNDX), and sector and international indexes.
Dorsey spends several pages discussing how to use various sector BPIs to rotate investment dollars. This rotation involves reallocating assets from areas of high risk to areas of low risk to improve a portfolio’s return-to-risk ratio and outperform the market. The basic concept is relatively simple: Even though the overall market may be rising, and the BPI with it, the BPI of some sectors may rise faster than others; investors can potentially enhance their returns by moving money out of sectors that have crossed into a high-risk area (above 70) and into sectors with much lower risk.
Although Dorsey predominately relies on the BPNYA, this paper analyzes the BPSPX. Analysis of the BPSPX offers several advantages over the BPNYA: 1) the S&P 500 is the most followed index in the world10, and 2) while the BPNYA boasts a larger number of stocks, the BPSPX includes over 150 widely owned and heavily traded stocks that are not in the NYSE. The smaller number of stocks and the inclusion of non-NYSE stocks means that, in general, the BPSPX moves marginally faster than the BPNYA. Despite these differences, the application and outcome of the two BPIs are very similar.
Other Media
In addition to Dorsey and du Plessis’s books, a few magazines and newspapers have published pieces about the BPI. In their articles, Wayne Thorp11 and Ron Walker12 build on Dorsey’s and du Plessis’s foundation and highlight additional ways to use the BPI. Additionally, articles published by Forbes and The Wall Street Journal have referenced various BPI indicators or included interviews with traders who use BPI to make trading decisions.
Numerous other websites and internet videos discuss bullish percent. Two of the best sites about BPI are ChartSchool from StockCharts14 and PnF University from Dorsey Wright and Associates.15 The series Market Misbehavior with David Keller, CMT 16 and the StockCharts TV videos on BPI represent some of the best video sources on the topic.
Motivation for Research
Despite the BPI’s lengthy history, there is a noticeable gap in academic research on the index’s merits. My survey of the available literature failed to identify any peer-reviewed papers on the BPI’s usefulness as a technical analysis tool. This study, therefore, aims to begin the process of filling this substantial knowledge gap via a simple statistical analysis of the BPI.
Data
The daily BPSPX levels for this study are drawn from StockCharts.com18. With this data, the BPSPX can be charted every trading day since January 11, 1996, except for September 27, 2023. The data may be charted as either a traditional timeline-based chart (line, bar, candlestick, etc.) or as a PnF chart. In this paper, I utilize all of the data available through October 31, 2023, for a total of 6,999 daily closing levels of the BPSPX.
The S&P 500 levels are from Yahoo! Finance.19 In order to calculate the annual, 6-month, 3-month, and 1-month returns from January 11, 1996, I collected the S&P 500 levels beginning on January 11, 1995. I also calculate daily returns (from previous close to current close) starting on January 11, 1996 to match the beginning of the BPSPX series. Table 1 provides summary statistics of both the daily S&P 500 returns and the BPSPX series.
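As a rough sketch of these return calculations, assuming the daily closing levels sit in a CSV file with a "Close" column; the file name, the column name, and the use of 21/63/126/252 trading-day windows to approximate the 1-, 3-, 6-, and 12-month horizons are all illustrative assumptions, not the author's exact procedure.

```python
import pandas as pd

# Daily S&P 500 closing levels indexed by date (file and column names assumed).
spx = pd.read_csv("sp500.csv", index_col="Date", parse_dates=True)["Close"]

daily_ret = spx.pct_change()  # previous close to current close

# Approximate 1-, 3-, 6-, and 12-month horizons with trading-day windows.
windows = {"1m": 21, "3m": 63, "6m": 126, "12m": 252}
trailing = {k: spx.pct_change(n) for k, n in windows.items()}           # backward-looking
forward = {k: spx.pct_change(n).shift(-n) for k, n in windows.items()}  # forward-looking, aligned to the signal date
```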
Table 1: Summary Statistics of BPSPX and S&P 500 Daily Returns from January 1996 to October 2023
In Table 1, both the median and the mean of the BPSPX are above 60. This indicates that between January 1996 and October 2023 the market was bullish a majority of the time. Likewise, daily S&P 500 returns are slightly positive; the mean annualized daily return is around 9%. This does not include dividends.
The distribution of the daily BPSPX levels is displayed in Figure 1 using a bin size of 1. The mean and median from Table 1 are marked in the figure with vertical lines, as are the traditional risk levels of 30 and 70 and more extreme risk levels of 20 and 80.
Figure 1: Distribution of BPSPX Observations
The vast majority of observations (approximately 74%) are between 48 and 80. There is also a cluster of observations from 82 to 84 that form an additional small peak. The bin with the largest number of observations is the 74 to 75 bin with 239 observations.
The distribution in Figure 1 is distinctly left-skewed which results in some low statistical power issues due to limited observations on the lower end of the scale. Additionally, the fact that there are significantly fewer observations below 30 than there are above 70—this is also true for the extreme risk levels of 20 and 80—plays an important role in this study.
Methodology
This research paper uses statistical analysis to evaluate the BPSPX in terms of timeliness, accuracy, and profitability. I assess timeliness by examining how long the BPSPX spends above or below certain levels and how quickly the BPSPX gives a usable trading or investing signal at a trend change. For accuracy, I compare peaks and troughs in the BPSPX PnF to those in the S&P 500. To evaluate profitability, I analyze the probability of positive returns and the average level of returns in the S&P 500 around BPSPX levels.
Timeliness
Consecutive Days within a Specified Range
Table 1 and Figure 1 both indicate that the BPSPX spends more time above 50 than below 50. Neither the table nor the figure, however, reveals the average length of time the BPSPX spends above or below certain levels. Using increments of 10, I measure the total number of consecutive days the BPSPX spends below 10, 20, 30, 40, and 50. I apply the same analysis for BPSPX levels above 50, 60, 70, 80, and 90.
After measuring consecutive days, I compare the results at low levels to the results at high levels. For example, I utilize the traditional levels of 30 as low (or oversold) and compare the average time to the average time above 70 as high (or overbought). I also compare 40 to 60 and 20 to 80. By these comparisons, I highlight the timeliness of BPSPX signals.
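A minimal sketch of this consecutive-day measurement, assuming a daily BPSPX series indexed by date; the boundary convention (strictly below 30 versus at-or-above 70) is an assumption that should be matched to the author's definitions.

```python
import pandas as pd

def run_lengths(series: pd.Series, condition) -> pd.Series:
    """Lengths of consecutive runs of days on which `condition(series)` is True."""
    mask = condition(series)
    # Label each contiguous run: the id increments every time the mask flips.
    run_id = (mask != mask.shift()).cumsum()
    lengths = mask.groupby(run_id).sum()
    return lengths[lengths > 0]  # keep only the runs where the condition held

# Hypothetical usage with a daily BPSPX series:
# below_30 = run_lengths(bpspx, lambda s: s < 30)
# above_70 = run_lengths(bpspx, lambda s: s >= 70)
# print(below_30.describe(), above_70.describe())
```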
Peak & Trough Timing
In the second test of timeliness, I evaluate the difference in the number of days between a usable investment signal from the BPSPX, either long or short, and the local peaks (troughs) in the S&P 500. Dorsey notes that the BPSPX is designed to indicate when investment risk is high or low, not necessarily to pinpoint peaks and troughs in the market.20 However, the ability of the BPSPX to allow an analyst to recognize local bottoms and tops in the market serves as a proxy for how well the BPSPX serves as an indicator of risk.
Local peaks (troughs) in the S&P 500 are identified after moves of at least 5, 10, 15, and 20 percent. After determining the S&P 500 peaks (troughs), I match them to the closest peaks (troughs) on the BPSPX PnF chart, with a peak being at the top of an X column and a trough being at the bottom of an O column. Due to the nature of the PnF chart, a peak (trough) is not evident until there is a 3-box reversal into the next column. I label the date that a 3-box reversal is realized as the Usable Date.
After all the peaks (troughs) are identified, I measure the absolute difference between the Usable Date and the S&P 500 peak (trough). Although an actual BPSPX peak (trough) day may closely match an actual peak (trough) of the S&P 500, the number of days to the Usable Date can vary greatly. I suspect this delay makes the Usable Date an ineffective market timing tool. Figure 2 provides a visual representation of how the peaks (troughs) and the 3-box reversal dates are identified.
Figure 2: Peaks and Troughs in the BPSPX and the S&P 500 during 2022
The BPSPX PnF chart for 2022 on the left side of this figure is labeled with the closest peak and trough dates corresponding to local peaks and troughs in the S&P 500 after moves of at least 5%. The line chart on the right side has the Usable Date labeled on the BPSPX dashed line. The crests and canyons of both the BPSPX and the S&P 500 are marked with small circles and diamonds, respectively. The vertical green dotted line marks the Usable Date so it can be seen clearly where the date falls on both the BPSPX and S&P 500.
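For readers who wish to reproduce the swing identification, the following simplified zigzag-style sketch marks alternating local peaks and troughs separated by moves of at least a chosen threshold (5%, 10%, 15%, or 20%). It is an approximation under stated assumptions, not the author's exact selection rule, and the subsequent matching to BPSPX PnF columns and 3-box-reversal Usable Dates is left out.

```python
import pandas as pd

def swing_points(prices: pd.Series, threshold: float = 0.05) -> pd.DataFrame:
    """Alternating local peaks and troughs separated by moves of at least `threshold`."""
    pivots = []                              # (date, level, "peak" or "trough")
    last_date, last_level = prices.index[0], prices.iloc[0]
    direction = 0                            # +1 while tracking a candidate peak, -1 a candidate trough
    for date, price in prices.items():
        change = price / last_level - 1
        if direction >= 0 and change <= -threshold:
            pivots.append((last_date, last_level, "peak"))
            direction, last_date, last_level = -1, date, price
        elif direction <= 0 and change >= threshold:
            pivots.append((last_date, last_level, "trough"))
            direction, last_date, last_level = 1, date, price
        elif (direction >= 0 and price > last_level) or (direction <= 0 and price < last_level):
            last_date, last_level = date, price   # extend the current candidate extreme
    return pd.DataFrame(pivots, columns=["date", "level", "kind"])

# Hypothetical usage: swing_points(spx, threshold=0.05)
```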
Accuracy
To determine the accuracy of the BPSPX, I further exploit the peak/trough analysis. First, I repeat the procedure of measuring the absolute difference in days as I did for timeliness. However, instead of utilizing the BPSPX Usable Date, the statistic used to determine accuracy is calculated as the time between the actual PnF peak (trough) in the BPSPX and the actual S&P 500 peak (trough).
After examining the distance between peaks (troughs) in the BPSPX and peaks (troughs) in the S&P 500, I evaluate BPSPX’s failure to predict a peak (trough) in the S&P 500. This measurement is calculated by identifying the total number of instances where the S&P peaked (troughed) while the BPSPX failed to peak (trough). Finally, I quantify how frequently the BPSPX PnF incorrectly predicts a peak (trough) in the S&P 500. This calculation derives from the number of times the BPSPX PnF had a peak (trough) that did not correspond to an S&P peak (trough).
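A rough sketch of these counts, assuming the peak and trough dates have already been extracted from both series. The tolerance used to decide whether two pivots correspond is an illustrative assumption; the author matches each S&P 500 pivot to the closest BPSPX pivot rather than using a fixed window.

```python
import pandas as pd

def pivot_accuracy(spx_pivots: pd.DatetimeIndex, bp_pivots: pd.DatetimeIndex,
                   tolerance_days: int = 10):
    """Count matched, failed, and false pivot signals.

    A pivot is treated as matched if the other series has a pivot within
    `tolerance_days` calendar days; the tolerance is an assumption, not
    the author's published rule.
    """
    def has_match(date, others):
        return bool(len(others)) and min(abs((date - others).days)) <= tolerance_days

    matched = sum(has_match(d, bp_pivots) for d in spx_pivots)
    failed = len(spx_pivots) - matched                                    # S&P pivots the BPSPX missed
    false_signals = sum(not has_match(d, spx_pivots) for d in bp_pivots)  # BPSPX pivots with no S&P counterpart
    return matched, failed, false_signals
```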
Profitability
For profitability, I calculate both the forward-looking and the trailing (backward-looking) 1-, 3-, 6-, and 12-month returns of the S&P 500 every day. I then find the average forward and trailing return of each timeframe for rolling BPSPX bin levels. For example, using a bin size of 10, I get the average return from 0 to less than 10. Then I calculate the average return from 1 to less than 11. This process is repeated through the last bin of 90 to less than 100. Within each bin, I also count the total number of observations and the number of observations with positive returns. Using these values, I calculate the percentage of up observations per bin. In addition to bins of size 10, I also use bin sizes of 5, 15, and 20. Table 2 shows the summary statistics for 1-, 3-, 6-, and 12-month trailing return starting January 11, 1996 and ending October 31, 2023. Forward return summary statistics are similar to the trailing return summary statistics with one exception: There are fewer overall observations in forward return summary statistics because the 12-month forward returns are not known after October 31, 2022, 6-month forward returns are unknown after April 28, 2023, 3-month returns after July 31, 2023, and 1-month after September 29, 2023. This table contains the baseline expectations for forward and trailing returns used to measure the profitability of the BPSPX.
Table 2: Summary Statistics of S&P 500 Trailing Returns from January 1996 to October 2023
For the BPSPX to give profitable signals, the average backward-looking returns are expected to be significantly negative (or at least well below the averages in Table 2) when the BPSPX is low (for example, below 30). Likewise, the percentage of positive returns (or percentage up) for backward- looking returns on low BPSPX days should be low (well below 50). The average forward-looking returns on days with low BPSPX levels should be significantly positive, and greater than the results in Table 2. Correspondingly, the percentage up of forward-looking returns are expected to be high (well above 50). In contrast, on days with high BPSPX levels (for example, above 70), the expectations for a profitable signal are reversed. In other words, the average backward-looking returns are expected to be significantly positive, and the backward-looking percentage up should be high. Likewise, the average forward-looking returns should be significantly negative, and the forward-looking percentage up is expected to be low.
The results of the profitability statistics are reported in both table and chart form. To conserve space, I only report the results of non-overlapping bins in the tables. For the charts, rolling bins are utilized for several reasons. First, the number of observations in the tails is limited, particularly at the low end; the larger bin size of the rolling bins means there are more observations per bin compared to using consecutive bins of size one, mitigating some of the limitation of low tail numbers. Second, like moving averages, rolling bins remove some of the noise associated with small-size, non-overlapping bins. The tails are notably noisy, but because the tails are of particular interest in this study, some smoothing helps identify trends in those crucial areas.
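A minimal sketch of the rolling-bin calculation described above, assuming aligned daily series of BPSPX levels and S&P 500 returns; the variable names are illustrative.

```python
import pandas as pd

def rolling_bin_stats(bpspx: pd.Series, returns: pd.Series, bin_size: int = 10) -> pd.DataFrame:
    """Average return and percentage of positive observations for rolling BPSPX bins."""
    rows = []
    for lo in range(0, 101 - bin_size):      # bins 0-<10, 1-<11, ..., 90-<100 for bin_size=10
        in_bin = returns[(bpspx >= lo) & (bpspx < lo + bin_size)].dropna()
        if len(in_bin) == 0:
            continue
        rows.append({
            "bin_low": lo,
            "bin_high": lo + bin_size,
            "n_obs": len(in_bin),
            "avg_return": in_bin.mean(),
            "pct_up": 100.0 * (in_bin > 0).mean(),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: rolling_bin_stats(bpspx, trailing["12m"], bin_size=10)
```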
Results
Timeliness
Consecutive Days within a Specified Range
The results of the first timeliness analysis are displayed in Table 3. The table shows the number of observations where the BPSPX spent any length of time within specified ranges. The table also displays the associated standard summary statistics for each range. It should be noted that at the extreme tails, the number of observations is below the acceptable threshold for statistical power. However, useful information can still be gleaned from the tails despite this drawback.
Table 3: Number of Days Within Specified Range Levels
While the number of observations above 50 and below 50 are nearly identical (96 and 97, respectively), the average number of consecutive days above 50 is more than triple the average number of consecutive days below 50. There are nearly twice the number of observations above 60 as there are below 40 (84 and 48, respectively), and the average consecutive days above 60 is more than triple below 40. This discrepancy also holds true when comparing above 80 to below 20. The results of comparing 70 to 30 are even stronger; the average number of consecutive days above 70 is over four times larger than that below 30.
Notice that when the BPSPX gets below 30, it only spends about eight consecutive trading days (around a week and a half) on average before rising back above 30. The longest consecutive time period it has spent below 30 is 23 trading days (about one month), which took place from the end of September 2008 to the end of October 2008, during the depths of the Financial Crisis. In contrast, the BPSPX spends over a month and a half (about 33 consecutive trading days) on average at or above 70; this includes seven observations where the BPSPX hit 70 exactly for one day. After five of those single-day observations, the BPSPX dropped back below 70 to between 67 and 69 for one day to one week before continuing back above 70 for a longer time span. Additionally, the far-right column of Table 3 shows that the BPSPX has spent more than 100 consecutive trading days over 70 on seven separate occasions. The longest of these occasions was eleven months (234 trading days) from the end of May 2003 to the end of April 2004 as the market recovered from the Dotcom bubble burst and subsequent bear market of 2001 and 2002.
Another finding from Table 3 comes from the standard deviation of the ranges. The standard deviation of 6.59 in the below-30 range means that 95% of the time the BPSPX spends from one day to one month (21 trading days) below 30 before rising back above 30. This is a relatively short period of time within which a local bottom in the stock market can be identified. Contrast that statistic with the standard deviation of 45.28 in above 70 range. Two standard deviations from the mean in this range indicate the BPSPX spends from one day to nearly six months (124 trading days) above 70 before falling back below 70. Six months is a long period of time to identify a local top. Even one standard deviation above the mean is 79 trading days, or nearly four months. Thus, simply comparing the standard summary statistics of low BPSPX levels to those of high BPSPX levels shows that low BPSPX levels are more timely in identifying local stock market troughs than high BPSPX levels are at identifying local market peaks.
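For reference, those intervals follow directly from the reported statistics: mean + 2 × SD is roughly 8 + 2(6.59) ≈ 21 trading days for the below-30 range, versus roughly 33 + 2(45.28) ≈ 124 trading days for the above-70 range, using the average durations of about 8 and 33 days cited above.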
Based on the information in Table 3, the BPSPX gives fairly timely signals below 30 for a long entry into the market, or to exit a short position. However, the timeliness as a long exit signal, or short entry signal, above 70 is questionable. When the BPSPX falls below 30, an analyst can have high confidence a market bottom will happen within a few weeks. Even if it is not the ultimate trough of a bear market, the market is likely to move higher before falling to a lower low; the analyst has an opportunity to take a long position for a short-term gain. On the other hand, when the BPSPX goes above 70, it could stay above 70 for a few months while the market continues to grind higher. Thus, using the BPSPX as a measure of timeliness to determine the end of a move up appears to be unreliable.
Peak & Trough Timing
The second measure of BPI timeliness comes from the difference in the time the S&P 500 peaks (troughs) and the time the BPSPX gives a usable investment signal. Table 4 presents the statistics for the absolute difference in the S&P 500 peak (trough) to the BPSPX peak (trough) and to the Usable Date of the peak (trough). The smallest observation may be negative if the BPSPX peaks (troughs) before the S&P 500, but only the absolute values of the negative numbers are used to calculate the statistical values.
Table 4: Absolute Difference in Days from Usable BPSPX to S&P 500 Peak (Trough)

The “Mean” column of Table 4 shows that it takes about a week and a half (a little over eight trading days) on average to identify a local trough as usable, once the BPSPX shifts over one column from O’s to X’s. Panel B shows that when the BPSPX is below 30, the usable date occurs even sooner, on average within one week (less than five trading days), making the BPSPX trough signal below 30 quite timely for medium to long-term traders and investors. The results in Table 4 confirm the timeliness results of Table 3 that the BPSPX is a timely indicator at the low end.
However, when the BPSPX is high, its timeliness is less certain. On average, the BPSPX peak signal is usable more than two weeks away (over ten trading days) from the S&P 500 peak. When the BPSPX is above 70, that time increases to almost 12 trading days. Because the range of Usable Peaks includes negative days nearly three months in advance, a 12 trading-day difference means the BPSPX gives a usable signal either 12 trading days early or 12 trading days late. For long-term investing, that signal may be timely enough, but it certainly is less timely than when the BPSPX is used to identify troughs.
Accuracy
The results in Table 3 and Table 4 not only speak to the timeliness of the BPSPX as an indicator, but also to its accuracy. The fact that the time the BPSPX spends below 30 is shorter and the standard deviation is smaller than when it is above 70 provides some evidence of the BPSPX being more accurate on the low end of the scale than on the high end. Furthermore, the comparison of the actual BPSPX peaks (troughs) to the S&P 500 peaks (troughs) provides even more documentation of accuracy at the low end versus the high end.
Peak & Trough Timing
In Table 5, the BPSPX actual local trough is about two days from the S&P 500 local trough on average. That number falls to just over one day when looking only at observations where the BPSPX is below 30. Furthermore, over 90% of the BPSPX trough observations are within three days of the actual S&P 500 trough. That number increases to over 93% using only BPSPX observations below 30. In other words, the BPSPX and S&P 500 bottom at nearly the same time, regardless of whether or not the BPSPX is below 30. That means the BPSPX is highly accurate in identifying troughs.
Table 5: Absolute Difference in Days from Actual BPSPX to S&P 500 Peak (Trough)
On the other hand, the BPSPX actual peaks are on average about a week and a half (a little over eight trading days) from S&P 500 peaks. What is not shown in Table 5 is that over 30% of the observed BPSPX peaks occur before the S&P 500 peaks (compared to less than 10% of BPSPX troughs). About 25% of the BPSPX peaks happen five days or more before the S&P 500 peaks (compared to zero troughs). In other words, the BPSPX often peaks more than a week in advance of the S&P 500. While this may be advantageous to a trader wanting to use the BPSPX as a long exit signal, it highlights just how inaccurate the BPSPX is at identifying peaks.
Failed and Incorrect Peak & Trough Predictions
While the first measure of accuracy evaluated how close the BPSPX actual peaks (troughs) were to the S&P 500 peaks (troughs), the second and third measures of accuracy judge the number of failed and false predictions of peaks and troughs. The S&P 500 had 212 moves of at least five percent from 1996 to 2023, 106 upward moves and 106 downward moves. Those 212 moves include 13 moves of 20% or more (seven up and six down), another 16 of 15% or more (eight up and eight down), and 19 of 10% or more (nine up and ten down). The BPSPX had 174 peaks (troughs) that matched the S&P 500 peaks (troughs), but failed to provide a signal for 38 of the S&P 500 peaks (troughs). That means that 82% of the peaks (troughs) were correctly identified by the BPSPX. However, there were an additional 88 times when the BPSPX made a peak or trough, and there was no associated S&P 500 peak or trough. Thus, the BPSPX forecasted about 40% more peaks (troughs) from 1996 to 2023 than actually happened in the S&P 500.
Boiling it down to the 174 peaks (troughs) that match, Table 6 displays the count of S&P peaks (troughs) and BPSPX peaks (troughs) within each bin of size 20. When the BPSPX is below 20, it correctly indicates every S&P 500 peak (trough). But on the other end of the spectrum, the BPSPX forecasts 60% more peaks than what appeared in the S&P 500.
Despite the limited number of observations below 20, the evidence supports better accuracy at lower BPSPX levels than at higher levels.
Table 6: Count of BPSPX and S&P 500 Peaks (Troughs) by BPSPX Bins of Size 20
Profitability
The final analysis looks at the profitability of the BPSPX at various levels. Table 7 shows the trailing returns and percentage of positive (% up) observations within each BPSPX bin of size 10 with the exception of the tails, where I use a bin size of 20. I use a larger bin size in the tails because there are so few observations in the extreme tails (below 10 and above 90). While I combine the bottom two bins and top two bins for Table 7, I do not combine them in the charts in Figure 3, which exhibit noticeable noise in the extreme tails. The Appendix contains tables and figures using a bin size of 20, which smooths out the noise in the tails.
Table 7: Trailing S&P 500 Returns and Percentage Up using BPSPX Bin Size of 10
The first take-away from Table 7 is that the trailing returns when the BPSPX is low (below 40) are significantly negative for all time frames from one month to 12 months, as expected. These results are well below the mean and median returns found in Table 2. Furthermore, even in the 40 to 50 bin of Table 7, the trailing returns are significantly below the mean and median of Table 2. Additionally, a significant portion of the downward move appears to take place one to three months prior to the BPSPX reaching those lower levels. For example, below a BPSPX of 20, the average trailing return at 12 months is -22.77%, but over two-thirds of that move occurs in the last one-month period, and over 80% happens in the last three-month period.
Notice the values in the percentage of up observation columns when the BPSPX is low. These four far right columns of Table 7 show that when the BPSPX is below 20, all of the trailing returns are negative, except for the 12-month period. But even in the 12-month period, nearly 90% of the trailing returns are negative when the BPSPX is below 20. Similarly, below a BPSPX of 40, the percentage of up observations is low, as expected, except for at 12 months in the 30 to 40 bin, where the percentage of up observations is nearly 50%, although even that value is still well below the average in Table 2.
On the higher end of the BPSPX bins in Table 7, when the BPSPX is above 70, trailing returns are positive and significantly higher than the mean and median found in Table 2. Likewise, trailing percentage up is significantly higher than average when BPSPX is above 70. However, unlike in the lower tail where a majority of the move happens in the final three months, only a small portion of the upward movement happens in the last one- to three-month period.
There is also a noticeable imbalance when comparing the average S&P 500 returns at higher BPSPX levels to returns at lower BPSPX levels. The lower 50% of the BPSPX bins have significantly lower trailing returns than average while only the upper 30% provides significantly higher trailing returns than in Table 2. This is not surprising considering the distribution of BPSPX observations in Figure 1. However, what is somewhat unexpected is that the difference in the trailing returns for the upper 30% of the BPSPX bins compared to the average is not as great as the difference in the lower 40%.
Figure 3: Trailing S&P 500 Returns and Percentage Up using BPSPX Bin Size of 10
Figure 3 provides a visual depiction of Table 7. The trailing average return and the trailing percentage of up (positive return) observations for S&P 500 are plotted by rolling BPSPX bin levels with a bin size of 10. There are four graphs: (1) trailing 1-month return, (2) trailing 3-month return, (3) trailing 6-month return, and (4) trailing 12-month (1-year) return. The left-hand side of each chart is the scale for the average return in each bin. On the right-hand side is the scale for the percentage of positive return observations within each bin.
All the charts in Figure 3 show a negative return for BPSPX levels below about 40. There is a notable downward turn of both the average return curve and the percentage up curve above BPSPX levels of 90. This is most likely due to the limited number of observations of BPSPX above 89—there are only 15. Because of the limited number of observations, any observations with low returns have a substantial influence on the average in those bins. What is obvious from the charts is that in all time frames, the curves of both the average return and the percentage up start in the lower right and end in the upper left, which was expected.
While the trailing returns are as expected, the forward-looking returns are not quite what I hypothesized. Table 8 shows the forward-looking returns and percentage up using a bin size of 10. Again, in the table I combine the lowest two bins and highest two bins because of the lack of observations, but do not do so in the figure.
Table 8: Forward S&P 500 Returns and Percentage Up using BPSPX Bin Size of 10
The message communicated in Table 8 is different from that of Table 7. First, all the forward returns are positive, which is not what I expected. When the BPSPX is above 70, the forward returns and percentage ups of the S&P 500 for all time periods are not much different from the averages in Table 2. The one exception is the 12-month percentage up in the 80 to 100 bin. That value is significantly higher than the 12-month percentage up in Table 2, and is completely contrary to what I hypothesized.
However, when BPSPX is low (below 30), the S&P 500 forward returns and the percentage ups are much higher than the averages of Table 2. Here again the 12-month value is the exception, because it is similar to the Table 2 averages. Below 20, the results are particularly strong; the 1-month return is more than eight times bigger than the average, the 3-month return is four and a half times larger, the 6-month return is over three times bigger, and the 12-month return is almost three times as big.
The visual depiction of Table 8 is shown in Figure 4. The setup of Figure 4 is the same as in Figure 3, but it uses forward S&P 500 returns instead of trailing returns. The noise in the tails is readily obvious. With the exception of the noise in the tails due to low observation numbers, all of the average returns in the charts of Figure 4 start in the upper left and move to the lower right. However, the move downward happens quickly below a BPSPX level of about 40 and levels off above 40. In fact, there is even a move up in the average return around a BPSPX of 70 in some of the charts.
Figure 4: Forward S&P 500 Returns and Percentage Up using BPSPX Bin Size of 10
Another point to note from Figure 4 is the smile in the percentage up curve. This is most pronounced in the 12-month chart, but can be seen in the other charts, too. This may be related to momentum; as the overall market moves up, it continues to move up on the back of momentum. As the BPSPX moves higher above 50, the average forward return of the S&P 500 changes very little, but the number of positive return observations increases, creating the smile effect.
Application
The analysis of the BPSPX’s timeliness, accuracy, and profitability indicates that while it effectively identifies highly profitable long entry points at low levels, it is less reliable for making trading or investing decisions at high levels. When the BPSPX falls below 30, it provides timely, accurate, and profitable signals for technical analysts. However, at higher levels, the BPSPX proves neither timely nor accurate, and its signals are not profitable. In simpler terms, the BPSPX highlights a favorable risk/reward tradeoff at low levels—particularly below 20—but offers less clarity about the tradeoff at higher levels.
These findings seem to support Dorsey’s notion that when the Bullish Percent Index is low, investment risk in the market is low and investors should pile into the market. While the findings do not show strong evidence of being able to use the BPI as an exit indicator when it is high, they do not negate Dorsey’s claim that the risk of investing is high when the BPI is high.
In fact, this research gives every indication that Dorsey is correct. When the BPSPX is above 70, there is no abnormal positive return. Furthermore, there is ambiguity about whether the market will continue to rise or take a dramatic downturn. If the market does drop, there is uncertainty about whether that drop will happen in two days or two months. Since there is so much doubt about the future of the overall market as the BPI continues to rise higher, Dorsey’s sector rotation concept makes excellent sense; sector BPIs should be used to identify lower risk sectors to move investments into.
An additional application point from this research is that no single technical indicator should be used in isolation, and the BPI is no exception. There is a plethora of research on moving averages, candlestick patterns, and other technical analysis tools. These should be used in conjunction with the BPI, particularly when trying to isolate market peaks. But when the BPI gets really low (below 20), it often gives the investor an earlier entry point than could otherwise be found with other indicators.
Conclusion
While the Bullish Percent Index may still be a strong indicator, the research herein finds it is most useful at the bottom. When the BPI is low, it gives far more timely signals that are more accurate and profitable than when it is high. While I used simple statistical analysis tools, the findings overwhelmingly support the idea that investors should be bullish at the bottom—when the BPI is below 30. At that level, the BPI is timely, accurate, and profitable.
This paper starts to fill the void of academic research on the Bullish Percent Index. However, there is much research that remains to be conducted. Similar to other technical analysis indicators, the performance of the BPI within a trading or investing strategy can be improved by using chart patterns, bullish and bearish divergences, trendlines, and many other techniques. All of these ideas are areas for future research.
Furthermore, this paper only covered the BPSPX from 1996 to 2023. Opportunities exist to study the Bullish Percent Index using longer datasets, which may provide more robust results. Research into other BPIs such as the BPNYA, the BPNDX, various sector BPIs, and BPIs on foreign indexes could yield some significant findings that add to the body of literature on the BPI.
References
“Charting.” PnF University. Dorsey Wright & Associates, LLC. https://oxlive.dorseywright.com/university/index.html.
ChartSchool. “Bullish Percent Index.” StockCharts, 2023. https://school.stockcharts.com/doku.php?id=market_indicators:bullish_percent_index.
Dorsey, Thomas J. Point and Figure Charting: The Essential Application for Forecasting and Tracking Market Prices, 3rd Edition. John Wiley & Sons, 2007.
Dorsey, Thomas J. Point and Figure Charting: The Essential Application for Forecasting and Tracking Market Prices, 4th Edition. John Wiley & Sons, 2013.
Dorsey, Tom. “NYSE BULLISH PERCENT INDEX the most important index created by Earnest Staby and later A. W. Cohen.” LinkedIn, March 28, 2018. https://www.linkedin.com/pulse/nyse-bullish-percent-index-most-important-created-earnest-tom-dorsey/.
du Plessis, Jeremy. The Definitive Guide to Point and Figure: A Comprehensive Guide to the Theory and Practical Use of the Point and Figure Charting Method. Harriman House Limited, 2012.
du Plessis, Jeremy. 21st Century Point and Figure: New and Advanced Techniques for Using Point and Figure Charts. Harriman House Limited, 2015.
Keller, David. “Market Misbehavior with David Keller, CMT.” YouTube.com, various dates. https://www.youtube.com/@DKellerCMT/search?query=bullish%20percent.
Kilgore, Tomi. “Fund Manager’s Formula for Finding a Winner: Look for a Winner.” The Wall Street Journal, October 6, 2014. https://www.wsj.com/articles/fund-managers-formula-for-finding-a-winner-look-for-a-winner-1412636338.
Navin, John. “The Stock Market Is Topping. Here’s The Evidence.” Forbes, June 14, 2023. https://www.forbes.com/sites/johnnavin/2023/06/14/the-stock-market-is-topping-heres-the-evidence.
“StockCharts TV.” YouTube.com, various dates. https://www.youtube.com/@StockChartsTV/search?query=bullish%20percent.
“S&P 500 Bullish Percent Index.” StockCharts.com, 2023. https://stockcharts.com/h-sc/ui?s=$BPSPX.
“S&P 500 (^GSPC).” Yahoo!Finance, 2023. https://finance.yahoo.com/quote/^GSPC/history?p=^GSPC.
Thorp, Wayne A. “Bullish Percent Index.” American Association of Individual Investors, March 2015. https://www.aaii.com/journal/article/bullish-percent-index.
“U.S. Market Cap.” S&P Dow Jones Indices, 2023. https://www.spglobal.com/spdji/en/index-family/equity/us-equity/us-market-cap/.
Walker, Ron. “Timing Techniques Using The Bullish Percent Index.” Technical Analysis of Stocks and Commodities, April 23, 2010. https://technical.traders.com/tradersonline/display.asp?art=5310.
Appendix
This appendix contains additional profitability tables and figures using a bin size of 20.
Table A1: Trailing S&P 500 Returns and Percentage Up using BPSPX Bin Size of 20

Figure A1: Trailing S&P 500 Returns and Percentage Up using BPSPX Bin Size of 20
Table A2: Forward S&P 500 Returns and Percentage Up using BPSPX Bin Size of 20
Figure A2: Forward S&P 500 Returns and Percentage Up using BPSPX Bin Size of 20
The Ripple Effect of Daily New Lows
by Ralph Vince

About the Author | Ralph Vince
Ralph Vince currently serves as CEO and Founder of Exsuperatus LLC, which creates “performance indexes” for passive institutional programs and ETF providers. He has worked for fund managers, sovereign wealth funds and family offices around the world since the early 1980s. He was Larry Williams’ programmer and trader during Williams’ legendary 1987 championship run, in which Williams turned $10,000 into over $1,100,000 in a 12-month competition. He served on the Board of Directors of the CMT Association when it was the Market Technicians Association of New York. He has written numerous books published by John Wiley & Sons, as well as professional papers on financial and general mathematics and statistics. His work was featured in a chapter of the Edwards & Magee classic, Technical Analysis of Stock Trends, regarded as the bible of Technical Analysis, and is one of the very few additions since the original publication in 1948.
He was one of the early pioneers of 3-dimensional CAD systems, natural language processing systems, and was an integral part of the team that created the original Reinberger Hall of Earth & Planetary Exploration at The Cleveland Museum of Natural History.
He has worked as a Portfolio Manager at the Abu Dhabi Investment Authority and is regarded as a recognized authority on position sizing in trading. His allocation algorithms have been licensed by Dow Jones Indexes. He has been invited to give colloquiums on his ideas at such esteemed institutions as the Massachusetts Institute of Technology and The London Chamber of Commerce, and has delivered papers at the World Finance Conference, to name a few. Ralph Vince’s books and peer-reviewed academic papers on mathematics – often co-authored with the top minds in the field of mathematics – appear in such publications as Mathematics, as well as The Journal of Investment Strategies and The Far East Journal of Theoretical Statistics. His academic writings are among the top 1/2 of 1% of the most read research across all academic disciplines.
Among his most notable and heterodox academic contributions are:
– The notion that “expectation,” the probability-weighted mean outcome, accepted as “expectation” since the 18th century, is asymptotic to the “actual” expectation of median-sorted outcomes.
– Functions of geometric growth optimization can be “inverted,” and thus used as a tool to diminish malevolent geometric growth functions (e.g., national aggregate debt, growth of infected cells in an organism, or infected individuals in a population) entirely mathematically, and therefore, in politically-agnostic manners.
…among other novel ideas.
In recent years, in addition to continuing his academic writings, he has published fiction from his home in Southern France (La Théologie de la Luxure), as well as textbooks to help refugees and new arrivals to France learn the language.
by Larry Williams

About the Author | Larry Williams
Larry Williams is an author and trader. His career began in 1962, and he has written several best-selling market books. Larry won the 1987 World Cup Championship of Futures Trading, taking $10,000 to over $1,100,000 (11,300%) in a 12-month competition with real money. It’s no wonder Jim Cramer and a host of others refer to him as “Legendary Larry.”
His greatest accomplishment was teaming with Glen King Parker and Bob Prechter in a U.S. Supreme Court battle that ended the forced registration of publishers with the SEC and CFTC.
He has created numerous market indicators including Williams %R, Ultimate Oscillator, COT indices, accumulation/distribution indicators, cycle forecasts, market sentiment and value measurements for commodity prices.
Abstract
Breadth data is ubiquitous in the study of the stock market. Daily New Lows have a predictive reliability as robust as any breadth data element yet remain the most overlooked of all daily breadth metrics, relegated to being the “wallflower” of daily breadth data. Typically, market analysts heavily examine absolute and relative advances/declines and various derivative indicators like the advance-decline line and McClellan Oscillator when assessing daily breadth. Analysts also scrutinize volume on advancing / declining stocks, often combined with advance / decline data in indicators like TRIN.
Herein, we examine the seemingly overlooked predictive reliability of daily New Lows for signaling the onset and progression of bear markets, the final capitulation phases of declines, and shorter-term interim tops.
Through extensive analysis, we make the case that this reliable yet underutilized data element warrants daily examination by serious market participants and inclusion in quantitative models – not only for the immediate ramifications of significant readings in this mostly overlooked data point, but also for the persistence of those ramifications into the following weeks and months.
The paper aims to demonstrate why daily New Lows merit very serious consideration among the critical pantheon of market breadth indicators.
Introduction
We will be focusing solely on daily New Lows data of NYSE stocks with respect to price. With any data set, derivative calculations can be formed, examined, and conclusions drawn. For example, moving averages, data differences over various time periods, etc., are all worthy of examination.
We will keep the study of these derivative calculations of daily New Lows outside the scope of our examination herein, save for the following: because we are dealing with a positive, integer value for the number of stocks making new 52-week lows, the number must be put into the context of the universe of stocks from which the daily number of New Lows is drawn. Thus, rather than examine the raw, positive integer value representing the number of daily New Lows of NYSE stocks (using the NYSE as a proxy for “stock markets” in general), we will examine these as a ratio: the percentage of issues traded making daily New Lows on any given day.
We must impose a second restriction to make our examination not only exhaustive, but formally legitimate. One problem with using NYSE daily New Lows data is the reporting method change that occurred on January 2, 1980. Beginning at this time, New Lows were calculated relative to the lowest price seen over a rolling 52-week lookback. Prior to this time, daily New Lows were calculated for the calendar year, with certain “fudge factor” adjustments imposed so that, in the first few trading weeks of the year, the previous calendar year was included so as to avoid all stocks making daily New Lows on the first trading day of the year.
Returning to January 2, 1980, we find the total number of issues traded on the NYSE was 1590, and throughout the subsequent months, this number ranged from the high 1400s to the low 1600s. Contrast this to today (late 2023), when the number of issues traded daily on the NYSE is more than double that figure. This is the reason for examining our raw data as a percentage of total issues traded. While we wish to use raw, “unprocessed” data, this adjustment is necessary given the nature of the data.
Thus, we will be examining the percentage of NYSE stocks making daily, new 52-week lows since January 2, 1980.
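A minimal sketch of that normalization, assuming a daily breadth file with "new_lows" and "issues_traded" columns; the file name and column names are illustrative, not a vendor schema.

```python
import pandas as pd

breadth = pd.read_csv("nyse_breadth.csv", index_col="Date", parse_dates=True)

# Restrict to the post-reporting-change era and express New Lows as a
# percentage of total NYSE issues traded each day.
breadth = breadth.loc["1980-01-02":]
breadth["pct_new_lows"] = 100.0 * breadth["new_lows"] / breadth["issues_traded"]
```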
We will discuss three very effective uses of this data: discerning market bottoms and the final phases of bear moves (the conventionally regarded implementation of this data point), as well as two novel applications, both of which provide signals that, peculiarly, carry a persistence in their ramifications into subsequent months.
Among these, we will detail a signal for detecting periods of “Extreme Vulnerability” and, lastly, another novel signal, one with an exciting track record, which also induces this persistence, but one that tells us when price drops should be acted upon defensively.
Review of the Literature
Daily New Lows have been extensively studied as a market indicator with analyses evolving from contrarian sentiment gauge to informing momentum and technical strategies. This review summarizes key publications that have examined the application of daily New Lows in chronological order.
One of the earliest studies on using daily New Lows was posited by Haller1, wherein he examines various breadth measures versus the broad indexes and also looks at the net of New Highs and New Lows. Haller relies on weekly data in his book, but it is the earliest mention we find of using the aggregate number of annual New Lows.
Haller and Larry Williams were pen pal friends. According to Williams, “Gil was a very precise person who did take his own advice in following the strongest stocks. His background as a former ship captain played a lot into his thinking when we discussed the markets.”
Haller’s focus on the new Lows/Highs was picked up, chart-wise, by Security Market Research, a charting service from Denver, Colorado, which used a 10-day moving average of the two values to replicate Haller’s work.
A decade after Haller’s book, market breadth analysis had thoroughly penetrated professional stock analysts’ offices, as evidenced by Joseph Granville, who elaborated at length on market breadth and the effects of daily New Highs and Lows at major market turning points, both in his 1976 book2 and in his previous writings for the famous “Hutton Daily Market Wire.”
Another early exposé on New Lows is found in Norman Fosback’s 1979 book “Stock Market Logic” which introduced the new high/new low indicator that has since become a widely used technical analysis tool3. Fosback found New Lows useful for gauging market psychology and ascertaining potential turning points. The new high/new low indicator quantifies market breadth based on the number of stocks hitting new periodic highs and lows. Fosback laid the groundwork for utilizing New Lows in sentiment analysis.
Alexander Elder’s “Trading for a Living” delved deeper into applying daily and weekly New Lows for contrarian analysis4. Elder examined New Lows through the lens of crowd psychology, finding extreme spikes in New Lows often marked major market bottoms. He concluded that major shifts in sentiment drove spikes in New Lows at market turning points. Elder advocated combining new low analytics with other sentiment and technical indicators for robust signaling. His work expanded on sentiment applications of New Lows.
In his 1999 book, Gary Smith provided extensive insights into trading strategies utilizing daily New Lows based on his own market experiences5. Smith highlighted the crucial importance of tracking New Lows as a sentiment gauge to identify bearish extremes. He covered multiple technical strategies centered around New Lows, while also detailing real-world examples of how he successfully incorporated New Lows analysis into his market forecasts and trades during his career. Smith delivered an in-depth account of applying New Lows in practice.
Charles M.C. Lee and Bhaskaran Swaminathan conducted an empirical investigation into the predictive relationship between New Highs / New Lows and subsequent returns6. Through quantitative analysis, they established that increased New Lows reliably predicted higher future returns, while increased New Highs predicted lower future returns. Their findings formally evidenced New Lows’ value as a contrarian sentiment indicator foreshadowing market reversals. The paper contributed robust statistical proof of New Lows’ predictive abilities.
Combining behavioral finance and technical analysis, Ronald Anderson, Anup Agrawal and Jeffrey Jaffe examined market extremes through the lens of New Highs and Lows7. They found New Lows were often driven by investor overreaction and shifts in sentiment. Moreover, combining New Lows data with complementary technical indicators proved useful for objectively identifying market extremes. The paper highlighted the value of blending New Low analytics with other frameworks.
Demonstrating New Lows’ applicability beyond contrarian trading, Thomas George and Chuan-Yang Hwang researched New Highs as a momentum indicator8. They established that stocks hitting 52-week highs strongly tended to continue moving higher over the short- term. Their evidence formalized New Highs’ ability to signal momentum persistence. The findings suggested incorporating New Highs data could enhance momentum strategies and confirm market strength.
In his technical analysis book, Gerald Appel advocated tracking daily New Lows to gauge bearish sentiment, especially during latter bear market stages9. Appel found properly interpreting spikes in New Lows proved useful for identifying capitulation bottoms that marked reversals. He focused especially on contrarian utilities of New Lows for predicting impending trend changes late in bear markets. The work expanded on using New Lows for change in sentiment.
Taking a scholarly approach, Menachem Brenner and Rafi Eldor statistically analyzed the symmetry and persistence of New Highs and Lows in market data10. They discovered New Highs and Lows exhibited clear symmetrical relationships, whereby periods of increased new high frequency were followed by spikes in New Lows, and vice-versa. Their findings evidenced recurring cyclicality and mean reversion tendencies between New Highs and Lows. The paper contributed mathematical proof of New Lows’ tendencies.
Discussion
We now examine three methods of utilizing daily NYSE New Lows data for actionable signals in the stock market.
1. The Selling Climax Signal
This is among the oldest applications of daily New Lows data. Essentially, the idea is that market bottoms naturally witness an excessive number of daily New Lows (as would be expected).
Many indicators have been constructed from all sorts of data over the decades to detect these so-called “selling climax” situations. Ultimately, any such indicators are ineluctably redundant to simply looking at daily New Lows.
According to Haller, “When 750 or more net New Lows are recorded within a week, that week is the first climax week. Buy at the start of the fourth week after the first climax week.”
Examination of the total number of issues in 1965, the year of publication, and the few years leading up to it reveals there were roughly 1,000 to the low 1,100s of issues traded.
Thus, Haller’s 750 or more net New Lows can be translated, roughly, into being “70% of total issues.”
Figure 1 shows us, over the time period we are considering (January 2, 1980 through November 20, 2023), the weekly New Lows along with a green horizontal line at this 70% mark of Haller’s.
Figure 1: Lognormal Dow Industrials in black and weekly New Lows percentage since 1980:
There have been two such signals in this nearly 44-year time period:
- 84% on October 3, 2008
- 88% on March 13, 2020
The post-Haller difference in the calculation of daily New Highs and Lows may account for what we see in the weekly data, such that a lower threshold, 50% or perhaps even less, may be more germane to markets in the post-1980 world.
Figure 2: Lognormal Dow Industrials in black and daily New Lows percentage since 1980
Employing daily data, we seek to re-examine Haller’s original rule of buying “at the start of the fourth week after a reading of 70% New Lows,” amending it for daily data. Our results are displayed in Table 1. We look at various thresholds of daily issues making 52-week New Lows, 70%, 50%, 40%, 30%, and 20%, and buy on the close 22 market days after such a reading. We then look at how such positions play out 22, 65, 130, and 260 market days afterwards, to replicate 1, 3, 6 and 12 months after entry. We use S&P 500 Index data and percentage gains and losses, given the disparity of index prices over the nearly 44-year test period between January 2, 1980 and November 20, 2023.
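The test procedure can be summarized in a few lines of code. The following is a minimal sketch, assuming the daily New Lows percentage and the S&P 500 closes are available as pandas Series on aligned trading-day indexes; the function and variable names are illustrative rather than the authors’ code, and no de-duplication of clustered signal days is attempted.

```python
import pandas as pd

def climax_forward_returns(pct_new_lows: pd.Series, spx_close: pd.Series,
                           threshold: float, entry_lag: int = 22,
                           horizons=(22, 65, 130, 260)) -> pd.DataFrame:
    """For each day on which the percentage of NYSE issues making 52-week New Lows
    meets the threshold, buy on the close `entry_lag` market days later and record
    the percentage change in the index at each horizon (in market days) after entry."""
    results, kept_dates = [], []
    for signal_date in pct_new_lows[pct_new_lows >= threshold].index:
        entry_pos = spx_close.index.get_loc(signal_date) + entry_lag
        if entry_pos >= len(spx_close):
            continue  # signal too close to the end of the data set
        entry_price = spx_close.iloc[entry_pos]
        results.append({f"+{h}d": (spx_close.iloc[entry_pos + h] / entry_price - 1.0) * 100.0
                        for h in horizons if entry_pos + h < len(spx_close)})
        kept_dates.append(signal_date)
    return pd.DataFrame(results, index=kept_dates)
```

Averaging the resulting columns for each threshold would yield summary rows in the spirit of Table 1.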
Table 1. Entering 22 market days after recording the daily New Lows threshold in the far left column, exiting the position 22, 65, 130 and 260 days later, using S&P 500 prices:
First, what we cannot conclude: the optimal percentage of daily New Lows to use as a signal threshold is not clear; all of the thresholds appear to work well, with various trade-offs.
Nevertheless, there are conclusions we can discern from the data.
The first is that the longer a position is held after a “New Lows Climax,” the greater the gains, as well as the likelihood of profitability. Secondly, even readings as low as 20% daily New Lows show significance looking out one calendar quarter or longer.
2. A Signal for Periods of “Extreme Vulnerability”
We now present a novel indicator utilizing daily New Lows and their persisting resonance in the weeks and months after significant readings, one in which very low readings of daily New Lows are used to isolate periods of “Extreme Vulnerability.”
Essentially, in the markets of the late 2010s and early 2020s, where roughly 3,000 big-board shares typically trade daily, we have used this signal by looking for “single-digit” daily New Lows. To express this as a percentage of daily New Lows, and thus make it atemporal, we use a threshold of 0.325% of issues; using 0.325% makes daily New Low readings of the past equate to single-digit daily New Low readings of the current era.
The signal is very simple but carries profound market implications, as evidenced historically in Figure 3. If we have not witnessed a reading as low as 0.325% or lower in daily New Lows on any of the past 65 trading days (roughly one calendar quarter, or three months), we stipulate that we have entered a period of “Extreme Vulnerability.”
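For concreteness, a minimal sketch of this rule in code, assuming the daily New Lows percentage is a pandas Series indexed by trading day; the names and the inclusion of the current day in the lookback window are illustrative choices rather than a specification from the text.

```python
import pandas as pd

def extreme_vulnerability(pct_new_lows: pd.Series, threshold: float = 0.325,
                          lookback: int = 65) -> pd.Series:
    """True on days when no reading of daily New Lows at or below `threshold`
    percent of issues has been seen within the past `lookback` trading days
    (roughly one calendar quarter)."""
    low_reading = (pct_new_lows <= threshold).astype(float)
    # A rolling max of 0 means no qualifying low reading occurred in the window.
    return low_reading.rolling(lookback).max() == 0.0
```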
Figure 3: Lognormal Dow Industrials overlaid with periods of Extreme Vulnerability in blue.
Let us examine important market milestones of the past four-plus decades with respect to periods of Extreme Vulnerability.
Beginning with the Crash of October 1987, as displayed in Figure 4, the vulnerability appears on Sept 27, 1987.
Figure 4: Lognormal Dow Industrials and the Crash of ’87, transpired in a period of Extreme Vulnerability.
Next, we turn our attention to the Global Financial Crisis of 2007-9, as shown in Figure 5.
Notice that the period of Extreme Vulnerability, as with most of the major tops over this 44-year time span, was flagged by this indicator (contrary to what one might expect, late-stage bull moves are not characterized by days of extremely low readings of daily New Lows). There was a period of a few months in late spring and early summer of 2008 where the condition cleared (typical in a major bear market, as evidenced by the history in Figure 3), as well as in the final stages of the bear market beginning late in 2008, where this signal subsided.
Turning our attention to the major moves of the most recent 6 of the 44 years under examination, shown in Figure 6, we can say we have seen four major market drops from significant tops.
The “Volmageddon” incident of February 5, 2018, wherein President Trump’s tweet about tariffs caused a very strong drop in stocks that day coupled with a meltdown in inverse volatility products that evening, occurred during a period of Extreme Vulnerability that went into effect during the third week of January, and remained in force into the fourth quarter where the rate panic began.
The onset of the Covid selloff in March of 2020 was not marked as a period of Extreme Vulnerability; it is one of the few major market drops of the 44-year time span that was not preceded by this condition.
The top of late 2021 / early 2022, however, did transpire under a period of Extreme Vulnerability, a condition which notably did not appear until just before the top, steering clear of giving confusing signals to the analyst during the post-Covid run-up.
3. The “Bloodbath Sidestepping” 4% Rule
This is perhaps the most powerful and reliable of all rules we have seen pertaining to daily New Lows. The rule is: if we experience a day with more than 4% daily New Lows, we go short or flat (or fully hedge our long position). A glance at this indicator over the past 44 years appears graphically as Figure 7.
The indicator is far from perfect. But consider the hard-and-fast rule of going short, flat, or fully hedged into the close of a day seeing 4% or more daily New Lows (a value known before the close of trading, as it is broadcast throughout the trading day and can only increase as the day progresses), and re-establishing the position into the close of the first subsequent day on which daily New Lows are less than 4%. We find the following track record.
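Expressed as code, the always-in version of the rule reduces to a one-line exposure series. This is a hedged sketch only (function and argument names are illustrative), with the entry timing handled by shifting the exposure one day, since positions are taken into the close of the signal day.

```python
import numpy as np
import pandas as pd

def bloodbath_exposure(pct_new_lows: pd.Series, threshold: float = 4.0) -> pd.Series:
    """Always-in exposure under the 4% rule: -1 (short) into the close of any day
    showing `threshold` percent or more daily New Lows, +1 (long) otherwise.
    Because positions are established into the close, the exposure applies to the
    following day's return, e.g. strategy_return = exposure.shift(1) * index_return."""
    return pd.Series(np.where(pct_new_lows >= threshold, -1.0, 1.0),
                     index=pct_new_lows.index)
```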
Figure 6: Lognormal Dow Industrials with days of 4% or more daily New Lows superimposed in red, January 2, 1980 – November 20, 2023.
We will examine the track record where we are always either long or short based on this indicator. First, there were 630 trades of this signal, of which only 257 were profitable (40.79% winning trades). As evidenced in Figure 7, however, nearly every major market selloff in the 44-year history saw an unusually high number of such days, and the rule gets us out early in such drops. As a result, over the nearly 44-year period, this simple “bloodbath-sidestepping” rule alone, despite its numerous short-lived false signals along the way, netted 10,981.86 Dow Jones Industrial Average points more than mere buy & hold. That is greater than a 31 percent improvement by the standard of today’s DJIA price (the November 20, 2023 close being 35,151.04). This 10,981.86-point surfeit was garnered on a DJIA that closed as low as 759.13 in the period under examination.
This “Bloodbath Indicator,” based on daily readings, gives the analyst advance warnings that what he is witnessing has a very high chance of being more than a mere, prosaic market down day, but may well be the early stages of a market bloodbath. In fact, the analyst would be very hard pressed to find a market “bloodbath” condition that did not exhibit this signal!
Figure 7: Lognormal S&P 500 with days of 4% or more daily New Lows superimposed in red, over the October ’87 Crash.
We now examine a few critical points in this test period, looking back again at the October 1987 Crash in Figure 7.
On October 9, a full week and a day before the Crash, this indicator started to serve up readings of greater than 4% daily New Lows. It would have had anyone implementing it either short, flat, or fully hedged going into this bloodbath.
We now turn our sights to the past six years, as we see in Figure 8.
Figure 8: Lognormal S&P 500 with days of 4% or more daily New Lows superimposed in red, January 2018 through November 2023.
Let us zoom-in and examine once again the Volmageddon episode of February 2018, shown in Table 2.
Table 2: Volmageddon and Percentage of New Lows.
The actual Volmageddon incident occurred on February 5. The hard-and-fast application of this rule would have had the implementor of this rule out or short on the close of February 2, or fully hedged by the close of that day, sidestepping Volmageddon entirely, perhaps even profiting off of it had they opted to be short for such signals.
Skipping ahead to the Covid “bloodbath,” (the reader can readily see how the rate hikes of Q4 2018 were handled by this indicator), we see the results of this expressed in Table 3.
Table 3: Covid and New Lows > 4%.
Here we clearly see that the “Bloodbath Indicator” sidestepped nearly the entirety of the sudden Covid selloff, or was short throughout the 21-day period of greater than 4% daily New Lows.
Lastly, let us examine this rule on the full history we possess of daily New Lows data. Despite the pre-1980 flaws, the robustness of the “Bloodbath Indicator” clearly holds up. This is presented as Figure 9.
Figure 9: Lognormal S&P 500 with days of 4% or more daily New Lows superimposed in red, March 1, 1965 – November 20, 2023
Summary and Conclusion
We have unequivocally demonstrated, using empirical data, not only the value to the analyst of a daily perusal of New Lows, but the necessity of it, given the reliability of these indicators over the past four-plus decades of daily data.
It is important to note that at no point do we advocate using daily New Lows in isolation. Naturally, what we present should be used with the rest of the analyst’s tools. Our point here is to demonstrate why the analyst should likely be paying more attention to daily New Lows, to provide new tools for doing so, and to point out the “future resonance” that days of extreme readings in daily New Lows possess; this is not a data point to consider only with respect to the latest daily reading.
It is exactly this “future resonance,” the ripples in subsequent weeks and months that single, extreme daily readings in this sole data element send to the analyst, that leads to the central and novel insight of this paper:
Days of extreme readings, either high or low, in daily New Lows, create an “echo” much like waves in a pond resonating out from a stone thrown into it. Such days have been shown to persist in terms of their effect on price action for several months afterwards.
References
1. Haller, G. (1965). The Haller Theory of Stock Market Trends. Self-published.
2. Granville, J. E. (1976). A Strategy of Daily Stock Market Timing for Maximum Profit. Prentice Hall.
3. Fosback, N. G. (1979). Stock Market Logic. Dearborn Financial Publishing.
4. Elder, A. (1993). Trading for a Living: Psychology, Trading Tactics, Money Management. John Wiley & Sons.
5. Smith, G. (1999). How I Trade for a Living. McGraw-Hill.
6. Lee, C. M. C., & Swaminathan, B. (2000). New Highs and New Lows: Evidence on the Role of Investor Sentiment. Journal of Financial Markets, 3(4), 389–422.
7. Anderson, R. J., Agrawal, A., & Jaffe, J. F. (2003). Highs and Lows: A Behavioral and Technical Analysis. Journal of Finance, 58(3), 921–945.
8. George, T. J., & Hwang, C.-Y. (2004). The 52-Week High and Momentum Investing. Journal of Finance, 59(5), 2145–2176.
9. Appel, G. (2005). Technical Analysis: Power Tools for Active Investors. FT Press.
10. Brenner, M., & Eldor, R. (2013). New Highs and New Lows: Symmetry and Persistence in the Stock Market. Review of Quantitative Finance and Accounting, 41(2), 297–308.
A Measure of Market Incertitude
by Jeff McDowell, CMT

About the Author | Jeff McDowell, CMT
Jeff McDowell is a highly regarded figure in quantitative analysis, with a distinguished track record of innovation and impact across multiple domains. As the founder of Synaptric Capital LLC, he has been at the forefront of empirical research, advancing the development and application of data-driven methodologies. He has noted expertise in data modeling, risk metrics, life-cycle cost estimating, statistical analysis, and the development of technical indicators.
Introduction
A trending market leads to inevitable questions: Is the trend exhausting? Is a market correction looming? Technical analysts seek to answer these questions with internal strength measures based on characteristics of constituent price movement. These techniques are intended to detect a change in market character by revealing transitions from robust strength to potential deterioration.
Internal strength techniques fall primarily into two areas. First, internal market breadth measures, which quantify the extent to which constituents are going along with the overall trend – often via a count of declining issues and advancing issues. Second, diffusion measures, which quantify breadth via a count of the number of issues meeting a given criterion, such as those above a 40-day moving average. This paper explores a third area of strength measurement emanating from the question: Does today look like yesterday?
This paper begins by examining the nature of advance/decline (A/D) counting and then introducing and exploring a more granular measure. The measure will then be extended into an indicator and, in turn, extended into an oscillator. Signal cases will be presented and their usefulness assessed for judging trend strength and detecting changes in market character.
Background and Literature Review
Market breadth analysis is an approach to understanding overall market conditions associated with market movement. A succinct description is provided by Martin Pring (1985): “Market breadth measures the degree to which a market index is supported by a wide range of its components.” Pring further states two beneficial purposes. “First, it indicates whether the environment for most items in a universe (normally equities) is positive or negative. Second, market breadth indicators signal major turning points through positive and negative divergences.”
Numerous technical analysis reference works cover the subject of breadth analysis. Notable is the comprehensive survey of breadth methods provided by Gregory Morris (2015). Many of these methods are internal market breadth measures built upon advances, declines, up and down volume, new highs and new lows. These are “used in almost every conceivable method and mathematical combination, by themselves, or in combination with other breadth components. After they are mathematically arranged, they are then again smoothed, averaged, summed, and normalized.” A/D techniques abound, yet their calculations depend upon only a few direct measures.
Breadth indicators are prevalent in contemporary technical analysis literature. Recent Technically Speaking articles (Deemer 2023 and Wells 2022) and recent Dow award papers (Diodato 2019 and Whaley 2010) directly address or touch upon market breadth topics. The tempo of research and publications attest to the enduring relevance of breadth studies.
Framing Advance-Decline Counting as Data Binning
In data science, data binning is a pre-processing method for data smoothing whereby a large set of original data is segregated into intervals called bins, and the discrete values in every bin are treated to derive a representative value. Data binning categorizes continuous data to decrease noise but it does so at the risk of information loss. The advance-decline count in technical analysis is a type of data binning as it divides an entire range of daily price change values into three subranges and applies the subrange labels of advancers, decliners, and unchanged as substitutes for the actual values.
In other fields, many situations lend themselves to proper data treatment by binning. Even so, researchers in those fields often lament that information is lost by doing so. For example, biomedical researchers Bennette and Vickers (2012) have noted cautions regarding binning, namely that “it requires an unrealistic step-function … that assumes homogeneity … within groups”. In the field of behavioral research, Kim and Frisby (2019) state “discretization is considered to be a downgrading of measurement, because it transforms ratio or interval scale data into ordinal scale data” and “(continuous) scale data include more numeric information than do ordinal scale data.” Data mining and analytics expert Dorian Pyle (1999) states “Binning itself discards information in the variables for a practical gain in usability.” The potential consequence is information loss, over-smoothing, or under-smoothing, which can further result in misinterpretation and inaccurate outcomes.
Should technical analysis discard information in order to gain expedient usability? Given the many widely-used variants of A/D, the first-blush answer is yes. And given the many tested and demonstrated uses of those techniques, this paper does not discourage their use. Even so, exploring the use of all the data remains enticing. Examining the complete distributions is a new way of examining trend strength and trend exhaustion. Can treating all the data lead to a useful measure of strength?
Daily Price Changes as Bins
Consider the advance-decline count of the S&P 500 on Monday, September 25, 2023 and its visual representation in Figure 1. Each trading day stocks experience a daily price change which is expressed as a one-bar rate of change (ROC(1)). A/D puts the entire continuum of index constituents’ daily price change into a mere three bins: 300 Advancers, 200 decliners, and 3 unchanged11. Here data has been categorized and binned into three discrete buckets. The horizontal axis is comprised of categories rather than values.
Figure 1. ROC as a Categorical Distribution
To illustrate the loss of information from discretization, five stocks from the index are shown in Table 1. First consider three stocks at the center of the distribution (RSG has an ROC of -0.00683, TRV has an ROC of zero, and COST has an ROC of +0.005371). In this binning rubric these three datapoints, though nearly indistinguishable, are placed into three separate buckets. Using zero as the bin boundary is perhaps an unrealistic step function. Consider now the minimum stock, WBD, with an ROC of -3.96. It is placed into the same bin of decliners as near-zero RSG despite the two being roughly four percentage points apart. Consider as well that the maximum stock, SEE, with an ROC of +3.57, is placed into the same bucket of advancers as near-zero COST even though they are roughly 3.5 points apart. Too much homogeneity is imputed into both the advancers bin and the decliners bin. In short, this approach lacks granularity.
Table 1. Five Example Stocks
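The binning being criticized here is trivial to express in code. The short sketch below (illustrative function name, ROC values taken from Table 1) shows how datapoints roughly four points apart collapse into the same label.

```python
def ad_bin(roc: float) -> str:
    """Classic advance-decline binning: the continuum of daily ROC values is
    collapsed into three categories with zero as the bin boundary."""
    if roc > 0:
        return "advancer"
    if roc < 0:
        return "decliner"
    return "unchanged"

# Five example stocks from Table 1: WBD (-3.96) lands in the same bin as
# near-zero RSG, and SEE (+3.57) in the same bin as near-zero COST.
examples = {"RSG": -0.00683, "TRV": 0.0, "COST": 0.005371, "WBD": -3.96, "SEE": 3.57}
bins = {ticker: ad_bin(r) for ticker, r in examples.items()}
```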
Daily Price Change as Distribution
Looking again at the S&P 500 for 9/25/2023, the daily price change of a group of stocks can be shown as a frequency distribution as depicted in Figure 2 which, when smoothed, is a probability density function that ranges from -4.0% to +3.6%. The distribution is
continuous and has observable features of shape and has characteristics of location. Shape is quantified by descriptive statistics such as variance and skew. Location is quantified by descriptive statistics such as median and mean. The horizontal axis, instead of discrete categories, is a continuum of values. That is, any unique value of ROC has a given probability. A large body of inferential statistics can be garnered from this including distribution tests. In short, this approach has granularity.
Figure 2. Daily Rate of Change as a Probability Density Function
To illustrate further the distinction between granular continuous distributions and less-granular categorical distributions, consider for example the data for 8/20/2010 shown in the upper panel of Figure 3 and compare it to the data from 3/19/2009 in the lower panel. Both have the same advance-decline ratio but clearly have different distributions – one narrow and one wide. Similarly, in Figure 4, both 2/13/2020 and 4/4/2022 have the same A/D but again clearly different distributions – one skewed left and one skewed right.
Figure 3. Days with Same A/D But Differing Dispersion: Narrow and Wide
Figure 4. Days with Same A/D But Differing Dispersion: Skewed Left and Skewed Right
Distribution Tests
It stands to reason then that if the A/D count has indeed discarded information, then every downstream use of it (e.g., McClellan Oscillator, ARMS Index, breadth thrusts, and many others) will likewise carry information loss. So, with motivation to not discard information, let’s turn to statistical ways to use all the data.
Statistics is a field replete with methods to compare distributions. The first published statistical test was centuries ago by Arbuthnot (1710). The idea of testing was further codified and elaborated early in the twentieth century, mainly by R. A. Fisher (1925). The basic steps outlined in his work continue to be the framework in use today. The first step is formulating a null hypothesis as an assertion regarding a characteristic of this data. By starting with the proposition that this characteristic exists, statistical tests can estimate the probability that an observed characteristic could be due to chance. A test statistic T as a function of the data, “is used to indicate the degree to which the data deviate from the null hypothesis.
And the significance of the given outcome of the test statistic is calculated as the probability, if the null hypothesis is true, to obtain a value of T which is at least as high as the given outcome.” (Snijders 2015). This empirical rote is familiar to students of sophomore statistics (form null hypothesis; compute a statistic; compare statistic to table value; reject the null hypothesis or not). The process results in a binary outcome which is certainly an appropriate use case in many settings but not this one. Rather here the interest is in a measure of how different today is compared to yesterday.
If we were to perform a full test around our thesis (Does today look like yesterday?), we would phrase the null hypothesis in the form of “today is the same as yesterday”, compute a measure of the difference between today and yesterday, obtain an appropriate reference value at a stated level of confidence, and compare the two. If the computed measure is larger than the reference value, the hypothesis is rejected. That is, the difference is so great we cannot say that they are equal days. But the dichotomous outcome, concluding today is different from yesterday, is not useful here. After all, what would we do with that outcome? An indicator of binaries is unappealing. So, moving forward we refine the thesis question to: How different is today than yesterday? And by asking “how different?” we need to measure the degree to which they differ.
The Measure
The statistic used in this paper was devised by Yves Lepage (1971). The Lepage test statistic is a combination of two nonparametric rank-ordering tests: the Wilcoxon Rank-Sum2 test for location (1945) and the Ansari-Bradley test for scale (1960). The Wilcoxon Rank-Sum test is used to test the equality of medians from two samples and its calculation involves replacing observations of the combined samples with their ascending ranks. The Ansari-Bradley test is used to test the equality of scale from two samples and its calculation involves replacing the observations of the combined sample less than or equal to the median with their ranks in increasing order and those larger than the median with their ranks in decreasing order. The ranks of the second sample in each case are summed to form the respective statistic of each. Each of the cited references provides details on these tests, but for the purposes of this paper an illustrative calculation example is provided in four steps.
2 The Wilcoxon Rank-Sum test is also known as the Mann-Whitney U test (Mann and Whitney 1947).
To illustrate calculation of the Lepage statistic, consider the fictional closing price data for twelve stocks on three consecutive days in the left-hand portion of Figure 5. Closing prices are shown for Monday, Tuesday, and Wednesday followed by the rate of change for Tuesday and Wednesday. Rate of change is the one-day price movement computed as: ROC(1) = ((Today's Close - Yesterday's Close) / Yesterday's Close) * 100.
Lepage step 1: The rank-ordering process begins with combining Tuesday’s 12 ROCs and Wednesday’s 12 ROCs into one 24-member superset in ascending sort. Two sets of ranks are assigned as shown in the right-hand portion of Figure 5. First, assign ordered ranks 1 through 24 to each member. Second, assign ranks from the top and from the bottom toward the middle. Both the Wilcoxon and Ansari-Bradley statistics will be computed from the sum of these ranks. For example, AAPL Wednesday ROC of 1.1765 is assigned a rank of 19 in the second column and a rank of 6 in the third column. Note that the remaining calculations do not use closing prices or ROCs but use only these ranks.
Figure 5. Illustrative Example Rank Ordering
Lepage step 2: The Wilcoxon uses the rank order of the combined Tuesday and Wednesday’s ROCs and sums the ranks of only the Wednesday values. From the sum, W, the standardized Wilcoxon statistic is computed by subtracting the expected values and dividing by the square root of the expected variance:
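In standard notation, with $m$ and $n$ the sizes of the Tuesday and Wednesday samples and $N = m + n$, this standardization takes the familiar large-sample form (no tie correction assumed):

$$W^{*} = \frac{W - \dfrac{n(N+1)}{2}}{\sqrt{\dfrac{mn(N+1)}{12}}}$$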
The resulting value of 0.404145 is illustrated in Figure 6.
Figure 6. Illustrative Example Wilcoxon Rank-Sum
Lepage step 3: The Ansari-Bradley uses the ranks ordered from each end of the combined Tuesday and Wednesday’s ROCs and sums the ranks of only the Wednesday values. From the sum, C, the standardized Ansari-Bradley statistic is computed by subtracting the expected values and dividing by the square root of the expected variance:
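For the Ansari-Bradley sum $C$ of the Wednesday ranks, one common large-sample standardization for an even combined sample size $N$ (again assuming no tie correction; published finite-sample moment expressions vary slightly) is:

$$C^{*} = \frac{C - \dfrac{n(N+2)}{4}}{\sqrt{\dfrac{mn(N+2)(N-2)}{48(N-1)}}}$$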
The resulting value of -0.590248 is illustrated in Figure 7.
Figure 7. Illustrative Example Ansari-Bradley
Lepage step 4: The Lepage statistic, D, is the sum of the squares of the standardized Wilcoxon and Ansari- Bradley statistics:
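In symbols, with $W^{*}$ and $C^{*}$ the standardized statistics from steps 2 and 3:

$$D = \left(W^{*}\right)^{2} + \left(C^{*}\right)^{2}$$

As a quick check against the worked example, $0.404145^{2} + (-0.590248)^{2} \approx 0.5117$, matching the value reported below.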
The resulting value of 0.511727 quantifies the difference between Wednesday’s ROC distribution and Tuesday’s ROC distribution. This is the degree to which Wednesday deviated from Tuesday.
A pair of similar days with a small degree of deviation from one another will result in a small Lepage value. Two similar examples are shown in Figure 8 with low values.
A pair of dissimilar days with a large degree of deviation from one another will result in a large value. Two dissimilar examples are shown in Figure 9 with large values.
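For readers who want the full calculation in code, the sketch below follows the four steps above (ascending ranks, end-to-middle ranks, standardization, sum of squares). It uses average ranks for ties and the large-sample moments noted earlier; the function name and the exact moment expressions are illustrative choices rather than the author’s production code.

```python
import numpy as np
from scipy.stats import rankdata

def lepage_statistic(roc_yesterday, roc_today):
    """Degree to which today's ROC distribution differs from yesterday's:
    the sum of the squared standardized Wilcoxon rank-sum and Ansari-Bradley
    statistics (Lepage 1971), using large-sample moments without tie corrections."""
    x = np.asarray(roc_yesterday, dtype=float)   # first sample (e.g., Tuesday)
    y = np.asarray(roc_today, dtype=float)       # second sample (e.g., Wednesday)
    m, n = len(x), len(y)
    N = m + n

    ranks = rankdata(np.concatenate([x, y]))     # ascending ranks 1..N, ties averaged

    # Wilcoxon rank-sum: sum of the second sample's ranks, standardized.
    W = ranks[m:].sum()
    W_std = (W - n * (N + 1) / 2.0) / np.sqrt(m * n * (N + 1) / 12.0)

    # Ansari-Bradley ranks count in from both ends toward the middle.
    ab_ranks = np.minimum(ranks, N + 1 - ranks)
    C = ab_ranks[m:].sum()
    if N % 2 == 0:
        C_mean = n * (N + 2) / 4.0
        C_var = m * n * (N + 2) * (N - 2) / (48.0 * (N - 1))
    else:
        C_mean = n * (N + 1) ** 2 / (4.0 * N)
        C_var = m * n * (N + 1) * (3 + N ** 2) / (48.0 * N ** 2)
    C_std = (C - C_mean) / np.sqrt(C_var)

    return W_std ** 2 + C_std ** 2
```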
Construct the Incertitude Approach
Thus far, we have presented characteristics of a continuous ROC distribution contrasted with the discretized A/D. In addition, we have presented a statistical measure of the difference between the distributions of two consecutive days’ ROCs. Now we introduce a new breadth measure in the form of an indicator and in the form of an oscillator.
The Incertitude Indicator is the three-day simple moving average of each day’s Lepage statistic. This indicator represents the extent to which days are behaving unlike their previous days. When the indicator value is high it indicates a degree of chaos in the market; when the indicator is low it indicates a degree of sameness in the market.
The Incertitude Oscillator is the difference between two exponential moving averages of the Incertitude Indicator, constructed in the same manner as the McClellan Oscillator. Subtracting the 39-day exponential moving average of the Incertitude Indicator from the 19-day exponential moving average of the Incertitude Indicator forms the Incertitude Oscillator. Oscillators typically support interpretations of overbought at their highs and oversold at their lows. But these terms are not applicable here; rather, the interpretation is overchaos at its highs and oversameness at its lows.
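As a sketch of this construction, assuming pandas, a daily Series of Lepage values, and the usual McClellan smoothing constants of 0.10 and 0.05 (which correspond to 19- and 39-day EMA spans); the function names are illustrative.

```python
import pandas as pd

def incertitude_indicator(lepage_daily: pd.Series) -> pd.Series:
    """Incertitude Indicator: 3-day simple moving average of each day's Lepage statistic."""
    return lepage_daily.rolling(window=3).mean()

def incertitude_oscillator(indicator: pd.Series) -> pd.Series:
    """Incertitude Oscillator: 19-day EMA minus 39-day EMA of the indicator,
    constructed in the same manner as the McClellan Oscillator."""
    fast = indicator.ewm(span=19, adjust=False).mean()
    slow = indicator.ewm(span=39, adjust=False).mean()
    return fast - slow
```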
The incertitude approach posits that either high degrees of sameness or high degrees of chaos may portend a change in market character. On the one hand, repeated days of sameness occur at the end of a trend with a dearth of new ideas. On the other hand, repeated days of chaos occur at the end of a trend with an abundance of new, but weak, ideas. Either extreme coincides with trend exhaustion.
A case of sameness could occur when most market participants believe the last blowout has taken place, have acted on the macro factors in play, and are awaiting new information to launch new sector leadership for the next counter wave. The lack of new information in either sector price movements or in market-moving news leads to complacent resignation to the move. With market directional movement as the primary reinforcing factor, the trend continues as the market overshoots fundamentals.
A case of chaos could occur when most market participants trade on each day’s news as though it is genuine informational signal but they experience no confirmational price action follow-through. “Noise traders are investors who buy and sell based on signals that they think are informative but that are not” (Aronson 2011). Moves are made with more psychologically-driven factors than fundamental ones as the market overshoots fundamentals.
Candidate Incertitude Signals
The incertitude approach, built upon the Lepage measure of daily price change distribution differences, provides a measure on the scale of sameness to chaos. Figure 10 presents a notional depiction of the incertitude scale. The center of this scale represents the norm of a market behaving healthily as market participants with varying information insights, goals, and time horizons provide liquidity to one another in orderly fashion. An over-extended market with characteristics of extreme chaos or extreme sameness presents conditions ripe for a trend reversal. It is from these extremes that we will seek signals. This section presents a discussion of candidate counter-trend signals observed in assessing the viability of the incertitude approach. The section that follows will quantitatively assess each candidate signal.
Figure 10. The Incertitude Scale
Each of these candidate signals is presented as a chart with accompanying text. The signal type is described in prose followed by a formulaic set of signal logic. In the signal logic formulas “II” represents Incertitude Indicator and “IO” represents Incertitude Oscillator. Observed anecdotal episodes of signal are discussed. Before jumping into the signal descriptions, a template is offered to define the content of the charts.
Candidate Signal Template
Each figure in this section is comprised of six panels as depicted in the template definition of Figure 11. Panel 1 at the top presents the II or IO with overlays that highlight the pertinent patterns of that signal. The signal is noted with a red circle and a vertical dashed line anchoring the signal date across all six panels. Panel 2 is the SPX in candlestick format with the trend prior to the signal noted with a highlighted line and three simple moving averages. Panel 2 also notes the post-signal counter-trend with a highlighted line.
Panels 3 and 4 present two internal breadth measures: The AD Line and the percentage of index members above their 50-day moving averages. A highlighted line on each will note post-signal conditions of deteriorating (or strengthening) breadth.
Panels 5 and 6 present two momentum indicators on the SPX itself: The Relative Strength Index (RSI) and the Moving Average Convergence/Divergence (MACD) oscillator. RSI is shown with a 14-day parameter and the MACD is shown with a 12-day and 26-day configuration with no signal line. A highlighted line on each will note post-signal increasing or decreasing momentum.
Figure 11. Template for Signal Charts
Figure 12. Incertitude Indicator Cross Up from Sameness
Signal Type 1: Incertitude Indicator Cross Up from Sameness
The top panel of Figure 12 depicts the Incertitude Indicator with channels and a fast pair of EMAs. A signal is triggered when the smoothed indicator is below the channel and the 4-day EMA crosses up over the 9-day EMA. Signal Type 1 is defined as:
- Today’s 4-day EMA of the II > Today’s 9-day EMA of the II; and
- Yesterday’s 4-day EMA of the II <= Yesterday’s 9-day EMA of the II; and
- Yesterday’s 9-day EMA of the II < (Yesterday’s 20-Day Minimum Channel of the II + 50).
The idea captured here is that when the indicator is reading sameness, a change in market character is pending. When it begins to abandon sameness and churn begins, the character is indeed changing. New emergent leaders are beginning to move in the countertrend direction. On 7/28/2023 a signal is triggered (top panel) when the index trend is upward (second panel) and afterward market breadth deteriorates (panels three and four), momentum declines (bottom two panels), and the index reverses direction (second panel).
Figure 13. Incertitude Indicator Cross Down from Chaos
Signal Type 2: Incertitude Indicator Cross Down from Chaos
The top panel of Figure 13 depicts the Incertitude Indicator with channels and a fast pair of EMAs. This signal type pertains to the opposite side of the scale from signal type 1. A signal is triggered when the smoothed indicator is above the channel and the 4-day EMA crosses down through the 9-day EMA.
Signal Type 2 is defined as:
- Today’s 4-day EMA of the II < Today’s 9-day EMA of the II; and
- Yesterday’s 4-day EMA of the II >= Yesterday’s 9-day EMA of the II; and
- Yesterday’s 9-day EMA of the II > (Yesterday’s 20-Day Minimum Channel of the II * 6).
The idea captured here is when the indicator is reading chaos a change in market character is pending. When it begins to abandon chaos and exhibits a more routine churn, the character is indeed changing. New emergent leaders are beginning to move in the countertrend direction.
On 1/5/2023 a signal is triggered (top panel) when the index trend is downward (second panel) and afterward market breadth strengthens (panels three and four), momentum advances (bottom two panels), and the index reverses direction (second panel).

Signal Types 3 and 4: Smoothed Incertitude Indicator Extremes
In the interest of attaining a fast, responsive signal, the ConnorsRSI (Connors and Radtke 2014) is applied to smooth the Incertitude Indicator. The top panel of Figure 14 depicts the smoothed Incertitude Indicator with horizontal lines drawn at values 10 and 90. A chaos signal is triggered when the smoothed indicator is above 90 and a sameness signal is triggered when the smoothed indicator crosses below 10.
Signal Type 3 is defined as:
- Today’s ConnorsRSI of the II > 10; and
- Yesterday’s ConnorsRSI of the II <= 10.
Signal Type 4 is defined as:
- Today’s ConnorsRSI of the II > 90; and
- Yesterday’s ConnorsRSI of the II >=
The idea captured here is that when the indicator is reading an extreme, a change in market character is pending. The smoothed indicator does not tend to stay at the extreme very long, so this signal is not formulated with a prerequisite (e.g., expressed as a pending range from which a signal is then noted, as the prior signal types were). When it signals, the counter-trend direction may have already begun.
On 10/13/2021 a sameness signal is triggered (the first red circle in the top panel) when the index trend is downward (second panel) and afterward market breadth strengthens (panels three and four), momentum advances (bottom two panels), and the index reverses direction (second panel). On 11/26/2021 a chaos signal is triggered (second red circle in top panel) when the index trend is upward and afterward market breadth deteriorates, momentum declines, and the index reverses direction.
Figure 15. Incertitude Oscillator Signal Line Cross Up
Signal Type 5: Incertitude Oscillator Signal Line Cross Up
The top panel of Figure 15 depicts the Incertitude Oscillator with a 9-day EMA signal line. A signal is triggered when the oscillator crosses above the signal line while the oscillator is less than -10.
Signal Type 5 is defined as:
- Today’s IO > Today’s 9-day EMA of the IO; and
- Yesterday’s IO <= Yesterday’s 9-day EMA of the IO; and
- Yesterday’s IO < -10.
The idea captured here is when the oscillator is reading relative sameness and then reverses away from continued sameness, the character is indeed changing. Normal liquidity is being restored. New emergent leaders are moving in the counter-trend direction.
On 7/28/2023 a signal is triggered (top panel) when the index trend is upward (second panel) and afterward market breadth deteriorates (panels three and four), momentum declines (bottom two panels), and the index reverses direction (second panel).
Figure 16. Incertitude Oscillator Signal Line Cross Down
Signal Type 6: Incertitude Oscillator Signal Line Cross Down
The top panel of Figure 16 depicts the Incertitude Oscillator with a 9-day EMA signal line. This signal type pertains to the opposite side of the scale from signal type 5. A signal is triggered when the oscillator crosses below the signal line when the oscillator is greater than 10.
Signal Type 6 is defined as:
Today’s IO < Today’s 9-day EMA of the IO; and
Yesterday’s IO >= Yesterday’s 9-day EMA of the IO; and
Yesterday’s IO > 10.
The idea captured here is when the oscillator is reading relative chaos and then reverses away from continued chaos, the character is indeed changing. New emergent leaders are moving in the counter-trend direction.
On 3/23/2020 a signal is triggered (top panel) when the index trend is downward (second panel) and afterward market breadth strengthens (panels three and four), momentum advances (bottom two panels), and the index reverses direction (second panel).
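Signal Types 5 and 6 reduce to a pair of signal-line crossover tests on the oscillator. A minimal sketch follows, with illustrative names, the 9-day EMA signal line, and the ±10 thresholds taken from the definitions above.

```python
import pandas as pd

def oscillator_crosses(io: pd.Series, threshold: float = 10.0) -> pd.DataFrame:
    """Candidate Signal Types 5 and 6: the Incertitude Oscillator crossing its
    9-day EMA signal line after sitting beyond the sameness/chaos threshold."""
    signal_line = io.ewm(span=9, adjust=False).mean()
    type5_cross_up = ((io > signal_line)
                      & (io.shift(1) <= signal_line.shift(1))
                      & (io.shift(1) < -threshold))
    type6_cross_down = ((io < signal_line)
                        & (io.shift(1) >= signal_line.shift(1))
                        & (io.shift(1) > threshold))
    return pd.DataFrame({"type5_cross_up": type5_cross_up,
                         "type6_cross_down": type6_cross_down})
```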
Empirical Assessment
The six candidate signal types were tested using data from January 3, 1990 to October 17, 2023. The data set is daily unadjusted closing prices for the S&P 500 index (SPX) and its constituents as they were comprised on each day. Unadjusted data, that is, data not altered to accommodate splits or dividends, represents the prices available to traders on that day. The data source for index value, constituent prices, and daily index constituents was Norgate Data. The internal measures were obtained from StockCharts.com. The incertitude formula and tests were coded in Python.
Given the goal of this paper is assessing changes in market strength, for testing purposes we choose other strength metrics as the objective measures for signal outcomes. Since the goal is not a trading system for the index, framing the assessment as trading profits, drawdowns, percent profitable trades, etc. is not applicable. The assessment here is the counter-trend change in the selected strength metrics for 20, 40, and 60 days after each signal is triggered.
Each signal type is neutral as to the trend direction. Whether the trend is up or down, the signal portends a reversal in trend strength. Their utility, though, is based on the presence of a trend, so the test design must also define what constitutes a trend. For the purposes of this paper, the selected trend definition is three simple moving averages in directional order (sketched in code after the list), defined as:
- Downtrend: Index 10-day SMA < Index 20-day SMA < Index 40-day SMA
- Uptrend: Index 10-day SMA > Index 20-day SMA > Index 40-day SMA
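A sketch of that trend definition in code (the function name is illustrative; the SMAs are computed from the index closes):

```python
import pandas as pd

def classify_trend(index_close: pd.Series) -> pd.Series:
    """Label each day 'uptrend', 'downtrend', or 'none' per the three-SMA definition."""
    sma10 = index_close.rolling(10).mean()
    sma20 = index_close.rolling(20).mean()
    sma40 = index_close.rolling(40).mean()
    trend = pd.Series("none", index=index_close.index)
    trend[(sma10 > sma20) & (sma20 > sma40)] = "uptrend"
    trend[(sma10 < sma20) & (sma20 < sma40)] = "downtrend"
    return trend
```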
Table 2 presents the downtrend reversal results. Table 3 presents the uptrend reversal results. For each table, the columns are the signals. The rows are grouped into two sets. The first set contains internal strength metrics: the percent of constituents above their 50-day simple moving average, expressed as mean arithmetic change, and the A/D Line, expressed as mean percent change. The second set of rows contains the momentum measures of the SPX index itself (RSI and MACD), both expressed as mean arithmetic change. The positive cells are shaded green in the downtrend table and the negative values are shaded red in the uptrend table. Most of the cells support the conclusion that incertitude signals lead to reversals in strength. In each case, a t-test was made to determine whether the mean change in each metric is different from zero. The idea is that if the signal does not influence the outcome, then the outcomes would be random, in which case the collective results would converge to zero. Cells not passing this test are noted with shading. 92% of the cells passed the test.
Table 2. Average Metrics Following Incertitude Signals in a Downtrend
Table 3. Average Metrics Following Incertitude Signals in an Uptrend
The candidate that produced the most signals in a downtrend was the Incertitude Indicator Cross Down from Chaos. This column is noted in Table 2 with a thicker border. Note that all cells in that column are positive and passed the t-test. This intuitively resonates as when a market is chaotically down, the trend ends with the emergence of order as evidenced by strengthening breadth and momentum.
The candidate that produced the most signals in an uptrend was the Incertitude Indicator Cross Up from Sameness. This column is noted in Table 3 with a thicker border. Note that all cells in that column are negative and most passed the t-test. This intuitively resonates as when a market is up with no new emerging sectors, the trend ends with the emergence of disorder as evidenced by deteriorating breadth and momentum.
Conclusions and Further Considerations
This paper introduced a new approach for examining market strength that benefits from the granular inclusion of all the applicable constituent data. A statistic was fully described to measure the difference in the structure of each day’s price change with its prior day’s structure. The interpretations of the statistic were placed into the context of a range from extreme chaos to extreme sameness.
The statistic was then fully developed into an indicator and an oscillator from which candidate signal cases were developed, quantitatively assessed, and shown to be statistically significant in signaling counter-trend changes. The incertitude approach has merit as a granular measure of the market environment and is a recommended addition to the technical analysis community’s tool set. The benefit of incertitude techniques to practitioners is that they will augment existing count-based market breadth and market strength techniques by providing a more granular approach to detecting trend change.
Although developed and tested using the S&P 500, the incertitude approach is broadly applicable to any market, sector, or index (e.g. NASDAQ market, Technology sector, or S&P 100 index) that contains a sufficient number of constituents from which to make the calculations described herein. Note also the fixed values such as moving average durations chosen for testing purposes are not permanent fixtures of the incertitude approach. Parameters stated in the description of candidate signals are malleable in practice and recalibration to each market of interest is recommended.
This topic is fertile ground for further research. The signal types discussed herein were crossovers at the extremes but additional signal types and alternative oscillator interpretations are worthy of further study. Devising methods to incorporate incertitude into other technical analysis indicators would also be a potentially beneficial pursuit. Although the research presented in this paper is anchored on the Lepage statistic as the measure of today’s difference from yesterday, it is not the sole measure available. The discipline of distribution comparison has many alternate statistical tests other than Lepage that merit exploration.
References
Ansari, Abdur Rahman, and Ralph A. Bradley. “Rank-sum tests for dispersions.” The annals of mathematical statistics (1960): 1174-1189.
Arbuthnot, John. “II. An argument for divine providence, taken from the constant regularity observ’d in the births of both sexes. By Dr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of the College of Physitians and the Royal Society.” Philosophical Transactions of the Royal Society of London 27, no. 328 (1710): 186-190. [sic]
Aronson, David. Evidence-based technical analysis: applying the scientific method and statistical inference to trading signals. John Wiley & Sons, 2011. pp 343.
Bennette, Caroline, and Andrew Vickers. “Against quantiles: categorization of continuous variables in epidemiologic research, and its discontents.” BMC medical research methodology 12 (2012): 1-5.
Connors, Larry, and Matt Radtke. “Parameter-Results Stability: A New Test of Trading Strategy Effectiveness.” Journal of Technical Analysis 68 (2014).
Deemer, Walter. “Breakaway Momentum 101” Technically Speaking January 2023.
Diodato, Christopher. “Making The Most of Panic: Exploring the Value of Combining Price & Supply/Demand Indicators.” Journal of Technical Analysis 70 (2020).
Fisher, Ronald Aylmer. “Theory of statistical estimation.” In Mathematical proceedings of the Cambridge philosophical society, vol. 22, no. 5, pp. 700-725. Cambridge University Press, 1925.
Kim, Se-Kang, and Craig L. Frisby. “Gaining from discretization of continuous data: The correspondence analysis biplot approach.” Behavior research methods 51 (2019): 589- 601.
Lepage, Yves. “A combination of Wilcoxon’s and Ansari-Bradley’s statistics.” Biometrika 58, no. 1 (1971): 213-217.
Mann, Henry B., and Donald R. Whitney. “On a test of whether one of two random variables is stochastically larger than the other.” The annals of mathematical statistics (1947): 50- 60.
Morris, Gregory L., The Complete Guide to Market Breadth Indicators, 2015.
Pring, Martin J. Technical Analysis Explained: An Illustrated Guide for the Investor. McGraw-Hill, 1985.
Pyle, Dorian. Data Preparation for Data Mining. Morgan Kaufmann, 1999.
Snijders, T.A.B. “Hypothesis Testing: Methodology and Limitations.” In International Encyclopedia of Social & Behavioral Sciences, 2nd edn., pp. 7121–7127. Elsevier, Amsterdam (2015).
Wells, Drew; “What is a ‘Breadth Thrust’ and What Are the Risks?”, Technically Speaking, November 2022.
Whaley, Wayne. “Planes, Trains and Automobiles: A Study of Various Market Thrust Measures.” December 31, 2009. Also published in Journal of Technical Analysis 67 (2013).
Wilcoxon, Frank, “Individual Comparisons by Ranking Methods”, Biometrics Bulletin Vol. 1, No. 6 (Dec., 1945), pp. 80-83.
Frequency of Structures, Length, and Depth of Waves Observed in a Range of Markets using the Elliott Wave Theory
by Lara Iriarte, CMT

About the Author | Lara Iriarte, CMT
Lara Iriarte, CMT holds a BSc in Science from Auckland University. A science degree has taught her to view data objectively and think logically. She provides daily Elliott wave and technical analysis of the S&P500 cash and Gold spot markets to her members at ElliottWaveStockMarket.com and ElliottWaveGold.com. On these websites Lara has built a small community of experienced professionals who share their knowledge and experience trading these markets.
Abstract
Elliott Wave analyses that met Elliott Wave rules were completed on 8 markets spanning a total of 304 years, generating 8,432 data points. Elliott Wave structures were recorded along with wave lengths of actionary waves and depths of reactionary waves. Common assumptions regarding the basic Elliott Wave structure, such as wave 2 usually correcting 0.618 of the depth of wave 1, wave 3 being usually 1.618 times the depth of wave 1, wave 4 typically correcting 0.382 of the depth of wave 3, wave 5 being usually equal in length to wave 1, and wave C typically being equal in length to wave A, were found to be false, bar one; the only assumption that holds true is that wave C is normally about equal in length to wave A. Further, wave 2 is commonly expected to subdivide as a zigzag or multiple zigzag, and wave 4 is commonly expected to subdivide as a flat, triangle, or combination, but it was found that both waves 2 and 4 are most commonly single or multiple zigzags, with very little difference in the probability of corrective structure in these wave positions. The data set from this research is presented in tables of mean lengths and depths and 95% confidence intervals for wave lengths and depths, and also the frequency of each Elliott Wave structure in each wave position within the basic pattern. It is expected that with this data Elliott Wave analysis will improve in accuracy, as analysts can use 95% confidence intervals for target calculation and probability tables for each structure in each wave position to anticipate the most likely pathway for price.
Introduction
Most technical analysis methods are backward-looking because they are based on past data. There are few forward-looking technical analysis methods; the two most obvious are cycle analysis and Elliott Wave analysis.
Figure 1
The basic Elliott Wave pattern (figure 1) is five steps forward, followed by three steps back. When it can be determined where in the basic pattern price may be at any given time, then a future direction for price may be known.
Within this basic pattern, actionary waves are defined as waves that move in the direction of the trend one degree higher, that is waves 1, 3, 5, A and C. Reactionary waves are defined as waves that move against the trend one degree higher, that is waves 2, 4 and B. The terms actionary and reactionary refer to the direction of waves within the trend one degree higher.
Waves 1, 3, 5, and C within the basic pattern must always subdivide as motive wave structures, and waves 2, 4, and B must always subdivide as corrective structures. Wave A may subdivide as either a motive or corrective structure. The terms motive and corrective refer to the structure of waves and how the waves subdivide as defined by Elliott Wave rules.1
Figure 2
The basic pattern repeats and builds on itself to create fractals (figure 2). Each fractal is labeled with a different Elliott Wave degree. The major degrees are super cycle, cycle, primary, intermediate, minor, minute, minuette, subminuette, and micro. This research identifies all these degrees and, where appropriate, up to three smaller degrees: sub-micro, minuscule, and nano.
Commonly, some actionary waves extend while other actionary waves do not. When actionary waves extend, they last longer, so how long each degree should last is elastic.
Within the basic Elliott Wave structure, the following assumptions are made:
- Wave 1 will most commonly subdivide as an impulse.
- Wave 2 will most commonly subdivide as a zigzag and most commonly correct to 0.618 the depth of wave 1.
- Wave 3 will most commonly be 1.618 times the length of wave 1.
- Wave 4 will most commonly subdivide as a flat, triangle, or combination and most commonly correct to 0.382 the depth of wave 3.
- Wave 5 will most commonly subdivide as an impulse.
- Wave 5 will most commonly exhibit a length equal to wave 1.
In addition, it is assumed that the rules and guidelines in the “Pure Elliott Wave”2 will be valid over a range of market types and time.
This research tests these assumptions.
This research also provides data for the following points:
- The probability of occurrence for each corrective structure in wave positions 2, 4, A, and B.
- The probability of occurrence of each motive structure in wave positions 1, 5, A, and C.
- The most likely length of waves 3, 5, and C relative to actionary waves of the same degree.
- The most probable depth of waves 2, 4, and B relative to the prior wave of the same degree.
An Elliott Wave count is the labeling of a chart of a data series over time with Elliott Wave labels that follow the basic Elliott Wave pattern. Elliott Wave theory uses a set of rules3 that should be adhered to and a set of guidelines that determine the probability of the Elliott Wave count. The more guidelines an Elliott Wave count meets, the higher the probability that the Elliott Wave count is correct, providing predictive value. Because Elliott Wave rules should be met for all Elliott Wave counts, these rules must be clearly outlined. Correctly identified waves may only be detected through adherence to Elliott Wave rules. The use of rules in this manner should limit the subjective bias of the analyst. A failure to follow a clear set of rules may lead to the analyst fitting the Elliott Wave count to their bias.
When analyzing a market using Elliott Wave, the experienced analyst will identify when one wave has ended and the next has begun. As each wave begins, the analysis accuracy would improve if the analyst could determine which corrective or motive structure would most likely unfold in the wave position considered within the larger basic structure. If subsequent price action shows the most likely structure is unlikely in that instance, then the analyst would be better informed if it was known what the following most likely structure could be.
As one of waves 3, 5, or C begins, the analyst may calculate targets based on prior actionary waves of the same degree. If a mean and 95% confidence interval for the length of the actionary wave were known with reliability then target calculation would be improved.
Literature Review
Over the past few decades, the viability of Elliott Wave theory has been fiercely debated among technical analysts and market participants alike. Research over the past 10 years has helped to cement the validity and efficacy of Elliott Wave theory.
Artificial Neural Networks (ANN) may be used for complicated and difficult tasks, as they are suited to generalization and classification. Volna et al.4 used ANNs, ANN training algorithms, and synthesized pseudo neural networks to identify Elliott Wave structures in volume data over time. This approach identified four different Elliott Wave structures: impulses, corrective waves, triangles, and the basic Elliott Wave structure. These structures were defined as follows:
- Impulses – 5 wave patterns in the direction of the trend one degree higher.
- Corrective waves – 3 or 5 wave patterns in the direction opposite to the trend one degree higher.
- Triangles – 5 wave patterns in which each wave is a progressively smaller 3 wave structure.
- Basic structure – 5 waves up followed by 3 waves down.
The rules governing each structure were not defined; they only stated that they were created using Elliott Wave principles, with no further detail given.
Four Elliott Wave patterns were recognized and successfully extracted using ANN, ANN with a transfer function synthesized by analytic programming, and pseudo neural networks. This research concludes that Elliott Wave theory is feasible and recognizable.5
Atsalakis et al.6 used neuro-fuzzy systems with Elliott Wave theory to predict stock market behaviour. Neural networks try to imitate the architecture of the human brain. They are efficient in modeling non-linear problems, can learn by example, and are extremely useful in pattern recognition. Neuro-fuzzy networks aim to overcome the disadvantages of neural networks and fuzzy logic used in isolation. Adding an Elliott Wave oscillator helps identify third waves, which are the easiest to track.
This approach outlined the following basic Elliott Wave rules:
- Wave 2 may not move beyond the start of wave 1.
- Wave 3 may not be the shortest actionary wave of waves 1, 3, and 5.
- Wave 4 may not overlap wave 1 price territory.
- Wave 3 is usually the strongest wave, although sometimes wave 5 may be the strongest.
- Wave 5 usually moves beyond the end of wave 3.
This system tested stock data for the National Bank of Greece over 400 trading days. The stock was bought when the system forecasted positive prices, and the position was closed when the forecast turned negative. Short positions were not considered. The data set included an extreme bear market beginning in October 2007, yet the system yielded a positive return: the hit rate was 58.75%, and the return was 6.79% over a period in which the stock declined by 60.9% in value. The system considerably outperformed a buy-and-hold strategy.7
One benefit of neural networks in predicting complex dynamic systems is their flexibility; they make no assumptions about the underlying data-generating process.
Acknowledging that Elliott Wave theory is controversial, Jarusek et al.8 used the following Elliott Wave principles:
- A corrective reaction follows each action.
- Five waves (1-2-3-4-5) in the direction of the trend are followed by three waves against the trend (A-B-C).
- A movement of five waves in one direction followed by three waves in the opposite direction terminates the cycle.
- Wave 2 commonly retraces 0.618 of wave 1.
Using neural networks, a predictive system was developed to estimate the direction of EURUSD in a 60-minute time window. A neural network was trained on EURUSD, USDJPY, CHFJPY, and EURCHF from 2015 to 2018, looking at one-minute intervals. This yielded 30,000 time windows, each of 256 minutes. The system was then tested on randomly selected periods of 256 minutes each from 2014 to 2020.
The neural network looked for price movements with a 90% or greater similarity to Elliott Wave structures; however, the rules defining these Elliott Wave structures were not given. A time period of 256 minutes was used to identify any Elliott Wave pattern. If one was found, the following 60 minutes were observed to determine whether the predicted direction was correct.
The system entered short and long positions based upon recommendations from the neural network. Stop-loss and take-profit orders were placed according to a money management system.
A trading system using Elliott Wave was compared to one without Elliott Wave as a control. The system without Elliott Wave yielded an average of 1.1 dollars per trade, whereas the system using Elliott Wave generated an average of 1.5 dollars per trade. Using Elliott Wave improved prediction performance by 39%.
Taken together, these three Elliott Wave papers show that Elliott Wave structures can be extracted from charts, and that neural networks and fuzzy neural networks can be trained to identify those structures and predict the future direction of price. These predictions have an accuracy rate greater than a coin toss and can form the basis of a profitable trading system. More importantly, they suggest that Elliott Wave is a valid theory that can predict future price direction and improve traders’ profits.
Methodology
Elliott Wave analyses that met all Elliott Wave rules9 were completed on daily charts of a range of representative markets. MACD, as an indicator of momentum, was used to assist with the Elliott Wave count, identifying higher momentum during the third and fifth waves. An experienced analyst manually labeled the Elliott Wave counts using Motive Wave software. Motive Wave software automatically checks wave counts for adherence to Elliott Wave rules as an additional safeguard.
Markets analyzed were two indices (The Dow Jones Industrial Average and the S&P 500), two commodities (WTI Crude Oil and Gold), two forex pairs (EUR/USD and USD/JPY), and two cryptocurrencies (BTC/USD and ETH/USD). These markets were chosen to represent each significant market type using examples with substantial volume, as is required for reliable Elliott Wave analysis. All charts were of daily time periods.
Date ranges for each data set were as follows:
- DJIA: 1963 to October 2023 (60 years)
- S&P 500: 1966 to June 2023 (57 years)
- WTI Crude (spot): 1989 to May 2023 (34 years)
- Gold (spot): 1970 to May 2023 (53 years)
- EUR/USD: 1985 to June 2023 (38 years)
- USD/JPY: 1982 to October 2023 (41 years)
- Bitcoin BTC/USD: 2010 to April 2023 (13 years)
- Ethereum ETH/USD: 2015 to September 2023 (8 years)
The following data points were recorded from each Elliott Wave count.
1. Occurrence of each of the following corrective waves in positions 2, 4, A, and B:
a. Single zigzag
b. Double zigzag
c. Triple zigzag
d. Regular flat
e. Expanded flat
f. Running flat
g. Contracting triangle
h. Barrier triangle
i. Expanding triangle
j. Double combination
k. Triple combination
2. Occurrence of each of the following motive waves in positions 1, 3, 5, A, and C:
a. Impulse
b. Diagonal
3. Within each of the structures listed in data points 1 and 2 above, the length and depth of each wave were measured as ratios: 2 vs 1, 3 vs 1, 4 vs 3, 5 vs 1, 5 vs 3, B vs A, C vs A, C vs B, D vs C, E vs D, X vs W, Y vs W, X vs Y, and Z vs Y.
The orthodox depth regarding combinations and triangles refers to the depth calculated at the terminus of each combination or triangle structure, that is, the end of wave Y for a double combination or wave Z for a triple combination, and the end of wave E for a triangle. This depth will be different from the price extreme of the correction.
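Under our reading of this definition (an illustrative formulation rather than the authors' own notation), the orthodox depth can be written as:

\[
\text{orthodox depth} \;=\; \frac{\lvert P_{\text{correction start}} - P_{\text{terminus}} \rvert}{\lvert W_{\text{prior}} \rvert}
\]

where \(\lvert W_{\text{prior}} \rvert\) is the length of the prior actionary wave of the same degree and the terminus is the end of wave Y, Z, or E as described above, rather than the price extreme reached during the correction.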
STATA software was used to transform raw data into mean values, generate 95% confidence intervals, provide the percentage of occurrence for each Elliott Wave structure, and perform t-tests to test assumptions regarding wave lengths and depths.
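The study itself used STATA; as a rough illustration of the same workflow, the Python sketch below computes a mean, a 95% confidence interval, and a one-sample t-test for a hypothetical set of wave ratios. The data values and the 0.618 reference ratio are for illustration only.

```python
# Illustrative reproduction of the statistical workflow described above
# (the paper used STATA); the ratio values here are hypothetical.
import numpy as np
from scipy import stats

ratios = np.array([0.58, 0.61, 0.55, 0.63, 0.60, 0.66, 0.57, 0.62])  # e.g. wave 2 vs wave 1 depths

mean = ratios.mean()
sem = stats.sem(ratios)                                              # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(ratios) - 1, loc=mean, scale=sem)

# One-sample t-test of the assumption that the mean ratio equals 0.618
t_stat, p_value = stats.ttest_1samp(ratios, popmean=0.618)

print(f"mean={mean:.4f}, 95% CI=({ci_low:.4f}, {ci_high:.4f}), t={t_stat:.4f}, p={p_value:.4f}")
```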
Note: For waves that are obviously complete, the analyst’s wave count can be held with confidence. For more recent or incomplete waves, such confidence is not possible, so it is acknowledged that some data points may later need to be corrected. However, this would affect very few data points, as subdivisions within recent waves are likely to be correct, and only a few data points at larger degrees may be affected. Any incorrect data points should not materially affect the overall results within the total number of data points collected.
For price movements where the Elliott Wave structure was unclear at the daily time frame, the analyst left the structure unlabeled. Only structures that clearly fit Elliott Wave rules were labeled.
Results
A total of 8,432 data points were recorded.
Results for wave lengths and depths are presented as ratios in order to be consistent across markets that use different price metrics, such as points, pips, and dollars. Using ratios between waves also helped account for waves of varying timeframes.
Wave Lengths and Depth
Tables 1 to 4 give the mean length and depth of all observed Elliott Wave structures.
Table 1: Mean Length and Depth of Motive Waves as Ratio (std. dev)
Table 1 shows the extreme length of wave 3 vs 1 for BTC/USD and ETH/USD; these markets also exhibit mean lengths of wave 5 vs 1 that are greater than their mean lengths of wave 3 vs 1. Two markets, WTI Crude and BTC/USD, exhibit fifth waves that are, on average, longer than their third waves.
Table 2: Mean Length and Depth of Single and Multiple Zigzags as Ratio (std. dev)
Table 2 gives the mean length and depth of zigzags. On average, multiple zigzags are shallower than single zigzags, at 0.7319 for double zigzags and 0.6862 for triple zigzags, compared with an average of 0.9073 for single zigzags.
Table 3: Mean Length and Depth of Flats and Combinations as Ratio (std. dev)
Table 3 gives the mean length and depth of flats and combinations. Combinations tend to be shallower overall at 0.5149, compared to regular and expanded flats at 0.6320 and 0.6159, respectively. The data point for running flats is based on a single observation from BTC/USD.
Table 4: Mean Length and Depth of Triangles as Ratio (std. dev)
Table 4 gives the mean length and depth of triangles. Contracting and barrier triangles exhibit a similar orthodox depth at 0.3414 and 0.3547, respectively. As expected, considering the structure, expanding triangles exhibit a greater mean orthodox depth at 0.5126.
In Tables 1 to 4, mean and standard deviation values are given for raw data. Data is provided for each market analyzed and for all markets combined, so that the results may be transferable to other markets.
Tables 5 to 8 give 95% confidence intervals for wave lengths and depths.
Table 5: Length and Depth of Motive Waves: Range as 95% confidence interval (std. error)
In Tables 5 to 8, an asterisk (*) indicates data skewed in either direction; these data sets were transformed toward a normal distribution to obtain 95% confidence intervals. First, the natural log (ln) was applied to the samples; next, the 95% confidence interval was determined; then the natural anti-log (exp) was applied to this confidence spread to render actionable ratios.
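As a sketch of this three-step procedure (continuing the hypothetical Python illustration above; the ratio values are invented for demonstration):

```python
# Sketch of the skewed-data procedure described above: log-transform the ratios,
# compute the 95% confidence interval on the log scale, then exponentiate back.
import numpy as np
from scipy import stats

ratios = np.array([1.4, 1.8, 2.6, 1.6, 3.1, 1.9, 2.2, 1.7])   # e.g. wave 3 vs wave 1 lengths
log_ratios = np.log(ratios)                                    # step 1: natural log

mean_log = log_ratios.mean()
sem_log = stats.sem(log_ratios)
lo_log, hi_log = stats.t.interval(0.95, len(ratios) - 1, loc=mean_log, scale=sem_log)  # step 2

lo, hi = np.exp(lo_log), np.exp(hi_log)                        # step 3: anti-log back to a ratio
print(f"95% CI for the ratio: {lo:.3f} to {hi:.3f}")
```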
Table 5 gives the 95% confidence intervals for the length and depth of motive waves. Note the differences between observed markets. For example, the 95% confidence interval for wave 2 vs. 1 for Gold is 0.528 to 0.567, whereas wave 2 vs. 1 for EUR/USD is 0.605 to 0.640. There is no overlap between these intervals. These markets exhibit significantly different depths of their second wave corrections.
The 95% confidence interval for wave 3 vs. 1 for BTC/USD is 2.128 to 2.579, and for ETH/USD it is 2.051 to 2.574. There is no overlap between these intervals and the 95% confidence intervals for all other markets. This is a measure of the extreme volatility of cryptocurrencies.
Table 6: Length and Depth of Single and Multiple Zigzags: Range as 95% Confidence Interval (std. error)
Table 6 gives 95% confidence intervals for single and multiple zigzag depths. Although the mean depth exhibits a clear difference (see table 2), there is an overlap in the 95% confidence interval between single zigzags at 0.712 to 0.747 and triple zigzags at 0.386 to 0.745. However, there is no overlap between single and double zigzags at 0.582 to 0.654.
Table 7: Length and Depth of Flats and Combinations: Range as 95% Confidence Interval (std. error)
Table 7 gives 95% confidence intervals for length and depth of flats and combinations. Although these are assumed to be shallow corrections, the 95% confidence intervals for all combined data are greater than 0.5, except for the orthodox depth of combinations.
Table 8: Length and Depth of Triangles: Range as 95% Confidence Interval (std. error)
Table 8 gives 95% confidence intervals for the length and depth of triangles. Note that the orthodox depth of all but expanding triangles is shallower than the extreme of the structure, seen in wave B vs wave A. This is because the orthodox depth is measured from the terminus of the triangle at wave E.
Taken together, our results for both distributions and confidence intervals indicate strong statistical significance for every market, particularly in the more common structures such as impulses, zigzags, and flats.
Charts 1 and 2 show the raw data for the Gold market’s ratio of wave 3 length to wave 1 length. Chart 1 shows the arithmetic distribution; chart 2 shows the distribution after a logarithmic transformation. When a logarithmic function is applied to the data, a distribution closer to normal is seen.
Charts 3, 4, 5, 6, and 7 show combined results across all markets for wave length ratios 2 to 1, 3 to 1 (log), 4 to 3, 5 to 1 (log), and C to A (log). All charts exhibit a normal distribution with varying kurtosis.
Chart 8 shows the 95% confidence intervals of wave 2 vs 1 and wave 4 vs 3. The interval for wave 2 vs 1 is greater than that for wave 4 vs 3, and overall wave 2 vs 1 is deeper. The interval for wave 4 vs 3 is smaller and overall wave 4 vs 3 is shallower. There is some overlap between the 95% confidence intervals of the two data sets.
Hypothesis Testing
Table 9: Test Results
The ratio of wave 2 to wave 1 in an impulse is assumed to be 0.618. This assumption is rejected with a p-value of 0.0036 and a t-statistic of -2.9115. Since the t-statistic is moderately below -1, the difference between the hypothesised mean and the sample mean is moderate.
Instead, the analysis indicates a ratio of approximately 0.600.
The ratio of wave 3 to wave 1 in an impulse is assumed to be 1.618. Due to the skewed nature of the ratio of wave 3 to wave 1, it was best analysed as a log transformation. The hypothesis test was conducted against the natural log of 1.618, which is approximately 0.48. The null hypothesis is confidently rejected, with a p-value of 0.0000 and a t-statistic of 17.0059. This result suggests that the hypothesised mean and the sample mean are significantly different, with observed ratios ranging from approximately 1.889 to 1.967.
The ratio of wave 4 to wave 3 in an impulse is assumed to be 0.382. This assumption is rejected with a p-value of 0.0000 and a t-statistic of -23.4833. The findings suggest that the actual ratio is significantly different from the hypothesised value, with observed values in the range of 0.320 to 0.329, indicating a strong consistency within this lower range.
The ratio of wave 5 to wave 1 is assumed to be 1. Due to the skewed nature of the ratio of wave 5 to wave 1, it was best analysed using a log transformation, with the hypothesis tested against the natural log of 1, which is 0. The null hypothesis is rejected with a p-value of 0.0000 and a t-statistic of 6.8787. The observed mean ratio is approximately 1.1, indicating a moderate deviation from the hypothesised value.
The ratio of wave C to wave A is assumed to be 1. Due to the skewness of the ratio, the hypothesis test was conducted against the natural log of 1, which is 0. The p-value for this test is 0.2918, indicating that there is insufficient evidence to reject the null hypothesis. Therefore, the data is consistent with the assumed ratio of 1. Additionally, the 95% confidence interval for the length of wave C to wave A is 0.991 to 1.032, suggesting that the ratio is likely close to the hypothesised value.
Frequency of Elliott Wave Structures Within The Basic Pattern
Table 10: Frequency of Structures in Each Wave Position Within the Basic Structure
The occurrence of zigzags in wave positions 1, 3, and 5 is due to leading and ending diagonals.
It is assumed that wave 2 will most commonly subdivide as a single or multiple zigzag. The data supports this assumption; all zigzag types total 65.3% of structures in the second wave position.
It is assumed that wave 4 will most commonly subdivide as a flat, combination, or triangle. This assumption is rejected; the most common type of structure in the fourth wave position was a single or multiple zigzag at 62.2%, only 3.1% less than in the second wave position.
The wave position that exhibits the greatest variety in structure is wave B, with 59.5% of structures being a single or multiple zigzag and 40.5% a sideways structure such as a flat, combination, or triangle.
Conclusions
Elliott Wave theory can be applied to a range of markets and can produce Elliott Wave counts that meet all Elliott Wave rules.1
Additionally, this research demonstrates that Elliott Wave theory can reliably provide targets for waves using ratios within the basic Elliott Wave structure with normal distributions when adhering to Elliott Wave rules.
Most assumptions about wave lengths and depths of retracement are not supported by the data in this research, with the sole exception of wave C being most commonly equal in length to wave A.
Instead, this research has found statistically significant ranges for wave lengths and depths of retracement. These results are important in Elliott Wave analysis.
The second assumption found to be true is that wave 2 will most commonly subdivide as a single or multiple zigzag structure.
Moving through the basic structure:
- Wave 1 will most commonly (74.3%) subdivide as an impulse.
- Wave 2 will most commonly (65.3%) subdivide as a zigzag and will correct from 0.592 to 0.607 of wave 1 with 95% confidence.
- Wave 3 will be 1.889 to 1.967 the length of wave 1 with 95% confidence.
- Wave 4 will most commonly subdivide as a single or multiple zigzag (62.2%) and will retrace from 0.320 to 0.329 the length of wave 3 with 95% confidence.
- Wave 5 will most commonly (75.5%) subdivide as an impulse and will be 1.067 to 1.124 the length of wave 1 with 95% confidence, or 0.555 to 0.577 the length of wave 3 with 95% confidence.
Further, within each corrective structure, as it unfolds, the analyst may use the range of each wave length given in tables 6 to 8 as a guide to calculate targets for the end of each wave; this is expected to be particularly important for larger waves of higher degrees which unfold over many months or years.
Based on extensive data with both depth and breadth, the results of this research can be used to improve the accuracy of Elliott Wave analysis.
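As one possible application (a sketch only; the prices and the function are hypothetical, while the interval values are those reported in the conclusions above), an analyst could convert the reported intervals into target bands as an impulse unfolds:

```python
# Sketch: turning the reported 95% confidence intervals into target bands as an
# impulse unfolds. Prices and the function are hypothetical; the interval values
# are those listed in the conclusions above.

RATIO_BANDS = {
    "wave2_vs_1": (0.592, 0.607),   # depth of wave 2 relative to wave 1
    "wave3_vs_1": (1.889, 1.967),   # length of wave 3 relative to wave 1
    "wave4_vs_3": (0.320, 0.329),   # depth of wave 4 relative to wave 3
    "wave5_vs_1": (1.067, 1.124),   # length of wave 5 relative to wave 1
}

def wave3_target_band(w1_start: float, w1_end: float, w2_end: float) -> tuple:
    """Return the (low, high) target band for the end of wave 3 in an upward impulse."""
    w1_len = abs(w1_end - w1_start)
    lo, hi = RATIO_BANDS["wave3_vs_1"]
    return (w2_end + lo * w1_len, w2_end + hi * w1_len)

# Example with a hypothetical wave 1 from 100 to 120 and wave 2 ending at 108.
print(wave3_target_band(100.0, 120.0, 108.0))   # roughly (145.8, 147.3)
```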
Discussion
This research found some differences in each market it analyzed. Therefore, it is expected that different markets, for example, different indices of other countries, would exhibit further differences.
A deeper, searchable database is envisioned for Ellioticians to access to improve their Elliott Wave forecasting and target accuracy. Such a database could be queried for actionary wave lengths, reactionary wave depths, and probable corrective structures for different Elliott Wave degrees in different markets.
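One minimal way such a database might be keyed (a hypothetical sketch, not an existing product) is as a lookup from market, wave position, and structure to the observed statistics:

```python
# Hypothetical sketch of how the envisioned searchable database might be keyed:
# (market, wave position, structure) -> observed statistics. The values shown
# are placeholders, not figures from this study.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WaveStats:
    mean_ratio: float   # mean length or depth ratio
    ci_low: float       # lower bound of the 95% confidence interval
    ci_high: float      # upper bound of the 95% confidence interval
    frequency: float    # probability of this structure in this wave position

database = {
    # placeholder entry only
    ("S&P 500", "wave 2", "single zigzag"): WaveStats(0.60, 0.59, 0.61, 0.50),
}

def lookup(market: str, position: str, structure: str) -> Optional[WaveStats]:
    """Return stored statistics for a market / wave position / structure, if present."""
    return database.get((market, position, structure))

print(lookup("S&P 500", "wave 2", "single zigzag"))
```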
Elliott Wave has great potential to completely change economic forecasting techniques. Combined with other methods, it could prove exceptionally accurate for any enterprise looking to manage risk and capital.
Next, these results could be applied to machine learning models and incorporated into other risk and capital strategies. Using neural networks or neural fuzzy networks, a system could be trained to perform in-depth Elliott Wave analysis from the Elliott Wave counts in this research. The system may then be able to complete Elliott Wave counts as future price unfolds, yielding Elliott Wave analysis with a high probability of accurate forecasting.
With Elliott Wave’s particular emphasis on herd mentality within a population, economists could combine Elliott Wave with other fundamental indicators to help prepare for stress tests and financial crises in critical markets.
Resources
Atsalakis, G. S., Dimitrakakis, E. M., and Zopounidis, G. D. 2011. Elliott Wave Theory and neuro-fuzzy systems, in stock market prediction: The WASP system. Expert Systems with Applications 38 (2011): 9196-9206.
Jarusek, R., Volna, E., and Kotyrba, M. 2022. FOREX rate prediction improved by Elliott wave patterns based on neural networks. Neural Networks 145 (2022): 342-355.
Iriarte, L. 2021. Pure Elliott Wave. Surfari Press.
Volna, E., Kotyrba, M., Oplatkova, Z. K., and Senkerik, R. 2018. Elliott waves classification using neural and pseudo neural networks. Soft Computing 22 (2018): 1803-1813.