JOURNAL OF
TECHNICAL ANALYSIS
Issue 70, Spring 2020
Editorial Board
Eric Grasinger, CFA, CMT
Managing Director, Portfolio Manager, Glenmede
Bruce Greig, CFA, CMT, CIPM, CAIA
Director of Research, Q3 Asset Management
Jerome Hartl, CMT
Vice President, Investments, Wedbush Securities
Ryan Hysmith, DBA, CMT
Assistant Professor of Finance, Freed-Hardeman University
Cynthia A. Kase, CMT, MFTA
Expert Consultant
Sergio Santamaria, CMT, CFA
University of Arkansas
Paul Wankmueller, CMT
Director of Business Development
CMT Association, Inc.
25 Broadway, Suite 10-036, New York, New York 10004
www.cmtassociation.org
Published by Chartered Market Technician Association, LLC
ISSN 2378-7295 (Print)
ISSN 2378-7341 (Online)
The Journal of Technical Analysis is published by the Chartered Market Technicians Association, LLC, 25 Broadway, Suite 10-036, New York, NY 10004. Its purpose is to promote the investigation and analysis of the price and volume activities of the world’s financial markets. The Journal of Technical Analysis is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, and several other countries in Europe and Asia. Journal of Technical Analysis is copyrighted by the CMT Association and registered with the Library of Congress. All rights are reserved.
Letter from the Editor
by Sergio Santamaria, CMT, CFA
As the new chair of the Editorial Committee, it is an immense pleasure to present the 70th issue of the Journal of Technical Analysis (JOTA). Continuing the exceptional work accomplished by prior editorial teams, the new JOTA editorial board will focus on producing a journal that advances the knowledge and understanding of the practice of technical analysis through the publication of well-crafted, high-quality papers in all areas of technical analysis and behavioral finance. While former MTA members were the natural audience when the journal was released in 1978, nowadays the readership reaches far beyond the CMT Association and includes both practitioners and academics worldwide.
This special edition contains the four most recent Charles H. Dow Award winning papers. First, Charles Bilello and Michael A. Gayed, previous winners in 2014 with “An Intermarket Approach to Beta Rotation: The Strategy, Signal and Power of Utilities,” discuss how a low volatility equity market environment is critical to benefit from leverage in their 2016 award-winning paper “Leverage for the Long Run.” Second, the 2017 Dow Award paper “Forecasting a Volatility Tsunami” by Andrew Thrasher suggests that the dispersion of volatility (measured by the VIX index) could predict future VIX spikes. Third, Gioele Giordano shows how, despite the unstoppable global shift towards passive investment strategies, a dynamic asset allocation using low-cost ETFs that cover US equities, international equities, US bonds, international bonds, real estate, natural resources and cash could beat the market on a risk-adjusted basis. His paper “Ranked Asset Allocation Model” achieved the 2018 Dow Award. Fourth, Christopher Diodato provides short and long-term capitulation oscillators to identify extreme market panics in his 2019 Dow Award paper “Making the Most of Panic – Exploring the Value of Combining Price & Supply/Demand Indicators.”
In addition to the four Dow Award papers, two other very interesting articles are included in this issue. Arthur Hill proposes an innovative use of the relative strength index (RSI) to discover the dream of any technical analyst: sustainable trends with robust momentum. His work, “Finding Consistent Trends with Strong Momentum – RSI for Trend-Following and Momentum Strategies,” will show the reader that the RSI might be more than a traditional reversal indicator. In “The Virtual Crowd: Measuring the Depth of Investor Sentiment with Normalized Relative Volume,” Jason Meshnick introduces the normalized relative volume, an original indicator that measures the depth of sentiment of market participants using volume.
Finally, I would like to extend my enormous thanks to all the individuals who have contributed to a new edition of the JOTA. First, to the authors for sharing their knowledge and pioneering ideas with the broader investment community. Second, to my outstanding editorial board colleagues who provide their invaluable expertise to make sure that all the papers are subjected to a double-blind review process. Third, to the terrific staff at the CMT Association (especially to Chelsey Clevenger) for making the production and distribution process possible. If you are interested in sharing your ideas with the Journal of Technical Analysis readers, including about 5,000 CMT members in 137 countries, please feel free to contact me at journal@cmtassociation.org.
Sincerely,
Leverage for the Long Run
by Charlie Bilello, CMT
About the Author | Charlie Bilello, CMT
Charlie Bilello, who holds the Chartered Market Technician (CMT) designation, is the Director of Research at Pension Partners, LLC, where he is responsible for strategy development, investment research and communicating the firm’s investment themes and portfolio positioning to clients. Prior to joining Pension Partners, he was the Managing Member of Momentum Global Advisors, an institutional investment research firm. Previously, Charlie held positions as an Equity and Hedge Fund Analyst at billion-dollar alternative investment firms, giving him unique insights into portfolio construction and asset allocation.
Mr. Bilello holds a J.D. and M.B.A. in Finance and Accounting from Fordham University and a B.A. in Economics from Binghamton University. In addition to his CMT designation, Charlie also holds the Certified Public Accountant (CPA) certificate.
by Michael Gayed, CFA
About the Author | Michael Gayed, CFA
Michael A. Gayed is Portfolio Manager at Toroso Investments, an investment management company specializing in ETF focused research, investment strategies and services designed for financial advisors, RIAs, family offices and investment managers.
Prior to Toroso Investments, Michael was the Co-Portfolio Manager and Chief Investment Strategist at Pension Partners, LLC, an investment advisor managing mutual funds and separate accounts.
He is the co-author of four award-winning research papers on market anomalies and investing. Michael is an active contributor to MarketWatch and has been interviewed on CNBC, Bloomberg, and Fox Business, as well as the Wall Street Journal Live for his unique approach to interpreting market movements. His analysis has also been featured by Marc Faber of the Gloom, Boom and Doom Report.
Michael earned his Bachelor of Science degree with a double major in Finance & Management at NYU Stern School of Business. Michael became a CFA Charterholder in 2008 and is a two-time winner of the Charles H. Dow Award (2014, 2016).
Abstract
Using leverage to magnify performance is an idea that has enticed investors and traders throughout history. The critical question of when to employ leverage and when to reduce risk, though, is not often addressed. We establish that volatility is the enemy of leverage and that streaks in performance tend to favor the use of margin. The conditions under which higher returns would be achieved from using leverage, then, are low volatility environments that are more likely to experience consecutive positive returns. We find that Moving Averages are an effective way to identify such environments in a systematic fashion. When the broad U.S. equity market is above its Moving Average, stocks tend to exhibit lower than average volatility going forward, higher daily average performance, and longer streaks of positive returns. When below its Moving Average, the opposite tends to be true, as volatility tends to rise, average daily returns are lower, and streaks in positive returns become less frequent. Armed with this finding, we develop a strategy that employs leverage when the market is above its Moving Average and deleverages (moving to Treasury bills) when the market is below its Moving Average. The strategy shows better absolute and risk-adjusted returns than a comparable buy-and-hold unleveraged strategy as well as a constant leverage strategy. The results are robust to various leverage amounts, Moving Average time periods, and across multiple economic and financial market cycles.
Introduction
Using leverage to magnify performance is an idea that has enticed investors and traders throughout history. The concept is simple enough: borrowing funds allows you to buy more of an asset than with cash alone, multiplying the effect of any gains and losses. The critical question of when to employ leverage and when to reduce risk, though, is not often addressed. Under academic theory, one cannot develop a strategy to time the use of leverage due to market efficiency and the randomness of security prices.
We find strong evidence to the contrary. Security prices are non-random and tend to exhibit trends over time as well as volatility regimes under which leverage is more or less beneficial. As such, one can combine these two concepts to create a strategy that employs the use of leverage only during periods which have a higher probability of success. In doing so, one can achieve higher returns with less risk than a comparable buy and hold strategy. This is the primary focus of our paper: systematically determining environments favorable to leverage and developing a strategy to exploit them. The idea that you can achieve a higher return with less risk stands in direct conflict with the Capital Asset Pricing Model (CAPM). Developed in the early-to-mid 1960s, the CAPM dictates that the expected return for a given security should be determined by its level of systematic risk, or Beta. A linear relationship is said to exist between Beta and return, which is represented in chart form as the Security Market Line (SML). The SML progresses linearly (up and to the right at a 45-degree angle) whereby the higher the Beta, the higher the expected return.
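For reference, the CAPM relation that traces out the SML is the standard textbook formula:

E(R_i) = R_f + β_i [E(R_m) − R_f]

where R_f is the risk-free rate and E(R_m) is the expected return of the market; expected return rises linearly with Beta.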
Though still widely regarded as one of the key tenets of Finance, the CAPM has been challenged by a number of studies over the years. Empirical research has shown that anomalies such as the small firm effect, the value effect, and the momentum effect cannot be explained by the CAPM.1
The low volatility anomaly has also called into question the presumed absolute relationship between risk and return. Low volatility stocks have exhibited above market performance with lower than market Beta, challenging the risk/return laws of the CAPM.2 Similarly perplexing is the tendency for high beta stocks to exhibit lower performance than predicted by their level of risk.3
In this paper, we propose an additional factor that is unexplained by traditional Finance theory: the volatility and leverage anomaly that allows for long-run outperformance using leverage. Key to any study which counters efficient markets is understanding what allows for the anomaly to exist. We propose that the combination of structural and behavioral conservatism in the use of leverage brings with it inefficiencies which are not easily arbitraged away.
In addition to facing margin requirements, certain institutional investors such as pension plans, mutual funds, and endowments are simply unable to borrow money to invest beyond their portfolio’s asset value based on stated mandates and regulatory requirements. For those institutional investors who do not face such restrictions, leverage brings with it new sets of risks, including “costs of margin calls, which can force borrowers to liquidate securities at adverse prices due to illiquidity; losses exceeding the capital invested; and the possibility of bankruptcy.”4 In the case of hedge funds, for example, the “fragile nature of the capital structure, combined with low market liquidity, creates a risk of coordinated redemptions among investors that severely limits hedge funds arbitrage capabilities. The risk of coordination motivates managers to behave conservatively [in their usage of maximum leverage].”5
Leverage aversion is also due to innate behavioral biases. The availability heuristic is a mental rule of thumb whereby individuals use the first immediate example that comes to mind when evaluating a topic or making a decision. Oftentimes, the most extreme negative events become the first things considered. The words leverage and margin make most investors immediately think of historically catastrophic events, loss, or the risk of ruin, creating a natural aversion to using borrowed money to generate excess returns. Some of the most prominent examples that come to mind include:
- The 1929 stock market crash
- The 1987 stock market crash
- The 1998 Long-Term Capital Management blowup
- The 2007 Quant Quake
- The 2007-2009 Financial Crisis
Leverage aversion is understandable given these traumatic events, but it is not “rational” as Finance theory assumes. In theory, when presented with the option to construct an unleveraged portfolio or a leveraged one, mean-variance optimization views the two as equal so long as the expected return and volatility of the two portfolios remains the same.6
The fact that leverage is used becomes irrelevant, which means there should be no preference between the two portfolios. In practice, however, investors are more likely to avoid the leveraged portfolio despite having the same risk/return characteristics of the unleveraged one.
Prior studies on the use of leverage to enhance risk/return in a portfolio have primarily been centered on low beta stocks7 and risk parity.8 These studies suggest benefits to leveraging lower beta assets which investors, due to leverage aversion, are either unable or unwilling to do. To the best of our knowledge, however, there have not yet been studies using technical indicators which evaluate the potential timing benefits of using leverage purely on the stock market itself to not just increase return, but also generate higher risk-adjusted performance.
In this paper, we propose that using widely-referenced Moving Average indicators for evaluating stock market trends can greatly enhance not just return, but produce higher risk-adjusted performance beyond what the CAPM and Modern Portfolio Theory would argue is possible. To do this, however, we first need to dispel mistaken beliefs about leverage and Moving Averages independently to better understand exactly why a strategy which leverages or deleverages based on Moving Averages produces superior results over time.
Volatility and the Importance of Path
While the availability heuristic may result in thinking of leverage in terms of a constant source of significant risk, objective quantitative analysis can help us identify exactly what causes leverage to result in loss, and under what conditions leverage is beneficial. In this paper, we focus on daily re-leveraging of the multiplier (ex: tracking 1.25x, 2x, or 3x the S&P 500 daily total return), which is the most commonly used time frame in leveraged mutual funds and Exchange Traded Funds (ETFs).
One of the mistaken notions about daily re-leveraging is the idea that there is some form of natural decay. This is the belief that over time the cumulative returns from such rebalancing will end up moving towards zero or at the very least being considerably less than a constant buy and hold strategy. Going back to 1928, we find this is simply not the case.9 A daily leveraged buy and hold of the S&P 500 would have significantly outperformed the unleveraged strategy, by multiples in excess of the leverage factor.10 We observe this in Table 1, where the 3x leveraged cumulative return since 1928 is an astonishing 290 times that of the unleveraged S&P 500.
Table 1: S&P 500 vs. Daily Leveraged S&P 500, Growth of $1 (October 1928 – October 2015)
This clearly illustrates the point that there is no natural form of decay from leverage over time.
The idea that leverage is only suitable for very short-term trading is false when looking at how daily leveraging can perform over long-term cycles. That is not to say that leverage is without risk, just that the source of that risk does not come from some inherent decay.
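To make the daily re-leveraging arithmetic concrete, a minimal sketch is given below. The `daily_returns` input is a placeholder for any series of S&P 500 daily total returns (not the paper's actual data); the steady hypothetical uptrend used here simply illustrates that compounding a trending series at 2x or 3x produces no inherent decay.

```python
import numpy as np

def leveraged_growth(daily_returns, multiplier, annual_fee=0.0):
    """Compound a daily re-leveraged position: each day earns
    multiplier * underlying return, less a pro-rated leverage fee."""
    daily_fee = annual_fee / 252.0  # assumes 252 trading days per year
    growth = 1.0
    for r in daily_returns:
        growth *= 1.0 + multiplier * r - daily_fee
    return growth - 1.0  # cumulative return

# A hypothetical year of steady +0.4% days: leveraged compounding
# beats the leverage multiple rather than decaying toward zero.
trend = [0.004] * 252
for k in (1.0, 1.25, 2.0, 3.0):
    print(f"{k:>4}x: {leveraged_growth(trend, k):+.1%}")
```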
What does cause significant problems for constant leverage over time is volatility. Daily re-leveraging combined with high volatility creates compounding issues, often referred to as the “constant leverage trap.”11 When the path of returns is not trending but alternating back and forth between positive and negative returns (seesawing action), the act of re-leveraging is mathematically destructive. The reason: you are increasing exposure (leveraging from a higher level) after a gain and decreasing exposure (leveraging from a lower level) after a loss, again and again. An example from recent history will make this point clearer.
In August 2011, the S&P 500 experienced extremely high volatility, where over a six-day period the annualized standard deviation was above 75%. The cumulative return for the S&P 500 over these six trading days was a positive 0.51%, but the leveraged returns fell far short of multiplying this return as we see in Table 2. Using 1.25x leverage, the total return was still positive but less than the unleveraged return at 0.46%. When 2x and 3x leverage was applied, the cumulative returns actually turned negative even though the unlevered return was positive.
Table 2: S&P 500 vs. Daily Leveraged S&P 500 (August 8, 2011 – August 15, 2011)
Why is this the case? After the 6.65% decline on August 8, instead of increasing leverage as would occur naturally from a decline in one’s equity, leverage is reset to the lower asset base. Exposure is effectively being reduced ahead of the gain of 4.74% on August 9. Next, after the gain on August 9, instead of decreasing leverage as would occur naturally from an increase in one’s equity, leverage is reset to the higher asset base. Exposure is now increasing ahead of the loss of 4.37% on August 10.
The more leverage that is applied, the more pernicious the constant leverage trap. This is why the 3x leveraged S&P 500 performs worse than the 2x leveraged and the 2x leveraged performs worse than 1.25x leveraged. Additionally, the higher the volatility in the path of returns, the more harmful such compounding issues become, as we will see in the next example.
High volatility and seesawing action are the enemies of leverage, while low volatility and streaks in performance are its friends. We can see this clearly in Table 3. With the same unleveraged cumulative return of 19.41%, the four paths illustrated have different leveraged returns. In both Path 1 and Path 2, the S&P 500 is positive for six consecutive days, but the lower volatility Path 1 achieves a higher return. Both Path 1 and Path 2 perform better than the leverage multiplier as the constant re-leveraging benefits from compounding. The opposite is true in Path 3 and Path 4, which have alternating positive and negative returns during the first five days. These paths fall directly into the constant leverage trap and the highest volatility Path 4 is hurt the most when leverage is applied.
Table 3: S&P 500 vs. Daily Leveraged S&P 500 – Path Dependency, Volatility and Leverage
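The path dependency in Table 3 can be reproduced in flavor (though not in the authors' exact figures) with invented paths that share one unleveraged cumulative return. Both hypothetical paths below compound to roughly +19.4% unleveraged; only their leveraged outcomes differ.

```python
import numpy as np

def lev_cum(returns, k):
    """Cumulative return of a daily re-leveraged position (no fees)."""
    return np.prod(1.0 + k * np.asarray(returns)) - 1.0

streak = [0.03] * 6            # six consecutive +3% days
seesaw = [0.25, -0.15128] * 3  # violent alternation, same endpoint

for name, path in (("streak", streak), ("seesaw", seesaw)):
    print(name, {k: f"{lev_cum(path, k):+.1%}" for k in (1, 2, 3)})
# streak: 3x earns about +67.7%, more than 3 times the +19.4% base.
# seesaw: 3x turns negative (about -12.7%) despite the same +19.4%
# underlying gain -- the constant leverage trap in miniature.
```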
Volatility and streakiness are related, as we will show in the next section. The reason for this, in our view, is behavioral. High volatility environments tend to be characterized by investor overreaction, which is more prone to back-and-forth market movement. In contrast, low volatility environments are more consistent with investor underreaction, which in turn results in more streaks of consecutive up days. The autocorrelation exhibited by stocks in low volatility regimes is an important precondition under which leveraged strategies perform well, as streaks present themselves and leverage best takes advantage of them.12 As we have shown in this section looking at only six trading days, different return scenarios have a large impact on cumulative returns. As such, over considerably longer stretches of time than those illustrated here, path dependency and volatility only heighten the disparity among path scenarios.
The conclusion here is that the popular belief that leveraging results in decay over time is a myth; performance over time has nothing to do with time itself, but rather with:
- The behavior of the underlying asset in its overall trend.
- The path of daily returns (streaks versus seesawing action).
- Whether the volatility regime under which leverage is utilized is high or low.

Given that higher volatility is the enemy of leverage precisely because of the constant leverage trap, we next examine a systematic way of identifying lower volatility regimes with higher streak potential.
The Trend is Your [Downside Protection] Friend
While smoothing out a data series in statistics may not seem like anything groundbreaking, in the world of investing not a day goes by where the market’s Moving Average isn’t referenced. The first analysis of Moving Averages in the stock market dates all the way back to 1930.13 In their seminal work, “Technical Analysis of Stock Trends,” Edwards and Magee refer to the Moving Average as a “fascinating tool” that “has real value in showing the trend of an irregular series of figures (like a fluctuating market) more clearly.”14 They go on to define “uptrends” as periods when the price “remains above the Moving Average Line” and “downtrends” as periods when the price “remains below the Moving Average.” As the saying goes, the trend is your friend until it ends, and Moving Averages are among the most popular ways of systematically identifying whether stocks are in an uptrend, or downtrend.15
Despite the intuitive logic that Moving Averages can help an investor make more money by participating in an uptrend, empirical testing suggests this view is not entirely accurate. A trading rule which buys the S&P 500 Index above its 200-day Moving Average and sells the S&P 500 Index (rotating into 3-month Treasury bills) below its 200-day Moving Average illustrates this point. If Moving Averages were about outperforming on the upside, they should have produced significant excess returns during powerful bull markets like those experienced in the 1990s, 2002 through 2007, and 2009 through 2015.16
As shown in Table 4, using a simple 200-day Moving Average in these Bull Market periods indicates that such a strategy underperforms a buy and hold approach in strong periods of trending markets, despite the indicator’s purported use as a trend indicator. This analysis assumes no cost to execute. The differential between the two increases once commissions, slippage, and taxes are incorporated, suggesting the Moving Average strategy in practice would likely significantly underperform.
Table 4: S&P 500 vs. S&P 500 200-day Moving Average Rotation (Selected Bull Markets)
If it’s not about outperforming on the upside, what is the true value in using Moving Averages to “follow the trend?” As Jeremy Siegel notes in “Stocks for the Long Run,” the “major gain of the [Moving Average] timing strategy is a reduction in risk.”17 We can see this in Table 5 which shows how the same strategy performed during Bear Market periods. The outperformance here is substantial, indicating that the Moving Average is more about downside preservation than upside participation.
Table 5: S&P 500 vs. S&P 500 200-day Moving Average Rotation (Selected Bear Markets)
A Non-Random Walk Down Wall Street
Beyond being an effective risk management tool, Moving Averages also provide important clues about stock market behavior. If stock prices moved in a “random walk” as was asserted by Samuelson and others, trends would not persist and there would be no differentiation in behavior above and below Moving Averages.18 We find that is not the case, affirming the work of Lo and MacKinlay in a “Non-Random Walk Down Wall Street.”19
Chart 1 shows the annualized volatility of the S&P 500 Index above and below various popular Moving Average time frames, going back to October 1928. As confirmed by Monte Carlo simulations, irrespective of which Moving Average interval is used, the underlying finding remains the same: when stocks trade below their Moving Average, volatility going forward is considerably higher than when stocks trade above their Moving Average.20
Chart 1: S&P 500 Annualized Volatility Above/Below Moving Averages (October 1928 – October 2015)
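A compact way to approximate the Chart 1 computation is sketched below, assuming `px` is a pandas Series of daily S&P 500 total return index levels; the data source and exact figures are not reproduced here.

```python
import numpy as np
import pandas as pd

def vol_above_below(px: pd.Series, window: int = 200):
    """Annualized volatility of next-day returns, conditioned on the
    close being above or below its simple moving average."""
    ma = px.rolling(window).mean()
    next_ret = px.pct_change().shift(-1)      # the following day's return
    above = (px > ma) & ma.notna()
    below = (px <= ma) & ma.notna()
    annualize = lambda r: r.std() * np.sqrt(252)
    return annualize(next_ret[above]), annualize(next_ret[below])

# Usage sketch (hypothetical file name):
# px = pd.read_csv("sp500_tr.csv", index_col=0, parse_dates=True)["close"]
# vol_up, vol_down = vol_above_below(px, window=200)
```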
We propose that there is a fundamental underpinning to this stark differentiation in behavior. Since 1928, the U.S. economy has experienced 14 recessions, spending approximately 19% of the time in contraction. In Chart 2, we see that during recessionary periods, the S&P 500 has traded below its 200-day Moving Average 69% of the time versus only 19% of the time during expansions. During expansionary periods, the S&P 500 has traded above its 200-day Moving Average 81% of the time versus only 31% of the time in recession. The uncertainty in growth and inflation expectations that accompanies periods of economic weakness is, in our view, what leads to investor overreaction and increased beta volatility.
Chart 2: Recession, Expansions and Moving Averages (October 1928 – October 2015)
This is important because high volatility and uncertainty have not typically been constructive for equity markets. We can observe this in Chart 3 which shows the significant disparity in S&P 500 returns between periods when it is above and below various Moving Averages.
Chart 3: S&P 500 Annualized Return Above/Below Moving Averages (October 1928 – October 2015)
Viewing the Moving Average as a volatility indicator more so than a trend identifier helps explain how Moving Average strategies can underperform in strong equity bull markets which have little to no volatility in hindsight. If the market is in an unrelenting up phase, a decline below the Moving Average results in a sell trigger which ends up being a false positive, resulting in missing out on subsequent returns for a moment in time. Over the course of a full economic and market cycle, however, where uptrends are interrupted by periods of volatility, the Moving Average can help limit equity exposure to environments which most favor return generation.
The purpose of showing how an unleveraged Moving Average timing strategy performed in strong bull markets (Table 4) is to separate the strategy's implementation from the observation about market behavior contained in the signal itself. The key to exploiting the Moving Average in strategy form is less about being exposed to equities above it, and more about avoiding higher equity volatility below it.
More than that, Moving Averages can help investors mitigate the potential for loss aversion to result in sub-optimal portfolio decision making. Chart 4 shows that historically, the worst 1% of trading days have occurred far more often than not below the Moving Average. Included in this list are the two worst days in market history, October 19, 1987 and October 28, 1929. Entering both of these historic days, the market was already trading below all of its major Moving Averages (10-day through 200-day). While not of use for true buy and hold investors with an infinite time horizon, to the extent that Moving Averages can help sidestep such extreme down days, the power of the indicator remains in mitigating downside more so than participating in the upside.
Chart 4: S&P 500, Worst 1% of Trading Days
Extreme down days are consistent with investor fear and overreaction, while gradual movements higher are characterized by investor underreaction, which is well documented in academic literature related to behavioral finance. Important to this dynamic is the likelihood of consecutive up days depending on whether the market is above or below its Moving Average. Chart 5 illustrates that the S&P 500 is much more likely to experience consecutive up days when it is above its Moving Average than below it.
Chart 5: S&P 500 Consecutive Positive Day Streaks (% of Time) (October 1928-October 2015)
This suggests that we should view the stock market as not simply being in a trend when above the Moving Average, but rather as being in an environment which favors lower volatility and higher potential for consecutive positive returns. These are the two characteristics which are of critical importance for a strategy which utilizes leverage.
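The streak tabulation behind Chart 5 might be sketched as follows, with `px` again a placeholder daily price series; the exact percentages depend on the data and the streak length chosen.

```python
import pandas as pd

def up_streak_share(px: pd.Series, window: int = 200, length: int = 3):
    """Share of days completing `length` consecutive up days, split by
    position relative to the moving average when the streak began."""
    up = (px.pct_change() > 0).astype(int)
    completed = up.rolling(length).sum() == length
    above_at_start = (px > px.rolling(window).mean()).shift(length)
    return {
        "above MA": completed[above_at_start == True].mean(),
        "below MA": completed[above_at_start == False].mean(),
    }
```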
The Leverage Rotation Strategy
We have established that leverage performs best when in a low volatility environment with a higher probability of positive performance streaks. By extension, it performs worst during periods of extreme volatility and choppier asset class behavior.
Our systematic Leverage Rotation Strategy (“LRS”) is as follows:
When the S&P 500 Index is above its Moving Average, rotate into the S&P 500 and use leverage to magnify returns.
When the S&P 500 Index is below its Moving Average, rotate into Treasury bills.
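Read literally, the two rules reduce to a few lines of pandas. The sketch below is an approximation, not the authors' code: each day's position follows the prior close's signal, Treasury bill returns are supplied as a daily series aligned to the price index, and the paper's 1% annual leverage fee is pro-rated daily. Setting `multiplier=1.0` recovers the unleveraged timing rule discussed below.

```python
import pandas as pd

def lrs_equity_curve(px: pd.Series, tbill_daily: pd.Series,
                     multiplier: float = 2.0, window: int = 200,
                     annual_fee: float = 0.01) -> pd.Series:
    """Leverage Rotation Strategy: leveraged S&P 500 exposure above the
    moving average, Treasury bills below it. Returns growth of $1."""
    above = px > px.rolling(window).mean()
    lev_ret = multiplier * px.pct_change() - annual_fee / 252
    strat = lev_ret.where(above.shift(1), tbill_daily)  # prior-day signal
    return (1.0 + strat.fillna(0.0)).cumprod()
```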
Before illustrating the results of the LRS, it is important to examine how unleveraged timing based on the Moving Average alone performs going back to 1928. The data is summarized in Table 6. Note that the Moving Average rule results in higher risk-adjusted performance and lower drawdowns versus buy and hold, generating significant positive alpha in all Moving Average periods examined (10-day through 200-day).
Table 6: Unleveraged Buy and Hold versus Unleveraged Moving Average Timing (October 1928 – October 2015)
The key finding here is not about absolute return but risk-adjusted performance. To further illustrate this point, the Moving Average rule beats buy-and-hold in absolute performance in only 49% of rolling 3-year periods.21 This is due to the fact that Moving Averages can whipsaw market participants in sideways or trendless periods. However, the true value here is in terms of risk-adjusted outperformance, where the Moving Average rule beats the market in 69% of rolling 3-year periods.22
This is of critical importance because of the unspoken flaw in buy and hold: that almost nobody holds through large drawdowns. Chart 6 illustrates drawdowns since 1928 in the S&P 500 (yellow line) and the 200-day timing strategy (blue line). During all large Bear Markets, the 200-day strategy significantly truncates the downside. This is the major value in the Moving Average: minimizing risk and increasing the likelihood of sticking with a strategy over time.
Chart 6: S&P 500 vs. 200-day Timing, Rolling Drawdown
While unleveraged strategies using the Moving Average as a timing indicator suggest one can achieve similar returns to buy-and-hold over time but with less volatility and drawdown, the question of how to generate higher returns using the market itself can only be answered with leverage.
From October 1928 through October 2015, a buy and hold strategy using leverage (with a 1% annual expense) significantly outperforms the unleveraged strategy. There is no free lunch here, though, as the annualized volatility is multiples of the unleveraged index, leading to inferior risk-adjusted performance and larger drawdowns (see Table 7).
Table 7: Unleveraged Buy and Hold versus Leveraged Buy and Hold
While an annualized return of 15.3% for the 3x strategy sounds irresistible, the reality is that few would have stuck with a return path that incurred multiple 50+% drawdowns over time. Leveraged buy and hold only magnifies the major flaw of buy and hold. We can see this more clearly in Chart 7.
Chart 7: S&P 500 vs. Leveraged S&P 500 – Drawdown
Alternatively, by incorporating the LRS, we can harness the power of leverage while increasing the odds of an investor sticking to the portfolio over time. Although shorter Moving Averages achieve similarly strong results, we will narrow our focus here to the 200-day Moving Average as it incurs the fewest transaction costs (average of 5 trades per year) and is most applicable in time frame to both traders and investors. We will also assume for the purposes of this section a leverage fee of 1% per year, which approximates the current expense ratio for the largest leveraged ETFs.23
As shown in Table 8, as compared to a buy and hold of the S&P 500 and leveraged buy and hold, the LRS achieves:
- improved absolute returns
- lower annualized volatility
- improved risk-adjusted returns (higher Sharpe/Sortino)
- lower maximum drawdowns
- reduced Beta
- significant positive alpha
Table 8: Unleveraged Buy and Hold versus Leveraged Rotation Strategies (Oct 1928 – Oct 2015)
Chart 8 displays the growth of $10,000 from October 1928 through October 2015. A buy and hold of the S&P 500 grows to over $19 million while the 1.25x, 2x and 3x LRS grow to over $270 million, $39 billion and $9 trillion respectively.
Chart 8: Growth of $10,000 – S&P 500 vs. LRS (October 1928-October 2015)
In Chart 9, we see that this outperformance is consistent over time and through multiple economic and financial market cycles. On average, the LRS outperforms the S&P 500 in 80% of rolling 3-year periods.
Chart 9: Rolling 3-Year Outperformance (LRS – S&P 500)
We also see in Table 9 that during the four worst bear markets in U.S. history, all of the Leverage Rotation Strategies have lower maximum drawdowns than an unleveraged buy and hold of the S&P 500.
Table 9: Maximum Drawdown during Bear Markets, S&P 500 vs. LRS
Another way to view this is the time it takes to reach new highs after a Bear Market. If we look at the worst Bear Markets in history, the LRS reaches new highs ahead of buy and hold in every example, from 1.25x through 3x. After peaking in September 1929, a buy and hold of the S&P 500 did not reach new highs until 1946, while the 1.25x and 2x LRS reached new highs ten years earlier, in 1936. Using constant leverage, the 3x S&P 500 without a rotation strategy did not reach new highs after both the 2000-02 and 2007-09 Bear Markets (N/A in Table 10), illustrating the strong need for timing leverage in different volatility regimes.
Table 10: Bear Markets and New High Dates, S&P 500 vs. LRS
Modern Day Implementation
Going back to 1928, executing the Leverage Rotation Strategy as outlined would have been hampered by higher transaction costs, increased slippage, and higher costs of leverage. In today’s market, all of these issues have been minimized. Transaction costs and slippage are considerably less and the cost of leverage has decreased substantially. Currently, the lowest cost and most efficient way to replicate the strategies discussed in this paper are through S&P 500 leveraged ETFs.
Conclusion
Since 1928, the highest source of real returns in any asset class by far has come from the stock market. This is true in spite of the Great Depression of 1929 through 1933 and in spite of the 13 recessions thereafter. Through wars, disasters and political turmoil, stocks have been the best vehicle to not only keep up with inflation but to far surpass it.
The tremendous wealth generation from stocks over this period, averaging over 9% annualized, makes a buy and hold strategy of the S&P 500 extremely difficult to beat. Beyond this, the Efficient Market Hypothesis maintains that it is impossible to consistently outperform the market while Random Walk theory asserts using technical indicators is futile. Finally, the CAPM states that the only way to achieve a higher return is to take more risk.
We challenge each of these theories in this paper. First, we illustrate that Moving Averages and trends contain important information about future volatility and the propensity for streaks in performance. Next, we show that using Moving Averages to time the market achieves a similar or higher return with less risk. Lastly, we show that Leverage Rotation Strategies using a systematic rule consistently outperform the market over time.
The key to this outperformance is understanding the conditions that help and hurt leverage in the long term. We show that it is volatility and seesawing market action that is most harmful to leverage. On the other hand, low volatility periods with positive streaks in performance are most helpful. Moving Averages are one way of systematically identifying these conditions. When the stock market is in an uptrend (above its Moving Average), conditions favor leverage as volatility declines and there are more positive streaks in performance. When the stock market is in a downtrend (below its Moving Average), the opposite is true as volatility tends to rise.
We find that being exposed to equities with leverage in an uptrend and rotating into risk-free Treasury bills in a downtrend leads to significant outperformance over time. For investors and traders seeking a destination with higher returns who are willing to take more risk at the right time, systematic leverage for the long run is one way of moving there, on average.
Footnotes
- See Fama and French (1992).
- See Baker and Haugen (2012).
- See Blitz and Vliet (2007).
- See Jacobs and Levy (2012).
- See Tolonen (2014).
- See Jacobs and Levy (2013).
- See Frazzini and Pedersen (2012).
- See Asness, Frazzini, and Pedersen (2012).
- Source: S&P 500 Total Return Index (Gross Dividends) data from Bloomberg.
- We assume no cost to using leverage in this section but will introduce an assumed cost in the strategy section.
- See Trainor Jr. (2011).
- As referenced in Grinblatt and Moskowitz (2000), autocorrelation across various horizons is well documented throughout academic literature looking at market momentum and trend persistence.
- See Gartley (1930).
- See Edwards, Magee and Bassetti (2007).
- While there are various types of Moving Average (Simple, Exponential, Weighted, etc.), we limit our focus in this paper to the simplest and most frequently used form: the Simple Moving Average. A Simple n-day Moving Average is the unweighted mean of the prior n days. We use daily closing prices of the total return series to calculate the Moving Average for the S&P 500.
- All data and analysis presented is total return, inclusive of dividends and interest payments. Source for Treasury bill data: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html.
- See Siegel (1998). See also Faber (2006) which notes a similar finding when Moving Average timing is applied to Tactical Asset Allocation.
- See Samuelson (1965).
- See Lo and MacKinlay (2002).
- We observe a similar phenomenon in other markets, including Small Cap stocks, Commodities and High Yield Bonds. The Russell 2000 Index has an annualized volatility of 14.7% above its 200-day Moving Average versus 26.5% below it (1979-2015). The CRB Commodities Index has an annualized volatility of 14.9% above its 200-day Moving Average versus 18.5% below it (1994-2015). The Merrill Lynch High Yield Index has an annualized volatility of 2.9% above its 200-day Moving Average versus 7.2% below it (1990-2015).
- Average of outperformance using 10-day, 20-day, 50-day, 100-day and 200-day Moving Averages.
- Average of rolling alpha using 10-day, 20-day, 50-day, 100-day and 200-day Moving Averages.
- As of 11/30/15, the largest leveraged S&P 500 ETF with $1.9 billion in assets is the Proshares Ultra S&P 500 ETF (2x). It has an expense ratio of 0.89%. The second largest leveraged S&P 500 ETF with $1.1 billion in assets is the Proshares UltraPro S&P 500 (3x). It has an expense ratio of 0.95%.
Further Research
While outside the scope of this paper, our conclusions have important implications for further areas of research. These include:
- timing of leverage in asset classes outside of equities
- determining if volatility predictors in one asset class can be used to time leverage in another
- determining if other technical indicators which are predictors of volatility yield similar results.

We look forward to exploring these issues in upcoming research.
References
Asness, Clifford S., Andrea Frazzini, and Lasse Heje Pedersen, 2012, Leverage Aversion and Risk Parity, Financial Analysts Journal.
Baker, Nardin L. and Robert A. Haugen, 2012, Low Risk Stocks Outperform within All Observable Markets of the World.
Blitz, David and Pim V. Vliet, 2007, The Volatility Effect: Lower Risk Without Lower Return, Journal of Portfolio Management.
Edwards, Robert D., John Magee, and W.H.C. Bassetti, 2007, Technical Analysis of Stock Trends, CRC Press.
Faber, Mebane T., 2007, A Quantitative Approach to Tactical Asset Allocation, The Journal of Wealth Management.
Fama, Eugene F. and Kenneth R. French, 1992, The Cross-Section of Expected Stock Returns, The Journal of Finance.
Frazzini, Andrea and Lasse Heje Pedersen, 2012, Betting Against Beta, Swiss Finance Institute Research Paper.
Gartley, H. M., 1930, Profits in the Stock Market, Lambert Gann Publishing.
Grinblatt, Mark, and Tobias J. Moskowitz, 2003, Predicting Stock Price Movements from Past Returns: The Role of Consistency and Tax-Loss Selling, The Journal of Financial Economics.
Jacobs, Bruce I. and Kenneth N. Levy, 2013, Introducing Leverage Aversion Into Portfolio Theory and Practice, The Journal of Portfolio Management.
Jacobs, Bruce I. and Kenneth N. Levy, 2012, Leverage Aversion and Portfolio Optimality, Financial Analysts Journal.
Jacobs, Bruce I. and Kenneth N. Levy, 2012, Leverage Aversion, Efficient Frontiers, and the Efficient Region, The Journal of Portfolio Management.
Samuelson, Paul A., 1965, Proof That Properly Anticipated Prices Fluctuate Randomly, Industrial Management Review.
Siegel, Jeremy J., 1998, Stocks for the Long Run, McGraw-Hill Education.
Tang, Hongfei, and Xiaoqing Eleanor Xu, 2011, Solving the Return Deviation Conundrum of Leveraged Exchange-Traded Funds, Journal of Financial and Quantitative Analysis.
Tolonen, Pekka, 2014, Hedge Fund Leverage and Performance: New Evidence From Multiple Leveraged Share Classes, Aalto University.
Trainor Jr., William J., 2011, Solving the Leveraged ETF Compounding Problem, The Journal of Index Investing.
Finding Consistent Trends with Strong Momentum
by Arthur Hill, CMT
About the Author | Arthur Hill, CMT
Arthur Hill, who holds the Chartered Market Technician (CMT) designation, is the Chief Technical Strategist at Trend Investor Pro. He takes a quantitative approach to trading using rule-based strategies that are tested in different market environments.
Arthur has written a book defining his process, Define the Trend and Trade the Trend. This book shows beginner and intermediate level chartists how to determine trend direction and find low-risk entry points within that trend. He has been featured in Stocks & Commodities Magazine, the CMT Association Webcasts, the Benzinga Premarket Prep and the Financial Sense News Hour, and been quoted in other financial publications.
Arthur received a B.S. in Political Science and Russian Studies from the University of Houston, and an MBA (Finance) from City University Business School in London. He is an active member of the CMT Association. When not immersed in the markets, he enjoys family outings, tennis and scuba diving.
Abstract
Investors and traders typically use the Relative Strength Index (RSI) to identify turning points in security prices. This strategy, however, discounts the true nature of the indicator and limits its potential. An RSI breakdown reveals that its power lies in its ability to identify consistent uptrends with strong momentum. Some practitioners use RSI ranges to identify existing trends and RSI extremes to signal momentum shifts. These approaches, however, do not quantify how long RSI should hold its range, how regularly RSI should reach a momentum milestone and, most importantly, if RSI range and momentum indications have predictive value.
The goal of this paper is to systematically test RSI range and momentum signals using stocks in the S&P 500. Moreover, this paper will show that the RSI range alone is inadequate because it does not always capture upside momentum. The RSI range measures trend consistency well, but a momentum component is needed to uncover the strongest uptrends. After quantifying and testing, this paper will show that signals combining RSI range and momentum can foreshadow sizable advances with good success rates. As such, these signals can be part of a successful investing strategy that combines trend-following and momentum.
Trend Theory and RSI
J. Welles Wilder Jr. developed the Relative Strength Index (RSI) and introduced it in his classic book, New Concepts in Technical Trading Systems.1 Wilder used RSI as a momentum oscillator to identify turning points or reversals in security prices. While it is possible to time reversals using RSI, reversal strategies do not utilize the inherent strengths of the indicator.
Despite its label as a momentum oscillator, RSI is a natural trend indicator. RSI is bound between 0 and 100 with 50 as the mid-point. As the RSI formula reveals, RSI is above 50 when the Average Gain is greater than the Average Loss.2 Conversely, RSI is below 50 when the Average Gain is less than the Average Loss. Thus, prices are generally rising when RSI remains above 50 and generally falling when RSI remains below 50. The further above 50, the larger the Average Gain relative to the Average Loss, and the stronger the uptrend. Conversely, the further below 50, the larger the Average Loss relative to the Average Gain, and the stronger the downtrend.
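For reference, a compact Wilder-style RSI consistent with the Average Gain/Average Loss description above; this is the standard construction (Wilder's smoothing is an exponential average with alpha = 1/period), with `close` any daily closing price series.

```python
import pandas as pd

def wilder_rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """RSI = 100 - 100 / (1 + AvgGain / AvgLoss), Wilder smoothing."""
    delta = close.diff()
    gain = delta.clip(lower=0.0)
    loss = -delta.clip(upper=0.0)
    avg_gain = gain.ewm(alpha=1.0 / period, min_periods=period).mean()
    avg_loss = loss.ewm(alpha=1.0 / period, min_periods=period).mean()
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

# RSI sits above 50 exactly when the average gain exceeds the average
# loss, which is why a sustained reading above 50 marks a rising trend.
```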
Chart 1 shows Comcast (CMCSA) and 14-day RSI to illustrate a shift in RSI and price direction. Comcast moved lower from late February to early May and RSI remained below 50 for the most part (red). A shift occurred in early June as prices moved higher and RSI crossed above 50. Prices moved higher from early June to September and RSI remained largely above 50 (green).
Chart 1: CMCSA – RSI Directional Shift
The zone around 50 defines the battle for trend consistency. For an uptrend to be truly consistent, pullbacks should be limited and the trend should stay in motion.3 This is where the RSI range comes into play, especially the lower end of the range. Adding a cushion below the midpoint, we can treat declines as normal pullbacks within a larger uptrend as long as RSI holds above 40. Note that this buffer is needed to allow for normal pullbacks and reduce whipsaws. A price decline that pushes RSI below 40 indicates that the uptrend is losing consistency. Thus, RSI can be a good fit for trend-following strategies.
Research shows that time-series momentum, a form of trend-following, works in the U.S. stock market.4 Stocks showing positive returns over a twelve-month period are more likely to have positive returns in the future. As with trend-following, time-series momentum assumes that a trend in motion will stay in motion. In other words, the chances of further gains are higher when a security is already in an uptrend. D’Souza et al. found that “the existence of time-series stock momentum has been a persistent phenomenon in the U.S. equity markets throughout the 88-year period since 1927.”5
Some practitioners use RSI ranges to identify the price trend. Andrew Cardwell of Cardwell RSI Edge uses bullish and bearish ranges to define price trends.6 According to Cardwell, RSI typically ranges from 40 to 80 during an uptrend and 20 to 60 during a downtrend. RSI finds support and reverses in the 40-50 zone during a normal pullback. A move to 30 suggests something more than just a pullback. A break below the bullish range indicates a downtrend is starting, and it is time to apply the bearish-range rule.
Chart 2 shows Illumina (ILMN) trending higher as RSI ranged between 40 and 80 (green shading). The October break below 40 ended this bull range. Chart 3 shows BorgWarner (BWA) trending lower as RSI ranged between 20 and 60 (red shading).
Chart 2: ILMN – RSI Bull Range (40 to 80) Chart 3: BWA – RSI Bear Range (20 to 60)
Constance Brown asserts that momentum oscillators such as RSI and the Stochastic Oscillator do not “travel between 0 and 100.”7 Brown argues that RSI finds support in the 40-50 zone during an uptrend and resistance in the 80-90 zone. Conversely, RSI meets resistance in the 55-65 zone during a downtrend and finds support in the 20-30 zone. Brown suggests wider ranges than Cardwell: 40 to 90 for a bullish range and 20 to 65 for a bearish range.
Momentum Theory and RSI
RSI also has a clear momentum component. When analyzing past winners, it is easy to find stocks with strong momentum that outperformed for extended periods. More often than not, these stocks became overbought numerous times as their uptrends persisted. “Overbought” readings occur when RSI moves above 70. Stocks like Apple, Best Buy, Boeing, Mastercard, Nvidia, Tiffany and Valero more than doubled from June 2016 to June 2018, a mere two years. This diverse group of stocks had one thing in common: RSI became “overbought” by moving above 70 on a regular basis.
Chart 4 shows Boeing (BA) with 14-day RSI moving above 70 regularly from October 2016 to January 2018. The stock was indeed overbought by the traditional definition, but these overbought readings reflected strong upside momentum and the stock subsequently outperformed for an extended period.
Chart 4: BA – RSI Strong Momentum (above 70)
What exactly does overbought mean? Greg Morris notes that overbought and oversold are “the most overused and misunderstood terms when talking about the markets.”8 Indeed, overbought conditions present traders and investors with a paradox. Consider when RSI moves above 70. One investor might use an overbought signal to prepare for a reversal, while another could view strong upside momentum as a prerequisite for further gains.
RSI moves above 70 and becomes “overbought” when prices move higher, usually sharply higher. Such an advance signals strong upside price momentum and RSI quantifies this momentum by moving into the upper end of its range. Moreover, RSI values above 70 are relatively rare and show exceptional strength.
The distribution of RSI values confirms the 20-year uptrend in stocks and the uniqueness of values above 70. Threshold testing reveals that the majority of RSI values exceeded 50 for stocks in the S&P 500 from 7/1/1998 to 7/2/2018. In fact, Chart 5 shows that 56.7% of RSI values exceeded 50 during this twenty-year test period, while 43.3% were below 50.9 These numbers make sense because the S&P 500 Total Return Index advanced 247%, generating a 6.4% compound annual return, over this same period. The S&P 500 was also above its 200-day simple moving average 70% of the time.
Chart 5: Distribution of RSI Values between 0 and 100
Chart 5 also reveals that RSI values below 30 and above 70 are exceptional. RSI fell below 30 just 3.5% of the time, and exceeded 70 just over 6% of the time. Stocks with RSI values above 70 are in an elite group and they can be considered momentum leaders. Accordingly, RSI values above 70 can be used in momentum strategies that focus on the strongest stocks. On the flip side, stocks with RSI values below 30 can be viewed as serious underperformers.
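The threshold tallies cited above reduce to simple frequencies. This sketch assumes `rsi` pools 14-day RSI observations across the historical S&P 500 constituents, which it does not itself reconstruct.

```python
import pandas as pd

def rsi_threshold_shares(rsi: pd.Series) -> dict:
    """Fraction of RSI observations beyond the common thresholds."""
    rsi = rsi.dropna()
    return {
        "above 50": float((rsi > 50).mean()),
        "below 50": float((rsi <= 50).mean()),
        "above 70": float((rsi > 70).mean()),  # momentum leaders
        "below 30": float((rsi < 30).mean()),  # serious underperformers
    }
```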
Research shows that momentum strategies generate positive returns and consistently outperform. The momentum anomaly indicates that stocks that have outperformed tend to continue outperforming, while stocks that have underperformed tend to continue underperforming.10 Research from Jegadeesh and Titman showed that stocks that outperformed over three- to twelve-month time horizons continued to outperform.11 An updated paper by Jegadeesh and Titman showed that this momentum strategy still worked on U.S. stocks overall, though it suffered a setback in 2009.12
Practitioners have also found that high RSI levels can be bullish and lead to further price gains. In a presentation to the CMT Association, David Cox, CMT, CFA, showed how a surge above 70 can signal the start of a new uptrend.13 After an extended downtrend and RSI values below 30, an RSI surge above 70 reflects a clear shift in price dynamics. Using Apple (AAPL) as an example, Cox showed RSI moving from a deeply oversold condition to an overbought condition. Chart 6 shows this downtrend reversing to an uptrend when RSI surged above 70, which showed strong upside momentum. Chart 7 shows Eversource Energy (ES) becoming “oversold” as RSI dipped below 30 in February and June 2018. The sharp move from below 30 to above 70 signaled a clear shift in momentum and further gains followed.
Chart 6: AAPL – RSI Surge and Trend Reversal Chart 7: ES – RSI Surge and Trend Reversal
Testing, Methodology and Data
Wilder suggested 14 days when calculating RSI and this setting will be used throughout the paper.14 With 14-day RSI as the base indicator, this paper will test five signal groups: bull range, bear range, bull momentum, bear momentum and bull range-momentum.
- RSI Bull Range: RSI fluctuates between 40 and 100 over N days
- RSI Bear Range: RSI fluctuates between 0 and 60 over N days
- RSI Bull Momentum: highest high value of RSI is greater than 70 over N days
- RSI Bear Momentum: lowest low value of RSI is less than 30 over N days
- RSI Bull Range-Momentum: combination of 1 and 3 over N days
Each test group covers five lookback periods (N days): 25, 50, 75, 100 and 125 (trading days). For example, the 25-day RSI Bull Range test triggers a signal when 14-day RSI has fluctuated between 40 and 100 for at least 25 days; the signal ends when RSI moves below 40, breaking the range. The next test, 50-day RSI Bull Range, triggers a signal when 14-day RSI has fluctuated between 40 and 100 for at least 50 days; the signal ends when RSI moves below 40.
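The authors' tests were run in AmiBroker (per the chart sources); a rough pandas equivalent of the N-day Bull Range state is sketched below. Since RSI cannot exceed 100, only the lower bound needs checking.

```python
import pandas as pd

def bull_range_state(rsi: pd.Series, floor: float = 40.0,
                     n: int = 25) -> pd.Series:
    """True from the day 14-day RSI has held above `floor` for at least
    `n` straight days until RSI breaks back below `floor`."""
    above = rsi > floor
    run_id = (above != above.shift()).cumsum()      # label each streak
    run_len = above.groupby(run_id).cumcount() + 1  # days into the streak
    run_len = run_len.where(above, 0)
    return run_len >= n
```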
Tests used historical constituents in the S&P 500 to prevent survivorship bias. That is, the study used components in the index at the time of testing. For example, testing before 12/23/2013 did not include Facebook (FB) because it was not yet part of the S&P 500. Similarly, testing after 12/20/2013 did not include Teradyne (TER) because it was removed from the S&P 500 (replaced by Facebook). This also means signals ended when a stock was removed from the S&P 500. Stock data were adjusted for capital reconstructions, special dividends and ordinary dividends.
The testing period used daily price data extending from 7/1/1998 to 7/2/2018, encompassing twenty years and four market cycles. This period includes the bear markets of 2001-2002 and 2008-2009, and the bull markets of 2003-2007 and 2009-2018. The latter bull run also includes the flash crash in May 2010, high volatility from July to October 2011 and an extended correction from July 2015 to February 2016.
For testing purposes, the system measures the percentage change in the stock price for the duration of the signal. Signals are generated on the close, while entries and exits are based on the next open. Commissions and slippage are not considered. The price change is simply the open price at exit less the open price at entry. This difference is then divided by the open price at entry to calculate the percentage change.
The performance metrics focus on the Success Rate, the Average Advance, the Average Decline and the Profit/Loss Ratio. A successful signal occurs when there is an advance after a bullish signal and a decline after a bearish signal. A failed signal occurs when there is a decline after a bullish signal and an advance after a bearish signal. The Success Rate is the percentage of successful signals (out of total signals).
The size of the subsequent advance or decline captures the magnitude of the success or failure, while the Profit/Loss Ratio measures the degree of success. An advance after a bullish signal is deemed a profit, while a decline is deemed a loss. Similarly, a decline after a bearish signal is deemed a profit, while an advance is deemed a loss. Thus, the Profit/Loss Ratio compares the magnitude of the successful signals to the magnitude of the failed signals.
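Given one signed percentage change per completed signal (next-open entry to next-open exit, as defined above), the summary metrics for a bullish test reduce to a few lines; `signal_returns` is a placeholder list of per-signal returns.

```python
import numpy as np

def bullish_metrics(signal_returns) -> dict:
    """Success Rate, Average Advance/Decline and Profit/Loss Ratio for
    percentage changes following bullish signals."""
    r = np.asarray(signal_returns, dtype=float)
    wins, losses = r[r > 0], r[r < 0]
    return {
        "success_rate": len(wins) / len(r),
        "avg_advance": wins.mean() if len(wins) else 0.0,
        "avg_decline": losses.mean() if len(losses) else 0.0,
        # degree of success: average win vs. average loss magnitude
        "profit_loss_ratio": (wins.mean() / abs(losses.mean())
                              if len(wins) and len(losses) else float("nan")),
    }
```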
The Success Rate and Profit/Loss Ratio should be considered together, not separately. The Success Rate shows how often the signals worked, while the Profit/Loss Ratio reflects the degree of success. Together, these two indicators provide insight on the true potential of the signals. Signals with relatively low Success Rates (below 35%) and low Profit/Loss Ratios (below 1.2) do not show potential, while signals with relatively high Success Rates (above 50%) and high Profit/Loss Ratios (above 2) show potential.
Testing RSI Bull and Bear Ranges
The first test analyzes the RSI Bull Range, which extends from 40 to 100. Though Cardwell and Brown set upper limits from 80 to 90, the true maximum is 100. The ability to fluctuate between 40 and 100 keeps RSI in the top 60% of its theoretical range.
Chart 8 shows F5 Networks (FFIV) with the 75-day RSI Bull Range indicator (red/green) and RSI (blue) with a gray line at 40. The RSI Bull Range indicator moves to +1 (green) when RSI fluctuates between 40 and 100 for at least 75 days. The indicator moves to -1 (red) when RSI moves below 40 and breaks the range. The green arrow shows the beginning signal and the red arrow shows the ending signal. FFIV advanced 29.9% after the bullish signal for a success (profit). In contrast, Chart 9 shows a failed bullish signal (loss) for TripAdvisor (TRIP), which fell 11.1% between signals.
Chart 8: FFIV – RSI Bull Range Chart 9: TRIP – RSI Bull Range
Source: AmiBroker and PremiumData
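The mechanics of the indicator can be sketched in Python with pandas (the paper's tests were run in AmiBroker, so this port, including the function names, is the sketch's own assumption):

```python
import pandas as pd

def wilder_rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """RSI using Wilder's smoothing (an EMA with alpha = 1/period)."""
    delta = close.diff()
    avg_gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False,
                                       min_periods=period).mean()
    avg_loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False,
                                          min_periods=period).mean()
    return 100 - 100 / (1 + avg_gain / avg_loss)

def rsi_bull_range(rsi: pd.Series, lookback: int = 75) -> pd.Series:
    """+1 (bullish) while RSI has held the 40-100 range for `lookback` days;
    -1 (bearish) once a dip below 40 breaks the range."""
    in_range = rsi.rolling(lookback).min() >= 40
    return in_range.map({True: 1, False: -1})
```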
Table 1 shows results when applying RSI Bull Range signals to stocks in the S&P 500 over the test period. The Success Rate ranged from 34% to 37%, which means less than 40% of stocks advanced after triggering bullish signals. Despite these low Success Rates, the Profit/Loss Ratios ranged from 2.35 to 2.52, which are quite high. This means the average profit from a signal was more than twice the average loss. Thus, the RSI Bull Range signals show promise with high Profit/Loss Ratios, but merit caution because of Success Rates below 40%.
Table 1: Performance Metrics for RSI Bull Range Test
The second test analyzes the RSI Bear Range, which extends from 0 to 60. Again, this test uses the theoretical lower limit. RSI remains in the lower 60% of its range when it fluctuates between 0 and 60.
Chart 10 shows Dish Network (DISH) with the 75-day RSI Bear Range indicator and RSI with a horizontal line at 60. The RSI Bear Range indicator moves to -1 (red) when RSI does not move above 60 for at least 75 days. This bearish signal (red arrow) remains in play as long as RSI fails to break above 60. The indicator moves to +1 (green) when RSI moves above 60 and breaks the range (green arrow). DISH declined 30.4% for a successful bearish signal (profit). Chart 11 shows a failed bearish signal (loss) for Cerner (CERN), which subsequently advanced 17.7%.
Chart 10: DISH – RSI Bear Range Chart 11: CERN – RSI Bear Range
Source: AmiBroker and PremiumData
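The bear range simply mirrors that logic; a sketch under the same assumptions, reusing wilder_rsi from the earlier sketch:

```python
import pandas as pd

def rsi_bear_range(rsi: pd.Series, lookback: int = 75) -> pd.Series:
    """-1 (bearish) while RSI has held the 0-60 range for `lookback` days;
    +1 (bullish) once a move above 60 breaks the range."""
    in_range = rsi.rolling(lookback).max() <= 60
    return in_range.map({True: -1, False: 1})
```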
Table 2 shows results when applying RSI Bear Range signals to all stocks in the S&P 500 over the test period. The Success Rates ranged from 25% to 28%, which means less than 30% of stocks declined after the bearish indicator triggered a signal. The Profit/Loss Ratios were all above 1.2 as prices tended to fall after the RSI Bear Range signals, but the Profit/Loss Ratios fell from 2.11 to 1.28 as the lookback period extended. Thus, performance deteriorated significantly as time extended. Overall, the Profit/Loss Ratios are not high enough to justify the extremely low Success Rates.
Table 2: Performance Metrics for RSI Bear Range Test
Testing RSI Bull and Bear Momentum
Testing now turns from range signals to momentum signals. This third test analyzes performance when RSI exceeds 70 over different lookback periods. RSI values above 70 show strong upside momentum and suggest outperformance. The signal ends when RSI fails to exceed 70 over the lookback period, which suggests waning upside momentum.
Chart 12 shows PNC Financial Services (PNC) with the RSI Bull Momentum indicator and RSI with a horizontal line at 70. The RSI Bull Momentum indicator moves to +1 (green) when RSI exceeds 70 and remains at 1 as long as the highest high of RSI is above 70 over the lookback period, which is 75 days. The indicator moves to -1 (red) when the highest high of RSI fails to exceed 70 over a 75-day period. PNC advanced 15.4% for a successful bullish signal (profit). Chart 13 shows eBay (EBAY) declining 15% for a failed bullish signal (loss).
Chart 12: PNC – RSI Bull Momentum Chart 13: EBAY – RSI Bull Momentum
Source: AmiBroker and PremiumData
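As a sketch under the same assumptions as the range indicators above, the momentum condition swaps the rolling minimum for a rolling maximum:

```python
import pandas as pd

def rsi_bull_momentum(rsi: pd.Series, lookback: int = 75) -> pd.Series:
    """+1 while the highest RSI over `lookback` days exceeds 70; -1 once the
    rolling high fails to reach 70. The bear momentum test in the next
    section mirrors this with a rolling low below 30."""
    momentum_intact = rsi.rolling(lookback).max() > 70
    return momentum_intact.map({True: 1, False: -1})
```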
Table 3 applies RSI Bull Momentum signals to all stocks in the S&P 500 over the test period. The Success Rates jump out because they range from 52% to 58% and exceed 50% in every instance. Thus, more than 50% of stocks advanced after the signals. The Profit/Loss Ratios ranged from 1.37 to 2.11, and rose as the lookback period extended from 25 days to 125 days. Also, notice how the Average Advance increased as the lookback period extended. Overall, the RSI Bull Momentum signals show promise with Success Rates above 50% and sizable gains (Average Advances), but warrant caution because the Average Declines weigh on the Profit/Loss Ratios.
Table 3: Performance Metrics for RSI Bull Momentum Test
The fourth test analyzes performance when RSI moves below 30 over different lookback periods. RSI values below 30 reflect strong downside momentum and point to underperformance. The signal ends when RSI fails to move below 30 over the lookback period, which implies less downside momentum.
Chart 14 shows Henry Schein (HSIC) with the 75-day RSI Bear Momentum indicator and RSI with a horizontal line at 30. The indicator moves to -1 (red) when the lowest low of RSI is below 30 over the last 75 days. This triggers a bearish signal that lasts until RSI fails to move below 30 over a 75-day period. An ending signal occurs when the RSI Bear Momentum indicator moves to +1 (green). HSIC declined 14.3% for a successful bearish signal (profit). Chart 15 shows Occidental Petroleum (OXY) advancing 23.7% for a failed bearish signal (loss).
Chart 14: HSIC – RSI Bear Momentum Chart 15: OXY – RSI Bear Momentum
Source: AmiBroker and PremiumData
Table 4 shows the results of applying RSI Bear Momentum signals to all stocks in the S&P 500 over the test period. The Success Rates ranged from 31% to 39%, but fell as the lookback period extended. Less than 40% of stocks declined after the bearish indicator triggered a signal. The Average Decline was larger than the Average Advance at every lookback period, but the Profit/Loss Ratios were barely above 1. With low Success Rates and such thin Profit/Loss Ratios, the RSI Bear Momentum signals do not show potential.
Table 4: Performance Metrics for RSI Bear Momentum Test
A Combination with Predictive Value
Overall, the bullish indicators show potential, but the bearish indicators do not. The Bear Range signals suffer from low Success Rates (below 35%) and relatively low Profit/Loss Ratios. The Bear Momentum signals have the lowest Profit/Loss Ratios and relatively low Success Rates. The Bull Range signals have high Profit/Loss Ratios (above 2.30), but low Success Rates (below 40%). In contrast, the Bull Momentum signals show high Success Rates (above 50%), but relatively low Profit/Loss Ratios.
The results suggest that RSI Bull Range and RSI Bull Momentum are complementary indicators that should be combined. The RSI Bull Range indicator places a momentum floor at 40 to contain price declines. The RSI Bull Momentum indicator, in contrast, focuses on price advances by ensuring they are strong enough to push RSI above 70 on a regular basis. The RSI Bull Range indicator takes care of trend consistency, while the RSI Bull Momentum indicator ensures strong upward momentum. It is a powerful combination.
Chart 16 shows Harris Corp (HRS) with the combined indicator and RSI with horizontal lines at 40 and 70. The RSI Bull Range-Momentum indicator moves to 1 and turns gray when either the RSI Bull Range or RSI Bull Momentum indicator triggers a signal. The indicator moves to 2 and turns green when both indicators have triggered signals. Thus, a move to 2 means the lowest low value of RSI did not dip below 40 over the last 75 days AND the highest high value of RSI moved above 70 over the last 75 days.
Chart 16: HRS – RSI Bull Range-Momentum Chart 17: KR – RSI Bull Range-Momentum
Source: AmiBroker and PremiumData
A move to 2 generates an entry signal that remains in effect until BOTH indicators reverse their signals. This means the lowest low value of RSI dipped below 40 and the highest high value of RSI did not exceed 70 over the 75-day lookback period (indicator turns red). The trend lost consistency because RSI broke the bull range AND upside momentum faded because RSI failed to exceed 70. Chart 16 shows Harris Corp (HRS) advancing 26.8% for a successful bullish signal (profit). Chart 17 shows Kroger (KR) declining 16.7% for a failed bullish signal (loss).
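One way to express the combined indicator and its asymmetric entry/exit rule in Python, under the same assumptions as the earlier sketches:

```python
import pandas as pd

def rsi_bull_range_momentum(rsi: pd.Series, lookback: int = 75) -> pd.Series:
    """0, 1, or 2 depending on how many of the two bullish conditions hold."""
    in_range = (rsi.rolling(lookback).min() >= 40).astype(int)
    momentum = (rsi.rolling(lookback).max() > 70).astype(int)
    return in_range + momentum

def combo_signal_state(combo: pd.Series) -> pd.Series:
    """Enter when the indicator reaches 2; stay in until it falls to 0,
    i.e., until BOTH conditions have reversed."""
    in_signal, states = False, []
    for value in combo.fillna(0):
        if value == 2:
            in_signal = True
        elif value == 0:
            in_signal = False
        states.append(in_signal)
    return pd.Series(states, index=combo.index)
```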
Table 5 shows the results when applying RSI Bull Range-Momentum signals to all stocks in the S&P 500 over the test period. The Success Rates ranged from 45% to 58% and exceeded 50% when the lookback period was 75 days or longer. The Profit/Loss Ratios ranged from 1.95 to 2.4 and exceeded 2 when the lookback period was 75 days or longer. Overall, the results steadily improved as the lookback period extended. This suggests that longer lookback periods are better suited for trend-momentum strategies.
Table 5: Performance Metrics for RSI Bull Range-Momentum Test
Chart 18 shows a scatter plot with the results for each of the five tests. The Success Rates are shown on the y-axis and the Profit/Loss Ratios are on the x-axis. A horizontal line at 50% delineates the Success Rate, while a vertical line at 2 delineates the Profit/Loss Ratio. These lines create four performance quadrants that we can use to compare the test results.
The Bear Momentum results (BearM – red circle) appear in the lower-left quadrant because they have the lowest Profit/Loss Ratios and relatively low Success Rates. The Bear Range results (BearR – triangle) are also mostly in the lower-left quadrant because of the low Success Rates (below 30%) and the relatively low Profit/Loss Ratios. Only the 25-day Bear Range signal showed a Profit/Loss Ratio above 2.
Chart 18: Scatter Plot for Success Rates and Profit/Loss Ratios
The Bull Range results (BullR – green oval) are in the lower-right quadrant because they have high Profit/Loss Ratios (above 2.30) and low Success Rates (below 40%). The Bull Momentum results (BullM – rectangle) are mostly in the upper-left quadrant because of the high Success Rates (above 50%) and low Profit/Loss Ratios. The 125-day Bull Momentum signal is the only result with a Profit/Loss Ratio above 2.
The RSI Bull Range-Momentum results with lookback periods of 75 days or more show the most predictive value. The 25-day and 50-day Bull Range-Momentum signals ended up in the lower-left quadrant with Success Rates just below 50% and Profit/Loss Ratios just below 2. Performance greatly improved as the lookback period increased from 75 to 125 days. These results landed in the upper-right quadrant with high Success Rates and high Profit/Loss Ratios (green shading). Clearly, the longer RSI Bull Range-Momentum signals show potential for inclusion in a trend-momentum strategy. The 125-day RSI Bull Momentum results also landed in the upper-right quadrant and show potential.
Conclusions
Even though RSI is widely used to signal price reversals, the formula and signal testing reveal that this momentum oscillator is well-suited for trend-following and momentum strategies, which research shows can be profitable and outperform.15,16 The RSI Bull Range indicator ensures trend consistency by requiring RSI to hold above 40 on pullbacks. The RSI Bull Momentum indicator captures upside leadership by requiring RSI to regularly exceed 70. Threshold testing in S&P 500 stocks over the twenty-year testing period shows that RSI exceeded 70 just 6.3% of the time. Thus, stocks with RSI values above 70 show exceptional upside momentum.
Taken together, the evidence shows that RSI range and momentum signals can foreshadow sizable advances with a good success rate, especially when the lookback period extends from 75 to 125 days.17 The RSI Bull Range-Momentum signals were less successful at the 25-day and 50-day lookback periods, which suggests, perhaps, that some short-term mean-reversion is at work. The performance metrics improved dramatically as the lookback period extended from 75 days to 125 days, which covers periods from roughly three-and-a-half to six months. These longer lookbacks point to a trend-momentum sweet spot that investors can use to profit and outperform.
Test results show that bullish signals work better than bearish signals. This is partly due to the upward bias in the S&P 500 Index over the twenty-year testing period. Despite the upward bias, the index experienced significant declines along the way. As such, investors may also consider the broad market environment when implementing a trend-momentum strategy. Bilello and Gayed found that volatility and risk increase when the S&P 500 is below its 200-day moving average.18 They also showed that volatility declines and conditions favor outperformance when the S&P 500 is above its 200-day moving average. Thus, adding a market timing mechanism to this RSI strategy could, in fact, reduce drawdowns during broad market declines and enhance returns during broad market advances.
Footnotes
1. See Wilder
2. See Appendix 1 – Sample RSI Calculation
3. See Treacy
4. See Gray
5. See D’Souza
6. See Cardwell
7. See Brown
8. See Morris
9. See Appendix 2 – Weighted Average Calculation
10. See Fama and French
11. See Jegadeesh and Titman (1993)
12. See Jegadeesh and Titman (2011)
13. See Cox
14. See Wilder
15. See D’Souza
16. See Jegadeesh and Titman (2011)
17. See Appendix 3 – Results Table
18. See Bilello and Gayed
References
Bilello, C., and Gayed, M. “Leverage for the Long Run: A Systematic Approach to Managing Risk and Magnifying Returns in Stocks.” Charles H. Dow Award (2016). https://cmtassociation.org/association/awards/charles-h-dow-award
Brown, C. Technical Analysis for the Trading Professional (2nd Edition). McGraw-Hill Education (2012).
Cardwell, A. “Using RSI to Find Great Trades.” MoneyShow Interview (December 2012). Accessed November 2018. https://www.moneyshow.com/articles/fxbiwkly08-29902/
Cox, D. “Relative Strength Index (RSI): Making Advanced Use of a Simple Indicator.” CMT Association Educational Web Series (June 2014). Accessed November 2018. https://cmtassociation.org/video/relative-strength-index-rsi-making-advanced-use-of-a-simple-indicator/
D’Souza et al. “The Enduring Effect of Time-Series Momentum on Stock Returns over nearly 100 Years.” Asian Finance Association (AsianFA) Conference (2016). Accessed November 2018. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2720600
Fama, E., and French, K. “Dissecting Anomalies.” The Journal of Finance, Vol. 63, No. 4 (August 2008)
Gray, W. “Are Trend-Following and Time-Series Momentum Research Results Robust?” Alpha Architect Blog (April 2018). Accessed November 2018. https://alphaarchitect.com/2018/04/27/are-trend-following-and-time-series-momentum-research-results-robust/
Jegadeesh, N., and Titman, S. “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency.” The Journal of Finance (March 1993).
Jegadeesh, N., and Titman, S. “Momentum.” Annual Review of Financial Economics (December 2011)
Morris, G. Investing with the Trend. Bloomberg Financial Series (2014)
Murphy, J. Technical Analysis of the Financial Markets. New York Institute of Finance (1999)
Treacy, E. Crowd Money. Harriman House (2013)
Wilder, W. New Concepts in Technical Trading Systems. Trend Research (1978)
End Notes
Thanks to Cesar Alvarez of AlvarezQuantTrading.com for consulting on the AmiBroker code and results. https://alvarezquanttrading.com
Performance metrics were generated with signal tests using AmiBroker 6.2 Professional Edition. https://www.amibroker.com
Data used in AmiBroker was provided by Premium Data. https://premiumdata.com
Appendix 1: Subset sample RSI Calculation for Biogen (BIIB)
*First 13 values use data prior to 1/16/18. Avg = Wilder EMA
Appendix 2: Sample Calculation of Weighted Average (subset)
*Symbols removed from S&P 500 show year and month of removal.
Appendix 3: Performance Metrics Overview
Forecasting a Volatility Tsunami
by Andrew Thrasher, CMT
About the Author | Andrew Thrasher, CMT
Andrew Thrasher, CMT is the Portfolio Manager for Financial Enhancement Group LLC and founder of Thrasher Analytics LLC. Mr. Thrasher holds a bachelor’s degree from Purdue University and the Chartered Market Technician (CMT) designation. He is the 2017 winner of the Charles H. Dow Award from the CMT Association for his research paper, “Forecasting a Volatility Tsunami.” Andrew resides in Noblesville, Indiana with his wife and daughter. His analysis has been cited by CNBC, Wall Street Journal, MarketWatch, Bloomberg, Fox Business, ValueWalk, Yahoo! Finance, Barron’s, U.S. News, TD Ameritrade Network and Opto.
Abstract
The empirical aim of this paper is motivated by the anecdotal belief among the professional and non-professional investment community that a “low” reading in the CBOE Volatility Index (VIX), or a large decline in it, is alone ample reason to believe that volatility will spike in the near future. While the Volatility Index can be a useful tool for investors and traders, it is often misinterpreted and poorly used. This paper will demonstrate that the dispersion of the Volatility Index acts as a better predictor of future VIX spikes.
Introduction
According to the United Kingdom’s National Oceanography Centre, tsunami waves can be as much as 125 miles in length and have resulted in some of the deadliest natural disasters in history. Fortunately, scientists have discovered warning signs of these massive waves, which are believed to be caused by shifts in the earth’s tectonic plates. One of the visible signs of a forthcoming tsunami is the receding of water from a coast line, exposing the ocean floor.1 This is often referred to as “the calm before the storm.” The same type of activity can also be found in financial markets, specifically when analyzing the CBOE Volatility Index (VIX). It is often believed that when volatility gets to a “low” level the likelihood of a spike increases. However, as this paper will show, there is a more optimal tsunami-like condition that takes place within the markets, providing a better indication of potential future equity market loss and Volatility Index increase.
Great importance is found in the study of market volatility due to the historically negative correlation the Volatility Index has had to U.S. equities. By knowing the warning signs of a tsunami wave of volatility, professional and non-professional traders can better prepare their portfolios for potential downside risks as well as have the opportunity to profit from advances in volatility and/or declines in equities.
The popularity of volatility trading has seen steady growth, to over $4 billion across more than 30 index-listed Exchange Traded Products.2 Drimus and Farkas (2012) note that “the average daily volume for VIX options in 2011 has almost doubled compared to 2010 and is nearly 20 times larger than in the year of their launch, 2006.” We can also see the increase in interest surrounding the Volatility Index by looking at trends in online searches regarding low levels within the VIX. As of September 20th, 2016 there were 423,000 Google search results for “low VIX” and 4,610 results for “historic low volatility.” Few investors would deny the importance of volatility when it comes to the evaluation of financial markets.
In this paper the author will provide a brief literature review concerning the history of the Volatility Index and important prior studies surrounding the topic of volatility, followed by a discussion of alternative, yet ultimately suboptimal, methods of predicting large swings in the VIX. The paper will conclude with the description, analysis, and results of the author’s proposed methodology for forecasting outsized spikes within the VIX Index, and how this approach may be used from a portfolio management standpoint to help investors better prepare based on the “calm before the storm.”
Those who believe in buy-and-hold investing often mention that missing the ten or twenty best trading days has a substantially negative impact on a portfolio’s overall return. They in turn reject the idea of attempting to avoid the worst days in the market, and active management as a whole. However, as Gire (2005) wrote in an article for the Journal of Financial Planning, the best and the worst days are often very close in time to one another: specifically, 50% of the worst and best days were no more than 12 days apart.3 Looking at the bull market in the S&P 500 between 1984 and 1998, the Index rose an annualized 17.89%. Gire found that missing the ten best days cut the annualized return to 14.24%, the statistic often cited by passive investing advocates. Missing the ten worst days increased the return to 24.17%, and missing both the best and worst days produced an annualized return of 20.31%, with lower overall portfolio gyration. Given the negative correlation between the Volatility Index and the S&P 500, the author proposes that the ability to forecast large spikes in the VIX can curtail an investor’s exposure to some of the worst performing days within the equity market.
History of the Volatility Index
To better research, test, and analyze a financial instrument, it is important to understand its history and purpose. The CBOE Volatility Index was originally created by Robert E. Whaley, Professor of Finance at The Owen Graduate School of Management at Vanderbilt University. The Index was first written about by Whaley in his paper, “Derivatives on Market Volatility: Hedging Tools Long Overdue,” published in 1993 in The Journal of Derivatives. Whaley (1993) wrote, “The Chicago Board Options Exchange Market Volatility Index (ticker symbol VIX), which is based on the implied volatilities of eight different OEX option series, represents a market’s consensus forecast for stock market volatility over the next thirty calendar days.”
Whaley believed the Volatility Index served two functions: first, to provide a tool to analyze “market anxiety,” and second, to serve as an index that could be used to price futures and options contracts. The first function helped give the VIX its nickname of “fear gauge,” which helps provide a narrative explanation for why the Index can have such large and quick spikes as investor emotions flow through their trading terminals.4
The Chicago Board Options Exchange (CBOE) eventually launched Volatility Index (VIX) futures and options in 2004 and 2006, respectively. The VIX in its current form, according to the CBOE, “measures the level of expected volatility of the S&P 500 Index over the next 30 days that is implied in the bid/ask quotations of SPX options.”5
Literature Review
Comparing Rising & Falling Volatility Environments
It is often stated in the financial markets community that volatility is mean-reverting, meaning that, like objects affected by gravity, what goes up must come down. Many market professionals attempt to take advantage of the rising and falling trends within the volatility market by echoing the old Wall Street adage, “Buy when there’s blood in the streets,” using an elevated reading in the Volatility Index as their measuring stick for the level of figurative blood flowing down Wall Street. However, as Zakamulin (2016) states, the median and average durations for rising and falling volatility are not equal. In fact, Zakamulin found that the timespan for declines in volatility surpasses the length of rising volatility by a factor of 1.4 and that the resulting impact on equity markets is asymmetric, with a perceived overreaction to rising volatility compared to declining volatility.6 This is important, as it tells us that there is less time for an investor to react to rising volatility than there is to react after volatility has already spiked. Thus, the resulting impact on stock prices is disproportionately biased, with stocks declining more during environments of increasing volatility than they rise during environments of decreasing volatility.
Using Volatility to Predict Equity Returns
Much attention has been paid to the creation of investment strategies that seek to capture the perceived favorable risk situation of elevated readings in the Volatility Index. Cipollini and Manzini (2007) concluded that when implied volatility is elevated, a clear signal can be discerned for forecasting future three-month S&P 500 returns, in contrast with instances when volatility is low. When evaluating the Volatility Index’s forecasting ability at low levels, their research notes that, “On the contrary, at low levels of implied volatility the model is less effective.”7 Cipollini and Manzini’s work shows that there may be a degree of predictability when the VIX is elevated but that this forecasting power diminishes when analyzing low readings in the Volatility Index. In a study conducted by Giot (2002), the Volatility Index is categorized into percentiles based on its value and modeled against the forward-looking returns of the S&P 100 Index for 1-, 5-, 20-, and 60-day periods. When looking at the tenth percentile (equal to 12.76 on the Volatility Index), which includes a sample size of 414 observations, the 20-day mean return was found to be 1.06%; however, Giot observed a standard deviation of 2.18, and the minimum and maximum returns ranged from -6.83% to 5.3%.8 While Giot demonstrates a relationship between volatility and forward equity returns, the research also diminishes the confidence that can be placed in the directional forecasting of returns over intermediate time periods for the underlying equity index. We can take from this that while a low reading within the VIX has shown some value in predicting future volatility, forecasting the degree and severity of the predicted move is less reliable, as it carries a suboptimal degree of variance.
Data Used
For purposes of crafting the methodology and charts used within this paper, data was obtained from several credible sources. CBOE Volatility Index data has been acquired from StockCharts.com, which curates its data from the NYSE, NASDAQ, and TSX exchanges.9 Data for the CBOE VIX of the VIX was obtained through a data request submitted directly to the Chicago Board Options Exchange.
Volatility Spikes
While some degree of gyration in stock prices is considered normal and acceptable by most of the investment management community, large swings in price are what catch many investors off guard. It is these “fat tail” events, often accompanied by sudden spikes in the Volatility Index, that keep investors up at night. Fortunately, many of these spikes can be forecasted; first, however, we must define what a “spike” is. While the parameters for defining a “spike” can vary, this author will use a 30% advance from closing price to a high achieved within a five-trading-day period.
Chart 1 shows the Volatility Index between May 22, 2006 and June 29, 2016. Marked on the chart are instances where the VIX has risen by at least 30% (from close to the highest high) in a five-day period when a previous 30+% advance had not occurred in the prior ten trading days. There have been 70 such occurrences of these spikes in the above-mentioned time period.
Chart 1: Spikes of 30+% in the Volatility Index, Daily Data
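A sketch of this spike definition in Python with pandas (the function name and the exact handling of the five-day window are this sketch's assumptions; the paper defines the spike only in prose):

```python
import pandas as pd

def vix_spike_starts(close: pd.Series, high: pd.Series, pct: float = 0.30,
                     window: int = 5, quiet: int = 10) -> pd.Series:
    """Flag days that begin a spike: the highest high over the next
    `window` sessions is at least `pct` above today's close, and no
    spike was flagged in the prior `quiet` sessions (de-clustering)."""
    tomorrow_high = high.shift(-1)  # align the next session's high to today
    fwd_max = (tomorrow_high.iloc[::-1]
               .rolling(window, min_periods=1).max().iloc[::-1])
    raw = (fwd_max / close - 1) >= pct
    flags = pd.Series(False, index=close.index)
    last_flag = -quiet - 1
    for i, hit in enumerate(raw.to_numpy()):
        if hit and i - last_flag > quiet:
            flags.iloc[i] = True
            last_flag = i
    return flags
```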
While previous studies have been conducted on forecasting future volatility, a search of SSRN suggests that no published analysis has addressed forecasting spikes in volatility specifically. From an asset management perspective, whether the reader is a professional or non-professional, the impact on an equity portfolio of a volatility spike, and with it a decline in stocks, is a more frequent risk than that of a bear market. Historically, the S&P 500 has averaged four 5% declines every year, yet there have been only 28 bear markets (declines of 20% or more from peak to trough) since the 1920s.10
Methods Of Volatility Forecasting
The traditional thought process holds that low volatility precedes higher volatility, a topic Whaley addresses in his 2008 paper: “Volatility tends to follow a mean-reverting process: when VIX is high, it tends to be pulled back down to its long-run mean, and, when VIX is too low, it tends to be pulled back up.”11 This is true in a general sense, but the concept does not act as the best predictor of quick spikes in the VIX. Chart 2 provides an example, showing the occurrences where the daily close of the Volatility Index is at a four-week low. The four-week period is not based on optimization but was chosen as an example time period of roughly one month. What can also be observed is the large sample size that is produced, with 100 signals in the roughly ten-year period. The author realizes that expanding the four-week time window would lessen the sample size, but the same basic result would still be reached – a greater sample size of occurrences than of previously-defined spikes in the VIX. The trouble this causes for the investor is an over-reaction each time volatility reaches a new four-week low, as the VIX many times continues its trend lower rather than spiking. Simply because the VIX has fallen to a multi-week low does not mean a spike is forthcoming.
Chart 2: Lowest Volatility Index Close in Four Weeks, Daily Data
One could also argue that, because of the Volatility Index’s tendency to mean-revert, volatility becomes overly discounted after a large decline, which alone should be reason for it to spike higher. This can be measured by looking for instances where the VIX has fallen by at least 15% in a three-day period, as shown by the markers in Chart 3. Setting aside the occurrences that take place immediately after a spike in the VIX, looking at periods where volatility has fallen by a large amount in a short time does improve the predictability of future large increases in the Volatility Index. However, while the sample size decreases to 53, there are still quite a few occurrences that produce false signals rather than preceding VIX spikes. It is this author’s opinion that neither of these methods (a four-week low or a 15+% decline) provides an optimal warning of a heightened risk of forthcoming elevated volatility.
Chart 3: 15+% decline in three days in the Volatility Index, Daily Data
Volatility Dispersion Methodology
J.M. Hurst was one of the early adopters of trading bands; in his book The Profit Magic of Stock Transaction Timing he drew envelopes around price at a set distance from a specified moving average. According to John Bollinger, CFA, CMT, Marc Chaikin was next to improve upon the practice of using bands within trading, employing a fixed percentage around the 21-day moving average.12 Ultimately, in the 1980s, Bollinger built upon the work of Hurst and Chaikin by shifting the outer bands to incorporate the volatility of the underlying market or security, using standard deviation above and below the 20-period moving average. Bollinger chose a 20-period moving average as “it is descriptive of the intermediate-term trend.”12 Bollinger notes that when the bands narrow, “a sharp expansion in volatility usually occurs in the very near future.” This idea of narrowing bands as a measure of contraction in the dispersion of a security is the topic this paper will focus on going forward.
While financial markets are never at complete rest per se, the closest they come is by trading in a very narrow range. This range can be observed in several ways, whether using Bollinger Bands®, an average true range indicator, or by simply calculating the standard deviation of price. Just as the seas become calm and the tide pulls back from the shore before the striking of a violent tsunami, the movement of the VIX often declines, sending the index’s dispersion to extremely low levels prior to the Index spiking higher. Chart 4 shows the CBOE Volatility Index and its 20-day standard deviation. While it is outside the scope of this paper, the lookback period used for the standard deviation could be optimized to better suit the timeframe and risk appetite of the investor; however, this author has chosen a period of 20 days in accordance with the timeframe used by Bollinger for his Bollinger Bands. While the VIX and its 20-day standard deviation move in lock-step with one another, additional forecasting ability can be achieved by applying further analysis to the dispersion measurement.
Chart 4: The Volatility Index and 20-day standard deviation, Daily Data
In order to find an appropriate threshold for forecasting spikes in the Volatility Index, the daily standard deviation readings were ranked by percentile for the time period of May 2006 through June 2016. The fifteenth percentile was used, as it allowed a sizable sample of 373 observations to be obtained.
The fifteenth-percentile standard deviation during the above-mentioned timeframe for the Volatility Index is 0.86. Chart 5 shows a scatter plot of the 20-day standard deviation of the VIX against the resulting three-week maximum change in the Index, calculated using the highest high in the subsequent fifteen trading days for each data point. By looking at the maximum change in the VIX we can begin to see that the largest spikes within a three-week period occur when price dispersion is extremely low; the three-week maximum change in the VIX diminishes as the dispersion becomes larger.
Chart 5: Scatter plot of the 20-day standard deviation and 3-week maximum change, Daily Data
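The study's setup can be sketched as follows; note that pandas computes the sample standard deviation, and the paper does not specify sample versus population, so that choice is an assumption here:

```python
import pandas as pd

def dispersion_study(vix_close: pd.Series, vix_high: pd.Series,
                     lookback: int = 20, pctile: float = 0.15,
                     fwd_days: int = 15) -> pd.DataFrame:
    """20-day standard deviation of the VIX, its fifteenth-percentile
    threshold, and the maximum VIX change over the next three weeks."""
    sd = vix_close.rolling(lookback).std()
    threshold = sd.quantile(pctile)  # about 0.86 for 5/2006-6/2016 per the paper
    fwd_max = (vix_high.shift(-1).iloc[::-1]
               .rolling(fwd_days, min_periods=1).max().iloc[::-1])
    return pd.DataFrame({
        "std20": sd,
        "below_threshold": sd <= threshold,
        "fwd_3wk_max_change": fwd_max / vix_close - 1,
    })
```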
To provide a graphical representation of the threshold being met, Chart 6 shows the daily Volatility Index marked with occurrences of the standard deviation being at or below 0.86 when a reading at or below 0.86 has not occurred during the prior ten trading days. The ten-day lookback is used to avoid clusters of occurrences and to better show the initial signal of the threshold being met, which leaves 52 signals in the sample. The standard deviation threshold produces a significantly smaller sample than the previously mentioned four-week-low method, and it foreshadows eventual spikes in volatility better than the 15+% declines in the VIX.
Chart 6: Volatility Index with Standard Deviation Signal Markers, Daily Data
A spike was defined previously as a rise of 30+% in a five-day period. Chart 7 displays volatility spikes but also includes the standard deviation signal markers to show that the majority of spikes that have taken place in the Index occur after the dispersion of the VIX has fallen below the specified threshold. In fact, based on this ten-year data period, very few instances of the threshold being met were not followed by a 30+% spike in volatility. As the seas become calm and the tide pulls back in the ocean before a massive wave, so too does volatility’s dispersion narrow before an eventual spike higher. While not every defined spike is preceded with volatility’s standard deviation declining to a low level, only a handful of signals are not followed by large increases in VIX readings. In other words, not every spike follows a signal but nearly every signal is followed by a spike.
Chart 7: Volatility Index with Standard Deviation and Spike Signal Markers, Daily Data
Because standard deviation is essentially a measure of volatility in and of itself, by using it to analyze the VIX we are, in essence, evaluating the volatility of the Volatility Index. Fortunately, the CBOE has also created a tool for measuring the volatility of the Volatility Index, called the VIX of the VIX (VVIX). Such a tool is useful because the scope of this paper is focused not just on forecasting future volatility but specifically on spikes in volatility, a forecast that can be improved by incorporating the VVIX.
The CBOE summarizes VVIX as “an indicator of the expected volatility of the 30-day forward price of the VIX. This volatility drives nearby VIX option prices.”13 Park (2015) notes that the VVIX acts as a better measurement of tail risk due to the VIX options market having larger trading volume, a lower bid-ask spread, and more liquidity compared to the S&P 500 options market.14 This allows for the capability to be potentially more accurate with the forecasting ability of volatility’s dispersion.
By applying the same level of analysis to the VVIX as we did with the VIX, we find that the fifteenth-percentile 20-day standard deviation for the VIX of the VIX is 3.16. Chart 8 plots the Volatility Index with markers denoting the instances when the VVIX standard deviation is at or below 3.16. Similar to the previously discussed dispersion of the VIX, the dispersion of the VVIX produces a small sample size of 54 over the studied time period. However, similar to the suboptimal method of using large declines in the VIX as a predictor of future spikes, the VVIX dispersion threshold produces many false signals that are not followed by volatility spikes.
Chart 8: Volatility Index with VVIX Standard Deviation Signal Markers, Daily Data
In order to further develop the idea that volatility dispersion is an optimal predictor of future VIX spikes, a simple system can be created using both the VIX and VVIX. This is accomplished by testing when both the VIX and the VVIX have readings of their respective 20-day standard deviations at or below their defined thresholds. Chart 9 shows where the combination of the two signals (red square markers) is met, as well as the VIX signal alone (green triangle markers), in order to show the differences and overlap of the two methods. As is to be expected, the sample size decreases when the two volatility measurements’ thresholds are combined into a single signal. While the VIX alone produces more triggers of low dispersion, it appears the combination of the VIX and VVIX is timelier in producing a signal before spikes within the Volatility Index.
Chart 9: Volatility Index with VIX and Combined Signal Markers, Daily Data
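The combined condition itself is a one-liner; a sketch using the thresholds reported in the paper:

```python
import pandas as pd

def combined_dispersion_signal(vix_sd: pd.Series, vvix_sd: pd.Series,
                               vix_thresh: float = 0.86,
                               vvix_thresh: float = 3.16) -> pd.Series:
    """True when both 20-day standard deviations sit at or below their
    respective fifteenth-percentile thresholds."""
    return (vix_sd <= vix_thresh) & (vvix_sd <= vvix_thresh)
```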
Up to this point only a visual representation of the signals has been shown, but next we shall look at the numerical changes that occur in the VIX following the methods previously discussed in this paper along with the superior method outlined in the section above.
Table 1 shows the three-week change in the VIX, reporting the average and median of the maximum and minimum changes. We can see that the previously discussed methods of using a low in the VIX (lowest close in four weeks) and large declines (15+% decline in three days) do not produce an ‘edge’ over the average three-week change across all VIX readings. However, we do see a much larger maximum and a smaller minimum when using the VIX, VVIX, and combined signals.
In fact, the VIX signal has an average three-week maximum that is 54% greater than that of the large VIX drop, with the minimum change being 49% smaller. Not only does the VIX rise on average by a greater degree after the VIX, VVIX, and combined signals, it also declines less once a signal has been produced. This increase in ‘edge,’ together with the previously discussed decrease in sample sizes, produces more manageable signal generation with more accurate forecasting ability than the alternative methods of VIX spike forecasting discussed above.
Table 1
Maximum and minimum change is calculated using the highest high and lowest low relative to the closing VIX reading on the day of the signal, over the subsequent fifteen trading days, daily data.
Conclusion
This paper provides an argument for using the dispersion of the VIX, measured by a 20-day standard deviation, as a superior tool for forecasting spikes within the Volatility Index. While not every trader has a specific focus on the Volatility Index within their own respective trading styles or strategies, Munenzon (2010) shows that the VIX has important implications for return expectations across many different asset classes, such as bonds, commodities, and real estate. Although the Volatility Index itself cannot be bought or sold directly, by knowing how to properly evaluate volatility, an investor can better prepare his or her portfolio, whether from a standpoint of defense (raising cash, decreasing beta, etc.) or offense (initiating a trading position to capitalize on the expected rise in volatility through the use of ETNs, futures and/or options). Charts 6 through 9 show that the evaluation of the dispersion within the VIX and VVIX acts as an accurate barometer for future large advances in the Index. Table 1 provides evidence that the VIX rises more and declines less after a signal has been established through dispersion analysis than after signals from more commonly used methods applied to volatility. While the scope of this paper is not to create a standalone investment strategy, the concept discussed within can be utilized across a broad scope of investment paradigms and timeframes.
It is widely believed in the investment community that when the VIX sits at relatively low levels or has just suffered a large decline, its nature to mean-revert should carry the Index immediately higher, snapping like a rubber band back to elevated levels. This line of thinking produces signals with sample sizes much greater than most traders would likely be able to act upon or monitor and, as Table 1 shows, forecasts on average sub-par future changes within the VIX. While the parameters used within this paper to analyze the dispersion of the Volatility Index were not optimized, the author believes further research can be done to better hone the forecasting ability of this analysis when the VIX and VVIX trade in narrow ranges prior to spikes in the underlying Index.
With relative confidence, the author believes dispersion of price, as measured by the daily standard deviation of the VIX and VVIX, acts as a more accurate and timely method of forecasting spikes in the Volatility Index, as defined in this paper. This method provides an early warning signal of a potential oncoming “volatility tsunami” that can have large negative implications for an investment portfolio, and it allows for the potential to profit from the rising tide of the VIX.
Footnotes
1. See National Oceanography Centre 2011
2. See Whaley 2013
3. See Gire 2005
4. See Whaley 2008
5. See CBOE 2016
6. See Zakamulin 2016
7. See Cipollini & Manzini 2007
8. See Giot 2002
9. See StockCharts.com
10. See Hulbert 2016
11. See Whaley 2008
12. See Bollinger
13. See CBOE
14. See Park 2015
References
Bollinger, John. “Bollinger’s Brainstorm.” Bollinger Bands. Bollinger Capital Management, Inc., n.d. Web. 12 Oct. 2016. http://www.bollingerbands.com/services/bb/
CBOE Volatility Index FAQs, Chicago Board Options Exchange, n.d. Web. 4 Nov. 2016. http://www.cboe.com/micro/vix/faq.aspx#1
CBOE VVIXSM Index, Chicago Board Options Exchange, n.d. Web. 4 Nov. 2016. http://www.cboe.com/micro/vvix/
Cipollini, Alessandro Paolo Luigi and Manzini, Antonio, Can the VIX Signal Market’s Direction? An Asymmetric Dynamic Strategy (April 2007). Available at SSRN: https://ssrn.com/abstract=996384 or http://dx.doi.org/10.2139/ssrn.996384
Data and Ticker Symbols, StockCharts.com, Web. 4 Nov. 2016. http://stockcharts.com/docs/doku.php?id=data
Drimus, Gabriel G. and Farkas, Walter, Local Volatility of Volatility for the VIX Market (December 10, 2011). Review of Derivatives Research, 16(3), 267-293, (2013). Available at SSRN: https://ssrn.com/abstract=1970547 or http://dx.doi.org/10.2139/ssrn.1970547
Giot, Pierre, Implied Volatility Indices as Leading Indicators of Stock Index Returns? (September 2002). CORE Discussion Paper No. 2002/50. Available at SSRN: https://ssrn.com/abstract=371461 or http://dx.doi.org/10.2139/ssrn.371461
Gire, Paul J. (2005) Missing the Ten Best. Journal of Financial Planning.
How a Tsunami Wave Works. National Oceanography Centre, 11 Mar. 2011. Web. 27 Oct. 2016. http://noc.ac.uk/news/how-tsunami-wave-works
Hulbert, Mark. “Bear Markets Can Be Shorter Than You Think.” The Wall Street Journal, 06 Mar. 2016. Web. 08 Oct. 2016. http://www.wsj.com/articles/bear-markets-can-be-shorter-than-you-think-1457321010
Munenzon, Mikhail, 20 Years of VIX: Fear, Greed and Implications for Alternative Investment Strategies (April 29, 2010). Available at SSRN: https://ssrn.com/abstract=1597904 or http://dx.doi.org/10.2139/ssrn.1597904
Park, Yang-Ho, Volatility-of-Volatility and Tail Risk Hedging Returns (May 18, 2015). Journal of Financial Markets, Forthcoming. Available at SSRN: https://ssrn.com/abstract=2236158 or http://dx.doi.org/10.2139/ssrn.2236158
Whaley, R. E. Derivatives on market volatility: Hedging tools long overdue, Journal of Derivatives 1 (Fall 1993), 71-84.
Whaley, R. E. (2009). Understanding the VIX. The Journal of Portfolio Management, 35(3), 98-105. doi:10.3905/jpm.2009.35.3.098
Whaley, Robert E., Trading Volatility: At What Cost? (May 6, 2013). Available at SSRN: https://ssrn.com/abstract=2261387 or http://dx.doi.org/10.2139/ssrn.2261387
Zakamulin, Valeriy, Abnormal Stock Market Returns Around Peaks in VIX: The Evidence of Investor Overreaction? (May 1, 2016). Available at SSRN: https://ssrn.com/abstract=2773134 or http://dx.doi.org/10.2139/ssrn.2773134
The Virtual Crowd
by Jason Meshnick, CMT
About the Author | Jason Meshnick, CMT
Jason Meshnick, CMT, is the Director of Product Management at Markit Digital, a division of IHS Markit. There, he creates well-known market analytics including the CNN Business Fear & Greed Index. His past career included work as a principal trader, market maker, and hedger. He was once an active participant in Sports Car Club of America racing but spends more time these days on two wheels, racing bicycles in his hometown of Boulder, Colorado.
Abstract
Before Facebook, Twitter, and Snapchat, there was the New York Stock Exchange. The 20th Century NYSE trading floor was one of the world’s most important early social networks. Social networks and social media-based investing have become commonplace today. Many investment strategies even seek to use investor sentiment from these platforms to determine the future direction of a security’s price. While a generation of investors thinks that this is new, investors have been actively managing money using market sentiment for decades. This paper introduces a unique and innovative indicator that measures the depth of sentiment of today’s virtual crowd of traders and investors for a security using readily available trading volume. It is called Normalized Relative Volume (NRV) and will help active managers generate alpha through security selection and portfolio weighting.
The 20th Century NYSE floor was structured as an auction market that brought buyers and sellers together to determine the fair value for a security. Interested auction participants formed crowds that ebbed and flowed based on market moving information, as investors evaluated their portfolios and sought to own the best performing securities. Astute traders learned that crowd size was an important indicator. Being early to a growing crowd meant opportunity to profit from increasing price momentum. Alternatively, a small crowd of participants might mean that an ignored stock was mispriced.
During the last twenty years, electronic trading and competition from new exchanges have caused traders to move upstairs. As a result, the crowds that made up each auction on the NYSE floor are no longer viable as an indication of investor sentiment. However, today’s virtual crowd is as valuable as ever, for those who know how to read it.
Academic and practitioner researchers have found that volume should be normalized to effectively provide investment signals. Normalizing volume is typically done by adjusting it to a moving average or to shares outstanding and makes volume more easily comparable to its own history or to other securities. However, researchers find an important shortcoming with volume: the most and least active stocks rarely move into other deciles. One group of researchers uses this to their advantage, determining that liquidity could be considered an investment style, akin to value and growth investing. They find that low liquidity securities perform best.
This paper builds upon that research, as well as the historical relationships between traders on the floor, and introduces Normalized Relative Volume (NRV). This indicator solves the problems of other liquidity measures. First, all stocks undergo periods of high and low volume, relative to their own history. NRV makes this actionable so that every stock offers tactical opportunity. Second, NRV uses volume data that is available freely to any investor. Last, it is the only measure that can be effectively used at any data frequency, from intraday, to daily, weekly, or monthly. As a result, NRV is an effective proxy for the size of the virtual crowd trading a security and is an important new investment factor.
This paper shows the value of NRV through three tests across stocks and ETFs. First, it compares 170 observations of high NRV-ranked stocks to 170 observations of stocks ranked by simple trading volume and finds that the high NRV ranking is better at finding event-driven trading opportunities. High NRV stocks are more volatile and exhibit greater instances of corporate events as well as abnormally high volume levels. By comparison, the stocks on the most-active list tend to have significant daily overlap—there were only 36 unique stocks across the 170 observations—and many failed to achieve volume that was two times their daily average. It is believed that high NRV stocks are undergoing revaluation by a larger-than-normal group of investors and that these stocks will underperform in subsequent periods.
Next, the paper tests S&P 500 constituents from 2000-2013 and determines that stocks with a high NRV in the first month tend to underperform other stocks in the second month. However, stocks with low NRV in the first month tend to outperform other stocks in the second month. This indicates that stocks which trade on low NRV are being priced less efficiently by the market and will readjust in the second month. While many investors focus on high NRV stocks to find trading ideas, they would do better to look for opportunity among the stocks with smaller virtual crowds and lower NRV.

Finally, this paper offers a method for weighting a portfolio using NRV. The results show that a low NRV weighted index outperforms high NRV, equal weighted, and S&P 500 indexes over both bull and bear markets and generates alpha. It is likely that an active manager could use this weighting methodology on a well-selected portfolio of stocks to generate additional outperformance. This paper’s three tests confirm the ability of NRV to provide information about securities pricing. As a new indicator, it can be used by active investment managers either on a discretionary or quantitative basis.
Fear and greed drive the sentiment for investors, especially among crowds of traders. This paper proves that larger crowds of virtual traders are more effective at pricing securities than smaller crowds. Although the physical crowd is gone, NRV provides a way to measure the modern-day virtual crowd and increase portfolio returns.
Part 1: Introduction
Before Facebook, Twitter, and Snapchat, there was the New York Stock Exchange. The 20th Century NYSE trading floor was the original social network. Unlike Facebook, this social network connected investors who interacted to determine the fair value of a company. On the floor of the NYSE, there were boundless opportunities for traders skilled at reading the depth and direction of crowd sentiment. As markets became electronic, the crowd moved off the floor and was no longer seen as an effective sentiment indicator. However, today’s virtual crowd is as valuable as ever, for those who know how to read it.
This paper presents a unique and innovative indicator called Normalized Relative Volume (NRV). NRV uses trading volume to model the size of the virtual crowd as well as the depth, or commitment, of crowd sentiment. The results will show that the virtual crowd remains an effective indicator for traders. Most importantly, this paper adds to the body of investment knowledge by proving that securities with low Normalized Relative Volume are mispriced and present opportunity to generate alpha.
History of the Crowd
From its beginnings until the early 21st Century, trading on the New York Stock Exchange (NYSE) was a continuous auction system that brought human traders face-to-face in a centralized location. Trading in each stock was managed by a specialist tasked with maintaining a fair and orderly market. Brokers gathered and formed crowds of interested buyers and sellers. Crowd size became a measure of sentiment. Larger crowds were considered to have greater depth of sentiment. With more traders vying for a piece of the action, a deeper crowd was more committed.
With the rise of electronic trading and new exchanges, markets are decentralized, and the crowd of floor traders is largely disbanded. Today’s crowd is virtual and unmeasured. However, the crowd remains as valuable today as it was during the heyday of the NYSE floor. What has changed is the ability to measure it. While it is impossible to know the number of traders in today’s virtual crowd, their trading volume is an effective proxy for their level of interest in a security.
Volume is the quantity of an item that changed hands during a period. For a stock, it is the number of shares that moved from one owner to another. Volume measures money flow and is a proxy for crowd size and depth of sentiment.
Price, on the other hand, measures the sentiment of the quality of the company. If investors believe that earnings will grow and outpace those of rivals, their demand for shares will cause an increase in the share price. Alternatively, the price may fall when investor sentiment is negative towards a company’s earnings quality. Price sentiment is either bullish, bearish, or neutral. Together, price and volume make up the two components of supply and demand and paint a complete picture of investor sentiment and the crowd of traders.
Traditional Uses of Volume
Review of Prior Research
Academic researchers and practitioners have long studied volume as an important factor driving returns. The CMT Association’s Charles H. Dow Award has been granted for two papers related to volume. Buff Dormeier, CMT won for his work on the Volume Price Confirmation Indicator, which monitors supply and demand to see if volume supports the price trend. George Schade, CMT provided a history of On Balance Volume and showed that volume has been an important investment factor for many years. The concepts in both papers combine reported volume with price changes to create supply and demand indicators.
Steve Woods developed the concept of float turnover (volume/shares in float) to measure supply and demand for individual stocks. He, too, defined buy and sell patterns. Woods’ methodology normalizes volume in a way that is consistent with the turnover metric used by academic researchers.1

Andrew Lo and Jiang Wang published “Stock Market Trading Volume” in 2001. They describe ten measures of volume often used in academic literature (Appendix A). These include shares traded, turnover (shares traded/shares outstanding), dollar turnover (dollar volume/market capitalization), number of trades, and even number of trading days per year. One of their conclusions is that, “if the focus is on the relation between volume and equilibrium models of asset markets, turnover yields the sharpest empirical implications and is the most natural measure.”2
Lo and Wang also find that “there is some persistence in turnover deciles from week to week—the largest- and smallest-turnover stocks in one week are often the largest- and smallest-turnover stocks, respectively, the next week.”3
In 1994, Stickel and Verrecchia published “Evidence that Trading Volume Sustains Stock Price Changes.” They also use turnover as a measure of volume and break the market into two types of traders. Informed traders act on research and uninformed traders trade on liquidity. They find that, “as volume increases, the probability that the price change is information driven increases.”4
Jiang Wang published “A Model of Competitive Stock Trading” in 1994, which also uses turnover to measure informed vs. uninformed trading. Here, informed traders are event driven and trade based on valuation, while uninformed traders are asset allocators rebalancing a portfolio. In his review of several papers, Wang finds that “a high return accompanied by high volume implies high future returns if the first component (informed traders) dominates and low future returns if the second component (uninformed traders) dominates.”5
In “Trading Volume and Stock Investments,” Brown, Crocker, and Foerster find that volume is correlated with returns. They conclude, “portfolios of S&P 500 Index and large-capitalization stocks sorted on higher trading volume and turnover tend to have higher subsequent returns (holding periods of 1-12 months) than those with lower trading volume.”6
In their review of prior research, they also find:
- Volatility of liquidity is inversely correlated with returns
- Historical volume is predictive of future price momentum
- Short-term mean reversion occurs after large price changes on high volume
- Unusual volume often leads to price increases
- Stocks with traditionally high volume tend to overreact to news events whereas stocks with low volume tend to underreact
Yale professor and CIO of Zebra Capital Management, Roger Ibbotson, along with academic and practitioner researchers Chen, Kim, and Hu, finds that liquidity is an investment style, akin to size, value, and momentum. In “Liquidity as an Investment Style” they define turnover as “the sum of 12 monthly volumes divided by each month’s shares outstanding”7 and find that a liquidity factor “added significant alpha to all the Fama-French factors when expressed either as a factor or as a low-liquidity long portfolio.”8 As with other research, they find that liquidity is stable and that “69.23% of the stocks stayed in the same quartile.”9
Research shows that volume is a useful factor for predicting future returns of securities, especially when normalized. Additionally, there is some persistence, especially in higher volume stocks. Last, price changes that occur on higher normalized volume tend to be persistent.
Problems with Volume
Volume has varied dramatically over time due to systemic changes and investor emotions. Data from the World Bank clearly show that aggregate turnover (shares traded as a percent of market cap) is not stable, rising into the 2008 Financial Crisis, before falling to a 20-year low (see Chart 1). Changes to the rules for NASDAQ dealer trade counting and the rise of high frequency market making both impacted market microstructure and trade volume. Trader emotions were also a factor, pushing volume to new highs during the Dotcom bust and Financial Crisis.
Chart 1: Stock trades, turnover ratio of domestic shares (%), 1984-2017. Source: World Bank
Volume patterns fluctuate with seasonality. Volume is low during holidays and the summer vacation season. It is high during quarterly earnings periods, as new information drives asset repricing. Volume cannot be compared across securities. As Lo and Wang reported, stocks in the top and bottom deciles rarely moved to other groups.
Finally, daily volume is cumulative and, using traditional methods, can only be analyzed at the end of the trading day.
As these issues highlight, raw volume is neither comparable across time nor across a universe of securities. Therefore, volume analysis should be performed on normalized volume.
Introduction to Normalized Relative Volume
As discussed above, researchers normalize volume to compare it across time and across securities. Two popular methods include turnover and calculating a ratio of volume to its average. Both methods can be compared across stocks as well as historically. However, they cannot be calculated using intraday data.
Volume may also be compared to a volume benchmark. The benchmark is typically the total volume on an exchange or sum of volume across the constituents of an index. This indicates the percentage of the volume for that universe which is attributable to a single stock. It can also be thought of as a measure of the virtual crowd of investors interested in this stock.
Normalized Relative Volume (NRV) combines these two methodologies into a single, unique market indicator that measures the size and change of the virtual crowd. Importantly, NRV can be calculated using intraday data for high frequency studies or for longer term investing using daily, weekly, or monthly data.
Normalized Relative Volume is calculated as:
NRV = (Security’s Volume / Benchmark Volume) ÷ (Security’s Average Volume / Benchmark Average Volume)
Where:
- Security’s Volume is the number of shares traded during the period
- Benchmark Volume is the number of shares traded on an exchange or the sum of volume for index constituents
- Security’s Average Volume is the average volume for the selected period
- Benchmark Average Volume is the average volume for that benchmark
This can be thought of as:
NRV = Relative Volume ÷ Average Relative Volume
Normalized Relative Volume quantifies the virtual crowd of traders. It measures activity relative to benchmark volume as well as historically, in the same way that a broker on the NYSE floor would monitor crowd sizes for opportunity.
For example, suppose ten traders on a fictitious exchange trade two stocks. On an average day, stock ABC trades 100 shares and stock XYZ trades 400 shares. The ten traders would normally split based on volume, with an average of two traders in the crowd for ABC and eight traders in the crowd for XYZ. This morning, ABC announced earnings and opened on abnormally high volume of 100 shares, equal to its average daily volume. XYZ has no news today and opens on its normal opening volume of 100 shares.
Normalized Relative Volume for each stock is:
NRV(ABC) = (100 / 200) ÷ (100 / 500) = 0.50 ÷ 0.20 = 2.5
NRV(XYZ) = (100 / 200) ÷ (400 / 500) = 0.50 ÷ 0.80 = 0.625
Based on the equal volume in the two stocks, the ten floor traders have split so that each crowd includes five traders. However, ABC’s crowd is 2.5 times larger than usual while XYZ’s crowd is smaller.
ABC, with its earnings release, is attracting greater interest than usual as traders update growth targets for the company’s earnings and decide whether to liquidate or add to holdings.
This dynamic occurred every day on the historic NYSE floor. Today, that activity happens electronically, where the crowd is neither seen nor heard.
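To make the arithmetic concrete, here is a minimal Python sketch of the calculation, reproducing the fictitious two-stock example above; the function and variable names are illustrative rather than taken from the paper.

```python
def nrv(security_volume, benchmark_volume,
        security_avg_volume, benchmark_avg_volume):
    """Normalized Relative Volume: relative volume divided by
    average relative volume."""
    relative_volume = security_volume / benchmark_volume
    average_relative_volume = security_avg_volume / benchmark_avg_volume
    return relative_volume / average_relative_volume

# Benchmark volume this morning: ABC (100) + XYZ (100) = 200 shares.
# Benchmark average volume: ABC (100) + XYZ (400) = 500 shares.
print(nrv(100, 200, 100, 500))  # ABC -> 2.5: a crowd 2.5x its usual size
print(nrv(100, 200, 400, 500))  # XYZ -> 0.625: a smaller crowd than usual
```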
Normalized Relative Volume is a simple but sophisticated algorithm that helps investors measure the depth of crowd sentiment. The rest of this paper will share three ways that Normalized Relative Volume can be used on its own as well as the advantages of Normalized Relative Volume over reported volume.
Part 2: Short-Term Idea Generation Using Normalized Relative Volume
Most news websites publish daily lists of the most actively traded securities. Investors use the lists to find interesting investment ideas. However, Normalized Relative Volume, as a proxy for crowd size, is better at this task.
To test the value of NRV and its ability to generate better ideas than the most actively traded stocks, two lists were created from S&P 500 constituents. One was a “most actives” list that ranked stocks by volume. The second ranked stocks by NRV. The observation period included 17 trading days, from October 31, 2017 through November 22, 2017. The top 10 stocks from each list were compared. Nearly every stock in the most actives list would be recognizable to readers of the financial press. It included Apple, General Electric, and Bank of America.
Summary statistics for the most actives show that, of the 170 observations, there was significant daily overlap. Only 36 unique securities appeared during the observation period and 70% of the securities from one day would appear on the next day’s list. In fact, four stocks, AMD, BAC, GE, and T, appeared in all 17 days. Although these stocks were the most actively traded, few exhibited abnormal volume. Just 55% of observations were above their 50-day average volume and only 24% exhibited a volume spike of at least 2 times the average. Only 40% of observations occurred with market-moving news, like earnings results, management changes, mergers, or corporate actions. These stocks were actively traded, but not very interesting.
NRV was more effective at finding interesting stocks. Using a 50-day average for both the stock’s and the market’s volume, 100 unique stocks appeared during the 17 trading days. Of those, only 26% would appear on the next day’s list and only three, CBS, HSIC, and TWX, appeared more than five times in total. Moreover, those three stocks released significant news: a reorganization (CBS), disappointing earnings (HSIC), and merger talks (TWX). They weren’t the only stocks with interesting news: 73% of observations coincided with news such as mergers, reorganizations, earnings disappointments, and corporate actions. Last, every security chosen for its high Normalized Relative Volume traded at least its 50-day average volume that day, and 91% of observations were at least two times the 50-day average.
Investors can use NRV to find a diversified list of stocks with higher than normal intraday trading activity and a greater likelihood of coinciding with market-moving news. As a result, these stocks tend to attract larger than normal crowds.
Comparing the returns from both lists, high NRV stocks tended to be more volatile on the event day, with a higher standard deviation of returns. Returns at the 80th and 20th percentiles show a spread of 8.23 percentage points vs. 3.16 percentage points for the Most Actives list. The absolute return is 1.72 percentage points higher for High NRV stocks, too.
Table 1
Long-term investors, acting on fundamentals, along with event-driven traders swell the virtual crowds of high NRV stocks, quickly adjusting the valuations for these stocks. The next section will show that high NRV stocks tend to underperform lower NRV stocks over the following month.
Part 3: Analysis of Monthly NRV
Introduction
To test the idea that high NRV stocks underperform low NRV stocks, monthly frequency data is used to measure forward month returns for stocks in five different NRV buckets.
Data
This test was run using the Optuma software platform and the historic S&P 500 constituent database. This database accounts for survivorship bias to test how the strategy would have performed in real time. The test was run on monthly frequency data from 9/1/2000 through 2/1/2013 and included two bull markets and two bear markets. The statistics exclude the top and bottom 0.5% of observations, which contained unrealistic outliers and potential data errors.
Methodology
Normalized Relative Volume was calculated for each stock vs. the S&P 500 constituent total volume. Stocks were sorted into five buckets representing lowest to highest NRV and returns were measured over the following month.
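A minimal pandas sketch of this bucketing procedure follows; it assumes a long-format DataFrame with one row per month and ticker, and the column names (month, ticker, nrv, return) are hypothetical rather than from the paper's actual data pipeline.

```python
import pandas as pd

def bucket_forward_returns(df: pd.DataFrame) -> pd.Series:
    """Sort stocks into five NRV buckets each month and average the
    following month's return within each bucket."""
    df = df.sort_values("month").copy()
    # Cross-sectional NRV quintiles within each month (1 = lowest NRV).
    df["nrv_bucket"] = df.groupby("month")["nrv"].transform(
        lambda x: pd.qcut(x, 5, labels=False)) + 1
    # Forward one-month return for each ticker.
    df["fwd_return"] = df.groupby("ticker")["return"].shift(-1)
    return df.groupby("nrv_bucket")["fwd_return"].mean()
```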
Results
Table 2 (colors represent ranking, green highest to red lowest)
The results support the hypothesis that high NRV stocks have already priced in their prior month events. They tend to have weaker mean and median returns and distributions that are less skewed. This is also observed at the 80th percentile. Volatility rises for outliers at both the lowest and highest NRV observations.
This test built upon the analysis in part 2, which demonstrated a consistent tendency for high NRV stocks to be more volatile than a similar list of most actively traded stocks. Traders and investors reacted to important events, buying and selling shares based on new growth assumptions. Part 3 proved that in the month following a high NRV event, those stocks tended to underperform other stocks that did not have high NRV. Low NRV stocks offered higher returns but also the highest volatility, indicating that investors and traders adjusted their growth assumptions for these stocks in the following month.
Part 4: Normalized Relative Volume Indices
In their study, Ibbotson, Chen, Kim, and Hu find that volume is an investable factor which investors can take advantage of by volume weighting a portfolio’s constituents.10 This section will show that volume weighting outperforms buy-and-hold of the S&P 500 and that low relative volume weighting even beats an equal weighted portfolio of sector ETFs and generates alpha.
Data
The nine original State Street Global Advisors’ Select Sector SPDR ETFs were used to create three indexes. The selected ETFs track Global Industry Classification Standard (GICS) sectors and began trading in 1998. These ETFs were selected because of their long history, their high level of trading volume, and their effectiveness at tracking their chosen benchmarks. Together, they offer a broad look at the market and can be easily compared to the S&P 500. Although the portfolios underlying these funds are market cap weighted, it is unlikely that this impacted the results. Price data came from Yahoo! Finance and was adjusted for distributions. The State Street Global Advisors’ SPDR S&P 500 ETF Trust (SPY) ETF was used as a benchmark in order to remain consistent with ETF usage in the other indexes and because it is investable.
Data frequency was weekly, spanning July 4, 2005 through June 18, 2018, in order to include the Financial Crisis in 2008, market declines in 2010, 2011, and 2015, as well as all bull markets during those periods.
Data was segmented into 26-week bull and bear market periods in order to analyze performance in different market environments. See appendix for a table of bull and bear market periods including returns.
Methodology
Index descriptions:
- Equal Weighted Index (EW Index): weekly returns were weighted equally across all 9 ETFs
- High Normalized Relative Volume Index (Hi NRV Index): weekly returns were weighted by prior week NRV (higher NRV = higher weight)
- Low Normalized Relative Volume Index (Lo NRV Index): weekly returns were weighted using inverted prior week NRV (lower NRV = higher weight; see the weighting sketch after this list)
- S&P 500 Index (SPY Index): Like the others, it was indexed to 100 on the start date
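The sketch below illustrates the Hi NRV and Lo NRV weighting schemes under stated assumptions: weekly DataFrames `nrv` and `returns` with one column per sector ETF are hypothetical inputs, not the paper's actual data pipeline.

```python
import pandas as pd

def nrv_weighted_index(nrv: pd.DataFrame, returns: pd.DataFrame,
                       invert: bool = False, base: float = 100.0) -> pd.Series:
    """Weight each week's ETF returns by prior-week NRV (Hi NRV) or by
    inverted prior-week NRV (Lo NRV), then chain-link into an index."""
    raw = 1.0 / nrv if invert else nrv
    weights = raw.shift(1)                              # use prior week's NRV
    weights = weights.div(weights.sum(axis=1), axis=0)  # weights sum to 100%
    weekly_return = (weights * returns).sum(axis=1)
    return base * (1.0 + weekly_return).cumprod()       # indexed to 100

# hi_nrv_index = nrv_weighted_index(nrv, returns)               # Hi NRV Index
# lo_nrv_index = nrv_weighted_index(nrv, returns, invert=True)  # Lo NRV Index
```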
Results
Each constituent in an index or portfolio using NRV weighting will have approximately equal weight over long periods of time, although the weight for any one period will fluctuate. For example, in the Lo NRV index, each of the nine constituents displayed an average weekly weighting of 11.11%. At the 80th percentile, weekly weights rose to 13.48% and fell to 8.18% at the 20th percentile. (See appendix B for table.) This is an improvement over turnover, where stocks tend to stay in the same quartile. NRV, through tactical weighting, offers additional potential to generate alpha. Both the Hi NRV Index and Lo NRV Index are strongly correlated with the SPY Index and the EW Index. This is expected as the underlying ETFs can be aggregated to closely approximate the S&P 500 Index’s returns and implies that all differences in returns are due to portfolio construction.
Table 3
The Lo NRV Index generates a positive alpha against both the EW Index and SPY Index, whereas the Hi NRV Index generates a positive alpha against the SPY Index but a negative alpha against the EW Index (table 3). Alpha is a statistical measure of risk-adjusted performance that measures excess return beyond expectations set within the Capital Asset Pricing Model. The positive alpha generated by the Lo NRV Index proves that this methodology successfully finds mispriced securities.
Chart 2: Index Price History
Returns bear this out. During the test period, the Lo NRV index rose by 243.13% (9.93% annualized), easily beating the Hi NRV index, the EW Index, and the SPY Index. The Hi NRV Index gained 201.17%, beating only the SPY Index.
Table 4
Chart 3: Drawdown Comparison
Drawdown was comparable across all three indices; however, the Lo NRV Index spent fewer weeks in drawdown. It was later to decline and, in the case of the financial crisis, quicker to rebound and set a new all-time high.
Bull and Bear Markets
In addition to the drawdown analysis, it is important to compare performance from the indices in bull and bear markets to see if Lo NRV outperforms Hi NRV in both market environments. During the measurement period, there was only one bear market lasting more than one calendar year. However, there were nine non-overlapping twenty-six-week periods where the S&P 500 was down. Therefore, a bear market was defined as any 26-week period that the S&P 500 fell and a bull market was defined as any 26-week period that the S&P 500 rose. (See appendix C for Bull and Bear market dates and index returns).
Table 5
The Lo NRV Index performed best, averaging 5.43% during all 26-week periods. However, it had the highest standard deviation and the weakest single period. The 27.48% decline occurred between June 30, 2008 and December 29, 2008. The magnitude of this decline, relative to the others, can be partially explained by the Lo NRV’s relatively strong performance prior to this period. In the 52 weeks before June 30, 2008, the Lo NRV index had dropped just 8%, versus the Hi NRV index’s decline of nearly 15%. The Lo NRV stocks fell further and caught up to the market, possibly due to selling from margin calls. Removing the 2008 financial crisis from the test shows that the Lo NRV index offered the smallest 6-month drawdown and lowest volatility.
Bull Market Performance
Table 6
Low Normalized Relative Volume stocks outperform in bull markets, although with a narrow lead. The spread in 26-week returns between the four indices falls to just 0.41%, with the Lo NRV Index performing best, gaining 10.90%, and the Hi NRV Index returning 10.49%. Lo NRV, Hi NRV, and EW Indexes offered similar volatility that was almost 10% above that of the SPY Index.
Bear Market Performance
Table 7
During the observation period, there were nine bear markets. During these 26-week periods where the SPY Index fell, Low Normalized Relative Volume stocks outperformed. They tended to lose the least, falling an average of 4.90% and offered positive returns during two periods.
Volatility was highest for the Lo NRV Index. However, excluding 2008, the Lo NRV Index wins in every category. Standard deviation improves to 3.28% from 9.12% and the worst percentage decline easily beats the other indexes at -6.24%.
The Lo NRV Index outperforms because it is weighted towards securities that are less efficiently priced. The Hi NRV stocks attract a large virtual crowd of event driven traders who use new data to reprice securities. Those trades become crowded. The Lo NRV securities’ virtual crowds are smaller than normal, resulting in mispricing of the securities. All stocks will have periods of high NRV and low NRV, because NRV is normalized to the market and historical relationships. Therefore, NRV is superior to reported volume and can be used to measure crowd activity and the depth of investor sentiment.
Conclusion
This paper introduces a unique and innovative proxy for today’s virtual trading crowd called Normalized Relative Volume and proves that volume measures the depth of investor sentiment. NRV normalizes volume across time and across an aggregate benchmark and improves upon other liquidity tools because it uses easily available volume data and can be calculated intraday.
This paper’s three tests confirm the ability of NRV to provide information about securities pricing. First, the paper shows that NRV-ranked stocks are better at finding event driven trading opportunities than most active stocks lists. Second, stocks with high NRV are proven to be more accurately priced at the time of measurement and tend to have lower returns in the following month. Stocks with low NRV offer higher forward returns. Third, weighting a portfolio towards low NRV securities generates alpha and suggests that investors could use this methodology to generate excess returns on their own portfolios.
Fear and greed drive the sentiment for investors, especially among crowds of traders. This paper proves that larger crowds of virtual traders are more effective at pricing securities than smaller crowds. Although the physical crowd is gone, NRV provides a way to measure the modern-day virtual crowd and increase portfolio returns.
Footnotes
1. Woods, 2002
2. Lo and Wang 2001, p.7
3. Lo and Wang 2001, p.2
4. Stickel and Verrecchia 1994, p.57
5. Wang 1994, p.152
6. Brown, Crocker, and Foerster 2009, p.67
7. Ibbotson, Chen, Kim, and Hu 2013, p.31
8. Ibbotson, Chen, Kim, and Hu 2013, p.40
9. Ibbotson, Chen, Kim, and Hu 2013, p.41
10. Ibbotson, Chen, Kim, and Hu 2013
References
Brown, Jeffrey H., Crocker, Douglas K., and Foerster, Stephen R., 2009, Trading Volume and Stock Investments, Financial Analysts Journal, vol. 65, no. 2 (Mar.-Apr., 2009), 67-84
Wang, Jiang, 1994, A Model of Competitive Stock Trading Volume, Journal of Political Economy, vol.102, no. 1, 127-168
Stickel, Scott E., and Verrecchia, Robert E., 1994, Evidence that Trading Volume Sustains Stock Price Changes, Financial Analysts Journal, vol. 50, no. 6 (Nov.-Dec., 1994), 57-67
Lo, Andrew W. and Wang, Jiang, Stock Market Trading Volume, First Draft, September 2001
State Street SPDRs: https://us.spdrs.com/en
World Bank: https://data.worldbank.org/indicator/CM.MKT.TRNR?locations=US
Woods, Steve, Float Analysis: Powerful Technical Indicators Using Price and Volume, John Wiley and Sons, April, 2002
Gann, William D., New Stock Trend Detector: A Review Of The 1929-1932 Panic And The 1932-1935 Bull Market, The Richest Man in Babylon, March, 2008
Ibbotson, Roger G., Chen, Zhiwu, Kim, Daniel Y.-J., and Hu, Wendy Y., 2013, Liquidity as an Investment Style, Financial Analysts Journal, vol. 69, no. 3 (May – June, 2013), 30-44
Appendix A: Volume measures presented by Lo and Wang, 2001.
Appendix B: NRV Weighted Index Statistics
Appendix C
Bull and Bear Markets are defined by whether the SPY Index was up or down during a 26-week period. Bear markets are highlighted in red.
Making the Most of Panic
by Christopher Diodato, CMT, CFA | 2019 Charles H. Dow Award Winner
About the Author | Christopher Diodato, CMT, CFA | 2019 Charles H. Dow Award Winner
Chris Diodato became a student of technical analysis at a young age, enrolling in the CMT program during his freshman year of college. Since then, he attained his CMT and CFA charters and worked in various research and portfolio management roles. Currently, Chris is the Senior Portfolio Manager at Cantilever Wealth Management in Pittsburgh, PA where he leads the development and implementation of cutting-edge investment offerings. He maintains a holistic investment philosophy, incorporating technical, fundamental, and quantitative methods in his work and is always hungry to learn more about financial markets. He is especially appreciative of his mentors, family, and friends who have supported him on his journey to become a financial professional.
Since the dawn of investing, practitioners have understood the value of contrarianism. All too often, emotions, not fundamentals, become the driving force in stock prices and cause dislocations. Extremes in sentiment – whether excessive optimism or pessimism – have been associated with market dislocations such as the housing bubble, Bitcoin’s behavior in 2017, and the stock market crash of 1987 (known as Black Monday). For those willing to buck the herded animal spirits associated with investors’ mentality during extreme events, there are profits, and often very large profits, to be had. Technical analysts, many of whom are contrarian by nature, understand this and have endeavored to identify periods of panic and capitulation in an effort to buy securities at deeply discounted prices. In the Far East, candlestick charting was used to identify panic over three hundred years ago. In the early 1900s, Richard Wyckoff helped coin the phrase “selling climax” while conceptualizing the idealized “market bottom” price pattern.
The quest to profit from panic continued. In the spirit of contrarian thinking, analysts from the 1950s onward created oscillators to uncover “overbought” and “oversold” conditions and help determine if an issue had moved too far, too fast. Shortly following the rise of the various “price oscillators,” some analysts chose to look at supply and demand factors – namely volume and breadth. Some of the latest successful attempts to identify periods of panic and major reversals using supply and demand came in the form of Paul Desmond’s “90% upside and downside days” and Martin Zweig’s “breadth thrust indicator.”
Supply/demand analysts worked under a generally unfamiliar but important premise: panic can be measured using breadth and volume instead of price momentum (oscillators). Additionally, one could use such indicators to identify buying opportunities which price momentum indicators may not have found. In this report, we will quickly examine the track record of one of the oldest oscillators, and then propose some new methods to use jointly with price oscillators to identify periods of panic.
I. Short-Term Indicators
Price-Only Oscillators
With the proliferation of oscillators over the past seventy years, it comes as little surprise that many charting software packages include well over 100 oscillators to help identify overbought and oversold conditions. For our purposes here, we will study just one of the most well-known oscillators, the “slow stochastic,” popularized by George Lane. The slow stochastic, like many price momentum indicators, compares a security’s current price to past prices over a pre-specified time period. Overbought readings occur when the current price is near the top of that range over that time period, and oversold readings appear when the current price is near the bottom of that range. One can see an example of the slow stochastic in the appendix. For this paper, oversold conditions will correspond to indicator readings of 20 or lower, and overbought conditions will be associated with readings of 80 or higher. Below, we’ll see if buying an oversold issue delivers better returns than a non-oversold issue. The tests will be run across daily data for the following data sets:
- The S&P 500 Index and NASDAQ Composite
- The current five largest stocks by market capitalization in the Dow Jones Industrial Average (Apple, Microsoft, JPMorgan Chase, Johnson & Johnson, and ExxonMobil)
- The continuous commodity futures price indexes for gold, crude oil, and copper
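For reference, a minimal sketch of the 14-day slow stochastic used in these tests appears below; the 3-period smoothing is a common default and an assumption here, as the text specifies only the 14-day lookback and the 20/80 thresholds.

```python
import pandas as pd

def slow_stochastic(df: pd.DataFrame, lookback: int = 14,
                    smooth: int = 3) -> pd.Series:
    """%K locates the close within its recent high-low range; the
    'slow' version smooths fast %K with a short moving average."""
    lowest_low = df["low"].rolling(lookback).min()
    highest_high = df["high"].rolling(lookback).max()
    fast_k = 100 * (df["close"] - lowest_low) / (highest_high - lowest_low)
    return fast_k.rolling(smooth).mean()

# Per the text: oversold when the reading is 20 or lower,
# overbought when it is 80 or higher.
```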
Table 1: Stochastic Test #1: Returns Following Overbought/Neutral Readings (1/1990-10/2018)
Now, we can look at returns following oversold readings to determine if buying oversold issues is a more profitable strategy.
Table 2: Stochastic Test #2: Returns Following Oversold Readings (1/1990-10/2018)
As conventional wisdom would dictate, purchasing an oversold security generally delivers better returns than buying a non-oversold security over a four-week period. However, in about 17% of the cases illustrated here, this was not true, and purchasing a non-oversold security led to superior performance. This raises the question: what are the key strengths and weaknesses of price oscillators? Below are a few.
Price Oscillator Strengths & Weaknesses
We’ll focus on the last weakness: an oversold signal may not register during a market panic, leading to missed opportunities. With this in mind, using supply/demand indicators could help identify opportunities which would have otherwise been missed.
Supply/Demand Indicators – Short Term
The first supply/demand indicator I propose is based on the 3-DMA of declining issues (as a % of total issues traded) for the NYSE. Prior to the removal of fractional share pricing in April 2001, such an indicator would be unsuitable, as a relatively high number of unchanged issues would skew the data. With this in mind, we run our tests from 2001 onward. The results of these tests showed that when the 3-DMA of NYSE declining issues exceeded 65%, returns over the next four weeks were significantly higher relative to the historical average. These returns would generally continue to improve as the percentage increased further above the 65% mark. See below for test results from April 2001 to October 2018 using the S&P 500.
Table 3: S&P 500 Average Returns Following 3-DMA % Decline Triggers: 4/2001-10/2018
As evident in the table above, using such a simple indicator could greatly help investors identify periods of panic in which to execute purchases. This indicator could also help identify periods of panic and short-term oversold conditions at times when traditional price oscillators would not.
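A minimal sketch of the indicator follows, assuming daily NYSE breadth data in a DataFrame whose 'declining' and 'total_issues' column names are hypothetical.

```python
import pandas as pd

def declining_issues_trigger(breadth: pd.DataFrame,
                             threshold: float = 0.65) -> pd.Series:
    """True when the 3-day moving average of declining issues, as a
    percent of total issues traded, exceeds the 65% trigger."""
    pct_declining = breadth["declining"] / breadth["total_issues"]
    return pct_declining.rolling(3).mean() > threshold
```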
A Short-Term Capitulation Oscillator
Continuing the search for methods of identifying investor capitulation, one can create an oscillator of % up volume and % advancing issues to fish for market bottoms. I propose the following formulas.
The average value of this indicator since April 2001 is 103, and the first quintile of indicator values corresponds roughly to 90. We’ll call that an oversold threshold. Again, we can plot the average S&P 500 return following a reading of 90 or below and compare it against all returns.
Table 4: Short-Term Capitulation Oscillator Returns Following Declines Below Trigger Levels: 4/2001-10/2018
As one can see, depressed readings in the short-term capitulation oscillator are generally followed by a period of strong market performance. In the next section, we will extend this indicator’s use to longer-term timeframes.
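The oscillator's exact band formulas are not reproduced here; the sketch below assumes each band is a 3-day moving average of, respectively, % up volume and % advancing issues, scaled by 100 in the same way as the long-term version presented in the next section, so treat it as an illustration rather than the author's precise formula. Column names are hypothetical.

```python
import pandas as pd

def short_term_capitulation(data: pd.DataFrame) -> pd.Series:
    """Assumed construction: 100 * (volume band + breadth band), each
    band a 3-day moving average of a daily NYSE ratio."""
    volume_band = (data["up_volume"] / data["total_volume"]).rolling(3).mean()
    breadth_band = (data["advancing"] / data["total_issues"]).rolling(3).mean()
    return 100 * (volume_band + breadth_band)

# Per the text, readings at or below roughly 90 (the first quintile
# of values since April 2001) are treated as oversold.
```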
Don’t Leave Money On The Table
The objective of this paper was not to propose a suite of indicators which is “better” than traditional price oscillators in use today – it was to address a key weakness of price oscillators: oversold conditions are often not reached prior to a significant low, meaning no buy signal is registered. Incorporating supply/demand indicators can help prevent investors from missing out on points for new purchases. To drive this point home, here’s a slightly confusing chart.
Chart 1: Supply/Demand vs. 14-Day Stochastic Oversold Signals: 10/2013-10/2018
This graph notes when oversold signals occurred over the last five years from the 14-day stochastic (gray), the short-term capitulation oscillator (orange), and the 3-DMA declining issues (blue). Notice that signals from the 14-day slow stochastic are often not accompanied by signals from our supply/demand indicators. This again shows the value in using both price oscillators and supply/demand indicators together to identify market bottoms.
When All The Stars Align – Short Term
One might argue that the best buying opportunities could exist when both a price oscillator and a supply/demand indicator register an oversold condition. Let’s test this idea out. For a buy signal to be registered, the 14-day stochastic needs to be oversold and there needs to be at least one of the two supply/demand indicators we covered at an oversold level.
See the results in Table 5
Table 5
Again, another profitable result, though one with relatively few signals to take advantage of.
II. Long-Term Capitulation Oscillator
For those curious about modifying the short-term capitulation oscillator for longer-term uses, read below. We could simply extend the time periods used in the formula presented previously, but this would cause significant lags in signals because of the nature of moving averages.
Over the years, this issue has been remedied using a variety of methods. Exponential moving averages are probably the most ubiquitous solution. I propose something different, if only for the ease of explanation: an average of averages.
How does this work? Here’s an example. We’ll take a 10-day moving average (DMA) of, say, closing prices, and front weight it by adding the 6- and 4-DMAs to it. Once we add those averages together, we’ll take the average of those averages.
This way, the most recent prices in the DMA are weighted three times – once in the 4-DMA, once in the 6-DMA, and once in the 10-DMA. In contrast, the oldest four values are only weighted once – in the calculation of the 10-DMA.
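The worked example translates directly into code; this sketch assumes a pandas Series of closing prices.

```python
import pandas as pd

def front_weighted_average(close: pd.Series) -> pd.Series:
    """Average of the 4-, 6-, and 10-day simple moving averages: the
    newest four values appear in all three averages, while the oldest
    four values appear only in the 10-day average."""
    return (close.rolling(4).mean()
            + close.rolling(6).mean()
            + close.rolling(10).mean()) / 3
```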
We apply this concept to create the formula for our long-term capitulation oscillator.
Long Term Capitulation Oscillator = 100*(Long Term Volume Band + Long Term Breadth Band)
Because this is a very long-term indicator meant to help identify major bottoms, we will test back to 1961 for the S&P 500. The test will determine whether making purchases at various levels below the tenth percentile of indicator values (roughly 950 and below) gives outsized 6-month (125 days), 12-month (250 days), and 24-month (500 days) forward returns.
Table 6: Long-Term Capitulation Oscillator – S&P 500 Returns Following Declines Below Trigger Levels: 1/1961-10/2018
Low readings in this indicator during a bull market identify attractive intermediate-term buying opportunities. For instance, levels of 931 and 918 corresponded with the correction lows in October 2011 and February 2016. Similarly, one can use this indicator to identify periods of extreme panic during bear markets and potential bear market bottoms. See below for indicator levels at prior bear market lows.
Table 7: Indicator Levels at Prior Bear Market Lows
There are two ways an investor can approach this indicator. Longer-term investors may be best served buying below certain threshold levels, understanding they may need to endure additional market volatility and losses before a sustainable low is in place. Investors wishing to take a more tactical approach can use 90% days and/or breadth thrusts to fine-tune their entry.
On a final note, a topic which deserves further research is whether the breadth and volume bands should be weighted according to their relative value. For instance, in some cases the breadth band failed to drop as sharply at market bottoms as the volume band, suggesting it may be less useful than its volume counterpart. One example is the 3/11/2003 market bottom (often considered the “test” of the 10/9/2002 bear market low). At this market trough, the raw long-term volume band value was at the 5th percentile of all values since 1960, which can be handily considered a panic level. The long-term breadth band, however, was closer to the 30th percentile, quite a distance from levels associated with panic (relatively strong small cap breadth in much of 2002 likely caused this). This topic deserves further research for both the long- and short-term capitulation oscillators, and I invite others to experiment with the optimal weights to assign to each band.
Bringing It All Together
Buying during periods of investor panic has been an established tenet not only of technical analysis, but of investing for centuries. Technical analysts were some of the first to try to quantify investor panic – first with price patterns, then oscillators, and then supply/demand indicators. This paper establishes that using a set of price-based indicators alone will inevitably lead to missed opportunities, and proposes a more holistic approach to identify periods of market panic. The studies in this report apply to both shorter-term and longer-term timeframes. Armed with this knowledge, investors should miss significantly fewer profit opportunities and identify new uptrends more accurately.
Appendix:
Charts of Various Indicators
Chart 2: Crude Oil ($WTIC) Plotted with its 14-Day Slow Stochastic
Chart 3: Long-Term Capitulation Oscillator from 1961-1979
Chart 4: Long-Term Capitulation Oscillator from 1980-2000
Chart 5: Long-Term Capitulation Oscillator from 2000-2018
Chart 6: Short-Term Capitulation Oscillator from 2007-2012
Chart 7: Short-Term Capitulation Oscillator from 2013-2018
Works Cited
“The Charting Tools and Resources You Need to Invest.” StockCharts.com, stockcharts.com/.
Lane, George. “Lane’s Stochastics.” Technical Analysis of Stocks & Commodities.
Pruden, Hank. The Three Skills of Top Trading: Behavioral Systems Building, Pattern Recognition, and Mental State Management. John Wiley & Sons, 2007.
“Money.net.” Money.net.
“Yahoo Finance – Business Finance, Stock Market, Quotes, News.” Yahoo! Finance, finance.yahoo.com/.
Ranked Asset Allocation Model
by Gioele Giordano, CSTA, CFTe | 2018 Charles H. Dow Award Winner
About the Author | Gioele Giordano, CSTA, CFTe | 2018 Charles H. Dow Award Winner
Gioele Giordano is a student at the University of Modena and Reggio Emilia at the Department of Economics Marco Biagi. Gioele served as Financial Analyst for Market Risk Management s.r.l (MRM), a leading firm in independent financial advisory for institutional and private clients, based in Milan. As an analyst, he wrote reports on the main asset classes and developed quantitative investment models. Gioele holds the Certified Financial Technician (CFTe) designation, he is a member of the Italian Society of Technical Analysis (SIAT), and at 21 he won the SIAT Technical Analyst Award (2016). Gioele is the 2018 Charles H. Dow Award winner for his paper, “Ranked Asset Allocation Models.”
Abstract
Passive management has attracted greater attention than active management in recent years. Bloomberg reports that, in the first half of 2017 alone, flows out of active into passive funds reached nearly $500 billion, compared to almost $300 billion in 2016. This migration, encouraged by the spread of ETFs, concerns not only retail investors but also institutions and financial advisers. This paper aims to demonstrate how the allocation of a portfolio designed for passive management can serve as the foundation of an actively managed portfolio through a non-discretionary quantitative strategy that can outperform the market.
ETFs can be seen as one of the most successful financial innovations of recent decades: they have allowed investors to diversify their investments in more affordable ways.1 Once the asset classes and ETFs have been selected, the investor has to choose between active and passive portfolio management. Some academics argue that investors should adopt passive management to exploit lower operating costs. Passive investing is based on the Efficient Market Hypothesis (EMH):
- Information is available to all market participants
- Market participants act on this information
- Market participants are rational
Therefore, all news or data are reflected in a security’s current price, so there is virtually no benefit to security analysis or to managers actively building portfolios. On the other hand, active management is based on the assumption that markets are not fully efficient: as studies in Behavioral Finance have shown, market participants are not always rational, so there are opportunities for skilled managers to capitalize on inefficiencies through dynamic exposure to selected assets. The spread of ETFs, the decreasing volatility of most asset classes and the underperformance of Hedge Funds compared to benchmarks have caused a progressive migration of investment flows from active funds to passive funds.
Chart 1: Net flows into U.S.-based passively managed funds and out of active funds in the first half of each year.
Source: Bloomberg
In the writer’s opinion, the success of passive funds is closely linked to the low levels of volatility2 and range of this cycle, distorted by Central Banks’ Quantitative Easing operations. A loop has been created, where low volatility has encouraged passive management, whose inflows have led to a further decrease of market volatility.
Chart 2: Volatility Index (VIX). Daily data, from 1990 to 2017.
Source: Yahoo! Finance data
Chart 3: Volatility Index (VIX) distribution histogram, Daily Data, from 1990 to 2017.
Source: Yahoo! Finance data
What might seem like a “new normal” is actually a “temporary normal,” one that has already occurred in the past in different forms, this time exacerbated by the presence of leveraged ETFs on the VIX.
Chart 4: VIX: number of days with close<10, rolling 6m windows, from 1990 to 2017.
Source: Cit Research
The loop can be expected to continue until an exogenous event triggers a systemic reversion to the mean in the markets. An active asset allocation, thanks to a dynamic exposure to its constituent assets, is able to adapt to sudden changes of scenario. The purpose of this paper is to demonstrate how an allocation originally designed for a passive fund can be a fundamental starting point for an active quantitative portfolio. The paper uses the 7Twelve Portfolio as its passive base: it consists of 12 mutual funds (in this case, 12 ETFs) from 7 different asset classes. The dynamic selection of assets and their weightings is managed by a revised version of the Flexible Asset Allocation, augmented with new contribution factors and new proprietary indicators in order to improve overall efficiency.
I. Background and Methodology
This paper builds on the studies of several authors, linking their concepts and methods through the writer’s own implementations. It is worth mentioning the most influential authors, with reference to their contributions:
- Graig L. Israelsen, for the conception of the 7Twelve Portfolio and selection of its components;
- Wouter Keller and Hugo S. van Putten, for their contribution in the definition of a new quantitative strategy (Flexible Asset Allocation – FAA) based on new momentum factors beyond the traditional ones;
- Robert Engle and Tim Bollerslev, for the development of methods of analysis of economic and historical series with dynamic volatility over time;
- Sébastien Maillard, Thierry Roncalli, Jérôme Teiletche, for their contribution in defining the Risk Parity methodology, using volatility as a component in determining the allocation;
- Welles Wilder, for technical studies on breakout, range and trend concepts
In particular, the paper focuses on the construction and backtesting of an allocation model based on the following pillars:
- Absolute Momentum, to determine the assets’ momentum (M);
- Volatility Model, to calculate the volatility through a generalized autoregressive model (V);
- Average Correlation Momentum, to determine a portfolio’s diversification component (C);
- ATR Trend/Breakout System, primary trend identification algorithm (T);
- Ranking Model, to select assets based on the weighted contributions of the factors described above
The paper consists of three parts. The first part covers the illustration of proprietary models and algorithms that determine the mentioned components. The second part explains how these components define the Ranking Model and, consequently, the asset allocation. The third part shows the results of a model backtesting, illustrated through monthly performances from July 2004 to November 2017.
II. The 7Twelve Portfolio
The 7Twelve is a multi-asset balanced portfolio developed by Craig L. Israelsen in 2008. Unlike a traditional two-asset 60/40 balanced fund, the 7Twelve balanced strategy uses multiple asset classes to improve performance and reduce risk. The Portfolio consists of 12 different mutual funds or ETFs from 7 core asset classes: US Equities, non-US Equities, Real Estate, Resources/Commodities, US Bonds, non-US Bonds and Cash. Diversification is built into the products themselves, as each ETF is a low-cost, indexed passive fund.
Table 1: 7Twelve Portfolio: Asset Allocation
In its core version, each ETF has a weight of 8.3% and the allocation does not change according to market conditions.
Table 2: 7Twelve Portfolio: list of selected ETFs
Chart 5: SPDR S&P500 ETF and 7Twelve Portfolio – performances comparison. Monthly data, from July 2004 to November 2017
Rather than using the Core 7Twelve portfolio, it’s possible to adjust the allocation based on the investor’s age. Age can be thought of both as chronological age and as allocation age, where allocation age is determined by the investor’s ability to take risk given their life situation.
Table 3: Portfolio Allocations for the 7Twelve Core Model and Age Based Models. Source: 7twelveportfolio.com
In this paper, the Core 7Twelve Portfolio is the foundation of the portfolio but the signals, weightings and allocation are managed by the Ranking Model, whose main components are described below.
III. Volatility Model
In many studies, volatility is discussed without a single agreed definition. This report deals with Realized Volatility using the Volatility Model, a modified version of the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) model, introduced in 1986 by the econometrician Tim Bollerslev to overcome the limitations of the AutoRegressive Conditional Heteroskedasticity (ARCH) model. The GARCH model defines the conditional variance as a combination of a given number of squared returns and a number of lagged conditional variances. The Volatility Model optimizes the GARCH model using the RiskMetrics methodology of J.P. Morgan, through daily variance estimations (λ=0.94).3 The Volatility Model uses daily OHLC data for calculation.
Chart 6: S&P500 Volatility (VIX) and Volatility Model on S&P500 comparison. Daily data, from July 2004 to November 2017
Chart 7: S&P500, Volatility Model and Smoothed Volatility Model. Daily data, from July 2004 to November 2017
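As a simplified illustration of the RiskMetrics-style recursion referenced above, the sketch below computes close-to-close EWMA volatility with λ = 0.94; the paper's full Volatility Model also uses OHLC data, which this reduced version omits.

```python
import numpy as np

def ewma_volatility(returns: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """RiskMetrics recursion: var_t = lam*var_{t-1} + (1-lam)*r_{t-1}^2."""
    var = np.empty(len(returns))
    var[0] = returns[0] ** 2  # seed with the first squared return
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1.0 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)
```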
IV. ATR Trend/Breakout System
Having defined the instrument for measuring asset volatility, it’s necessary to define a model that determines phases of profitability and directionality. Trend Following strategies are the basis of many asset allocation models; this paper presents a proprietary algorithm for trend definition, called the ATR Trend/Breakout System. This indicator uses a breakout technique based on price and volatility. The model changes state in the session following the one in which the signal occurs: if a given day’s high is higher than the Upper Band, the following day the model will go Long (=2); conversely, if a given day’s low is lower than the Lower Band, the following day the model will go Neutral/Short (=-2).
Chart 8: ATR Trend/Breakout System on Vanguard Materials ETF (VAW). Daily data, from July 2004 to November 2017
Similar models use volatility as the deviation variable of their bands, adding it to the Upper Band and subtracting it from the Lower Band, allowing a greater spread between entry and exit prices during vulnerable phases.4 In the ATR Trend/Breakout System, the volatility term, defined by a 42-period Average True Range, is instead added to the Lower Band, which is constructed from market session highs, rather than subtracted from it. This means that the greater the market volatility, the more responsive the model is to signals. This different approach reflects the fact that the ATR Trend/Breakout System doesn’t determine the entrance or exit of assets in the portfolio, but represents a contribution factor to the Ranking Model through its coexistence with Absolute Momentum (M), the Volatility Model (V) and Average Correlation Momentum (C). Volatility measures the deviations of a historical series but is blind to the trend; a model that considers the volatility of an asset but not its trend can overweight assets whose price is in a downtrend simply because their volatility is also falling.
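A hedged sketch of the system's state machine follows. The 42-period ATR, the Long (=2) and Neutral/Short (=-2) states, the next-session state change, and the addition of the ATR to a Lower Band built from session highs come from the text; the 20-period band lookback and the simple-average ATR are assumptions, not the author's specification.

```python
import pandas as pd

def atr_trend_breakout(df: pd.DataFrame, atr_len: int = 42,
                       band_len: int = 20) -> pd.Series:
    """Returns 2 (Long) or -2 (Neutral/Short) per session."""
    true_range = pd.concat([df["high"] - df["low"],
                            (df["high"] - df["close"].shift()).abs(),
                            (df["low"] - df["close"].shift()).abs()],
                           axis=1).max(axis=1)
    atr = true_range.rolling(atr_len).mean()          # simple-average ATR (assumption)
    upper = df["high"].rolling(band_len).max()        # band lookback is assumed
    lower = df["high"].rolling(band_len).min() + atr  # session highs plus ATR, per text
    state = pd.Series(index=df.index, dtype=float)
    state[df["high"] > upper.shift()] = 2             # breakout above Upper Band
    state[df["low"] < lower.shift()] = -2             # break below Lower Band
    return state.shift(1).ffill()                     # state changes next session
```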
V. Ranking Model
The Ranking Model consists of the following components:
- (M) Absolute Momentum: 4-month momentum (ROC – Rate of Change) on daily data
- (V) Volatility Model: volatility measure calculated with a generalized autoregressive model. A 10-day smoothed variant will be used. The algorithm is calculated on daily OHLC data.
- (C) Average Relative Correlations: 4-month average correlation across the ETFs on daily returns. As shown by Varadi, portfolio diversification is improved by selecting assets with low average correlations.
- (T) ATR Trend/Breakout System: trend identification algorithm, able to capture periods of high and low asset directionality while avoiding Black Swan events and significant drawdowns
Although the algorithm’s application is daily, classification is done on a monthly basis, taking the last value of the month. Each asset, with the exception of Cash (iShares 1-3 Year Treasury Bond ETF – $SHY), is ranked from 1 to 11 depending on the monthly values of Absolute Momentum, Volatility Model and Average Relative Correlations. ETFs are ranked from 1 to 11 according to the monthly Absolute Momentum values in ascending order. This means that the greater the momentum of an asset is, the greater the profitability and the rank.
Chart 9: Absolute Momentum: Monthly data, from January 2008 to February 2009
Chart 10: Ranked Absolute Momentum: ranked variant of the Absolute Momentum, from 1 to 11. Monthly data, from January 2008 to February 2009
ETFs are ranked from 1 to 11 based on the monthly Volatility Model values in descending order. The lower the volatility of an asset, the lower the risk, and the higher its ranking.
Chart 11: Volatility Model: Monthly data, from January 2008 to February 2009
Chart 12: Ranked Volatility Model: ranked variant of the Volatility Model, from 1 to 11. Monthly data, from January 2008 to February 2009
ETFs are ranked from 1 to 11 based on the monthly Average Relative Correlation values in descending order. The lower the average correlation of an asset, the greater its diversification benefit, and the higher its ranking.
Chart 13: Average Relative Correlation. Monthly data, from January 2008 to February 2009
Chart 14: Ranked Average Correlation: ranked variant of the Average Relative Correlation, from 1 to 11. Monthly data, from January 2008 to February 2009
Once the ranks of the assets have been determined based on the Absolute Momentum (M), Volatility Model (V) and Average Relative Correlations (C), the Total Rank is calculated.
Rank(M) = is the ranking from 1 to 11 of the asset based on the Absolute Momentum (Ranked Absolute Momentum)
Rank(V) = is the ranking from 1 to 11 of the asset based on the Volatility Model (Ranked Volatility Model)
Rank(C) = is the ranking from 1 to 11 of the asset based on the Average Relative Correlation (Ranked Average Correlation)
T = ATR Trend/Breakout System
wM = % weight assigned to Rank(M) for Total Rank evaluation
wV = % weight assigned to Rank(V) for Total Rank evaluation
wC = % weight assigned to Rank(C) for Total Rank evaluation
x = value assigned to the Absolute Momentum to avoid equal ranks
Only the 5 ETFs with the lowest Total Rank will be taken into consideration for the upcoming allocation. Each of these ETFs is included in the final asset allocation if it has a positive Absolute Momentum; otherwise its weighting is replaced with Cash. In the extreme case where all 5 of these ETFs have a negative Absolute Momentum, Cash will assume a 100% weighting.
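The selection step just described can be sketched as follows; the equal 1/n split across the selected ETFs is an assumption for illustration, since the actual weightings are assigned by the Ranking Model.

```python
import pandas as pd

def final_allocation(ranks: pd.DataFrame, n: int = 5) -> dict:
    """ranks: one row per ETF with 'total_rank' and 'momentum' columns.
    Pick the n ETFs with the lowest Total Rank; any with non-positive
    Absolute Momentum has its weight replaced with Cash ($SHY)."""
    selected = ranks.nsmallest(n, "total_rank")
    weights, cash = {}, 0.0
    for ticker, row in selected.iterrows():
        if row["momentum"] > 0:
            weights[ticker] = 1.0 / n   # equal split across picks (assumption)
        else:
            cash += 1.0 / n             # negative momentum -> Cash
    if cash:
        weights["SHY"] = weights.get("SHY", 0.0) + cash
    return weights
```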
VI. Application And Empirical Tests
The model works by applying the algorithms discussed in the previous paragraphs. The database is end-of-day and was downloaded from Yahoo! Finance. Where necessary, interpolations were made with consistent historical series in order to achieve temporal homogeneity. Data interpolation was performed in RStudio; the Absolute Momentum, Volatility Model, Average Relative Correlation and ATR Trend/Breakout System indicators were programmed in Metastock; the classification and Ranking Model were programmed in Excel. The test was performed on a USD portfolio, consisting mainly of ETFs, to ensure maximum plausibility. Daily and monthly returns are used. Simulation results run from July 2004 through November 2017. No transaction costs are included; all results are gross of any transaction fees, management fees, or any other fees that might be associated with executing the models in real time.
The current allocation of the Portfolio is determined by the Ranking Model of the previous month: the Ranking Model in the last session of the current month determines the allocation of the following month. To assess the effectiveness of the proposed strategy, the performance of the Ranked Asset Allocation Model was compared to the Salient Risk Parity Index5 (a risk parity portfolio with 10% volatility targeting), the Core 7Twelve Portfolio and the SPDR S&P 500 ETF.
Chart 15: Ranked Asset Allocation Model (RAAM), Salient Risk Parity Index, SPDR S&P500 ETF and 7Twelve Portfolio, performance comparison. Monthly data, from July 2004 to November 2017
Table 4: Ranked Asset Allocation Model (RAAM), historical returns. Monthly data, from July 2004 to November 2017
Table 5: Ranked Asset Allocation Model (RAAM) and Salient Risk Parity Index – summary statistics
Table 6: RAAM: Asset Allocation updated to 11/28/2017
Chart 16: Ranked Asset Allocation Model: allocation across time. Monthly data, from July 2004 to November 2017
Chart 17: Ranked Asset Allocation Model: asset classes – weightings across time. Monthly data, from July 2004 to November 2017
VII. Conclusion
In this paper I’ve focused on the creation of indicators useful for measuring components such as Momentum (M), Volatility (V), Correlation (C) and Trend (T). These indicators have been applied to an automatic asset allocation model (“Ranked Asset Allocation Model” – RAAM), able to rank assets and calculate their weightings within the portfolio according to market conditions. The non-discretionary quantitative model was applied to a strategy originally designed for a passive portfolio (“The 7Twelve”). We have shown how the combination of two seemingly irreconcilable strategies, passive and active, has led to the creation of a model capable of outperforming the benchmarks and the market in a consistent way. The signals and results have been confirmed by all evaluation methods and appear robust rather than the product of chance.
VIII. Footnotes
1 PwC, ETF 2020: Preparing for a new horizon, 2014, passim
2 Philips B., Kinniry M., Walker D., The Active-Passive Debate: Market Cyclicality and Leadership Volatility, 2014
3 Zangari P., RiskMetrics Technical Document, 1996, 75-100
4 Lim Andrew M., The Handbook of Technical Analysis, 2015, 125-171
5 Source: http://www.salientindices.com/risk-parity.html
IX. References
Bollerslev T, 1986, Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics 31, 307–327
Bollerslev T, 1987, A Conditional Heteroskedastic Time Series Model for Speculative Prices and Rates of Return, The Review of Economics and Statistics 69, 542–547
Engle R., 1982, Autoregressive Conditional Heteroskedasticity With Estimates of the Variance of U.K. Inflation, Econometrica 50, 987-1008
Engle R., Ng V. and Rothschild M., 1990, Asset Pricing with a Factor ARCH Covariance Structure: Empirical Estimates for Treasury Bills, Journal of Econometrics 45, 213-237.
Faber M., 2007, A Quantitative Approach to Tactical Asset Allocation, The Journal of Wealth Management
Hubbard D., 2009, The Failure of Risk Management: Why It’s Broken and How to Fix It (Wiley)
Israelsen Craig L., 2010, 7Twelve: A Diversified Investment Portfolio with a Plan (Wiley)
Keller Wouter J., van Putten Hugo S., 2012, Generalized Momentum and Flexible Asset Allocation (FAA): An Heuristic Approach, SSRN
Lee W., 2000, Theory and Methodology of Tactical Asset Allocation (Wiley)
Lim Andrew M., 2015, The Handbook of Technical Analysis (Wiley)
Pring M., 2014, Technical Analysis Explained, Fifth Edition: The Successful Investor’s Guide to Spotting Investment Trends and Turning Points (McGraw-Hill Education)
Wilder J. Welles, 1978, New Concepts in Technical Trading Systems (Trend Research)
Quantamentals
by Christopher Cain, CMT
About the Author | Christopher Cain, CMT
Christopher Cain, CMT, is the U.S. Quantitative Equity Strategist for Bloomberg Intelligence, a division of Bloomberg LP, and is based in New York, NY. Christopher provides analysis and tactical strategy on equity factor investing and other quantitative, model-driven equity market topics.
He is the 2020 Charles H. Dow Award-Winning author of “Quantamentals – Combining Technical and Fundamental Analysis in a Quantitative Framework for Better Investment Results”.
Prior to joining Bloomberg, he was a quantitative analyst focused on building alpha-generating trading strategies for institutional investors. He is also a former fixed income market maker, having managed a $500MM trading book consisting of cash rates products (US Treasuries, US Agencies and SSA bonds) and US interest rate derivatives.
Christopher is passionate about systematic/quantitative investing, equity factor modeling, trading system design and testing, Python programming and behavioral finance. He holds a Bachelor of Science degree in Finance from The Pennsylvania State University and holds the Chartered Market Technician designation.
by Laurence Connors
About the Author | Laurence Connors
Laurence Connors is Chairman of The Connors Group (TCG), and the principal executive officer of Connors Research LLC. TCG is a financial markets information company that publishes daily commentary and insight concerning the financial markets and has twice received an award from the Entrex Private Company Index for being one of the 10 fastest growing private companies.
He has over 30 years of experience working in the financial markets industry. He started his career in 1982 at Merrill Lynch as an Investment Advisor, and later moved on to become a Vice President with Donaldson, Lufkin, Jenrette (DLJ), where he worked with the Investment Services Group from October 1990 to March 1994.
Mr. Connors is widely regarded as one of the leading educators in the financial markets industry. He has authored over 20 books on market strategies and volatility trading, including Short-Term Trading Strategies That Work, and Street Smarts (with Linda Raschke). Street Smarts was selected by Technical Analysis of Stocks and Commodities magazine as one of “The Classics” for trading books written in the 20th century. His most recent book The Alpha Formula: Beat the Market with Significantly Less Risk (with Chris Cain, CMT) is now available.
Mr. Connors has been featured and quoted in the Wall Street Journal, New York Times, Barron’s, Bloomberg TV & Radio, Bloomberg Magazine, Dow Jones Newswire, Yahoo Finance, E-Trade Financial Daily, Technical Analysis of Stocks and Commodities, and many others. Mr. Connors has also been a featured speaker at a number of major investment conferences over the past two decades.
Abstract
Fundamental and technical analysis are often thought of as separate, competing ideologies. We regard this world view as short-sighted. Our research shows that combining fundamental and technical analysis in a quantitative, rules-based framework leads to greatly improved performance. Furthermore, combining known factors, whether of the fundamental or technical variety, offers significant benefits and leads to market-beating performance over the last 16+ years.
Introduction
Factor investing has taken the investment management industry by storm. In this paper, we show that traditional factor investing can be greatly enhanced by combining factors in an intelligent way. More specifically, combining fundamental factors with technical factors leads to large increases in performance.
These different disciplines, fundamental and technical analysis, have rarely been combined. Fundamental investors often view technical analysis with considerable skepticism, while technicians are often of the opinion that “price is the only thing that pays,” believing that investment decisions based on market-generated signals such as price trends, overbought/oversold oscillators, and volatility are the most efficient way to generate outperformance and manage risk.
We regard these rather narrow world views as incomplete. Our research shows there are large advantages to incorporating both technical and fundamental factors into an investment process. Combining these disciplines in a quantitative, rules-based way has synergistic effects and gives an investor the best of both worlds.
Part 1: Meet the Factors
Fundamental Factors
We define fundamental factors as the metrics found on a firm’s quarterly balance sheet, income statement or other financial reports.
Value
Value, perhaps the most famous factor, is the tendency for relatively cheap stocks to outperform relatively expensive stocks over time. This factor has been studied for almost a century, beginning with the “father of value investing,” Columbia professor Benjamin Graham. His seminal works “Security Analysis” (1934) and “The Intelligent Investor” (1949) laid the groundwork for the value investing philosophy.
The traditional academic definition of the value factor sorts stocks on their book-to-market ratio. Eugene Fama and Kenneth French documented the effect in their 1992 paper “The Cross-Section of Expected Stock Returns,” and their Fama–French three-factor model uses the value factor, along with the market and size factors, to describe stock returns.1
Quality / Profitability
A more recent fundamental factor identified by academics and used by practitioners long before its publication is the profitability or quality factor. This is the observation that investing in highly profitable firms has led to significantly higher returns compared to firms of lower profitability. Common metrics to measure a firm’s profitability include gross profitability, return on equity and return on invested capital.
The quality factor takes this idea one step further and shows that not only does profitability drive excess returns but also other metrics of strong financial standing do as well. Common metrics to measure a firm’s financial standing include low debt-to-asset ratios, the stability of earnings and low accruals.
The seminal academic work on the profitability premium comes from Robert Novy-Marx and his 2013 paper, “The Other Side of Value: The Gross Profitability Premium.” His work looked at gross profitability, defined as sales minus the cost of goods sold, divided by total assets, over the period 1962 to 2010. Novy-Marx found that the most profitable firms earned returns of 0.31% more per month than the least profitable firms.2 Furthermore, he found that accounting for profitability dramatically increased the performance of value-based strategies, an insight we will utilize in the model we build in this paper.
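To make the metric concrete, here is a minimal pandas sketch of the gross profitability calculation. The DataFrame and its column names are illustrative assumptions for three hypothetical tickers, not the data pipeline used in our tests.

```python
import pandas as pd

# Illustrative fundamentals for three hypothetical tickers
fundamentals = pd.DataFrame({
    "revenue":      [390.0, 120.0, 75.0],
    "cogs":         [210.0,  95.0, 20.0],   # cost of goods sold
    "total_assets": [352.0, 140.0, 60.0],
}, index=["AAA", "BBB", "CCC"])

# Gross profitability = (revenue - cost of goods sold) / total assets
fundamentals["gross_profitability"] = (
    fundamentals["revenue"] - fundamentals["cogs"]
) / fundamentals["total_assets"]

print(fundamentals["gross_profitability"].sort_values(ascending=False))
```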
Technical Factors
We define technical factors as those that are derived from the market itself. These factors either use price directly or a derivative of price, such as volatility.
Cross-Sectional Momentum
Cross-sectional momentum is the tendency for assets that have had the strongest performance in the recent past to continue to outperform going forward, while assets with the weakest recent performance tend to continue to underperform. This type of momentum, sometimes called relative strength momentum or simply relative strength, compares the performance of an asset to a larger universe of assets. It is slightly different from time-series momentum, which compares an asset’s performance to its own past history.
The seminal study on the momentum effect was conducted by Narasimhan Jegadeesh and Sheridan Titman in their 1993 paper, “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency.” The paper formed long/short portfolios based on past performance (momentum), showing large outperformance that could not be explained by existing academic models.3
The traditional academic definition of momentum, especially as applied to individual equities, measures total returns over the last 12 months, skipping the most recent month. The most recent month is skipped to account for the well-known mean-reversion effect over shorter time frames.
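A minimal sketch of this 12-1 momentum measure follows, assuming a DataFrame `prices` of daily closes (one column per ticker) and roughly 21 trading days per month; this is our illustration, not Quantopian’s implementation.

```python
import pandas as pd

def momentum_12_1(prices: pd.DataFrame) -> pd.Series:
    """Total return from ~12 months ago to ~1 month ago, per ticker."""
    one_month_ago = prices.shift(21)        # skip the most recent month
    twelve_months_ago = prices.shift(252)   # full 12-month lookback
    # Latest cross-section of momentum scores across the universe
    return (one_month_ago / twelve_months_ago - 1.0).iloc[-1]
```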
Low Volatility / Low Beta
The low volatility factor is the empirical observation that “defensive” stocks (low-volatility, low-risk) have delivered both higher returns and higher risk-adjusted returns compared to “aggressive” stocks (high-volatility, high-risk).
This is a large blow to the original academic pricing model, the capital asset pricing model (CAPM). CAPM states that there is a positive relationship between risk and return, as a “rational” investor should demand a higher return to compensate for accepting more risk. Not only does this fail to hold up in the real world, but the empirical results are the opposite: low-volatility stocks have tended to deliver higher returns, and the outperformance is even more dramatic when returns are viewed on a risk-adjusted basis.
Academic evidence for the low volatility premium includes the 2014 study “Understanding Defensive Equity” by Robert Novy-Marx. Novy-Marx ranked stocks into quintiles by either volatility or market beta and showed that the most volatile/highest-beta quintile dramatically underperformed the rest of the stocks.4
Further evidence was provided by Andrea Frazzini and Lasse Heje Pedersen in their 2014 paper “Betting Against Beta.” The authors formed portfolios that went long low-beta stocks (leveraged to a beta of 1) and shorted high-beta stocks (deleveraged to a beta of 1). This market-neutral portfolio realized a Sharpe ratio of 0.78 from 1926 to 2012. Frazzini and Pedersen then expanded this research beyond the US to 10 international equity markets and found similar results.5
Time-Series Momentum
Time-series momentum uses a security’s own past performance to dictate long or short positions. An example rule would be to go long when the security is trending higher and to exit, or go short, when it is trending lower. Time-series momentum is also called absolute momentum or simply trend following.
Many technical analysis techniques can be used to measure time-series momentum: a simple rate of change, the security’s price compared to its moving average, dual moving averages, or the slope of a linear regression line, to name a few.
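The sketches below illustrate each of these measures on a pandas Series `close` of daily prices; the lookback values are assumptions chosen for illustration.

```python
import numpy as np
import pandas as pd

def roc_signal(close: pd.Series, lookback: int = 126) -> bool:
    """Rate of change: long if total return over the lookback is positive."""
    return close.iloc[-1] / close.iloc[-1 - lookback] - 1.0 > 0

def ma_signal(close: pd.Series, lookback: int = 100) -> bool:
    """Price vs. moving average: long if price is above its moving average."""
    return close.iloc[-1] > close.rolling(lookback).mean().iloc[-1]

def dual_ma_signal(close: pd.Series, fast: int = 50, slow: int = 200) -> bool:
    """Dual moving averages: long if the fast average is above the slow."""
    return close.rolling(fast).mean().iloc[-1] > close.rolling(slow).mean().iloc[-1]

def slope_signal(close: pd.Series, lookback: int = 100) -> bool:
    """Linear regression slope: long if the fitted trend line slopes upward."""
    y = np.log(close.iloc[-lookback:].to_numpy())
    x = np.arange(lookback)
    slope = np.polyfit(x, y, 1)[0]
    return slope > 0
```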
Time-series momentum is perhaps the oldest and most successful investment style in the hedge fund industry. Trend following is synonymous with the CTA industry, a category of hedge funds. These firms typically apply trend following to a diversified group of futures markets to deliver uncorrelated returns. The performance of these hedge funds is particularly strong in crisis periods such as 2008, making them useful components of large institutional portfolios.
Academics were a few decades behind practitioners in studying the effects of time-series momentum. In recent years, however, time-series momentum has been widely studied and shown to be a robust and statistically significant driver of performance. One recent study is “A Century of Evidence on Trend-Following Investing” by Brian Hurst, Yao Hua Ooi, and Lasse H. Pedersen. In the paper, portfolios were constructed by equally weighting one-month, three-month, and twelve-month total returns. The model went long assets with positive recent performance and shorted assets with negative recent performance.
This simple strategy was applied to 67 markets across four major asset classes (29 commodities, 11 equity indices, 15 bond markets, and 12 currency pairs) over the remarkable period from 1880 to 2013. The results were nothing short of spectacular, showing consistency throughout the decades and delivering an annualized return of 14.9% with 9.7% volatility.6 Furthermore, returns were positive every decade and showed virtually no correlation to traditional equity or fixed income markets.
Part 2: Performance of Single Factors
We have established the factors we will be utilizing in our model, along with some quantitative evidence provided by academia regarding their effectiveness. We will now investigate some of the factors individually. We will then go on to show that combining the fundamental factors with the technical factors in an intelligent and quantitative, rules-based way leads to greatly enhanced performance.
Data and Investment Universe
All historical tests will cover the time period of January 2003 to September 2019. Data and analytics are provided by Quantopian.com. The universe for all tests run is Quantopian’s “Q500US” universe, which contains the 500 most liquid US stocks based on trailing 200-day average dollar volume. This universe is reconstituted each month, avoiding survivorship bias.
Quality/Profitability Factors
We will begin by inspecting the results of buying stocks that have high quality/strong profitability. The metrics we will utilize are sourced from academic papers or books.
As mentioned previously, the seminal research on the quality factor was conducted by Robert Novy-Marx in his 2013 paper “The Other Side of Value: The Gross Profitability Premium.” In the paper, the author used gross profitability, defined as revenues minus cost of goods sold, divided by total assets. We use this as a quality/profitability metric.
Other common quality/profitability metrics involve how efficiently a firm utilizes its capital. A metric used by the highly successful hedge fund manager Joel Greenblatt in his book “The Little Book That Beats The Market” was return on invested capital (ROIC).7 We will investigate this measure as well. Similar to ROIC, we will also include the popular metric return on equity (ROE), another measure of how efficiently a firm utilizes its capital.
For these tests, we create two long-only portfolios for each of our three quality metrics: one that buys the top 50 stocks on the metric and one that buys the bottom 50. These portfolios are rebalanced once a month.
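A minimal sketch of the portfolio formation, assuming `metric` is a Series of the latest factor values across the universe (an assumed input, not Quantopian code):

```python
import pandas as pd

def top_bottom_portfolios(metric: pd.Series, n: int = 50):
    """Split a ranked universe into the n highest- and n lowest-scoring names."""
    ranked = metric.sort_values(ascending=False)
    top = ranked.head(n).index.tolist()      # e.g., highest quality
    bottom = ranked.tail(n).index.tolist()   # e.g., lowest quality
    return top, bottom
```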
Table 1: Quality Factor Test
As you can see, separating firms by profitability/quality metrics produces large spreads between the highest and lowest quality firms. Furthermore, higher-quality firms display markedly lower volatility, which when combined with higher returns results in significantly higher Sharpe ratios.
Value Factors
Next, we will inspect the performance of some stand-alone value factors. For our value metrics, we will utilize a couple of measures that incorporate both top-line and bottom-line statistics.
We start with EBIT (earnings before interest and taxes) to enterprise value (the price an investor would have to pay to acquire the entire company, including both its equity and its debt). This measure was also popularized by Joel Greenblatt; it is the other metric utilized in his work “The Little Book That Beats The Market.”
Next, we will include a top-line measure of value: price-to-sales. This metric was popularized by Jim O’Shaughnessy in his book “What Works on Wall Street,” among others.8 Price-to-sales is calculated by taking a company’s market capitalization and dividing it by the company’s total sales or revenues over the past twelve months.
Notice that we include both a top-line (sales/revenues) and a bottom-line (earnings before interest and taxes) statistic to measure value. We again create long-only portfolios, separating firms into the 50 cheapest and 50 most expensive by these metrics. Our portfolios are rebalanced monthly.
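A small sketch of the two value metrics, with illustrative numbers and a common simplification of enterprise value (market cap plus debt minus cash); the column names are assumptions for the example.

```python
import pandas as pd

data = pd.DataFrame({
    "ebit":       [50.0, 12.0, 8.0],
    "market_cap": [600.0, 90.0, 150.0],
    "total_debt": [100.0, 40.0, 10.0],
    "cash":       [80.0, 5.0, 30.0],
    "sales_ttm":  [400.0, 70.0, 95.0],
}, index=["AAA", "BBB", "CCC"])

# Enterprise value = market cap + debt - cash (a common simplification)
data["ev"] = data["market_cap"] + data["total_debt"] - data["cash"]
data["ebit_ev"] = data["ebit"] / data["ev"]                       # higher = cheaper
data["price_to_sales"] = data["market_cap"] / data["sales_ttm"]   # lower = cheaper
print(data[["ebit_ev", "price_to_sales"]])
```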
Table 2: Value Factor Tests
While the spread is not as dramatic as the quality factor’s over the last 16+ years, the cheapest firms outperformed the most expensive firms based on our value metrics.
Cross-Sectional Momentum
We now move on to test cross-sectional momentum. The traditional academic definition of cross-sectional momentum measures the total return of a stock over the last 12 months and excludes the most recent month.
The exclusion of the most recent month in academic research is to account for the well-known tendency of stocks to mean revert over this shorter-term time frame. To remain consistent, we will skip the most recent month as well.
For our tests, we will inspect both 12-month and 6-month momentum lookbacks. We again form long-only portfolios, investing in the 50 firms with the highest and lowest momentum readings with a monthly rebalance.
Table 3: Momentum Factor Tests
Consistent with academic research, we see stocks with strong recent performance outperforming stocks with weak recent performance over our time period.
Low Volatility
To inspect the low volatility factor, we form long-only portfolios of the 50 stocks with the highest and lowest historical volatility with a monthly rebalance. We utilize two lookback windows, 100 days and 200 days, for our volatility calculations.
Table 4: Volatility Factor Tests
The spread between the least and most volatile stocks is dramatic in both the 100-day and 200-day lookbacks. Not only do the lower volatility stocks produce significantly higher returns, but they do so with less volatility and drawdown, leading to a significant jump in Sharpe ratios.
Time-Series Momentum
To inspect time-series momentum, we will apply two simple time-series momentum rules to the overall US stock market, represented by the ETF “SPY.” We will utilize both raw total return momentum (rate of change) signals as well as price vs. moving average signals.
In our simple moving average tests, we will go long SPY if its price is above its moving average and switch to SHY (1-3yr US Treasury bonds) if its price is below its moving average. For our total return momentum (ROC) tests, we will go long SPY if its total return over the lookback period is positive and switch to SHY if the total return over the lookback period is negative. This signal is checked once a month, at the end of the month.
We will inspect the use of both 100-day and 200-day moving averages and 6-month and 12-month momentum.
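A simplified sketch of the moving-average variant of this test follows, assuming `prices` is a DataFrame of daily closes with a DatetimeIndex and columns "SPY" and "SHY"; the ROC variant simply replaces the moving-average test with a positive-trailing-return test. Real results would also depend on execution and costs.

```python
import pandas as pd

def spy_shy_backtest(prices: pd.DataFrame, ma_days: int = 200) -> pd.Series:
    """Month-end switch: hold SPY while above its moving average, else SHY."""
    above_ma = prices["SPY"] > prices["SPY"].rolling(ma_days).mean()
    signal = above_ma.resample("M").last()                 # checked at month end
    monthly_rets = prices.resample("M").last().pct_change().fillna(0.0)
    hold_spy = signal.shift(1).fillna(False).astype(bool)  # act the following month
    strat = monthly_rets["SPY"].where(hold_spy, monthly_rets["SHY"])
    return (1.0 + strat).cumprod()                         # growth of $1
```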
Table 5: Time Series Momentum Tests
Moving Average Rule: Is Price above the Moving Average
Momentum Rule: Is Total Return Positive
Risk-Off Asset: SHY (1-3yr US Treasuries)
Results here are typical of applying time-series momentum rules; most notably, these rules allow us to sidestep large bear markets. Consistent with our prior research, applying time-series momentum results in marked decreases in volatility and max drawdown compared to buy-and-hold alone.9
Summary of Individual Factor Tests
In this section, we tested both fundamental and technical factors in isolation. We witnessed the anticipated results, with high quality beating low quality, cheap beating expensive, high momentum beating low momentum, and low volatility beating high volatility. We also observed the risk-reducing nature of applying simple time-series momentum rules.
Part 3: Combining the Factors and Building Our Model
We now move on to the heart of the paper: demonstrating that combining fundamental and technical analysis in a quantitative, rules-based way leads to dramatic performance increases.
We will begin by combining the two fundamental factors – quality and value. We will then move on to combine these fundamental factors with technical factors: low volatility, time-series momentum and finally cross-sectional momentum.
Combining the Fundamental Factors – Quality and Value
Research has shown that combining quality and value has synergistic effects. Robert Novy-Marx, in his aforementioned paper “The Other Side of Value: The Gross Profitability Premium,” showed that this combination resulted in increased performance. The combination of quality and value also allows an investor to avoid the so-called “value trap”: firms that look cheap but have little chance of a turnaround.
The combination of value and quality also explains much of Warren Buffett’s spectacular success throughout the years. After all, Mr. Buffett’s philosophy isn’t simply to buy cheap stocks, as some naively believe. He instead opts for quality companies trading at relatively cheap valuations.
Finally, Joel Greenblatt also combines quality and value metrics in his work, opting for companies that have high ROIC and high EBIT/EV ratios. Mr. Greenblatt is essentially sorting for companies that are both highly profitable and trading at relatively cheap valuations.
For the rest of this paper, we will stick with ROIC to represent quality and EBIT/EV to represent value, just as Mr. Greenblatt did. This combination has been in the public domain for many years.
As the first step in building our Quantamental model, we combine the value (EBIT/EV) and quality (ROIC) metrics. We first rank our 500 stocks by the value metric, with 500 being the cheapest and 1 being the most expensive. We then rank our 500 stocks by the quality metric, with 500 being the firm with the highest quality and 1 the firm with the lowest. We then simply sum the rankings, rebalancing into the 50 firms with the highest combined rank: firms with both high quality and low valuations. To stay consistent with the tests previously run, we rebalance our portfolio monthly.
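A minimal sketch of this rank-and-sum step, assuming `ebit_ev` and `roic` are Series of the latest values across the 500-stock universe (assumed inputs for illustration):

```python
import pandas as pd

def quality_value_picks(ebit_ev: pd.Series, roic: pd.Series, n: int = 50) -> list:
    """Sum the value and quality ranks, then take the top n combined ranks."""
    value_rank = ebit_ev.rank()   # 500 = cheapest (highest EBIT/EV)
    quality_rank = roic.rank()    # 500 = highest quality (highest ROIC)
    combined = value_rank + quality_rank
    return combined.nlargest(n).index.tolist()
```

Adding further factors, such as the low volatility ranking in the next section, follows the same pattern: compute the new rank and add it to the sum.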
Table 6: Quality + Value
This combination results in annual returns of 11.3%, volatility of 21.9%, a Sharpe ratio of 0.60, and a significant drawdown of -64.6%. We will now demonstrate how adding technical factors greatly improves these results.
Performance of Combining Value, Quality, and Low Volatility
As we already witnessed, simply buying stocks with the lowest historical volatility has been a great strategy over the last 16 years. These stocks have delivered higher absolute returns and much higher risk-adjusted returns compared to the overall market. We will utilize this insight in our evolving model. In addition to the quality and value metrics, we will now add low volatility.
The methodology here is the same. We rank each stock by quality and value, as before, but this time add a ranking for low volatility. We apply a 100-day lookback to calculate volatility, sticking with the standard deviation of daily percent returns over this time frame. The stock with rank 500 will be the lowest-volatility stock and the stock with rank 1 the highest-volatility stock. We then simply sum our three rankings (quality, value, and low volatility) and invest in the 50 stocks with the highest combined rank. We again rebalance the portfolio monthly.
Table 7: Quality, Value and Low Volatility
We witness a significant increase in performance by adding the low volatility factor. Annualized returns went from 11.3% to 11.9%, volatility went from 21.9% to 14.5% and max drawdown went from -64.6% to -40.3%. This resulted in a large jump in the Sharpe ratio, going from 0.60 to 0.85.
Performance of Combining Value, Quality, Low Volatility, and Time-Series Momentum
The next factor we will add to our model is time-series momentum or trend following. We will do this in the form of a trend following “regime filter.” This well-known technique will first check if the overall market is trending higher. Only then will the model take new entries in our monthly rebalance. If the overall market is trending lower, no new entries are taken.
We now have to introduce a “risk-off” asset: something to rotate into when the trend of the overall market is down and we aren’t buying more stocks. For this purpose, we will simply use SHY (1-3yr US Treasuries). Other “risk-off” assets, such as longer-duration US Treasuries or US aggregate bonds, could also be used and would have improved historical test results.
There are many ways to measure whether the overall market is trending higher or lower. Rather than search for the optimal measure, we will use a simple moving average with a 100-day lookback. We will use the ETF “SPY” to represent the market.
If the price of SPY is above its 100-day moving average, we will conclude that the market is trending higher and our model will take new entries. If the price of SPY is below its 100-day moving average, we will conclude that the trend of the market is lower and our model will not take new entries.
A note on the logic here: if SPY is below its 100-day moving average, we sell any stocks that fell out of our top 50 based on the value, quality, and low volatility factors. These stocks are not replaced; that capital is instead allocated to SHY. If SPY is below its 100-day moving average and a stock remains in the top 50, it is held.
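A sketch of that rebalance logic in plain Python; `current`, `top50`, and `risk_on` are assumed inputs (sets of tickers and the SPY trend test), not Quantopian API calls.

```python
def rebalance(current: set, top50: set, risk_on: bool) -> set:
    """Return the new equity holdings after a monthly rebalance."""
    held = current & top50   # keep positions still ranked in the top 50
    if risk_on:
        return top50         # trend up: dropped names are replaced by new entries
    return held              # trend down: no new entries; freed capital goes to SHY
```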
Table 8. Quality, Value, Low Volatility and Time-Series Momentum
Adding time-series momentum has the anticipated effect, significantly lowering volatility and drawdown. Volatility decreased from 14.5% to 10.9% and the max drawdown went from -40.3% to -25.6%. Our Sharpe ratio is now 0.99.
Adding Cross-Sectional Momentum – Double Sort
The last step in our model is to incorporate cross-sectional momentum. We will utilize this in a “double sort.”
We first rank our stocks by the quality, value, and low volatility factors, taking the top 50 stocks with the highest combined rank. We then rank those 50 stocks by cross-sectional momentum, measured as the trailing 6-month total return, skipping the last month. We take the top 20 stocks with the highest momentum scores; these 20 stocks form our final portfolio. We continue applying the time-series momentum rule, only taking new entries if the price of SPY is above its 100-day moving average.
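A minimal sketch of the double sort, assuming `combined` (the summed quality/value/low-volatility ranks) and `momentum_6_1` (6-month return, skipping the last month) are Series over the universe:

```python
import pandas as pd

def double_sort(combined: pd.Series, momentum_6_1: pd.Series,
                first_n: int = 50, final_n: int = 20) -> list:
    """First sort: top 50 by combined rank; second sort: top 20 by momentum."""
    top50 = combined.nlargest(first_n).index
    return momentum_6_1.loc[top50].nlargest(final_n).index.tolist()
```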
Table 9: Quality, Value, Low Volatility, Time-Series Momentum and Cross-Sectional Momentum
Our final rule, adding cross-sectional momentum, increases the returns by 2.6% per year while decreasing the max drawdown from -25.6% to -23.0%. Our Sharpe ratio is now 1.10 over the last 16+ years.
Improvement Every Step of the Way
Notice how the model results improved with each factor we added. We started with fundamental factors only, which were quality and value. We then added a technical factor: low volatility. We then moved on to add a time-series momentum rule in the form of a trend-following regime filter. This had the desired effect: a reduction of risk and max drawdowns. Finally, we added a cross-sectional momentum rule via a double sort, leading to higher returns.
Our Sharpe ratio nearly doubled by combining fundamental and technical factors in a thoughtful way, going from 0.60 for the fundamental factors alone to 1.10 in our complete model. The following table displays model results for each new factor added.
Table 10: Incremental Improvements
Our Complete Quantamental Model
Going step by step and combining known factors, we built a complete trading model incorporating both fundamental and technical factors. Our model combines quality, value, low volatility, time-series momentum, and cross-sectional momentum into a complete trading strategy. Furthermore, we observed that combining the fundamental and technical factors leads to greatly improved performance, nearly doubling our Sharpe ratio.
Summary of the rules:
- We start with a universe of the 500 most liquid US stocks. This is Quantopian’s “Q500US” universe, derived by taking the 500 US stocks with the largest average 200-day dollar volume, reconstituted monthly and capped at 30% of equities from any one sector.
- We then rank our stocks 1-500 on each of the quality, value, and low volatility factors. The stock ranked 500 is the most attractive on a given attribute and the stock ranked 1 the least attractive:
- Quality – Rank stocks by ROIC, the higher the better
- Value – Rank stocks by EBIT/EV, the higher the better
- Volatility – Rank stocks by trailing 100-day standard deviation, the lower the better
- Add up the three rankings and take the top decile. We are now left with 50 stocks that have a combination of high quality, low valuation, and low volatility.
- Of our 50 stocks, take the top 20 based on cross-sectional momentum, measured as the highest 6-month total return, skipping the last month.
- Every month we rebalance our portfolio, selling any stock we currently hold that didn’t make the top 20 list based on the logic above and buying stocks that have since made the list. Stocks are equally weighted.
- We only take new entries if our time-series momentum regime filter is passed. For this filter, we simply compare SPY’s price to its 100-day moving average. If the price of SPY is above its 100-day moving average, we take new entries; if it is below, no new entries are taken.
- Any capital not allocated to stocks gets allocated to SHY (1-3yr US Treasuries).
- Assumptions include a beginning portfolio balance of $1,000,000 and commissions of $0.005 per share with a minimum trade ticket cost of $1, modeling the real-life commission schedule of Interactive Brokers (see the sketch below).
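A tiny sketch of that commission assumption:

```python
def commission(shares: int, per_share: float = 0.005, minimum: float = 1.0) -> float:
    """Per-share commission with a minimum ticket charge."""
    return max(shares * per_share, minimum)

print(commission(100))    # 1.0  (minimum applies)
print(commission(5000))   # 25.0
```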
Performance
Table 11: Performance of Quantamentals Model vs SPY, 2003-2019
Chart 1: Quantamental Model vs SPY, Cumulative Equity Growth, 01/2003-09/2019
Table 12: Quantamental Model Monthly/Yearly Total Returns
Robustness Check
For a robustness check, we will look at different variations of our model using various techniques for our time-series momentum regime filter. We will also inspect various lookback periods for our cross-sectional momentum sort.
The following table shows four versions of our model using different techniques for the time-series momentum regime filter. Holding all other parameters constant, we inspect the results using a 100-day moving average, a 200-day moving average, 6-month total return momentum (ROC), and 12-month total return momentum (ROC).
Table 13: Time-Series Momentum Robustness Tests
As the table shows, the performance of our model holds up across different variations of the time-series momentum filter. We actually see a bump in performance when using a 6-month total return momentum filter instead of a moving average.
The following table shows four versions of the model using different lookbacks for the cross-sectional momentum sort. Holding all other parameters constant, we inspect the results using 3-month, 6-month, 9-month, and 12-month lookbacks. To remain consistent, the most recent month is skipped in all of these tests.
Table 14: Cross-Sectional Momentum Robustness Tests
Our model remains robust to different cross-sectional momentum lookbacks as well, with the 9-month lookback producing the best results.
Conclusion
In this paper, we first reviewed documented drivers of excess returns, commonly known as factors: quality, value, low volatility, cross-sectional momentum, and time-series momentum. We reviewed the academic literature on these factors and ran tests examining their performance on a stand-alone basis. While each stand-alone factor delivered the expected result, the performance often came with high risk, steep drawdowns, and other drawbacks.
We then combined these factors in a thoughtful manner, showing improved performance every step of the way. We especially observed a bump in performance when the fundamental factors (quality and value) were combined with the technical factors (low volatility, time-series momentum and cross-sectional momentum).
We built our complete Quantamentals model from the ground up. We observed that the combination of fundamental and technical analysis in a quantitative, rules-based way leads to significant outperformance. The end result is a robust model that greatly exceeds the performance of the market benchmark.
Footnotes
1. Fama, Eugene, and Kenneth French, 1992, The Cross-Section of Expected Stock Returns, The Journal of Finance
2. Novy-Marx, Robert, 2013, The Other Side of Value: The Gross Profitability Premium, Journal of Financial Economics
3. Jegadeesh, Narasimhan, and Sheridan Titman, 1993, Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency, The Journal of Finance
4. Novy-Marx, Robert, 2014, Understanding Defensive Equity, NBER Working Papers
5. Frazzini, Andrea, and Lasse Heje Pedersen, 2014, Betting Against Beta, Journal of Financial Economics
6. Hurst, Brian, Yao Hua Ooi, and Lasse H. Pedersen, 2014, A Century of Evidence on Trend-Following Investing, The Journal of Portfolio Management
7. Greenblatt, Joel, 2005, The Little Book That Beats The Market, Wiley
8. O’Shaughnessy, Jim, 1997, What Works on Wall Street, McGraw-Hill
9. Cain, Christopher, and Larry Connors, 2019, The Alpha Formula, TradingMarkets Publishing
References
Fama, Eugene, and Kenneth French, 1992, The Cross-Section of Expected Stock Returns, The Journal of Finance
Novy-Marx, Robert, 2013, The Other Side of Value: The Gross Profitability Premium, Journal of Financial Economics
Jegadeesh, Narasimhan, and Sheridan Titman, 1993, Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency, The Journal of Finance
Novy-Marx, Robert, 2014, Understanding Defensive Equity, NBER Working Papers
Frazzini, Andrea, and Lasse Heje Pedersen, 2014, Betting Against Beta, Journal of Financial Economics
Hurst, Brian, Yao Hua Ooi, and Lasse H. Pedersen, 2014, A Century of Evidence on Trend-Following Investing, The Journal of Portfolio Management
Greenblatt, Joel, 2005, The Little Book That Beats The Market, Wiley
O’Shaughnessy, Jim, 1997, What Works on Wall Street, McGraw-Hill
Cain, Christopher, and Larry Connors, 2019, The Alpha Formula, TradingMarkets Publishing