Technically Speaking, April 2017

What's Inside...

FORECASTING A VOLATILITY TSUNAMI

ABSTRACT

The empirical aim of this paper is motivated by the anecdotal belief among the professional and non-professional investment community that...

Read More

WHY DOES A TRADER NEED EDUCATION?

I feel very blessed in my position as the founder and architect of Optuma, that I get to travel the...

Read More

THE MOST OBVIOUS MACRO TRADE EVERYONE SEEMS TO BE IGNORING

Editor’s note: Stephen Duneier is among the presenters at the Annual Symposium in April.  Below is a reprint of his...

Read More

ALPHANUMERIC FINANCIAL CHARTS

Editor’s note: Richard Brath is among the presenters at the Annual Symposium in April. Below is a reprint of his...

Read More

HOW DO STOP-LOSS ORDERS AFFECT TRADING STRATEGY PERFORMANCE

Editor’s note: Tucker Balch is among the presenters at the Annual Symposium in April. Below is a reprint of his...

Read More

MiFID II SOLUTIONS

Editor’s note: The Markets in Financial Instruments Directive (MiFID) is the EU legislation that regulates firms who provide services to...

Read More

VISUALIZING THE ANXIETY OF ACTIVE STRATEGIES

Editor’s note: Cory Hoffstein is among the presenters at the Annual Symposium in April. This post was originally published at...

Read More

WHY MULTIPLY BY SQRT(252) TO COMPUTE THE SHARPE RATIO

Editor’s note: this article was originally posted at AugmentedTrader.com.

This question comes up every time I teach Computational Investing. Here’s...

Read More

CHART OF THE MONTH: CDS DATA OFTEN LEADS EQUITY PRICES

Since ending 2016 at a price of 72.23, Target shares have broken technical support and dropped by 24%. One quantitative...

Read More

FORECASTING A VOLATILITY TSUNAMI

ABSTRACT

The empirical aim of this paper is motivated by the anecdotal belief among the professional and non-professional investment community that a “low” reading in the CBOE Volatility Index (VIX), or a large decline in it, is by itself ample reason to believe that volatility will spike in the near future. While the Volatility Index can be a useful tool for investors and traders, it is often misinterpreted and poorly used. This paper will demonstrate that the dispersion of the Volatility Index acts as a better predictor of future VIX spikes.

INTRODUCTION

According to the United Kingdom’s National Oceanography Centre, tsunami waves can be as much as 125 miles in length and have resulted in some of the deadliest natural disasters in history. Fortunately, scientists have discovered warning signs of these massive waves, which are believed to be caused by shifts in the earth’s tectonic plates. One of the visible signs of a forthcoming tsunami is the receding of water from a coastline, exposing the ocean floor.1 [See National Oceanography Centre 2011] This is often referred to as “the calm before the storm.” The same type of activity can also be found in financial markets, specifically when analyzing the CBOE Volatility Index (VIX). It is often believed that when volatility gets to a “low” level, the likelihood of a spike increases. However, as this paper will show, there is a better, tsunami-like warning condition that takes place within the markets, providing a stronger indication of potential future equity market losses and Volatility Index increases.

The study of market volatility is important due to the historically negative correlation the Volatility Index has had to U.S. equities. By knowing the warning signs of a tsunami wave of volatility, professional and non-professional traders can better prepare their portfolios for potential downside risks, as well as have the opportunity to profit from advances in volatility and/or declines in equities. The popularity of volatility trading has seen steady growth to over $4 billion with more than 30 index-listed Exchange Traded Products.2 [See Whaley 2013] Drimus and Farkas (2012) note that “the average daily volume for VIX options in 2011 has almost doubled compared to 2010 and is nearly 20 times larger than in the year of their launch, 2006.” We can also see the increase in interest surrounding the Volatility Index by looking at trends in online searches with regard to low levels within the VIX. As of September 20th, 2016 there were 423,000 Google search results for “low VIX” and 4,610 results for “historic low volatility.” Few investors would deny the importance of volatility when it comes to the evaluation of financial markets.

In this paper the author will provide a brief literature review concerning the history of the Volatility Index and important prior studies surrounding the topic of volatility, followed by a discussion of alternative, yet ultimately suboptimal, methods of predicting large swings in the VIX. The paper will conclude with the description, analysis, and results based on the author’s proposed methodology for forecasting outsized spikes within the VIX Index and how this approach may be used from a portfolio management standpoint to help investors better prepare based on the “calm before the storm.”

Those who believe in the adage of buy-and-hold investing often mention that missing the ten or twenty best trading days has a substantially negative impact on a portfolio’s overall return. In turn, they reject the idea of attempting to avoid the worst days in the market, and active management as a whole. However, as Gire (2005) wrote in an article for the Journal of Financial Planning, the best and the worst days are often very close in time to one another. Specifically, 50% of the worst and best days were no more than 12 days apart.3 [See Gire 2005] Looking at the bull market in the S&P 500 between 1984 and 1998, the Index rose an annualized 17.89%. Gire found that missing the ten best days lowered the annualized return to 14.24%, the statistic often cited by passive-investing advocates. Missing the ten worst days increased the return to 24.17%, and missing both the best and worst days produced an annualized return of 20.31%, with lower overall portfolio gyration. Given the negative correlation between the Volatility Index and the S&P 500, the author proposes that an ability to forecast large spikes in the VIX could help curtail an investor’s exposure to some of the worst-performing days within the equity market.
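The arithmetic behind these missing-days comparisons is easy to reproduce. Below is a minimal pure-Python sketch, using a small synthetic return series rather than Gire's actual data, showing how removing the single worst or best day shifts a geometric annualized return:

```python
def annualized_return(daily_returns, periods_per_year=252):
    """Geometric annualized return computed from a list of daily returns."""
    growth = 1.0
    for r in daily_returns:
        growth *= 1.0 + r
    years = len(daily_returns) / periods_per_year
    return growth ** (1.0 / years) - 1.0

def annualized_excluding(daily_returns, drop_best=0, drop_worst=0):
    """Annualized return after removing the n best / m worst days."""
    kept = sorted(daily_returns)
    if drop_worst:
        kept = kept[drop_worst:]          # drop the most negative days
    if drop_best:
        kept = kept[:len(kept) - drop_best]  # drop the most positive days
    return annualized_return(kept)

# Synthetic year: mostly small gains, one crash day, one surge day.
sample = [0.001] * 250 + [-0.08, 0.07]
base = annualized_return(sample)
no_worst = annualized_excluding(sample, drop_worst=1)
no_best = annualized_excluding(sample, drop_best=1)
```

As in Gire's study, skipping the worst day lifts the annualized result above the baseline, while skipping the best day drags it below; the two effects are of comparable size because the extreme days sit close together in the sorted tail.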

HISTORY OF THE VOLATILITY INDEX

To better research, test, and analyze a financial instrument, it’s important to understand its history and purpose. The CBOE Volatility Index was originally created by Robert E. Whaley, Professor of Finance at The Owen Graduate School of Management at Vanderbilt University. The Index was first written about by Whaley in his 1993 paper, “Derivatives on Market Volatility: Hedging Tools Long Overdue,” in The Journal of Derivatives. Whaley (1993) wrote, “The Chicago Board of Options Exchange Market Volatility Index (ticker symbol VIX), which is based on the implied volatilities of eight different OEX option series, represents a markets consensus forecast for stock market volatility over the next thirty calendar days.”

Whaley believed the Volatility Index served two functions: first, to provide a tool to analyze “market anxiety,” and second, to serve as an index that could be used to price futures and options contracts. The first function helped give the VIX its nickname of “fear gauge,” which helps provide a narrative explanation for why the Index can have such large and quick spikes as investor emotions flow through their trading terminals.4 [See Whaley 2008]

The Chicago Board Options Exchange (CBOE) eventually launched Volatility Index (VIX) futures and options in 2004 and 2006, respectively. The VIX in its current form, according to the CBOE, “measures the level of expected volatility of the S&P 500 Index over the next 30 days that is implied in the bid/ask quotations of SPX options.”5 [See CBOE 2016]

LITERATURE REVIEW

Comparing Rising & Falling Volatility Environments

It is often stated in the financial markets community that volatility is mean-reverting, meaning that, like objects affected by gravity, what goes up must come down. Many market professionals attempt to take advantage of the rising and falling trends within the volatility market by echoing Warren Buffett’s famous quote, “Buy when there’s blood in the streets,” using an elevated reading in the Volatility Index as their measuring stick for the level of figurative blood flowing down Wall Street. However, as Zakamulin (2016) states, the median and average durations of rising and falling volatility are not equal. In fact, Zakamulin found that declines in volatility last longer than rises by a factor of 1.4, and the resulting impact on equity markets is asymmetric, with a perceived over-reaction to rising volatility compared to declining volatility.6 [See Zakamulin 2016] This is important, as it tells us that there is less time for an investor to react to rising volatility than there is to react after volatility has already spiked. Thus, the resulting impact on stock prices is disproportionately biased, with stocks declining more during environments of rising volatility than they rise during environments of declining volatility.

Using Volatility to Predict Equity Returns

Much attention has been paid to the creation of investment strategies based on capturing the perceived favorable risk situation of elevated readings in the Volatility Index. Cipollini and Manzini (2007) concluded that when implied volatility is elevated, a clear signal can be discerned for forecasting future three-month S&P 500 returns, in contrast to when volatility is low. When evaluating the Volatility Index’s forecasting ability at low levels, their research notes that, “On the contrary, at low levels of implied volatility the model is less effective.”7 [See Cipollini & Manzini 2007] Cipollini and Manzini’s work shows that there may be a degree of predictability when the VIX is elevated, but that forecasting power diminishes when analyzing low readings in the Volatility Index. In a study conducted by Giot (2002), the Volatility Index is categorized into percentiles based on its value and modeled against the forward-looking returns of the S&P 100 Index over 1-, 5-, 20-, and 60-day periods. Looking at the tenth percentile (equal to 12.76 on the Volatility Index), which includes a sample of 414 observations, the 20-day mean return was found to be 1.06%; however, Giot observed a standard deviation of 2.18, with minimum and maximum returns ranging from -6.83% to 5.3%.8 [See Giot 2002] While Giot demonstrates a relationship between volatility and forward equity returns, the research also diminishes the confidence that can be placed in the directional forecasting power for intermediate-period returns of the underlying equity index. We can take from this that while a low reading in the VIX has shown some value in predicting future volatility, forecasts of the degree and severity of the predicted move are less reliable, given the suboptimal degree of variance.

DATA USED

For purposes of crafting the methodology and charts used within this paper, data was obtained from several credible sources. CBOE Volatility Index data has been acquired from StockCharts.com, which curates its data from the NYSE, NASDAQ, and TSX exchanges.9 [Stockcharts.com]  Data for the CBOE VIX of the VIX was obtained through a data request submitted directly to the Chicago Board Options Exchange.

VOLATILITY SPIKES

While some degree of gyration in stock prices is considered normal and acceptable by most of the investment management community, large swings in price are what catch many investors off guard. It’s these “fat tail” events, often accompanied by sudden spikes in the Volatility Index, that keep investors up at night. Fortunately, many of these spikes can be forecasted; however, first we must address what a “spike” is. While the parameters for defining a “spike” can vary, this author will use a 30% advance from a closing price to a high achieved within a five-trading-day period. Chart 1 shows the Volatility Index between May 22, 2006 and June 29, 2016. Marked on the chart are instances where the VIX has risen by at least 30% (from close to the highest high) in a five-day period when a previous 30+% advance had not occurred in the prior ten trading days. There were 70 such spikes in the above-mentioned time period.
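This spike definition can be sketched in code. The following is one plausible reading of the rule (highest high over the next five sessions at least 30% above today's close, with a ten-day cooldown so clustered advances count once); the data here is made up for illustration, not actual VIX history:

```python
def find_spikes(closes, highs, threshold=0.30, window=5, cooldown=10):
    """Indices i where the highest high over the next `window` sessions is at
    least (1 + threshold) times close[i], skipping any signal that falls
    within `cooldown` days of the previous one."""
    spikes = []
    last = -cooldown - 1  # sentinel: no prior signal
    for i in range(len(closes) - window):
        top = max(highs[i + 1:i + 1 + window])
        if top >= closes[i] * (1.0 + threshold) and i - last > cooldown:
            spikes.append(i)
            last = i
    return spikes

# Toy series: flat at 12 with a single intraday pop to 16 (a 33% advance).
closes = [12.0] * 10
highs = [12.0, 12.0, 12.0, 16.0] + [12.0] * 6
spikes = find_spikes(closes, highs)
```

The cooldown mirrors the paper's exclusion of any day with a 30+% advance already flagged in the prior ten sessions, so one volatility event produces one marker rather than a cluster.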

While previous studies have been conducted on forecasting future volatility, a search of the SSRN suggests that no published analysis has addressed forecasting spikes in volatility specifically. From an asset management perspective, whether the reader is a professional or non-professional, a volatility spike, and with it a decline in stocks, poses a more frequent risk to an equity portfolio than a bear market does. Historically, the S&P 500 has averaged four 5% declines every year, but there have been only 28 bear markets (a 20% or greater decline from peak to trough) since the 1920s.10 [See Hulbert 2016]

METHODS OF VOLATILITY FORECASTING

The traditional thought process holds that low volatility precedes higher volatility, a topic Whaley addresses in his 2008 paper: “Volatility tends to follow a mean-reverting process: when VIX is high, it tends to be pulled back down to its long-run mean, and, when VIX is too low, it tends to be pulled back up.”11 [See Whaley 2008] This is true in a general sense, but the concept does not act as the best predictor of quick spikes in the VIX. Chart 2 provides an example, showing the occurrences where the daily close of the Volatility Index is at a four-week low. The four-week period is not based on optimization but was chosen as an example time period of roughly one month. What can also be observed is the large sample size produced: 100 signals in the roughly ten-year period. The author recognizes that expanding the four-week window would lessen the sample size, but the same basic result would still be reached: a greater number of occurrences than of previously-defined spikes in the VIX. The trouble this causes for the investor is an over-reaction each time volatility reaches a new four-week low, as the VIX often continues its trend lower rather than spiking higher. This shows that simply because the VIX has fallen to a multi-week low, a spike in the underlying Index does not necessarily follow.
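The four-week-low baseline is straightforward to express. A minimal sketch, assuming "four weeks" means a trailing 20-session window and using an artificial declining series to show why the signal fires repeatedly in a downtrend:

```python
def four_week_lows(closes, lookback=20):
    """Indices where the close is at or below every close in the prior
    `lookback` sessions (a new multi-week low)."""
    signals = []
    for i in range(lookback, len(closes)):
        if closes[i] <= min(closes[i - lookback:i]):
            signals.append(i)
    return signals

# A steadily falling series: every day past the warm-up is a fresh low,
# illustrating the over-reaction problem the text describes.
declining = [30.0 - k for k in range(25)]
signals = four_week_lows(declining)
```

In a persistent downtrend every session triggers, which is exactly the flood of signals (100 in ten years) that makes this method hard to act on.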

One could also argue that, because of the Volatility Index’s tendency to mean-revert, volatility becomes overly discounted after a large decline, which is reason enough for it to then spike higher. This can be measured by looking for instances where the VIX has fallen by at least 15% in a three-day period, as shown by the markers in Chart 3. Setting aside the occurrences that take place immediately after a spike in the VIX, looking at periods where volatility has fallen by a large amount in a short period of time does improve the predictability of future large increases in the Volatility Index. However, while the sample size decreases to 53, there are still quite a few occurrences that produce false signals for preceding VIX spikes. It is this author’s opinion that neither of these methods (a four-week low or a 15+% decline) provides an optimal warning to an investor of a heightened risk of forthcoming elevated volatility.
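The large-decline screen can be sketched the same way; this is an illustrative reading of "fallen by at least 15% in a three-day period" (close versus the close three sessions earlier), again on invented numbers:

```python
def sharp_declines(closes, drop=0.15, window=3):
    """Indices i where the close sits at least `drop` (fraction) below the
    close `window` sessions earlier."""
    return [i for i in range(window, len(closes))
            if closes[i] <= closes[i - window] * (1.0 - drop)]

# Toy series: flat at 20, then a sudden drop to 16.5 (a 17.5% fall).
closes = [20.0, 20.0, 20.0, 20.0, 16.5, 16.5]
signals = sharp_declines(closes)
```

Note the signal repeats on consecutive days while the three-day change stays below the threshold; in practice one would de-cluster these the same way the spike definition does.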

VOLATILITY DISPERSION METHODOLOGY

J.M. Hurst was an early adopter of trading bands, as described in his book The Profit Magic of Stock Transaction Timing, drawing envelopes around price and a specified moving average. According to John Bollinger, CFA, CMT, Marc Chaikin was next to improve upon the practice of using bands within trading, using a fixed percentage around the 21-day moving average. Ultimately, in the 1980s, Bollinger built upon the work of Hurst and Chaikin by shifting the outer bands to incorporate the volatility of the underlying market or security, using standard deviation above and below the 20-period moving average. Bollinger chose a 20-period moving average because “it is descriptive of the intermediate-term trend.”12 [See Bollinger] Bollinger notes that by applying analysis to the width of the bands, “a sharp expansion in volatility usually occurs in the very near future.” This idea of narrowing bands as a measure of contraction in the dispersion of a security is the topic this paper will focus on going forward.

While financial markets are never at complete rest per se, the closest they come is trading in a very narrow range. This range can be observed in several ways, whether using Bollinger Bands®, an average true range indicator, or simply the standard deviation of price. Just as the seas become calm and the tide pulls back from the shore before a violent tsunami strikes, the movement of the VIX often declines, sending the index’s dispersion to extremely low levels prior to the Index spiking higher. Chart 4 shows the CBOE Volatility Index and its 20-day standard deviation. While it is outside the scope of this paper, the lookback period used for the standard deviation could be optimized to better suit the timeframe and risk appetite of the investor; however, this author has chosen a period of 20 days in accordance with the timeframe used by Bollinger for his Bollinger Bands. While the VIX and its 20-day standard deviation move in lock-step with one another, additional forecasting ability can be achieved by applying further analysis to the dispersion measurement.
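The dispersion measure itself is just a rolling standard deviation of the index level. A self-contained sketch follows; the paper does not specify population versus sample standard deviation, so the population form is an assumption here:

```python
def rolling_std(values, window=20):
    """Trailing-window population standard deviation, one value per day once
    `window` observations exist."""
    out = []
    for i in range(window - 1, len(values)):
        chunk = values[i - window + 1:i + 1]
        mean = sum(chunk) / window
        var = sum((x - mean) ** 2 for x in chunk) / window
        out.append(var ** 0.5)
    return out

# A dead-flat VIX has zero dispersion; a choppy one alternating between
# 15 and 17 has a dispersion of exactly 1.0 over the window.
calm = rolling_std([15.0] * 20)[0]
choppy = rolling_std([15.0, 17.0] * 10)[0]
```

Low readings of this series, not low readings of the VIX itself, are what the methodology treats as the receding tide before the wave.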

To find an appropriate threshold for forecasting spikes in the Volatility Index, the daily standard deviation readings were ranked by percentile for the period of May 2006 through June 2016. The fifteenth percentile allowed a sizable sample of 373 observations to be obtained; the fifteenth-percentile standard deviation during the above-mentioned timeframe for the Volatility Index is 0.86. Chart 5 shows the scatter plot of the 20-day standard deviation of the VIX against the resulting three-week maximum change in the Index, calculated using the highest high in the subsequent fifteen trading days for each data point. By looking at the maximum change in the VIX we can begin to see that the largest spikes within a three-week period occur when price dispersion is extremely low, while the three-week maximum change in the VIX diminishes as dispersion becomes larger.
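The two quantities behind Chart 5 can be sketched as follows. The nearest-rank percentile rule is an assumption (the paper does not state its interpolation method), and the series here are synthetic stand-ins for the VIX data:

```python
def percentile(values, pct):
    """Nearest-rank percentile (pct in 0..100) of a list of readings."""
    ordered = sorted(values)
    k = int(round(pct / 100.0 * (len(ordered) - 1)))
    return ordered[max(0, min(len(ordered) - 1, k))]

def forward_max_change(closes, highs, horizon=15):
    """For each day, the fractional change from the close to the highest
    high over the next `horizon` sessions (the three-week maximum change)."""
    out = []
    for i in range(len(closes) - horizon):
        top = max(highs[i + 1:i + 1 + horizon])
        out.append((top - closes[i]) / closes[i])
    return out

# Toy data: a flat index at 10 with one brief pop to 13 two weeks out.
closes = [10.0] * 20
highs = [10.0] * 10 + [13.0] + [10.0] * 9
changes = forward_max_change(closes, highs)
p50 = percentile([1, 2, 3, 4, 5], 50)
```

Plotting `rolling_std`-style dispersion against `forward_max_change` for each day reproduces the scatter relationship the paper describes: the biggest forward moves cluster at the lowest dispersion readings.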

To provide a graphical representation of the threshold being met, Chart 6 shows the daily Volatility Index marked with occurrences of the standard deviation being at or below 0.86 when a reading at or below 0.86 had not occurred during the prior ten trading days. The ten-day lookback is used to avoid clusters of occurrences and to better show the initial signal of the threshold being met, which leaves 52 signals in the sample. This sample size is significantly smaller than that of the previously mentioned method of the VIX being at a four-week low, and the signal provides improved foreshadowing of eventual volatility spikes compared to 15+% declines in the VIX.
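The de-clustered threshold signal maps directly to code. A minimal sketch of the rule as stated (at or below 0.86 with no such reading in the prior ten sessions), run on an invented dispersion series:

```python
def threshold_signals(stds, threshold=0.86, lookback=10):
    """Indices where the rolling standard deviation is at or below
    `threshold` and no reading in the prior `lookback` sessions was,
    keeping only the first signal of each cluster."""
    signals = []
    for i, s in enumerate(stds):
        window = stds[max(0, i - lookback):i]
        if s <= threshold and all(p > threshold for p in window):
            signals.append(i)
    return signals

# Toy dispersion series: two separate calm spells; only the first day of
# the three-day calm spell fires, then the later isolated dip fires.
stds = [1.0] * 12 + [0.8, 0.8, 0.8] + [1.0] * 12 + [0.7]
signals = threshold_signals(stds)
```

Feeding in the real 20-day standard deviation series with the paper's 0.86 threshold would reproduce the 52 markers of Chart 6.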

A spike was defined previously as a rise of 30+% in a five-day period. Chart 7 displays volatility spikes but also includes the standard deviation signal markers to show that the majority of spikes that have taken place in the Index occur after the dispersion of the VIX has fallen below the specified threshold. In fact, based on this ten-year data period, very few instances of the threshold being met were not followed by a 30+% spike in volatility. As the seas become calm and the tide pulls back in the ocean before a massive wave, so too does volatility’s dispersion narrow before an eventual spike higher. While not every defined spike is preceded by volatility’s standard deviation declining to a low level, only a handful of signals are not followed by large increases in VIX readings. In other words, not every spike follows a signal, but nearly every signal is followed by a spike.

Because standard deviation is essentially a measure of volatility in and of itself, by using it to analyze the VIX we are in essence evaluating the volatility of the Volatility Index. Fortunately, the CBOE has also created a tool for measuring the volatility of the Volatility Index, called the VIX of the VIX (VVIX). Such a tool is useful because the scope of this paper is focused not just on forecasting future volatility but specifically on spikes in volatility, and the forecast can be improved by incorporating the VVIX.

The CBOE summarizes the VVIX as “an indicator of the expected volatility of the 30-day forward price of the VIX. This volatility drives nearby VIX option prices.”13 [See CBOE] Park (2015) notes that the VVIX acts as a better measurement of tail risk because the VIX options market has larger trading volume, a lower bid-ask spread, and more liquidity than the S&P 500 options market.14 [See Park 2015] This potentially allows forecasts based on volatility’s dispersion to be more accurate.

By applying the same analysis to the VVIX as we did to the VIX, we find that the fifteenth-percentile 20-day standard deviation for the VIX of the VIX is 3.16. Chart 8 plots the Volatility Index with markers notating the instances when the VVIX standard deviation is at or below 3.16. Similar to the previously discussed dispersion of the VIX, the dispersion of the VVIX has a small sample size of 54 over the studied time period. However, similar to the suboptimal method of using large declines in the VIX as a predictor of future spikes, the VVIX dispersion threshold produces many false signals that are not followed by volatility spikes.

To continue improving upon the idea that volatility dispersion is an optimal predictor of future VIX spikes, a simple system can be created using both the VIX and VVIX. This is accomplished by testing for days when both the VIX and the VVIX have 20-day standard deviation readings at or below their respective thresholds. Chart 9 shows where the combination of the two signals (red square markers) is met, as well as the VIX signal alone (green triangle markers), in order to show the differences and overlap of the two methods. As is to be expected, the sample size decreases when the two volatility measurements’ thresholds are combined into a single signal. While the VIX alone produces more triggers of low dispersion, the combination of the VIX and VVIX appears timelier in producing a signal before spikes in the Volatility Index.
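The combined condition is a simple conjunction of the two dispersion tests, using the thresholds the paper reports (0.86 for the VIX, 3.16 for the VVIX). A sketch on made-up dispersion values:

```python
def combined_signals(vix_stds, vvix_stds, vix_thr=0.86, vvix_thr=3.16):
    """Days where BOTH 20-day dispersion series sit at or below their
    respective thresholds."""
    return [i for i, (a, b) in enumerate(zip(vix_stds, vvix_stds))
            if a <= vix_thr and b <= vvix_thr]

# Day 0 satisfies both conditions; day 1 fails the VIX test, day 2 the VVIX.
sig = combined_signals([0.80, 0.90, 0.50], [3.0, 3.0, 4.0])
```

The same ten-day de-clustering used for the single-series signal could be layered on top before comparing against Chart 9.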

Up to this point only a visual representation of the signals has been shown; next we shall look at the numerical changes that occur in the VIX following each of the methods previously discussed, along with the superior method outlined in the section above.

Table 1 shows the three-week change in the VIX, utilizing the maximum and minimum average and median. We can see that the previously discussed methods of using a low in the VIX (lowest close in four weeks) and large declines (15+% decline in three days) do not produce an ‘edge’ over the average three-week change across all VIX readings. However, we do see a much larger maximum and smaller minimum when using the VIX, VVIX, and combined signals.

In fact, the VIX signal has an average three-week maximum that is 54% greater than that of the large-VIX-drop method, with the minimum change being smaller by 49%. Not only does the VIX rise on average by a greater degree for the VIX, VVIX, and combined signals, the VIX also declines less after a signal has been produced. This increase in ‘edge’, combined with the previously discussed decrease in sample size, produces a more manageable signal-generation process with more accurate forecasting ability than the discussed alternative methods of VIX spike forecasting.
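Table 1's inputs are forward-looking extremes measured from each signal day. A sketch of that bookkeeping, with a toy price path standing in for the VIX (averaging and taking medians of these lists would reproduce the table's statistics):

```python
def forward_extremes(closes, signal_idx, horizon=15):
    """For each signal day, the fractional change from the close to the
    highest and lowest close over the next `horizon` sessions."""
    maxes, mins = [], []
    for i in signal_idx:
        window = closes[i + 1:i + 1 + horizon]
        if len(window) < horizon:
            continue  # skip signals without a full forward window
        maxes.append((max(window) - closes[i]) / closes[i])
        mins.append((min(window) - closes[i]) / closes[i])
    return maxes, mins

# Toy path: from 10 the index pops to 14 (+40%) and dips to 9 (-10%)
# inside the three-week window following a single signal on day 0.
closes = [10.0, 10.0, 14.0, 9.0] + [10.0] * 14
maxes, mins = forward_extremes(closes, [0])
```

A signal method with an 'edge' in the paper's sense is one whose `maxes` run larger, and whose `mins` run smaller in magnitude, than the unconditional baseline.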

CONCLUSION

This paper provides an argument for using the dispersion of the VIX, measured by a 20-day standard deviation, as a superior tool for forecasting spikes in the Volatility Index. While not every trader has a specific focus on the Volatility Index within their own respective trading styles or strategies, Munenzon (2010) shows that the VIX has important implications for return expectations across many different asset classes, such as bonds, commodities, and real estate. Although the Volatility Index itself cannot be bought or sold directly, by knowing how to properly evaluate volatility an investor can better prepare his or her portfolio, whether from a standpoint of defense (raising cash, decreasing beta, etc.) or offense (initiating a trading position to capitalize on the expected rise in volatility through the use of ETNs, futures, and/or options). Charts 6 through 9 show that the evaluation of the dispersion of the VIX and VVIX acts as an accurate barometer of future large advances in the Index. Table 1 provides evidence that the VIX rises more and declines less after a signal has been established through dispersion analysis than it does under more commonly used methods applied to volatility. While the scope of this paper is not to create a standalone investment strategy, the concept discussed within can be taken and utilized across a broad scope of investment paradigms and timeframes.

It is believed by much of the investment community that with the VIX at relatively low levels, or following large declines, its nature to mean-revert would carry the Index immediately higher, snapping like a rubber band to elevated levels. This line of thinking produces signals with sample sizes much greater than most traders would likely be able to act upon or monitor and, as Table 1 shows, forecasts sub-par future changes in the VIX on average. While the parameters used within this paper to analyze the dispersion of the Volatility Index were not optimized, the author believes further research can better hone the forecasting ability of this analysis when the VIX and VVIX trade in narrow ranges prior to spikes in the underlying Index.

With relative confidence, the author believes that the dispersion of price, as measured by the daily standard deviation of the VIX and VVIX, acts as a more accurate and timely method of forecasting spikes, as defined in this paper, in the Volatility Index. This method provides an early warning signal of a potential oncoming “volatility tsunami” that can have large negative implications for an investment portfolio, and it allows for the potential to profit from the rising tide of the VIX.

REFERENCES

Bollinger, John. “Bollinger’s Brainstorm.” Bollinger Bands. Bollinger Capital Management, Inc., n.d. Web. 12 Oct. 2016. <http://www.bollingerbands.com/services/bb/>.

CBOE Volatility Index FAQs, Chicago Board of Options Exchange, n.d. Web. 4 Nov. 2016. <http://www.cboe.com/micro/vix/faq.aspx#1>.

CBOE VVIXSM Index, Chicago Board of Options Exchange, n.d. Web. 4 Nov. 2016. <http://www.cboe.com/micro/vvix/>.

Cipollini, Alessandro Paolo Luigi and Manzini, Antonio, Can the VIX Signal Market’s Direction? An Asymmetric Dynamic Strategy (April 2007). Available at SSRN: https://ssrn.com/abstract=996384 or http://dx.doi.org/10.2139/ssrn.996384

Data and Ticker Symbols, StockCharts.com, Web. 4 Nov. 2016. <http://stockcharts.com/docs/doku.php?id=data>.

Drimus, Gabriel G. and Farkas, Walter, Local Volatility of Volatility for the VIX Market (December 10, 2011). Review of Derivatives Research, 16(3), 267-293, (2013). Available at SSRN: https://ssrn.com/abstract=1970547 or http://dx.doi.org/10.2139/ssrn.1970547

Giot, Pierre, Implied Volatility Indices as Leading Indicators of Stock Index Returns? (September 2002). CORE Discussion Paper No. 2002/50. Available at SSRN: https://ssrn.com/abstract=371461 or http://dx.doi.org/10.2139/ssrn.371461

Gire, Paul J. (2005) Missing the Ten Best. Journal of Financial Planning.

How a Tsunami Wave Works. National Oceanography Centre, 11 Mar. 2011. Web. 27 Oct. 2016. <http://noc.ac.uk/news/how-tsunami-wave-works>.

Hulbert, Mark. “Bear Markets Can Be Shorter Than You Think.” The Wall Street Journal, 06 Mar. 2016. Web. 08 Oct. 2016. <http://www.wsj.com/articles/bear-markets-can-be-shorter-thanyou-think-1457321010>.

Munenzon, Mikhail, 20 Years of VIX: Fear, Greed and Implications for Alternative Investment Strategies (April 29, 2010). Available at SSRN: https://ssrn.com/abstract=1597904 or http://dx.doi.org/10.2139/ssrn.1597904

Park, Yang-Ho, Volatility-of-Volatility and Tail Risk Hedging Returns (May 18, 2015). Journal of Financial Markets, Forthcoming. Available at SSRN: https://ssrn.com/abstract=2236158 or http://dx.doi.org/10.2139/ssrn.2236158

Whaley, R. E. Derivatives on market volatility: Hedging tools long overdue, Journal of Derivatives 1 (Fall 1993), 71-84.

Whaley, R. E. (2009). Understanding the VIX. The Journal of Portfolio Management, 35(3), 98-105. doi:10.3905/jpm.2009.35.3.098

Whaley, Robert E., Trading Volatility: At What Cost? (May 6, 2013). Available at SSRN: https://ssrn.com/abstract=2261387 or http://dx.doi.org/10.2139/ssrn.2261387

Zakamulin, Valeriy, Abnormal Stock Market Returns Around Peaks in VIX: The Evidence of Investor Overreaction? (May 1, 2016). Available at SSRN: https://ssrn.com/abstract=2773134 or http://dx.doi.org/10.2139/ssrn.2773134

Contributor(s)

Andrew Thrasher, CMT

Andrew Thrasher, CMT is the Portfolio Manager for Financial Enhancement Group LLC and founder of Thrasher Analytics LLC. Mr. Thrasher holds a bachelor’s degree from Purdue University and the Chartered Market Technician (CMT) designation.  He is the 2017 and 2023 winner of the Charles H. Dow...

WHY DOES A TRADER NEED EDUCATION?

I feel very blessed that, in my position as founder and architect of Optuma, I get to travel the world speaking to traders, analysts, and portfolio managers at all levels, from those taking their first tentative steps in their trading life to the hardened professional analyst who has experienced all market conditions and is doing very well from their endeavors. In all my conversations with traders and analysts over the last 20 years, I’ve found one critical element separating the winners from the losers: their attitude to personal education and their willingness to invest in it.

Those who have done well over the long-term are always the individuals seeking out quality education—those who continue to learn. Those who have not been able to make trading work want a “magical” formula and are unwilling to invest time and energy to get the results. There is always more that we can learn. Even if someone is teaching very basic material, I find that I’m always able to take something away.

Here are a number of the excuses that wanna-be traders have told me over the years to justify why they won’t invest in their education:

If the education is so good, why do they sell it?
Wouldn’t they keep it for themselves?

This is one of the most common statements I hear, and I believe it comes from narrow-minded thinking. In most cases educators are using the systems they teach, and do so profitably. It’s not unusual for education companies these days to display independently audited results on their websites for those who want more than a few handouts showing the results of a backtesting system.

Then there are people who are gifted with the ability and passion to study the markets and devise new technical ways to trade. Often, what I have seen is that these people, though analytically brilliant, are terrible traders. The first person to introduce me to the markets, technical analysis, and Gann was a man who devised an incredible set of rules for entering and exiting the market on real-time charts. The only problem was that he didn’t have the discipline to follow his own rules. The results for him were terrible. He was an analyst, not a trader.

Trading is largely an inward-focused endeavor. A trader is more often than not interested in making money for themselves, usually at the expense of the person on the other side of the trade. In today’s jaded and skeptical society, many people look for the hidden agenda when someone stands up and offers a course on how to trade. The motivation of most reputable educators comes from the realization that there is something “magical” that happens when you start to teach. In the act of passing on knowledge, you gain insight. The questions that a student asks can help refine your own beliefs and justifications. Giving to others can be more of a reward than taking for oneself. Of course, many people just love to teach!

As a company, Optuma only succeeds when all our clients succeed. The more we can do to help our clients, the longer they stay using our products and services. That helps offset the cost of our education, and also enables us to provide a range of education for free (see learn.optuma.com).

It’s just a scam.

Unfortunately there are too many cases where this is true. It’s greed in people that makes them willing to believe a miraculous course will teach them everything they need to know to be able to make millions each year. I’ve seen educators who charge very little for some of the most powerful trading techniques available, and also those who charge way too much for a really basic technical method of trading that soon stops working. Be careful if someone is charging an exorbitant amount of money for a course. Do your own research (Google is your friend). Make sure you are getting value for money, and watch out for the cunning salesman who talks about all the money you can make with little effort on your behalf. Like most professions, becoming a successful trader requires blood, sweat, and tears (well sweat, capital, and tears).

Many of us have been burnt when we’ve been blinded by greed (or is that just me?). The truth is there is no shortcut to success. It always astounds me when I hear of traders with a goal to earn a brain surgeon’s salary from trading, yet they are unwilling to invest money in a course or even dedicate a year to studying how to trade. I’m sure you’ve heard the story of the guy who became a millionaire overnight when he bought a penny stock that skyrocketed.

In my opinion, they are the “lottery winners” of trading. In most cases they just got lucky, or they knew something that you and I didn’t. You will grow old very quickly waiting for those trades. Much better to get on with the business of making a percent here and there on good analytical trades.

I’ll just buy a book.

I’m an avid reader and I believe books have an important part to play in education on any topic, but I see books as enhancing the education gained from courses, not replacing it. In building a business from the ground up, I’ve read many books on different topics to help me learn what to do, but nothing has ever been as effective or life-changing as attending a seminar or face-to-face mentoring. What I find is that taking time away from daily activities to dedicate to your education is like pouring accelerant on a fire. It really helps take your understanding to a new level and gives you the ability to apply new knowledge with confidence.

In any field of work there is always training and re-training required. This is why professionals usually make good traders: they know education is not a destination but a lifelong journey. They will continually reassess their methods and look for new and better ways to excel at their job. Some of the most successful traders I have met are always learning. They are open to finding new techniques to experiment with (note: they don’t just jump from one idea to another with their whole trading capital; they will usually keep most of their capital in systems that are tried and true, but divert an amount to try new ideas).

All of that to say “YES”—a trader does need education if they want to be successful long-term.  Trading can be a great way to make a comfortable living if you are willing to learn, persevere, and discipline yourself.

Contributor(s)

Mathew Verdouw, CMT, CFTe

As the founder and CEO of Market Analyst (1996) and Optuma (2019), Mathew Verdouw, CMT, CFTe has been working in the field of Technical Analysis for over 28 years. His inquisitive nature, engineering background, and passion for helping people led Mathew to...

THE MOST OBVIOUS MACRO TRADE EVERYONE SEEMS TO BE IGNORING

Editor’s note: Stephen Duneier is among the presenters at the Annual Symposium in April. Below is a reprint of his work at BijaAdvisorsLLC.com.

I recently gave a TED Talk titled, “How to Achieve Your Most Ambitious Goals.” In it, I repeatedly tell the audience that I possess no magical gift of skill or talent. I don’t say it out of humility, but rather because I believe it to be the truth. Listen in on any coaching call or read any of the publications I’ve written and you will discover that very little of what I say could be considered rocket science. In fact, it is all quite obvious. However, if there is something that separates me from the majority, it is that I actually enjoy frolicking in the domain of the obvious, the simple, and the easy to understand and predict. It is my philosophy that everything can be broken down into a simple, easy to understand form. Nothing is inherently complex. It is we who overcomplicate things by overweighting incidental factors and attempting to predict the inherently variable.

It happens all the time in this business, especially among macro investors. Rather than putting large bets on high probability occurrences, most in this industry would prefer to attempt lots of predictions on highly randomized, difficult to predict outcomes.

While it seems the entire world is putting all their chips on things like when the Fed will move and by how much, the demise of the Chinese banking system or, even worse, which positions are crowded, I’m scratching my head wondering why everyone isn’t focused on the most obvious trade in all of global macro. Yes, I’ve been writing about the global macro forces that are, and will likely continue, weighing on the price of oil, potentially for years and even decades to come; of course, that doesn’t mean you should always be short. However, thanks to OPEC’s mistake and the market’s lack of appreciation for one of the most powerful events in market history, we global macro investors have been presented with a wonderful gift. The gift of a fantastic risk/reward opportunity that can be expressed in at least two ways: short upside via defined risk strategies (see OIL2 – WTI Topped) and low delta puts. (If your immediate reaction was, “What should I be doing in CAD or NOK?”, you’ve witnessed evidence of our natural desire to complicate what needn’t be complicated.)

Below, I share links to most of the pieces I’ve written related to oil over the past couple of years. In order to simplify this seemingly complex beast, you must invest the time and employ the cognitive strain necessary to gain perspective. Without it, you’ll likely continue ignoring the most glaringly obvious trade available, in favor of the trade du jour. For those unwilling to make the investment, here’s a synopsis.

1. It was the orchestration of the largest urbanization project in the history of man that caused the spike in oil, and every other commodity (read The Real Reason Commodities Collapsed).

2. The impact of that project is over.

3. There are no obvious growth drivers on the horizon anywhere in the world, and even if there were, none would have anything close to the impact on demand that the Chinese project had. So forget about reverting to the mean of an outlier event.

4. The historic spike in prices triggered two responses.

  • On the demand side, technology set about reducing future demand by making us far, far more efficient users of energy.
  • On the supply side, technology offered many new ways of satisfying energy demand and made producers better at extracting every last drop.

5. The sudden disappearance of demand growth from China’s project, combined with more efficient usage and a supply side that is working feverishly to produce more and more, will weigh heavily on the price.

  • See the persistent growth in inventories since the Chinese impact disappeared in 2009. Because specialists in oil like to focus on the weekly changes in this data, they seem to be completely oblivious to the obvious long term trend that has resulted in a more than tripling of inventories, and growing.

6. In the wake of previous urbanization projects of historic proportions, commodity prices were depressed for extended periods.

7. OPEC is impotent, but neither they nor market participants seem to realize it yet (read OPEC’s Dilemma).

8. Deregulation under the Trump administration will only serve to speed up the eventual collapse of oil prices as production is ramped up.

9. Short-term supply and demand adjustments have very little correlation to the price of oil.  It is all about the long-term (see Supply, Demand and the Price of Oil). 

10. There is a tipping point in oil. In other words, there are discrete moments to be aware of, because they can be very impactful, and create great confusion if you don’t recognize them (see Two Certainties: Yield and Certainty of Yield).

11. Also see What Makes High Yield Energy Different.

Contributor(s)

Stephen Duneier

Stephen Duneier is the founder and CEO of Bija Advisors, an investment strategy coaching group. For nearly thirty years, he has applied cognitive science to investment management. The result has been the turnaround of numerous institutional trading businesses, career best returns for experienced portfolio...

ALPHANUMERIC FINANCIAL CHARTS

Editor’s note: Richard Brath is among the presenters at the Annual Symposium in April. Below is a reprint of his work from his blog at RichardBrath.wordpress.com.

Financial charting has long used alphanumerics as point indicators in charts. One of the oldest I can find is Hoyle’s Figure Chart (from The Game in Wall Street and How to Play it Successfully: 1898) which essentially plots individual security prices in a matrix organized by time (horizontally) and price (vertically).

This textual representation evolved over the decades. By 1910, Wyckoff (Studies in Tape Reading: 1910) was creating charts where x and y are still time and price, but he was writing down volumes instead of prices, and connecting together subsequent observations with a line.

By the 1930’s these had evolved into early point and figure charts, such as can be seen in DeVilliers and Taylor (Devilliers and Taylor on Point and Figure Charting: 1933). Columns use X’s to plot prices and other characters to denote particular price thresholds.

These charts look pretty close to modern financial point and figure charts. Now we typically use X’s for a column of rising prices and O’s for a column of falling prices, and other characters may be used to denote particular time thresholds (e.g. 1-9, A-C to indicate the start of each month).
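The X/O convention can be sketched in code. Below is a heavily simplified point and figure builder; the `box` and `reversal` parameters are illustrative assumptions, and real P&F charting has further conventions (including where a reversal column starts and the time-marker characters mentioned above):

```python
def point_and_figure(prices, box=1.0, reversal=3):
    """Build simplified point & figure columns: 'X' columns for rising
    prices, 'O' columns for falling prices, switching direction only
    after a move of `reversal` boxes against the current column."""
    cols = []            # list of [direction, number_of_boxes]
    direction = None
    anchor = prices[0]   # price level of the current column's extreme
    for p in prices[1:]:
        move = p - anchor
        if direction is None:
            if abs(move) >= box:                 # first column forms
                direction = 'X' if move > 0 else 'O'
                boxes = int(abs(move) // box)
                cols.append([direction, boxes])
                anchor += boxes * box * (1 if direction == 'X' else -1)
        elif direction == 'X':
            if move >= box:                      # extend rising column
                boxes = int(move // box)
                cols[-1][1] += boxes
                anchor += boxes * box
            elif -move >= reversal * box:        # reverse into 'O' column
                boxes = int(-move // box)
                cols.append(['O', boxes])
                direction = 'O'
                anchor -= boxes * box
        else:
            if -move >= box:                     # extend falling column
                boxes = int(-move // box)
                cols[-1][1] += boxes
                anchor -= boxes * box
            elif move >= reversal * box:         # reverse into 'X' column
                boxes = int(move // box)
                cols.append(['X', boxes])
                direction = 'X'
                anchor += boxes * box
    return [(d, n) for d, n in cols]

# A rise to 104 then a 4-point fall yields one X column and one O column:
print(point_and_figure([100, 103, 101, 104, 100]))
```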

Other alphanumeric charts evolved along the way as well. Here’s an interesting depression-era chart plotting a histogram of states based on state unemployment rates. Like Wyckoff, the author seems to be interested in keeping the alphanumerics inside circles. Also, note that standardized two-letter codes for states did not yet exist; states are numbered instead. (From W. C. Cope’s book Graphic Presentation: 1939.)

Fast forward to the 1980’s, and we have Peter Steidlmayer’s Market Profile® charts that appear reminiscent of the alphanumeric distributions seen in the depression-era chart. In these distributions, the alphanumeric value represents times when a security traded at a specific price. Depending on the timeframe of the chart, different mappings may be used. One common intraday convention is to use characters A-X and a-x to represent half-hour intervals throughout the day, with a split from uppercase to lowercase at noon.
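That lettering convention can be sketched as a small helper. This sketch assumes the uppercase half-hours run from midnight to noon, which is one mapping among many; specific platforms differ in where the letters start:

```python
from datetime import time

def tpo_letter(t: time) -> str:
    """Map a time of day to its half-hour profile letter: 'A'-'X'
    before noon, 'a'-'x' from noon onward (one convention among many)."""
    slot = t.hour * 2 + t.minute // 30       # half-hour index 0-47
    if slot < 24:
        return chr(ord('A') + slot)          # 00:00-11:59 -> 'A'..'X'
    return chr(ord('a') + slot - 24)         # 12:00-23:59 -> 'a'..'x'
```

For example, under this mapping the 9:30-10:00 interval is labeled 'T', and the first half-hour after noon is labeled 'a'.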

There are many, many variants of market profile charts now, e.g. sierrachart.com, windotrader.com, bluewatertradingsolutions.com, prorealtime.com, cqg.com, and so on. Given the many possible data attributes and analytics that one might associate with a character in a chart, it can become a challenge to encode them. As a result, one can find interesting variants. Beyond position, letters, and case:

  • color: of the foreground letter or background square
  • bold: to indicate a row or potentially as a highlight to one time interval, e.g. MarketDelta
  • superscripts: e.g. eSignal.
  • added symbols: asterisks, less than, greater than, etc.
  • added shapes: circles and diamonds

Jesse Livermore (How to Trade in Stocks: 1940) created his own variant of alphanumeric charts stripped down to tracking only the minimums and maximums, discarding the intervening levels and using color and underlines to indicate information.

One interesting discussion point is the actual use of these charts. Whenever I show these charts to the visualization research community, people are aghast and suspicious. There’s so much going on in these charts, so many different things being shown simultaneously, that they don’t believe people actually use them, or they assume these charts can’t be perceptually efficient.

On the other hand, I’ve talked to people who’ve traded off these charts their entire career.  They see patterns and pick out things immediately at very different scales: individual outliers, columns of a particular letter, the shape of a distribution, and so on. Much like an expert chess player, these market participants have learned these charts, know how to interpret them, and use them to make trading decisions.

To be fair, not everyone in the visualization community is shocked: some are genuinely curious. Instead of reducing visualizations down to just one or two attributes, here’s something heavily loaded with a lot of visual attributes. And it’s not a static poster where you have no interaction: these are on computer screens packed with interactive features. In spite of all the computational ability to filter and reduce, here’s a community that has embraced these densely packed charts. People are actually using them to see macro patterns (shapes of distributions) and micro readings (individual characters), but they are also able to attend to intermediate patterns such as particular letters within a distribution. Perhaps they aren’t seeing patterns as fast as preattentive recognition, but they are still seeing patterns quickly with this external cognitive aid. There’s still more that the visualization community needs to understand about expert users.

Contributor(s)

Richard Brath

Richard Brath is a long-time innovator in data visualization in capital markets at Uncharted Software. His firm has provided new visualizations to hundreds of thousands of financial users, in commercial market data systems, in-house buy-side portals, exchanges, regulators and independent investment research...

HOW DO STOP-LOSS ORDERS AFFECT TRADING STRATEGY PERFORMANCE

Editor’s note: Tucker Balch is among the presenters at the Annual Symposium in April. Below is a reprint of his work from his web site at AugmentedTrader.com.

“A stop order is an order placed with a broker to sell a security when it reaches a certain price. A stop-loss order is designed to limit an investor’s loss on a position in a security.” —Investopedia

In this article, we investigate how the addition of stop-loss orders affects a generic trading strategy. When investors enter a new position in a stock, they often simultaneously put in an order to exit that position if the price dips to a certain level. The intent is to prevent a substantial loss on the stock if a significant unanticipated negative event occurs.

As an example, if we bought a fictitious stock XYZ at $100.00, we might put in a 5% stop-loss order (at $95.00). If the price of XYZ continues upward as we hope, we accrue additional value, but if the price suddenly drops 15% to $85.00 we’d exit with a loss of only 5%. So a stop-loss order limits downside risk while enabling upside gains.
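The arithmetic of that example can be sketched in a few lines. This is a simplified model that assumes the stop fills exactly at the stop price on a closing basis, which real fills do not guarantee:

```python
def exit_with_stop(entry, prices, stop_pct=0.05):
    """Return the fractional P&L of a long position that exits at the
    first price at or below the stop level, else at the final price."""
    stop = entry * (1 - stop_pct)
    for p in prices:
        if p <= stop:
            return p / entry - 1      # stopped out at (or near) the stop
    return prices[-1] / entry - 1     # held to the end of the path

# XYZ bought at $100.00 with a 5% stop: a path down through $85.00
# exits at the $95.00 stop, capping the loss near -5%.
print(exit_with_stop(100.0, [102.0, 95.0, 85.0]))
```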

In many cases this plan works as intended.

Sounds great, how can it go wrong?

A key problem with stop-loss orders is that the price might dip before it goes up more significantly. Consider the chart at right of AAPL’s price during a few months in 2012. If we purchased AAPL at $86.14 on April 27 (all the way to the left) and simultaneously put in a 5% stop-loss order, we’d exit on or about May 4th with a 5% loss.

On the other hand, if we avoided the stop-loss order and held AAPL until September 12, we would have made 12.85% instead of losing 5%. In this case, the stop-loss order effectively cost us 17.85%. This sort of outcome is more likely with volatile stocks because they’re more likely to bounce around and “tag” the stop-loss price along their way to a higher price.

There are other risks as well. As the price is on its way down and the stop-loss order is triggered, it is not necessarily the case that you’ll get the price you wanted. The price may continue past the stop-loss level to a significantly lower price before your order executes. There are more complexities with stop-loss exits that we could go into, but suffice it to say that there’s no guarantee you’ll get the stop-loss price you set.

We can also enter stop-gain orders

There’s another sort of order that is symmetric to the stop-loss. Stop-gain orders enable you to “lock in” gains when the price reaches a certain target level. The idea is that once the stock meets your target price, you should take the profits and avoid the risk of the stock later losing value.

An experiment: How stop-loss and stop-gain orders affect strategy performance

In order to evaluate the utility of stop-loss and stop-gain orders we created a notional strategy.  We then tested it first with no stop orders, and then with stop orders at different levels. Our strategy works as follows:

• Each month, compose a portfolio of the Dow Jones Industrial 30 stocks
• Require a minimum 2% allocation to each stock
• Optimize the remaining funds to stocks for maximum Sharpe Ratio
• Enter individual stop orders (if any) for each position
• Exit positions as appropriate over the next month

Note that for this experiment we utilized the members of the Dow as of today (January 2016) over the entire simulation, so our backtests are not survivor bias free. That doesn’t matter much though because what we’re investigating here is how stop orders affect the strategy in a relative manner.

The performance of our baseline strategy from January 2001 to early 2016 is illustrated below:

As you can see, this strategy provides great cumulative returns (410%), but it is more volatile than the Dow and also subject to significant drawdown (-55%). Can stop-loss orders help?  Let’s see. Here’s a chart of the same strategy, but now with a 5% stop-loss order applied at the beginning of each monthly trading cycle:

As you can see, the addition of the 5% stop-loss significantly reduces cumulative return (from 410% to 101%). There are some benefits, however: it limits drawdown to only -40.31%, and it reduces daily volatility by about 40%.

It might be that a 5% stop-loss is too tight, causing us to exit too early. So, let’s try more values.

Results varying stop-loss and stop-gain settings

We repeated our experiment while varying the stop-loss value from 1% to 20%. We also tested the strategy with symmetric stop-gain orders (in other words, if we have a 2% stop-loss, we also add a 2% stop-gain). While varying the stop-loss we measured cumulative return, Sharpe ratio, and max drawdown. Let’s look at each of these metrics separately:
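For reference, the three metrics can be sketched as plain Python helpers. These are generic textbook definitions, not the study's actual code; the Sharpe ratio here assumes a zero risk-free rate, annualizes by the square root of 252 trading days, and uses the population standard deviation:

```python
import math

def cumulative_return(returns):
    """Total compounded return over the whole period."""
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio, assuming a zero risk-free rate."""
    n = len(returns)
    mean = sum(returns) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in returns) / n)
    return math.sqrt(periods_per_year) * mean / std

def max_drawdown(returns):
    """Worst peak-to-trough loss of the compounded equity curve,
    returned as a negative number."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1 + r
        peak = max(peak, equity)
        worst = min(worst, equity / peak - 1)
    return worst
```

A sweep would then run the backtest once per stop level (via some hypothetical `backtest(stop_pct)` function) and record these three numbers for each setting.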

In the figure at right you can see that as we increase the stop-loss level from 1% to 20%, cumulative returns increase significantly (blue line). We see very similar results when we additionally add stop-gain orders (red). The highest return is provided when no stop-loss orders are applied at all (green line).

A reasonable conclusion to draw here is that to maximize cumulative return it is best not to exit with stop-loss or stop-gain orders. That approach, however, does expose the strategy to drawdown risk. So let’s take a look at drawdown.

Drawdown is a measure of peak-to-trough loss (remember the -55% drawdown during the great recession?). Smaller negative numbers are better. Figure 4 illustrates how drawdown is affected by increasing stop-loss order levels. As you can see, drawdown increases significantly as we increase the stop-loss level (and the corresponding stop-loss/stop-gain pairs). So stop-loss orders do serve their intended purpose of protecting against significant drawdown. This protection, though, comes at the price of overall returns.

Clearly there is a tension between the protection afforded by stop-loss orders and the potential return for our strategy without them. We can look at one more metric to seek some resolution.

Sharpe ratio is a measure of risk-adjusted return. It considers the volatility of the portfolio as well as its return. Higher Sharpe ratios are better. Figure 5 shows us how Sharpe ratio is affected as stop-loss size is increased. Notice that with stop-loss only (blue), Sharpe ratio is fairly constant at about 0.50, so changing the stop-loss level has little effect. However, when we add stop-gain as well, we see especially poor Sharpe ratios at low stop-gain levels from 1% to 6% (red). As we increase stop-loss/stop-gain past 8%, though, we see a fairly constant Sharpe ratio.

Some take home conclusions

Remember that the results here are for a particular strategy and that they may not necessarily generalize to your trading strategy. Given that caveat, here are some of the conclusions we can draw from our experiments using stop-loss orders with this strategy:

• Stop-loss orders do effectively protect against drawdown, but at the cost of cumulative return.
• The combination of stop-loss with stop-gain orders is more effective at limiting drawdown than stop-loss orders only.
• Low stop-gain orders (1% to 5%) significantly negatively impact cumulative return and Sharpe ratio.
• For this strategy, the “sweet spot” where Sharpe ratio is maximized and drawdown is somewhat limited seems to be in the 8% to 10% range with symmetric stop-loss and stop-gain orders.

Note again, that these conclusions are specific to this strategy. Please do not consider this to be investment or trading advice.

This information has been prepared by Lucena Research Inc. and is intended for informational purposes only. This information should not be construed as investment, legal and/or tax advice. Additionally, this content is not intended as an offer to sell or a solicitation of any investment product or service. Please note: Lucena is a technology company and not a certified investment advisor. Do not take the opinions expressed explicitly or implicitly in this communication as investment advice. The opinions expressed are of the author and are based on statistical forecasting based on historical data analysis. Past performance does not guarantee future success. In addition, the assumptions and the historical data based on which an opinion is made could be faulty. All results and analyses expressed are hypothetical and are NOT guaranteed. All Trading involves substantial risk. Leverage Trading has large potential reward but also large potential risk. Never trade with money you cannot afford to lose. If you are neither a registered nor a certified investment professional this information is not intended for you. Please consult a registered or a certified investment advisor before risking any capital. 

Contributor(s)

Tucker Balch, Ph.D.

Tucker Balch, Ph.D. is a former F-15 pilot, professor at Georgia Tech, and co-founder and CTO of Lucena Research, an investment software startup. His research focuses on topics that range from understanding social animal behavior to the challenges of applying Machine Learning...

Anderson Trimm, Ph.D.

MiFID II SOLUTIONS

Editor’s note: The Markets in Financial Instruments Directive (MiFID) is the EU legislation that regulates firms who provide services to clients linked to ‘financial instruments’ (shares, bonds, units in collective investment schemes and derivatives), and the venues where those instruments are traded. MiFID will result in significant changes for the research community. IHS Markit has prepared a white paper explaining the requirements of MiFID and solutions firms can consider to meet those requirements. MiFID II Solutions can be downloaded from their web site for free.  Below are extracts from the paper highlighting the new requirements.

A wide-ranging piece of legislation, MiFID II aims to create fairer, safer and more efficient markets through improving investor protection, increasing transparency in OTC markets and changing market structure to encourage more competition.

Taken together, the measures of MiFID II affect every part of the securities trading value chain.

Investor Protection

Under the reforms, new legislation establishes strict rules around conflicts of interest, commissions and inducements to improve investor protection by increasing transparency around the use of client money to pay for research.

Historically, research payments have been linked to trading volumes with few firms using formal research budgets. Unbundling the payment for research from the payment for execution has been acknowledged as one way to address these potential conflicts.

New requirements state that research is not considered an inducement if the Investment Firm (IF) pays directly out of their P&L or from a research payment account (RPA).

Transparency

MiFID II expands pre-trade and post-trade transparency regimes to equity-like instruments, bonds, derivatives and structured products, among other financial instruments. For OTC derivatives, there are two layers of trade reporting to enhance price transparency and help regulators monitor risk and market activity.

Post-trade public reporting: MiFIR real-time public reporting is required to be sent to an Approved Publication Arrangement (APA). Post-trade transaction reporting: MiFIR transaction reporting is required to be sent to an Approved Reporting Mechanism (ARM) by T+1, and transaction reports must include transaction data, legal entity data, and personal data about the trader (nonpublic personal information, NPPI).

Market Structure

MiFID II aims to increase competition in OTC derivatives markets through mandating the use of electronic trading venues for certain instruments.

Trading requirement: MiFIR requires trading of certain liquid instruments on a trading venue; Multi-lateral trading facility (MTF) or Organised Trading Facility (OTF).

Repapering: MiFID II will require firms to establish and implement an order execution policy. This will mean that some firms will need to undertake a substantial repapering process when updating their terms of business and obtaining consent from clients. Additionally, firms will need to confirm legal entity identifiers (LEIs) and reaffirm client categorizations as professional or retail.

Planning ahead and working to comply

Despite MiFID II being delayed to 2018, the urgency around implementing solutions to comply with the new requirements has not subsided.

These and other regulatory pressures are mounting across the major regions, and as electronification and market structure changes expand globally, firms have to commit to building a basic foundation to ensure regulatory compliance.

Looking beyond simple compliance, forward-looking buyside and sell-side firms are exploring future-proof frameworks as a potential competitive differentiator. The tools that were once ‘nice to have’ are becoming more ‘must have’ as a change is afoot.

To learn more about MiFID and IHS Markit’s solutions, please visit their web site.

Contributor(s)

IHS MARKIT

VISUALIZING THE ANXIETY OF ACTIVE STRATEGIES

Editor’s note: Cory Hoffstein is among the presenters at the Annual Symposium in April. This post was originally published at ThinkNewfound.com and is available as a PDF here.

Summary
• Prospect theory states that the pain of losses exceeds the pleasure of equivalent gains.  An oft-quoted ratio for this pain-to-pleasure experience is 2-to-1.
• Evidence suggests a similar emotional experience is true for relative performance when investors compare their performance to common reference benchmarks.
• The anxiety of underperforming can cause investors to abandon approaches before they benefit from the long-term outperformance opportunity.
• We plot the “emotional” experience investors might have based upon the active approach they are employing as well as the frequency with which they review results. The more volatile the approach, the greater the emotional drag.
• Not surprisingly, diversifying across multiple active approaches can help significantly reduce anxiety.

Last week, Longboard Asset Management published a blog post titled A Watched Portfolio Never Performs. What we particularly enjoyed about this post was a graphic found in the middle, which applied prospect theory to contrast actual results with the results investors perceive based upon their emotional experience.

In prospect theory, investors tend to feel the pain of losses more than the pleasure of equivalent gains. Investors that check their portfolio more frequently compound those negative emotions faster than those that check less frequently. As a result, they may perceive their experience as being riskier than it really is.

This is made worse by the fact that investors that check their portfolios more frequently are mathematically more likely to see periods of losses than those that check less frequently.
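That mathematical claim is easy to check with a quick simulation: given a positive drift, the chance that any single checking period shows a loss shrinks as the period lengthens. The drift and volatility parameters below are illustrative assumptions, not estimates from the article:

```python
import random

def loss_frequency(daily_mu=0.0003, daily_sigma=0.01, horizon=21,
                   years=200, seed=42):
    """Fraction of non-overlapping `horizon`-day periods that show a
    negative compounded return, for a Gaussian daily-return model."""
    random.seed(seed)
    days = years * 252
    rets = [random.gauss(daily_mu, daily_sigma) for _ in range(days)]
    losses = periods = 0
    for i in range(0, days - horizon + 1, horizon):
        total = 1.0
        for r in rets[i:i + horizon]:
            total *= 1 + r
        periods += 1
        losses += total < 1.0
    return losses / periods

print(loss_frequency(horizon=1))    # daily checks: losses in roughly half of periods
print(loss_frequency(horizon=252))  # annual checks: far fewer losing periods
```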

When prospect theory and mathematics are tied together, we get the following result:

While in actuality, the investors checking their portfolios daily, weekly, and monthly all had the same long-term performance result (assuming, of course, they were able to stick with their investment), the anxiety caused by checking performance more frequently caused the daily investor to feel like their long-term performance was much worse than it really was.

While prospect theory is most often applied to absolute gains and losses, we believe it also applies to relative portfolio performance. Investors constantly compare their results to standard benchmarks.

In the remainder of this commentary, we want to extend Longboard’s example to explore how typical active strategies – expressed as factor tilts – feel to investors based upon how frequently they evaluate their portfolio.

Methodology & Data

To explore the idea of anxiety caused by relative performance in active strategies, we will look at the performance of long/short factor portfolios.

The idea here is that a long-only factor portfolio (e.g. a long-only value portfolio) can be made by overlaying a market portfolio with a long/short value portfolio. Therefore, relative performance to the benchmark will be governed entirely by the size of the long/short portfolio overlay.
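In return terms, this framework says the tilted portfolio's relative performance is just the scaled long/short overlay. A minimal sketch, assuming simple additive (non-compounded) one-period returns:

```python
def tilted_return(r_market, r_long_short, overlay_weight):
    """One-period return of a benchmark portfolio plus a scaled
    long/short factor overlay (simple additive returns assumed)."""
    return r_market + overlay_weight * r_long_short

# A +1% market month with a +0.5% value long/short and a 40% overlay:
r = tilted_return(0.01, 0.005, 0.4)
print(r - 0.01)  # relative performance vs. benchmark = 0.4 * 0.5% = 0.2%
```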

There are a variety of reasons why this framework is not true in practice, but we feel it adequately captures the concept we are looking to explore in this commentary.

The long/short factor portfolios we employ come from AQR’s factor library. Specifically, we leverage their Size (“SMB”), Value (“HML Devil”), Momentum (“UMD”), Quality (“QMJ”), and anti-beta (“BAB”) factor data.

Factor portfolio returns are only available on a monthly basis, so we will recreate the above Longboard graphic for investors that review their portfolio on a monthly, quarterly, and annual basis. Using monthly data allows us to go back as far as 1927 to evaluate performance for several factors.

To create “experience” returns, the return of the long/short portfolio is calculated over the investor’s evaluation period. If the return over the period is negative, the loss is doubled, to account for the fact that investors are reported to experience the pain of a loss twice as much as the pleasure of an equivalent gain.
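That adjustment is straightforward to express in code. A minimal sketch of the experience-return rule, assuming the 2-to-1 pain ratio quoted earlier:

```python
def experience_returns(period_returns, pain_ratio=2.0):
    """Prospect-theory-adjusted returns: each loss is scaled by
    `pain_ratio`, while gains are felt at face value."""
    return [r if r >= 0 else r * pain_ratio for r in period_returns]

# A +5% then -5% stretch nets out roughly flat in reality,
# but "feels" like a net loss once the pain ratio is applied:
print(experience_returns([0.05, -0.05]))
```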

Size Factor

The size factor is the relative performance between small capitalization stocks and large capitalization stocks, with the idea being that small should outperform large over the long run.

What we can see is that while size has been a positive premium over the long run, even investors that only evaluate their portfolios on an annual basis have had a negative emotional experience.

Due to the asymmetric response to gains versus losses, we can see the pain of “volatility drag” in periods like the 1950s, where the size factor was largely flat in return, but the experience for investors was largely negative.

Value Factor

The value factor captures the relative performance of cheap stocks versus expensive ones.  Our anecdotal experience is that this is, by far and away, the most actively employed portfolio tilt for investors.

Unlike the size premium, we see that the long-term performance of the value factor is strong enough, and the historical frequency of underperformance limited enough, that an investor who checks their relative performance annually will feel like they ultimately ended up in the same place as the broad market.

At first review, this may seem disheartening. After all, over the long run value has delivered significant outperformance.

However, what this tells us is that for investors that review their portfolios at most annually, a value tilt can be employed without creating too much long-term relative anxiety. The investor will still feel like they are keeping up with the market benchmark, despite the emotional drags of prospect theory, and can in reality harvest long-term outperformance opportunities.

Momentum Factor

The momentum factor captures the relative performance of prior winners versus prior losers: investing in those stocks that have relatively outperformed their peers and shorting those that have underperformed.

While the value factor ended up nearly in the same place as the market for annual reviewers, the momentum factor ends up significantly positive.

Furthermore, the consistency of the momentum factor is so strong from the 1940s to the 2000s that even a monthly reviewer feels like they are treading water.

The trade-off appears in the dreaded momentum crashes (e.g. 1932 and 2009) when winners dramatically underperform losers. The crashes have historically tended to occur during strong market rebounds. From an emotional experience, this might as well be the apocalypse.

Even for an annual reviewer, we see that the emotional drawdown from 3/2009 to 11/2009 is almost 80%.

Quality Factor

The quality factor captures the relative performance of “high quality” stocks versus “junk stocks,” as measured by a variety of financial and performance metrics.

While the absolute return of the quality factor is nowhere near the absolute return of the momentum factor (over the same period, momentum returned nearly 90x while quality returned nearly 10x), it is one of the few factors where a quarterly reviewer has close to a net neutral emotional experience. This is likely due to the factor’s low volatility, which reduces the emotional drag caused by investors’ asymmetric response to positive and negative returns.

Anti-Beta (“Low Volatility”) Factor

Anti-beta (often referred to as “low volatility”) captures the relative outperformance of lower beta stocks versus higher beta stocks. Beta, in this case, is a measure of sensitivity to the overall market. It quantifies a stock’s exposure to systematic market risk.

Anti-beta has the distinction of being the only factor where even a quarterly reviewer has had a net positive experience.

This is due to two effects: a strong absolute return level (with the actual performance trumping even the momentum factor) and limited drag from volatility (as can be seen by how closely the annual review tracks the actual performance from 1945 to 1998).

Conclusion

At Newfound, we often say that the optimal portfolio is first and foremost the one investors can stick with. All too often, when it comes to active investing, we see investors go all in on a given approach without considering the emotional anxiety caused by relative underperformance.

The ability and discipline to stick with a strategy is just as important as the strategy itself when it comes to unlocking the potential of evidence-based active strategies.

What we find is that for each active approach, the strength of the anomaly versus its volatility and the frequency with which performance is reviewed will ultimately dictate the investor’s emotional experience. Less volatile premia may cause less of an emotional drag.

Yet perhaps the most powerful take away can be found in the following graph.

In the above chart, we construct a portfolio that holds an equal amount of each of the five factors, rebalanced monthly.

Not surprisingly, the benefits of diversification are so powerful that even an investor that evaluates their relative performance on a monthly basis is left with a positive emotional experience.  Once again, we find that diversification is hard to beat.
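The equal-weight blend charted above can be sketched simply: with monthly return data and monthly rebalancing, resetting to equal weights each month reduces to the cross-sectional average of that month’s factor returns. The function name and sample numbers below are illustrative:

```python
import numpy as np

def equal_weight_returns(factor_returns):
    """Monthly-rebalanced equal-weight blend of factor return series.
    factor_returns has shape (months, factors); because weights are reset
    to 1/N every month, each month's portfolio return is just the mean
    of that month's factor returns."""
    r = np.asarray(factor_returns, dtype=float)
    return r.mean(axis=1)

# One month of hypothetical returns for five factors
# (size, value, momentum, quality, anti-beta):
blend = equal_weight_returns([[0.02, -0.01, 0.03, 0.00, 0.01]])
```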

Contributor(s)

Corey Hoffstein

Corey Hoffstein is co-founder and Chief Investment Officer of Newfound Research. Investing at the intersection of quantitative and behavioral finance, Newfound Research is dedicated to helping clients achieve their long-term goals with research-driven, quantitatively-managed portfolios, while simultaneously acknowledging that the quality of...

WHY MULTIPLY BY SQRT(252) TO COMPUTE THE SHARPE RATIO

Editor’s note: this article was originally posted at AugmentedTrader.com.

This question comes up every time I teach Computational Investing. Here’s my attempt to create the best (final?) answer to this question.

In my courses I give the students the following equation to use when computing the Sharpe Ratio of a portfolio:

Sharpe Ratio = K * (average return – risk free rate) / standard deviation of return

Controversy emerges around the value of K. As originally formulated, the Sharpe Ratio is an annual value. We use K as a scaling factor to adjust for the cases when our data is sampled more frequently than annually. So, K = SQRT(12) if we sample monthly, or K = SQRT(252) if we sample the portfolio on every trading day.
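As a sketch in Python (function and variable names are mine, not from the course), the computation looks like this:

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio. K = sqrt(periods_per_year) rescales the
    per-period ratio: sqrt(252) for daily samples, sqrt(12) for monthly."""
    excess = np.asarray(returns, dtype=float) - risk_free_rate
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

daily = [0.001, -0.002, 0.003, 0.0, 0.002]
annualized = sharpe_ratio(daily)                          # K = sqrt(252)
as_monthly = sharpe_ratio(daily, periods_per_year=12)     # K = sqrt(12)
```

Note that changing `periods_per_year` only changes K: the two results above differ by exactly a factor of sqrt(12/252).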

How did we come up with these values for K? Are they correct? Let’s start with William Sharpe’s original 1994 paper, “The Sharpe Ratio.” Here’s how he defines his ratio: for a time period t, the differential return Dt is the return on the fund (Rft) minus the return on a benchmark (Rbt) over that period.

Dt = Rft – Rbt

We want to assess the ratio over many periods, say t = 1 to T. Note that these periods could be years, months, days, etc. Now let’s define two factors:

Davg = The mean value of Dt for t = 1 to T
Dstdev = The standard deviation of Dt for t = 1 to T

Using those two factors, Sharpe defines his ratio as

Sharpe Ratio = Davg / Dstdev

That’s it. Note that there is no “K” involved in this equation; it is just the ratio of those two numbers. As long as we’re comparing results for two funds sampled at the same frequency (say, annually), the comparison is valid. Sharpe points out that there will be problems when comparing ratios computed from measurements taken at different frequencies. He does not seek to address that problem in his paper.

Here’s where “K” comes in: suppose we’re interested in comparing the performance of two funds, one for which we have monthly data and another for which we have daily data. Introducing K lets us scale each result according to its measurement frequency. Our formula for this approximation is

K = SQRT(number of samples per year)

This will scale Sharpe Ratios for the various funds as if they were sampled annually. Unfortunately, if you dig more deeply into the math you will discover a flaw: if you take a single portfolio value time series and compute the Sharpe Ratio for it using different sample periods, say, weekly, monthly, and annually, the resulting computed Sharpe Ratios are not guaranteed to be related exactly as predicted by our K.
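A quick numerical sketch (with made-up returns) makes the mismatch concrete: compute the Sharpe ratio two ways from the same monthly series, once via K = sqrt(12) and once directly on compounded annual returns, and the two answers do not agree exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
# Twelve years of hypothetical monthly simple returns (illustrative only).
monthly = rng.normal(0.01, 0.04, size=144)

# Route 1: per-month Sharpe ratio scaled by K = sqrt(12).
k_scaled = np.sqrt(12) * monthly.mean() / monthly.std(ddof=1)

# Route 2: compound each year's 12 monthly returns into an annual return,
# then compute the Sharpe ratio directly on the 12 annual samples.
annual = (1 + monthly).reshape(12, 12).prod(axis=1) - 1
direct = annual.mean() / annual.std(ddof=1)

# k_scaled and direct are close, but not equal: K is an approximation.
```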

There is no simple way to find a conversion factor that will solve this correctly. K is just an approximation that works pretty well.

Why? The reason is that Sharpe uses the arithmetic mean in his ratio. In order for the “K Method” to work precisely it must be the case that annual return = 12 x average monthly return.  But it’s not. One way to solve the problem is to reformulate Sharpe’s original equation in terms of log returns. It is then feasible to work out the relationships in a consistent way. This is the reason why many analysts use log returns in their work.

But if we used log returns, we wouldn’t be using the Sharpe Ratio.

THE TOP 5 INVESTOR BIASES

Editor’s note: This was originally published at EducatedTrader.com, the website of the Independent Investor Institute, an organization dedicated to providing unbiased education to Canadian investors that Larry M. Berman, CMT, CTA, CFA cofounded. Larry is among the presenters at the Annual Symposium in April.

If someone were to pick you up in a helicopter, put blinders over your eyes, then drop you into the middle of a jungle, it’s likely you’d have a tough time lasting for any length of time.  Undoubtedly, it’s hard enough surviving in the jungle, let alone with blinders on. The stock market is a lot like the jungle—it’s a dangerous place for those who don’t know what they’re doing, or even those who think they know what they’re doing. It isn’t a far stretch to see how biases (a.k.a. “blinders”) can compound the challenges posed by the wilderness of the markets.

When it comes to investing, we all have the same biases. This is because our brains have all evolved the same way. The trick is to be able to recognize our biases in our decision-making processes and work on countering them with the right behaviors.

Bias #1: I Know Enough, Therefore I Know Better

We all like to think that we are not as influenced by biases as other people are, which is our first and biggest investing mistake. Even professionals in the financial industry have biases that lead them to make less-than-optimal decisions. In fact, the more we know about a subject, the more confident we are that our forecasts will be correct. The reality is that information quantity is no match for quality; it’s not about how much you know, but what you do with the information you have.

So let’s get one thing straight: you know less than you think you do when it comes to investing, and that’s not a bad thing. You will never know all there is to know about the markets, and you will never gather enough information to give you certainty that an investment will perform the way you want it to. What really matters is separating the facts from the stories. Check your sources when you research an investment, and make sure they’re credible and reliable. Also, don’t take information at face value. Instead, think carefully about how it was presented to you.

Bias #2: I See What I Want to See

Another big problem of ours as humans is that we tend to seek out information that confirms our beliefs rather than challenges them. It makes us feel good to listen to people who share our views, which means that we are likely to dismiss negative information on an investment that we favor. What’s even more interesting is that we tend to view information that contradicts our beliefs as biased itself.

To counter this thought process, we need to constantly seek out information and people that disagree with us. This is not because we want them to try and change our minds, but because we have to be able to understand and deconstruct the logic of the argument. If we can’t see the argument’s flaws, we should seriously reconsider our viewpoint.

Bias #3: Numbers are My Anchor

Anchoring is a term used to describe the tendency for us to stick closely to numbers that are presented to us. The most common example of this is anchoring to share prices. When we see a share price of, say $10, we tend to immediately believe that it reflects the underlying value of the company. Because today’s markets are highly liquid, company values don’t tend to stray significantly from their share prices. However, it is nevertheless important to come to your own conclusions about the value of a company. If there is a significant deviation between your evaluation and the share price, you may have a trading opportunity on your hands.

Bias #4: Good Performance Follows Good Performance

You’ve likely heard the old adage, “past performance is not an indicator of future performance.”  So why do so many investors—even analysts—make the mistake of assuming that a good company with solid earnings over the past several years will continue to perform well? This belief stems from a type of bias known as representativeness, a phenomenon where we use a company’s past and current performance to predict its future likelihood of success.  However, this bias can lead you down the wrong path, because future performance relies heavily on events and circumstances that will likely be quite different from those that exist today.

The key here is to determine a company’s competitive advantage, which is the strongest predictor of future success. Most companies out there are or will eventually become quite average, and over time, their performance will revert to the average. So you need to answer the question, “what qualities does the company have today that substantially distinguish it from its competitors so that it will continue to perform well in the future?”

Bias #5: A Loss Isn’t a Loss Until I Take It

The tendency to hold on to investments as they drop in price is all too common. So why do we cling to losers? There are several biases at play here. First, we tend to value the things we own more than the things that we don’t. Whether it’s a coffee mug or 1,000 shares of our favorite company, we typically want to sell the things we own for more than most buyers are willing to pay for them.

The second and biggest reason we don’t sell investments as quickly as we should is because of our aversion to taking losses. In general, we dislike incurring losses about 2.5 times more than we like making gains. We therefore tend to keep our losers longer than we should, and cut our winners sooner than we should. We rationalize the decision to keep declining investments by telling ourselves that the price will bounce back. Unfortunately, they tend to underperform the winners we have already sold.

The trick to avoiding this mental trap is to set up an investment strategy that requires you to buy and sell at pre-determined prices. To augment this strategy, you can “taper in” and “taper out” of investments, depending on their price movements. For example, if you plan to buy 1000 shares of a stock, start by purchasing 500 shares, then as the price moves in the direction you want it to go, buy 25% more. Continue to do so until you’ve reached your maximum number of shares. This same strategy applies to selling stocks.
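The taper-in plan described above — an initial 500-share tranche, then 25% increments as the price confirms — can be sketched as a simple schedule. The function name and default fractions are illustrative:

```python
def taper_in_schedule(target_shares, initial_fraction=0.5, step_fraction=0.25):
    """Sketch of a 'taper in' plan: buy an initial tranche, then add
    step_fraction of the target each time the price moves in your favor,
    until the maximum position is reached. Returns the tranche sizes."""
    bought = int(target_shares * initial_fraction)
    tranches = [bought]
    while bought < target_shares:
        step = min(int(target_shares * step_fraction), target_shares - bought)
        bought += step
        tranches.append(step)
    return tranches

# Matching the article's example: 1000 target shares -> 500, then 250, then 250.
plan = taper_in_schedule(1000)
```

The same schedule, run in reverse, sketches tapering out of a position.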

Easy to Learn, Hard to Do

Although it is easy to understand our biases, it is much more difficult to know when they are influencing our decisions. This is why planning your investment strategy is so important.  Planning allows you to predict scenarios before they arise and work out how to properly handle them. Don’t underestimate your brain’s ability to trip you up. If it happens to professional money managers and analysts, it will happen to you.

Contributor(s)

Tucker Balch, Ph.D.

Tucker Balch, Ph.D. is a former F-15 pilot, professor at Georgia Tech, and co-founder and CTO of Lucena Research, an investment software startup. His research focuses on topics that range from understanding social animal behavior to the challenges of applying Machine Learning...

CHART OF THE MONTH: CDS DATA OFTEN LEADS EQUITY PRICES

Since ending 2016 at a price of 72.23, Target shares have broken technical support and dropped by 24%. One quantitative factor that could have helped investors avoid this stock is the credit default swap spread. CDS spreads indicate the market’s perception of a company’s creditworthiness.  As spreads rise, credit quality is thought to deteriorate. In the case of Target, the rise in spreads from a low of 22 last March to the February 2017 high of 65 provided an early warning that Target shares were increasing in risk.

While there’s currently no inexpensive source for investors to see single-name CDS spreads, IHS Markit will soon be launching a report on Yahoo Finance that includes this data, along with a quantitative scoring system. It will also include Markit’s proprietary ETP flows, sector PMI, and short interest data.

Contributor(s)

Jason Meshnick, CMT

Jason Meshnick, CMT, is the Director of Product Management at Markit Digital, a division of IHS Markit. There, he creates well-known market analytics including the CNN Business Fear & Greed Index. His past career included work as a principal trader, market maker,...