Technically Speaking, October 2015

LETTER FROM THE EDITOR

In recent issues, we have been highlighting content from the Annual Symposium. As I noted last month, “That meeting lasts just a few days but it truly does provide months’ worth of ideas for attendees.” In this month’s issue of Technically Speaking we highlight how local Chapter meetings can also provide ideas to improve your analysis. The Denver Chapter recently hosted a meeting on back testing and the discussion included best practices related to data. A summary of that presentation kicks off this month’s issue. In researching this topic further, I discovered the value of using data free from pre-inclusion bias. Testing by Cesar Alvarez is included in this issue to quantify that problem. As Cesar notes, “People often write about systems they have developed using the current Nasdaq 100 or S&P500 stocks and have tested back for 5 to 10 years. Looking at this table shows that one should completely ignore those results. The difference between the two results is scary. Using the current list would make one think that they had a great system but in actuality it was much worse.” His test results are included in the article. This month’s issue also includes some quantified data about the best time of day to trade ETFs along with some articles making a convincing bearish case for U.S. stocks. For those wondering where to turn in a bear market, the answer could be in preferred stocks, as data from Global Financial Data shows in another article. We hope you find some valuable information in this month’s magazine. Please send any comments on Technically Speaking to editor@mta.org.

Sincerely,
Michael Carr

What's Inside...

INTRODUCTION TO BACK TESTING

Editor’s note: Matt Radtke made a presentation to the Denver Chapter of the MTA on September 23. In this presentation,...

Read More

HOW MUCH DOES NOT HAVING SURVIVORSHIP FREE DATA CHANGE TEST RESULTS?

Editor’s note: In his presentation to the Denver Chapter, Matt Radtke stressed the importance of using data free from survivorship...

Read More

RALPH ACAMPORA, CMT HONORED BY STA

The Security Traders Association (STA) recently honored Ralph Acampora, CMT, with...

Read More

CAREFUL TRADING ETFS MOO AND MOC

Editor’s note: this was originally published by KCG and is reprinted here with permission.

Market on Close (MOC) ETF trading...

Read More

BLOOMBERG BRIEF HIGHLIGHTS: CLASSIC CHART PATTERN CARRIES OMINOUS IMPLICATIONS FOR S&P 500

Editor’s note: This article was originally published in the September 24 issue of Bloomberg Brief: Technical Strategies. Below is an...

Read More

VOLATILITY AND TRADING: RE-DEFINING SUPPORT AND RESISTANCE

Editor’s note: This article was originally published at The Educated Analyst, an education blog maintained by Market Analyst.

Volatility Makes...

Read More

JOE GRANVILLE’S INDICATORS POINT TO A POSSIBLE BEAR MARKET

Editor’s note: Jerry Blythe was a friend of Joe Granville’s who computerized many of Joe’s indicators and sent a report...

Read More

CHART OF THE MONTH: SOCIAL MEDIA SENTIMENT

Studying Sentiment On StockTwits During A 10% Correction by Stefan Cheplick

Editor’s note: this was originally published on August 29,...

Read More

HOW SOCIAL MEDIA CAN MEASURE VOLATILITY AND FEAR IN THE STOCK MARKET

Editor’s note: sentiment analysis has long been a part of technical analysis. It has also been a subject of near...

Read More

HOW STOCK MARKET SENTIMENT LOOKS IN VOLATILE TIMES

Editor’s note: this was originally published on August 29, 2015 at The StockTwits Blog and is reposted with permission.

After...

Read More

RESEARCH UPDATE: IS SMART BETA DUMB?

Editor’s note: this paper was originally published at the Social Science Research Network and an electronic copy is available at...

Read More

ETHICS CORNER: EVALUATING INVESTMENT MANAGERS

Editor’s note: this case study is adapted from Lawton, Ethics in Practice. The full text is available at the CFA...

Read More

PREFERRED STOCKS IN A RISING INTEREST RATE ENVIRONMENT

Editor’s note: this article was originally published at the Global Financial Data blog and is reprinted here with permission.

Even...

Read More

PHASES & CYCLES®: THE RELIEF RALLY WAS WELCOME, BUT LATE-AUGUST LOWS ARE LIKELY TO BE RE-TESTED

Editor’s note: This was originally published on September 21, 2015 and is republished here with permission.

After a multi-year advance...

Read More

INTRODUCTION TO BACK TESTING

Editor’s note: Matt Radtke made a presentation to the Denver Chapter of the MTA on September 23. In this presentation, Matt provides an introduction to back testing trading strategies.

Back testing is the process of applying a set of rules to historical data with the goal of assessing the strategy’s effectiveness.  Of course, it’s important to remember the standard caveat that past performance is not a guarantee of future results.  While back testing will not precisely forecast the future it does provide a means of discovering reasonable expectations. 

To produce valid back test results, it is essential to use a quantified approach. A back test begins by defining a set of quantified rules for ranking, entering, exiting, and managing each trade. The process then applies those rules consistently to all members of your trading universe, as that universe existed at the time of the trade. 

Back tests can range from relatively simple to quite complex. No matter how complex the test, its output will consist of metrics, which are any type of measurement or summary information. One example of a common metric is frequency of occurrence. For example, we might be interested to know how many times the S&P 500 has declined by more than 5% in a single day since January 1, 2001. 

The answer is 13, but the distribution of that frequency provides additional insights. Most of those occurrences, 11 of the 13, were seen in 2008. There was one additional occurrence in each of 2009 and 2011. Frequency of occurrence is an example of how any single metric provides some information but must often be combined with information from other metrics to support a more comprehensive conclusion.
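A metric like this is simple to compute directly from price history. The sketch below is a minimal Python example, assuming a CSV of daily S&P 500 closes; the file name and column names are placeholders for your own data source.

    import pandas as pd

    # Daily S&P 500 closes; "sp500.csv" and its columns are placeholders,
    # substitute your own data source.
    px = pd.read_csv("sp500.csv", parse_dates=["Date"], index_col="Date")

    # Daily percent changes since January 1, 2001
    rets = px["Close"].pct_change().loc["2001-01-01":]

    # Frequency of occurrence: single-day declines of more than 5%
    big_drops = rets[rets < -0.05]
    print(len(big_drops))                                   # total occurrences
    print(big_drops.groupby(big_drops.index.year).size())   # distribution by year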

Defining metrics is the first step in back testing. Important metrics should be defined in advance. After they are defined, there are several different kinds of tests that should be run. Again, each test provides important information but the complete picture comes from assembling information provided by different tests.

One type of test is the All Trades Test. An All Trades Test uses quantified entry and exit rules to define trades. However, they do not impose any of the limitations of real trading such as capital requirements or position sizing.

The purpose of an All Trades Test is to answer questions such as:

  • If I could take every trade signal generated by my strategy, what would be my average % gain or loss over the long term?
  • What percentage of my trades would be winners?
  • How many signals would my system generate?
  • Would the signals be bunched up on certain days or in certain years?

However, there are also disadvantages to an All Trades Test. Most importantly, it does not simulate real trading, and therefore does not reflect achievable results. You cannot accurately calculate metrics that are important to most traders, like Compound Annual Return.
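Because an All Trades Test ignores capital constraints, its output reduces to a table of signals and their outcomes, and the questions above fall out of simple aggregation. Below is a minimal sketch using a hypothetical trades table; the column names and values are purely illustrative.

    import pandas as pd

    # Hypothetical output of an All Trades Test: one row per signal,
    # with each trade's percent gain or loss. Values are illustrative.
    trades = pd.DataFrame({
        "entry_date": pd.to_datetime(["2012-03-05", "2012-03-05", "2013-07-11"]),
        "pct_return": [2.1, -0.8, 4.3],
    })

    print(trades["pct_return"].mean())        # average % gain/loss per trade
    print((trades["pct_return"] > 0).mean())  # fraction of winning trades
    print(len(trades))                        # total number of signals

    # Are the signals bunched up on certain days or in certain years?
    print(trades.groupby("entry_date").size().max())
    print(trades.groupby(trades["entry_date"].dt.year).size())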

Another test is the All Days Test. An All Days Test is very similar to the All Trades Test that we just discussed. The primary difference is that the All Days Test allows you to take every entry signal even if you’re already in a trade.

This approach might be useful if you’re trying to answer a question such as “what is the average 5-day return of a stock that has an RSI(2) value less than 10?”
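That question maps directly onto code. The sketch below is one way to compute it for a single stock, assuming a hypothetical CSV of daily closes; note that RSI formulas vary slightly by vendor, so treat this Wilder-style version as an approximation.

    import pandas as pd

    def rsi(close: pd.Series, period: int = 2) -> pd.Series:
        """Wilder-style RSI; vendor formulas differ slightly."""
        delta = close.diff()
        gain = delta.clip(lower=0)
        loss = -delta.clip(upper=0)
        avg_gain = gain.ewm(alpha=1 / period, adjust=False).mean()
        avg_loss = loss.ewm(alpha=1 / period, adjust=False).mean()
        return 100 - 100 / (1 + avg_gain / avg_loss)

    # Hypothetical file of daily closes for one stock.
    px = pd.read_csv("stock.csv", parse_dates=["Date"], index_col="Date")["Close"]

    signal = rsi(px, 2) < 10          # every day the condition holds, in or out
    fwd_5d = px.shift(-5) / px - 1    # 5-day forward return from each day
    print(fwd_5d[signal].mean())      # average 5-day return after a signal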

The most significant test might be the Portfolio Test. A Portfolio Test is intended to simulate your actual trading as closely as possible. In addition to quantified entry and exit rules, it must incorporate all of the following:

  1. A position sizing algorithm
  2. Ranking of multiple trade candidates
  3. Commissions and fees
  4. Use of margin, if any

Running a Portfolio Test allows us to see what would have happened if we had diligently traded our strategy rules over some period of time in the past.

We can also accurately calculate portfolio metrics that could not be derived from the other types of tests, including:

  • Compound Annual Return (CAR)
  • Max Drawdown
  • Average Overnight Exposure
  • Sharpe Ratio, Sortino Ratio, Ulcer Index, CAR/MDD and other similar metrics
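Given the daily equity curve produced by a Portfolio Test, two of these metrics take only a few lines to compute. A minimal sketch, assuming a hypothetical CSV holding the equity curve:

    import pandas as pd

    # Hypothetical daily equity curve produced by a Portfolio Test.
    equity = pd.read_csv("equity_curve.csv", parse_dates=["Date"],
                         index_col="Date")["Equity"]

    years = (equity.index[-1] - equity.index[0]).days / 365.25
    car = (equity.iloc[-1] / equity.iloc[0]) ** (1 / years) - 1  # Compound Annual Return

    drawdown = equity / equity.cummax() - 1   # percent below the running peak
    max_dd = drawdown.min()                   # Max Drawdown (a negative number)

    print(f"CAR {car:.2%}  MaxDD {max_dd:.2%}  CAR/MDD {car / abs(max_dd):.2f}")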

A logical question is: given the value of the Portfolio Test, why not jump immediately to Portfolio Testing every time you develop a new strategy? One reason is that position sizing can make a tremendous difference in performance. Dr. Van Tharp and Dr. Howard Bandy have both written extensively on this topic. Other tests may offer insights into the optimal position sizing strategies. Another reason additional tests are important is that if you have many more trade signals than you can actually take, the ranking function that you use can skew the results. If you have no baseline to compare to, you may reject a strategy simply because your ranking rules happen to favor the less profitable trades. The All Trades Test might provide that baseline.

To perform a back test requires a minimum of two things — historical price data and back testing tools.

Important considerations for historical price and volume data include problems associated with delisted securities and survivorship bias. Ideally, the back test includes results for stocks that no longer trade; otherwise the results could be inflated because the failures are missing from the test universe. There is also the question of whether data should be adjusted for splits and dividends. While there is general agreement that prices should be adjusted for splits, there are arguments for and against adjusting for dividends.

Whether you adjust for dividends or not, it will be important to understand how accurate your data is. Not all vendors are equally reliable, and it can take testing to determine which vendors provide the most accurate data.

Once you have data, there are a number of back testing tools available. Popular ones include AmiBroker, TradeStation, Ninja Trader and others. Microsoft Excel can also be used for back testing. While the process of identifying trades with Excel would be cumbersome, the process of evaluating the test results is often enhanced with Excel. In particular, Excel offers the ability to easily sort and filter results. Pivot charts could also assist in evaluating back test data.

There are some basic steps common to all back tests.

  1. Start with a thesis, for example “volatile, oversold stocks tend to revert to the mean, and thus represent buying opportunities”.
  2. Express your thesis as a set of quantified rules. In our example, we might use:
    • Minimum Price & Volume (as traded)
    • 100-Day Historical Volatility > X
    • 2-Day RSI < Y for Entry
    • 2-Day RSI > Z for Exit
  3. Build and run the back test.
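Expressed in code, the quantified rules from step 2 might look like the sketch below. The thresholds X, Y and Z, the minimum price and volume levels, and the file name are placeholders, and the RSI formulation is one common variant.

    import pandas as pd

    # Hypothetical file of daily bars with Close and Volume columns.
    df = pd.read_csv("bars.csv", parse_dates=["Date"], index_col="Date")
    X, Y, Z = 40, 10, 70   # placeholder thresholds for the rules above

    rets = df["Close"].pct_change()
    hv100 = rets.rolling(100).std() * (252 ** 0.5) * 100   # 100-day volatility, annualized %

    delta = df["Close"].diff()
    avg_gain = delta.clip(lower=0).ewm(alpha=1 / 2, adjust=False).mean()
    avg_loss = (-delta.clip(upper=0)).ewm(alpha=1 / 2, adjust=False).mean()
    rsi2 = 100 - 100 / (1 + avg_gain / avg_loss)           # 2-day RSI

    liquid = (df["Close"] > 5) & (df["Volume"] > 500_000)  # min price & volume (placeholders)
    entry = liquid & (hv100 > X) & (rsi2 < Y)
    exit_ = rsi2 > Z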

You should also identify the limitations of your approach. For example, you need to consider whether or not there are enough trading signals to draw valid conclusions on future performance. This question requires identifying how many trading signals are “enough” and it can be a challenging question.

Test results should always be put through the “Sanity Checks” test. You should be sure the results are NOT too good to be true! You should also check to be sure that when you are testing different strategy parameter values, there is a reasonable and predictable progression of results. For example, if you are testing a moving average there should be a nearly linear relationship in results as you increase or decrease the number of days in the moving average.

Sanity Checks are one way to be sure you have avoided common back testing pitfalls. To make your back testing results as representative of real trading as possible, you will need to avoid developing “untradeable” rules. These can be rules that look ahead in the data. As an example, a rule to enter the trade on an intraday limit order, using today’s 14-day average true range (ATR(14)) value as the stop amount would require looking ahead to know today’s ATR(14) value. This isn’t possible in real trading and needs to be avoided when defining rules.
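The usual cure is to lag the indicator so each day's rule only uses values known before the trade. A minimal sketch of the ATR(14) case; the file and column names are placeholders.

    import pandas as pd

    # Hypothetical file of daily bars with High, Low and Close columns.
    df = pd.read_csv("bars.csv", parse_dates=["Date"], index_col="Date")

    prev_close = df["Close"].shift(1)
    true_range = pd.concat([
        df["High"] - df["Low"],
        (df["High"] - prev_close).abs(),
        (df["Low"] - prev_close).abs(),
    ], axis=1).max(axis=1)
    atr14 = true_range.ewm(alpha=1 / 14, adjust=False).mean()

    # WRONG: an intraday stop built from today's ATR(14) looks ahead,
    # because today's value isn't known until the close.
    stop_lookahead = df["Close"] - 2 * atr14
    # RIGHT: lag by one bar so the rule only uses data known at entry.
    stop_tradeable = df["Close"].shift(1) - 2 * atr14.shift(1)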

Another potential pitfall in back testing is finding trades that are impossible to place. This could be the result of not allowing for scan or calculation time. For example, you might test trading at the open by shorting the five stocks with the highest historic volatility which gapped up by at least 3% from yesterday’s close. The problem with this idea is that you will not have time to collect opening data, scan for gaps, sort the list of stocks with gaps by historic volatility and then place trades at the opening price. It’s important to consider how you will implement your trades when designing a back test.

It’s also important to avoid curve fitting or cherry picking. If an optimization run shows the only variation of your strategy with attractive results is the one that uses the entry rule “RSI(9) < 17”, then you probably don’t have a robust strategy!  You should observe relatively stable performance metrics over nearby parameters for a valid trading rule.
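One simple stability check is to compare each parameter's result with the results of its neighbors from the optimization run. The sketch below uses made-up CAR values purely for illustration; substitute the output of your own test runs.

    import pandas as pd

    # Made-up CAR values from an optimization sweep over the RSI entry
    # threshold; substitute the output of your own test runs.
    car_by_threshold = pd.Series(
        {5: 0.08, 10: 0.09, 15: 0.11, 17: 0.24, 20: 0.10, 25: 0.09}
    )

    # Compare each value with the mean of its neighbors: an isolated
    # spike (the 0.24 at RSI < 17 here) is a red flag for curve fitting.
    neighbors = (car_by_threshold.shift(1) + car_by_threshold.shift(-1)) / 2
    spike = (car_by_threshold - neighbors).abs()
    print(spike.idxmax(), spike.max())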

Contributor(s)

Matt Radtke

Matt Radtke is a developer at Arrow Electronics and has 25 years of software development experience in companies large and small, including Hewlett-Packard and Bell Northern Research. During his ten years with a Boulder, CO software firm, he rose from the position of senior...

HOW MUCH DOES NOT HAVING SURVIVORSHIP FREE DATA CHANGE TEST RESULTS?

Editor’s note: In his presentation to the Denver Chapter, Matt Radtke stressed the importance of using data free from survivorship bias. Cesar Alvarez has quantified this problem in this blog post which was originally published at Alvarez Quant Trading in February 2014. It is reprinted here with permission.

Over the last month several people have asked me how important it is to have survivorship-free data. For any researcher it is important to understand how differences in the data can change your results. We will be exploring three potential data issues: as-traded prices, delisted stocks (survivorship bias), and historical index constituents (pre-inclusion bias).

My data source is CSI Data, which includes delisted stocks and as-traded prices. Unfortunately they are no longer selling this package to individuals. Norgate Investor Services supplies delisted stocks and as-traded pricing. I welcome comments from people who have used them on the quality of their data and customer service.

General Information

For the system used for the ‘As Traded’ and ‘Delisted’ tests, entry and exit are on the open, there is a maximum of 10 positions, and signals are ranked from high to low by 100 day historical volatility. Test results are from 1/1/2004 to 12/31/2013.

As Traded Prices

Setup

  • The 21 day Moving Average of Close*Volume is greater than $(5,15) Million
  • RSI(2) is less than 5
  • Close down three or more days in a row
  • 100 day Historical Volatility is greater than 40
  • Close greater than 200 day Moving Average (tested with and without this rule)
  • (As-traded price, adjusted price) greater than $5
  • Using delisted stocks

Buy

  • Previous day is a setup, place a limit order 5% below previous day’s close

Sell

  • Close greater than 5 day Moving Average

As traded price is the actual price a stock traded at on a particular day, before adjustments for splits, dividends, and one-time dividends. For example, you may have a rule that you do not trade stocks under $10. If you ran a test back to 1996, after split and dividend adjustment MSFT’s price is around $7, but MSFT was actually trading at around $150. One would skip this stock if they did not have as-traded pricing.
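The sketch below illustrates the point with that MSFT example; the column names are hypothetical stand-ins for a vendor's adjusted and unadjusted price series.

    import pandas as pd

    # Hypothetical 1996-era MSFT row: roughly $7 after later split and
    # dividend adjustments, but actually trading near $150 at the time.
    df = pd.DataFrame({
        "symbol": ["MSFT"],
        "close_adjusted": [7.0],
        "close_as_traded": [150.0],
    })

    print(df[df["close_as_traded"] > 10])   # correct: MSFT passes a $10 filter
    print(df[df["close_adjusted"] > 10])    # wrong: empty, MSFT gets skipped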

I had never run this test before and these results surprised me. Overall there is no significant difference in using as-traded price vs. adjusted price. The following tests use prices between $1 and $5.

Again there is no significant difference in using as-traded price vs. adjusted price.

Delisted Stocks (Survivorship Bias)

Setup

  • The 21 day Moving Average of Close*Volume is greater than $(5,15) Million
  • RSI(2) is less than 5
  • Close down three or more days in a row
  • 100 day Historical Volatility is greater than 40
  • Close greater than 200 day Moving Average (tested with and without this rule)
  • As-traded price greater than $5
  • With and without delisted stocks

Buy

  • Previous day is a setup, place a limit order 5% below previous day’s close

Sell

  • Close greater than 5 day Moving Average

I did this test about 8 years ago with similar results. For this mean reversion system, adding the delisted stocks improves the results. The improvement without the 200 day moving average rule is huge. Can we generalize that all mean reversion systems will get better with delisted stocks? No.

Trend Following System

For this trend-following system (I cannot share the rules), the results got worse. CAR went down, MDD went up and Avg %p/l dropped dramatically. Not using survivorship-free data would hurt you in this case.

Historical Index Constituents (Pre-inclusion Bias)

Pre-inclusion bias is using today’s index constituents as your trading universe and assuming those stocks were always in the index during your testing period. For example, if one were testing back to 2004, GOOG did not enter the S&P500 index until early 2006 at a price of $390, but your testing could potentially trade GOOG during the huge rise from $100 to $300.

Rules

  • It is the first trading day of the month
  • Stock is member of the S&P500 (on trading date vs as of today)
  • S&P500 closes above its 200 day moving average (with and without this rule)
  • Rank stocks by their six month returns
  • Buy the 10 best performing stocks at the close
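Avoiding pre-inclusion bias therefore requires a point-in-time membership table and a rule that checks membership on the trade date. A minimal sketch with a hypothetical table layout; the GOOG entry date follows the article, and "XYZ" is an invented delisted name.

    import pandas as pd

    # Hypothetical point-in-time membership table: one row per stint a
    # stock spent in the index; a current member has exit = NaT.
    members = pd.DataFrame({
        "symbol": ["GOOG", "XYZ"],
        "entry": pd.to_datetime(["2006-03-31", "1999-01-04"]),
        "exit": pd.to_datetime([pd.NaT, "2005-06-30"]),
    })

    def in_index(symbol: str, date: pd.Timestamp) -> bool:
        """Membership on the trade date, not as of today."""
        rows = members[members["symbol"] == symbol]
        exited = rows["exit"].fillna(pd.Timestamp.max)
        return bool(((rows["entry"] <= date) & (date <= exited)).any())

    print(in_index("GOOG", pd.Timestamp("2004-06-01")))   # False: pre-inclusion
    print(in_index("GOOG", pd.Timestamp("2007-06-01")))   # True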


People often write about systems they have developed using the current Nasdaq 100 or S&P500 stocks and have tested back for 5 to 10 years. Looking at this table shows that one should completely ignore those results. The difference between the two results is scary. Using the current list would make one think that they had a great system but in actuality it was much worse.

Final Thoughts

Good data is important. We have lots of landmines to avoid when testing but avoiding survivorship bias and pre-inclusion
bias is easy to do. If you test on a stock universe, buy the delisted stock data.

Spreadsheet

If you’re interested in a spreadsheet of my testing results, click here, enter your information at the bottom of the page, and I will send you a link to the spreadsheet. The spreadsheet contains more variations I tested along with yearly returns.

Contributor(s)

Cesar Alvarez

For the last six years, Cesar Alvarez has written for his popular quant blog, Alvarez Quant Trading, helping traders learn about the markets. He spent nine years as the Director of Research for Connors Research and TradingMarkets.com. Numerous strategies he created have...

RALPH ACAMPORA, CMT HONORED BY STA

The Security Traders Association (STA) recently honored Ralph Acampora, CMT, with the 2015 Dictum Meum Pactum Award. The Dictum Meum Pactum Award was established in 2002 to recognize individuals whose contributions to the industry, their firms, STA and their communities are consistent with the ideals that STA has supported since its inception.

“Dictum meum pactum” is a Latin phrase meaning “my word is my bond.” This phrase has been the motto of the London Stock Exchange since 1801 and symbolizes the trust traders place in each other.

Ralph is a pioneer in the development of market analytics and has a global reputation as a market historian and a technical analyst, providing unique insights on market timing and related investment strategy issues. Ralph was previously the New York Institute of Finance’s Director of Technical Analysis Studies and taught at the institute for nearly 40 years.

Before joining NYIF, he was Director of Technical Research at Knight Equity Markets. Prior to this, he worked for 15 years at Prudential Equity Groups as its Director of Technical Analysis. Ralph Acampora is one of Wall Street’s most respected technical analysts and has been consistently ranked by Institutional Investor for more than ten years. He is regularly consulted for his market opinion by the major business news networks as well as national financial publications.

He is a Chartered Market Technician (CMT), a designation he helped create and which is now recognized by the National Association of Securities Dealers as the equivalent of a Chartered Financial Analyst (CFA).

The STA presented the award formally on October 1, 2015 at its 82nd Annual Market Structure Conference.

The Security Traders Association (STA) was formed at a pivotal time in the nation’s economic history – as the Roosevelt Administration’s New Deal promised to move the United States from the grips of the Great Depression to prosperity. The Securities Act had just become law and the Securities and Exchange Commission had been formed to regulate the issuance and sale of corporate securities to investors and bolster public confidence in the stock market. The STA (formerly NSTA) was born in the Windy City, when the Chicago Bond Traders Club invited security traders across the Midwest to join them at their annual outing on August 21, 1934.

Contributor(s)

CAREFUL TRADING ETFS MOO AND MOC

Editor’s note: this was originally published by KCG and is reprinted here with permission.

Market on Close (MOC) ETF trading can cause volume shocks that disconnect ETFs from net asset value (NAV). Illiquid and high beta ETFs seem most at risk of mispricing in MOC. This creates tracking errors that make ETF PMs look bad too.

Crib Sheet:

  • ETFs depend on arbitrageurs to hold their price at fair value (NAV). And arbitrageurs depend on having a cheap and reliable hedge to offset risks.
  • The close is one time that arbitrageurs can’t hedge. This makes market making in ETFs in MOC more risky.
  • Data shows that disconnects from NAV in the MOC are common, but they are typically pretty small for most US ETFs, most of the time. This is good news for investors. It indicates that the markets are surprisingly efficient despite the lack of riskless arbitrage.
  • However, disconnects do occur, mostly in less-liquid ETFs and those with higher beta, which may indicate that investors are paying more for MOC liquidity than they realize.
  • Importantly, most benchmarks track index close, not ETF close. Consequently, we recommend investors with large MOC trades target NAV-close. This can be done via our ETF desk.

The importance of arbitrage for ETFs

In our recent report on intraday ETF trading we highlighted that 90% of US ETFs trade inside the spreads of their underlying baskets – effectively a no-arbitrage zone (exhibit 2). In addition, 42% of ETFs never traded in the arb-zone – accounting for 65% of ETF value traded.

But what about on the open (MOO) and close (MOC)? Well, that’s where things can get interesting.

ETFs typically trade very little MOC

Most stocks trade around 8% of their ADV on the market close (as we show in our September 2014 chartbook). However, ETFs tend to be more active in the morning – and many have a muted close. Exhibit 1 shows that most ETFs trade less than 2% of their ADV in the close.

There is no arb on the close

It’s good that large ETF trades typically avoid the close, because on the close it isn’t possible to arbitrage an ETF. Whenever ETFs are closed to arbitrage, spreads tend to widen and the link between the ETF and NAV weakens.

In the US close, both ETFs and underlying stocks close at the same time. Because of this it’s not possible for market makers to execute a stock hedge once they take on an ETF position. In fact, one of the only hedging tools available after 4pm is SPU futures, which trade until 4:15pm.

To account for this risk, market makers are less able to absorb large trades without moving ETF prices away from the last basket value. This can cause MOC to disconnect from NAV.

The MOC results are better than we expected

Despite all this, the average tracking of NAV by ETFs into the close is better than we had expected – especially given the volume shocks that we do see on the close (see appendix).

In many cases the average deviation is around the same as the intraday spread on the ETF (Exhibit 1 shows the majority of ETFs close within 5 bps of NAV on average).

Does the transparent US close help?

The US has a very transparent close. MOC orders need to be in the system 15 minutes early – and any stock with a large imbalance will be published to the whole market.

This gives traders a chance to pre-hedge a position and offset the close imbalance. Although this may explain the relatively small disconnects for most ETFs on the close, academic studies show this pre-hedging can also move the market, typically very efficiently. So it’s important not to confuse good tracking to the close with large trades being “free” from impact.

  • If you want ETF prices that are close to NAV, avoid both the open and the close.
  • The open is harder to arbitrage as spreads in the underlying are wide and some stocks may be gapping on overnight news.

Avoiding the biggest disconnects

Although average disconnects from NAV look benign, nobody should want to be on the wrong end of a MOC mispricing, especially if it was caused by their own trade. Looking at our findings there are three factors that should minimize the chance of causing an outlier:

1.  Avoid volume shocks

In theory, disconnects should happen when large trades impact the liquidity available on the close. However, thanks to the transparency of the US close, we found that unusually large close volumes were often not a statistically significant predictor of price disconnect (we discuss this further in the appendix).

However, the stocks with the largest average disconnects tend to be illiquid to start, with even more illiquid MOCs. In exhibit 3, we see a spike in average disconnect for ETFs that trade less than $100,000 on the close. Many of these are also small circles, indicating the MOC is small in notional as well as a percent of ADV.

  • Many of the ETFs with the most deviations from NAV are levered ETFs.
  • Levered ETFs are even harder for arbitrageurs to trade into the close because the ETF itself needs to trade to recalibrate the fund for the next day.

In reality, most of the 1600+ ETFs don’t regularly trade on the close – so it’s easy for a medium-sized trade in a small ETF to slip under the radar at a busy time like the close.

2.  Beware high-beta & volatile ETFs

SPY with its beta of one is very easy to hedge, even after 4pm, via SPU futures. This helps keep its disconnect from NAV low despite reasonable MOC volumes. In contrast, higher beta and more volatile ETFs are harder to hedge. Exhibit 4 shows the clear relationship between historic ETF volatility and the frequency of large disconnects (over 10bps). The green color scale shows that Beta is also typically much higher for the stocks with more frequent large disconnects.

3.  Why not trade the underlying instead?

Rather than risk pushing an ETF away from NAV, and the underlying index close, investors could execute a NAV-close trade (talk to the desk). A key twofold benefit of this is that you tap into the underlying basket liquidity at the time when stocks are at their most liquid. Both factors should make it cheaper to trade.

PMs can blame traders for bad TE

Typically the amount of risk you want an index PM to take is small, ideally zero. And tracking error (TE) is the most common metric to measure the amount of risk an index portfolio manager is taking.

But the ETF industry tends to use TE incorrectly, not recognizing two problems with their calculation:

1.  You shouldn’t compare close-to-close ETF returns. To measure the risk of the ETF portfolio, you would calculate tracking error as the standard deviation of the differences in daily returns of the portfolio vs the benchmark.

Unfortunately, the industry tends to calculate TE using the ETF prices (not the portfolio). This means all the trading-related disconnects from NAV (that we discussed above) are included in the TE calculation.

2.  Annualizing TE exaggerates trading disconnects. It makes sense to convert the daily return difference into an annual number. That way it can be easily compared to per-annum returns and outperformance. Doing this, however, involves multiplying the standard daily deviation by the square root of time (√252). For example, the standard disconnect for SPY in Exhibit 1 is just 2.9 bps per day – around the same as the underlying basket spread. But this extrapolates to a tracking error of over 46 bps (exhibits 5 and 6).

The √252 scaling assumes the daily differences are independent over the rest of the year, but there is a natural pull to NAV in mispriced (overbought or oversold) ETFs. For example, the autocorrelation of the daily disconnect for SPY is -46%. This strongly negative number shows that an overshoot is very likely to be corrected the next day.
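To make the arithmetic concrete, here is a minimal sketch of both calculations, assuming a hypothetical file of daily ETF-versus-benchmark return differences; the figures in the comments echo the SPY numbers quoted above.

    import numpy as np
    import pandas as pd

    # Hypothetical file of daily ETF-minus-benchmark return differences
    # (decimal returns); file and column names are placeholders.
    diff = pd.read_csv("spy_vs_index.csv", parse_dates=["Date"],
                       index_col="Date")["return_diff"]

    daily_te = diff.std()                # e.g. ~2.9 bps per day for SPY
    annual_te = daily_te * np.sqrt(252)  # ~46 bps after sqrt-of-time scaling
    autocorr = diff.autocorr(lag=1)      # ~-46% for SPY: overshoots revert

    # The sqrt(252) step assumes independent daily differences; strongly
    # negative autocorrelation means the annual figure overstates risk.
    print(f"daily {daily_te * 1e4:.1f} bps, annual {annual_te * 1e4:.1f} bps, "
          f"lag-1 autocorrelation {autocorr:.0%}")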

Consequently, industry TEs don’t say much about the portfolio’s performance. In fact, TE is artificially high for ETFs.

  • The tracking in ETFs is self-correcting.
  • An ETF that closes rich because of a large buy trade typically underperforms next day, as arbitrageurs pull the ETF back to NAV.

A better way to assess ETF tracking

The issues above are key reasons we don’t like to use tracking error to measure ETFs or ETF managers. There are better ways. In fact, ETF.com utilizes a measure they call “tracking difference” to evaluate how well index-tracking ETFs work.

Using a metric like tracking difference is similar to what you can see when you eyeball longer-term data (exhibit 7a below). This is also much more useful when assessing the tracking for international ETFs whose NAV close is at a different time from the US ETF’s close.

MOO is also more expensive to trade

At the open, stocks are all adjusting to the overnight news. Consequently, the underlying stocks tend to trade with more volatility and wider spreads – as we highlight in our September Chartbook. This adds to the risk and cost of market making in the ETF too. So it’s not surprising to see that ETF spreads are also wider in the early part of the day (exhibit 8).

In fact, in the first minute of trading, when some stocks may not have officially opened, we saw the median ETF spread at 33 bps. That’s almost six times wider than the spread at the end of the day.

Typically, wider spreads and more volatility make it more expensive to trade stocks and ETFs. That means that unless you’re one of the investors with a macro trade based on overnight news, you’ve got to be careful not to trade too aggressively in the first 30 minutes of the day and incur unnecessarily high trading costs.

APPENDIX: How did we do this?

We looked at just under one year of trading data, for ETFs with US underlyings so NAV was in sync with the US Close. We also limited our sample to ETFs with consistent MOC trading – a total of 133 ETFs are in the sample.

We collected daily MOC volumes on a stock-by-stock basis and compared them to average MOC trading for that stock.  We then looked at close prices of the NAV basket and the ETF, and computed the disconnects (rich or cheap) from NAV.

Finally, we compared the disconnects from NAV (in absolute value space) with the volume shocks on the close. This created charts similar to exhibit A, and a whole lot of metrics for each ETF, which we’ve used to compare ETFs in this report.
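A minimal sketch of that methodology, with a hypothetical per-ETF data layout and a placeholder averaging window:

    import pandas as pd

    # Hypothetical per-ETF daily table with MOC volume, the ETF's official
    # close, and the basket NAV at the close; the 60-day averaging window
    # is a placeholder.
    etf = pd.read_csv("etf_daily.csv", parse_dates=["Date"], index_col="Date")

    vol_shock = etf["moc_volume"] / etf["moc_volume"].rolling(60).mean()
    disconnect_bps = (etf["close"] / etf["nav"] - 1).abs() * 1e4   # rich or cheap

    r2 = vol_shock.corr(disconnect_bps) ** 2   # how well shocks predict disconnects
    print(f"R^2 = {r2:.1%}")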

In general, the US ETF MOC is very efficient

For most ETFs, especially liquid ETFs, the market was able to digest unusually large MOC trades well.

The SPY chart (exhibit A1) shows that high MOC days in December 2013 caused disconnects around 5-6bps. Then another volume shock in April 2014 led to a difference of less than 4bps from NAV. We see similar patterns across most liquid ETFs.

Volume shock wasn’t a great predictor of price disconnect

Across the 133 ETFs that we studied, representing the more liquid US underlying ETFs, the volume shocks proved to be a poor predictor of price shocks (very low R2). In fact, only 3% had R2 over 20%.

Even for illiquid ETFs like IYF (exhibit A2), which had some very extreme MOC volume shocks, the correlation with price disconnects was far from strong, at 26%.

Beware of sector, mid cap and style ETFs

Seven of the 12 strongest R2 came from sector funds like XBI (exhibit A3). We found that of the 15 strongest R2’s:

  • Sector ETFs had the most extreme MOC volume shocks.
  • 5 were style funds.
  • 2 were mid-cap ETFs.

Often we found that the normal MOC volumes in these examples were especially low – making their volume shocks that much more significant. Intuitively, this may be because they are used a lot by hedge funds who short them – making their trading less concentrated into the close.

Contributor(s)

Phil Mackintosh

BLOOMBERG BRIEF HIGHLIGHTS: CLASSIC CHART PATTERN CARRIES OMINOUS IMPLICATIONS FOR S&P 500

Editor’s note: This article was originally published in the September 24 issue of Bloomberg Brief: Technical Strategies. Below is an extract of that article.

In “S&P 500 Chart Waves a Pennant for a Move Closer to October 2014 Lows,” Anthony Bosco, CMT, a Bloomberg Technical Analysis Application Specialist, explained that a pennant formation suggests a target for the S&P 500 of 1830, a level that coincides with the low of October 2014.

The projection is based on the height of the pennant. The S&P 500’s current bearish pennant is 130 points wide. Projecting down 130 points from the break of the lower trend line takes us to 1830.

Source: Bloomberg. To see a live version of this chart on the Bloomberg terminal run G BBTA 1265.

Breadth measures are shown in the bottom two panels of the chart and confirm the bearish outlook. The percentage of stocks above their 10-day moving average moved from less than 1% to nearly 90%. The longer-term picture, shown with the percentage of stocks above their 50-day moving average, remained weak.

These negatives don’t necessarily mean we are entering a bear market. The current bull market could be entering a consolidation phase with further basing needed prior to starting its next up leg. 

Anthony Bosco can be reached at abosco5@bloomberg.net.

Contributor(s)

VOLATILITY AND TRADING: RE-DEFINING SUPPORT AND RESISTANCE

Editor’s note: This article was originally published at The Educated Analyst, an education blog maintained by Market Analyst.

Volatility Makes the World Go Around

Volatility is the most important technical measurement we can make as analysts. We are all familiar with the old saying, ‘money makes the world go around’. I now believe ‘volatility makes the world go around’. By my estimate this has been true for about 20 years, owing mostly to the radical expansion of securitization which began in the 1990s.

What does volatility have to do with Support and Resistance? You’re about to see. Strap on your thinking cap, this is likely to be conceptually new for you.

First let’s take a quick look at what we covered in the first article, which shed needed light on why volatility is so important.

For an in-depth definition of Big Money see the Lagniappe section at the end of this article.

Onward then…

Support = the price point where buying power (buyers) will overtake selling power (sellers), thus price declines no further.

Resistance = the price point where selling power (sellers) will overtake buying power (buyers), thus price rises no further.

Simple, right? These price points are very handy to have identified. But is it really that simple? This definition is true but incomplete.

The definition ignores the time element. Support for traders who hold a position for 60 minutes is not really support for traders who typically sell after 5 days; short-term support/resistance is not equal to long-term support/resistance. However we mathematically identify support and resistance, it must have one important quality.

It must be Fractal.

That is to say it must work equally well, with the same rules, for all trading hold times – and by default for all technical charting timeframes.

Identifying Support/Resistance with Volatility Measurement

My method for accomplishing this is by using a set of trading bands I created which have very different properties than any others. Mine are called N bands; and yes the N stands for Northington.

Trading bands are typically designed to gauge the strength of a momentum movement. That’s because virtually all forms of classic momentum measurement struggle to do that. Classically one would use a Bollinger band or Keltner band to assist in that task.

N bands are designed to identify support and resistance for the ‘right now’ timeframe. If price touches the lower N band, or gets close, it encounters support; buyers begin to overpower sellers.

The nearby chart of Microsoft (MSFT) [65 minute], shows typical N band behavior. The solid line is the N band, and the dashed line shows where the effects of resistance or support begin. Look how price lowers to the ‘support zone’ then goes no lower.

Note: Keep in mind that no technical measurement is absolute in its performance. Also remember that 50% accuracy is the point of uselessness, which is to say statistically random. The closer to 100% the better.

This is what’s different about N bands. They are designed to contain price, while other trading bands are not.

This is very useful. N band zones make very good exit points. They make even better exit points when plotted in a higher timeframe. Suppose in this case you normally trade from 15 minute charts or lower. A long exit at N band resistance, plotted on a 60 to 65 minute chart is even more effective, as shown above on the 10th.

Think of the concept of:

“Short Term Risk with Long Term Reward”
(more on this shortly)

What happens when price goes through an N band? Wouldn’t that be considered to be a failure? Yes, it would if that were simply the end of its usefulness. Let’s say you do trade on 15 minute charts.

Uh oh . . . looks like the price rise just mauled the upper N band. Failure!

Not so fast. When price closes outside of an N band it usually means trend change. Some might think this is trend change confirmation, and they are not incorrect from one viewpoint. My belief is trends only change when they breach real support or real resistance. Increased accuracy of predicting trend directions is a major component of technical trading.

See the point where price closes above the upper N band? That tells me the odds are in my favor price will continue in that direction. It means strength because buyers way overpowered sellers.

Also remember the best practice is to exit at N band resistance in a higher timeframe. A good rule of thumb is 4 to 5 times higher. In this case a 60 minute to 75 minute chart would show best probability exit points.

Predicting Support/Resistance

If we can identify, with probability, current support/resistance, can we also identify future support/resistance? Yes we can.

The key to predicting future support/resistance is to use the peaks and troughs of the N bands. It’s really as simple as that. Only the extremes matter – in so many ways. What follows is proof of this concept. Once again let’s look at the MSFT 65 minute chart below:

To help prove an extreme, think about the definition of peak and trough. The concept of peak and trough may seem straightforward at first, but there are a few important characteristics.

peak = a high point, which precedes a low point, where the new low is lower by some predetermined amount – such as a percent of movement or a fixed quantity of units.

trough = a low point, which precedes a high point, where the new high is higher by some predetermined amount – such as a percent of movement or a fixed quantity of units.

Thus peaks and troughs are quantified extremes. The only other important point to make is that the method used to quantify the extreme (the amount of retracement required) should identify a significant extreme, not every little up and down that occurs.
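One common way to express these definitions in code is a zigzag-style scan with a retracement threshold. The sketch below is a generic version of that idea; since the N band calculation itself is proprietary, it operates on any price or band series, and the retracement percentage is a placeholder.

    import pandas as pd

    def zigzag_extremes(x: pd.Series, pct: float = 0.05) -> pd.Series:
        """Mark confirmed peaks (+1) and troughs (-1). An extreme only
        counts once price retraces against it by at least `pct`, so the
        little wiggles are ignored."""
        marks = pd.Series(0, index=x.index)
        cand, direction = 0, 0            # candidate extreme index; +1 = up leg
        for i in range(1, len(x)):
            move = x.iloc[i] / x.iloc[cand] - 1
            if direction >= 0 and x.iloc[i] > x.iloc[cand]:
                cand, direction = i, 1            # higher candidate peak
            elif direction <= 0 and x.iloc[i] < x.iloc[cand]:
                cand, direction = i, -1           # lower candidate trough
            elif direction == 1 and move <= -pct:
                marks.iloc[cand] = 1              # peak confirmed by pullback
                cand, direction = i, -1
            elif direction == -1 and move >= pct:
                marks.iloc[cand] = -1             # trough confirmed by rally
                cand, direction = i, 1
        return marks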

On the chart above the peaks and troughs are notated. Once the peak is confirmed by an adequate retracement, a corresponding resistance zone is projected forward. If price were above the zone then it would be support. The same applies to the troughs.

The predictive part is possible because the zone is established as soon as the peak or trough is confirmed. See how price responds to the zones on the 17th and the 18th? Price pauses or reverses at the resistance and support zones.

It’s also important to point out that when price is above an SR zone, then the SR zone is support. When price is below an SR zone, the zone is resistance.

Yes, the extremes are what matter most. These points make the most profitable entry and exit points. Identifying real extremes, as opposed to false extremes, is a rewarding experience.

Below is another example comparing classically drawn trend line resistance and Volatility-Based Support Resistance (VBSR):

The ASX 200 Index above shows a break at a resistance trend line. Simple, right? One could easily make the case the trend is likely to change upon several bar closings above the break – and subsequently enter a long futures contract position.

Certainly the trend momentum changes due to the break. However for the trend to change, real (meaningful) resistance should first be broken.

As you can see, the ASX Index halts its rise when it reaches the SR 3-4 resistance zone. Should price climb above the SR 3-4 zone, a significant trend change becomes likely.

In this case the SR 3-4 zone was confirmed and plotted on market close of 2015/6/22. It occurred because the upper N band confirmed a new trough at that point. The new trough was a new volatility extreme.

20th century technical analysis depended on a very linear way of looking at price information. In the 21st century, Big Money moves markets and most individual securities, both exchange traded and over-the-counter (OTC). Big Money decisions are most heavily influenced by volatility calculations.

Volatility-Based Support Resistance (VBSR): Why does it work?

I must say I can only answer this question with a thorough explanation. That is to say, “the devil’s in the details” – or perhaps more accurately stated, “the angel’s in the details”.

So put on your thinking caps and absorb this:

N bands are a proxy representation of consolidated levels of implied volatility. They are expressed as price levels (not %), and designed to approximate the volatility limits of contract expirations similar to the chart periodicity (timeframe; daily, weekly, etc.). In that way they are meant to be a ‘view’ of consolidated implied volatility.

It is the extremes of the N bands which are most significant. The ‘peaks’ and ‘troughs’ of the N band levels represent the extremes of concentrations of investor commitment, in the form of contract valuations (due to valuation models and their dependency on the volatility input), reaching points where decisions need to be made and action taken.

Concentrations of commitment at key price levels are what support and resistance are made of. This is because it’s at those levels where buy/sell decisions must be made.

Whew! So much for the heavy talk!

In our next article we’ll dissect the above theory and show how it maps to the real world. We’ll do this by looking into one of my favorite subjects: the market micro-structure. Throughout I’ll show practical trading methods for VBSR and Northington Dahlberg Tools.

Lagniappe

Big Money: What’s it made of?

G-SIB (Globally Systemically Important Bank)

G-SIB definition: [Basel Committee on Banking Supervision]

A G-SIB is defined as a financial institution whose distress or disorderly failure, because of its size, complexity and systemic interconnectedness, would cause significant disruption to the wider financial system and economic activity. 

Click here to see a List of international G-SIBs

Size of G-Sibs; relative to their respective local economies

As of October 2011, the top 40 G-SIBs possessed or controlled 57.75% of the total financial assets of their respective economies.

source: http://www.financialstabilityboard.org

Implementing Volatility Analysis into Your Strategy: Kirk’s articles demonstrate the pivotal role Volatility has in analyzing the market. The next step is working to implement the techniques into your own strategies. To assist you with this, the team at ND Research have provided a free 3 month trial of the complete ND Research suite of indicators for all Market Analyst 7 clients. You don’t need to sign-up to access the trial; the ND Research group will be available to you automatically the next time you login to Market Analyst.

Contributor(s)

Kirk Northington, CMT

Kirk Northington, who holds a Chartered Market Technician (CMT) designation, is a quantitative technical analyst and the founder of Northington Trading, LLC. He is also the creator of MetaSwing, advanced analytic software for Bloomberg Professional, MetaStock and TradeStation. He trades his own accounts, and...

JOE GRANVILLE’S INDICATORS POINT TO A POSSIBLE BEAR MARKET

Editor’s note: Jerry Blythe was a friend of Joe Granville’s who computerized many of Joe’s indicators and sent a report to Joe on a nightly basis for many years. This made it possible to objectively analyze the numbers. Jerry has continued to run the numbers and has noticed a similarity between the current market and the bear markets of 1929 and 2008.

Joe Granville understood the stock market and applied his understanding to create indicators that allowed others to see the market as he did. The tool or indicator Joe made famous is On Balance Volume (OBV), a visualization of buying and selling pressures. If, for example, prices are rising while OBV is falling, that’s a sign of distribution, an indication that large interests are getting out of positions. Joe might have said that’s an indication Wall Street is setting up individual investors on Main Street to be left holding the bag while large investors are left holding the money that was in the bag.

Joe developed a suite of indicators using OBV that helped him analyze and time the markets for much of his 60-year career, most recognizable being his Climax indicator, CLX (momentum); Net Field Trend indicator, NFI (accumulation distribution); and True CLX (a trending tool).

We’ll look at one of his indicators, the NFI, and compare data from 1929, 2008, and 2015.

The NFI can be a bit confusing since it represents two layers of structural analysis, requiring the market to do a bit of work before reflecting an underlying change in market tone. Each individual issue in an index (or average) has an NFI designation and the overall index has a combined NFI value and designation; it is the latter we are interested in.

Joe was famous for some of his bear market calls and many were based on comparisons. Analysts however know that while similarities and analogs are interesting and can add excitement to analysis, they seldom hold up, though often long enough to add a few gray hairs. So, what’s up now?

In the three examples below, what is unusual is the number of consecutive, high double-digit negative daily NFI readings for the market (DJIA).

THE STRUCTURE OF THE NFI

First, let’s get an idea how the Net Field Trend is built. The field trend of a single issue is determined from an OBV pattern of volume for that issue. OBV is a running, cumulative total of daily volume of an issue, with daily volume added to the running total on up days, subtracted from the running total on down days and ignored on unchanged days.
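That running total is straightforward to compute. A minimal sketch for a single issue, assuming a hypothetical file of daily closes and volumes:

    import numpy as np
    import pandas as pd

    # Hypothetical file of daily Close and Volume for one issue.
    df = pd.read_csv("stock.csv", parse_dates=["Date"], index_col="Date")

    change = df["Close"].diff()
    signed_vol = np.sign(change) * df["Volume"]   # +volume on up days,
                                                  # -volume on down days,
                                                  # zero on unchanged days
    obv = signed_vol.fillna(0).cumsum()           # running cumulative total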

The cumulative volume pattern for a single issue creates an UP designation on a given day when a zigzag pattern occurs with progressively higher daily cumulative volume highs and progressively higher daily cumulative volume lows, AND a new cumulative volume high is higher than a prior cumulative volume high. That defines an UP. The pattern creates a DOWN designation in a similar way when there have been progressively lower daily cumulative volume highs and progressively lower daily cumulative volume lows. If neither designation can be determined, there is no UP or DOWN designation for that issue.

Next, the cumulative volume totals of an issue with UP and DOWN designations are compared to prior cumulative volume totals with UP and DOWN designations in the same issue, to determine a Rising or Falling zigzag pattern of UPs and DOWNS. If the cumulative OBV volume total of the current UP is higher than the volume total of the prior UP, the issue is said to have a Rising field trend. Conversely, for a Falling field trend. With no discernible pattern meeting these criteria, the field trend is DOUBTFUL.

Then, the number of issues in an index with a Falling field trend is subtracted from those in a Rising field trend. That number is the NFI for the index that day, ranging theoretically from +30 to -30 for the DJIA depending on how many of the 30 issues are in Rising or Falling field trends that day. Doubtful field trends are ignored in determining the number.
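Once each issue carries a field trend designation, the index NFI itself is just a count. A tiny sketch with illustrative labels:

    # Illustrative field trend labels for index members: 'R' rising,
    # 'F' falling, 'D' doubtful. Doubtful issues are simply ignored.
    field_trend = {"AAPL": "R", "MSFT": "F", "GS": "D"}

    rising = sum(v == "R" for v in field_trend.values())
    falling = sum(v == "F" for v in field_trend.values())
    nfi = rising - falling   # the index's NFI for the day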

It takes a fair bit of market work and time to change a field trend in an individual issue, and then for those collective changes to affect or change the NFI of the index, making this two-layered analysis a reasonable indication of whether accumulation or distribution is going on.

Very high or very low readings of the NFI for an index are unusual, or usually don’t last for many days, because a counter trend move for a few days or so can change field trends in the issues, and then in the index. So, in the three market examples of the DJIA below, what is unusual is the number of days of consecutive, high double-digit negative NFI readings.

Joe said he spent months gathering and calculating the 1929 data from microfilm and many nights at the library. That data allows us to develop an analog comparing the current market to the historic 1929 bear market. There is also an analog to the 2008 market shown below. In all charts, the Dow Jones Industrial Average (blue line) is shown on the left hand scale and the NFI (red bars) is shown on the right hand scale. The time frame shown for 1929 is September and October, for 2008, October to early December, and for 2015, August and September.

What can sometimes be missed in similarities is a larger message. People want precise comparisons and when that doesn’t occur, the analysis is dismissed. Joe understood that. What his numbers presaged in both 1929 and 2008 was a possible sharp drop in the index, which occurred. The recognition of a weak economy came months later, after a big rally and the shock had worn off, followed by a relentless decline to new market lows. Joe referred to this as the internal and external market bottoms.

What’s going on in this market is evidence of concentrated distribution seen through a persistently high negative double-digit NFI.

The question with this data is whether the current market is beginning to unfold a scenario similar to prior major market declines, registering an initial low followed by a robust rally, which in this market might mean a low this fall, followed by a robust rally and a much lower market in 2016.

The persistence of the NFI seems to be the commonality, making it difficult for fence sitters to get out or shorts to comfortably step in. The 2010 and 2014 markets (not included) had brief periods where OBV looked like it might signal similar caution but prices and OBV recovered quickly; so, the beginning comparisons were simply warnings to tighten up a bit, and also reminders not to fall in love with setups.

However, with these numbers I suspect Joe would be singing from the treetops since his work has spotted trouble for some months now.

All that’s left is to see it happen.

Contributor(s)

Dr. Jerry Blythe, MD

Joe Granville was a market analyst who published The Granville Market Letter from 1963 until his death in 2013. Joe popularized on balance volume and other technical indicators. In Nobel Prize winning economist Robert Shiller’s book, Irrational Exuberance, Shiller notes Granville’s market...

HOW STOCK MARKET SENTIMENT LOOKS IN VOLATILE TIMES

Editor’s note: this was originally published on August 29, 2015 at The StockTwits Blog and is reposted with permission.

After seeing the results of our last sentiment stock screener we wanted to extend the results to include this week’s startling action. It was characterized by extremes in both price and sentiment. The Dow and S&P 500 suffered their first 10% corrections in several years. But what’s equally startling is how sentiment changed for individual stocks. Take a look:

1.  Group RR: Rising Sentiment + Rising Prices

This group is led by coal mining and processing firm Arch Coal ($ACI), which saw its stock price soar 107% on an extremely Bullish sentiment imbalance. The stock has been beaten down of late as commodity prices and China’s economic prospects have sagged, but it seems to have caught a serious bid.

Inovio Pharmaceuticals ($INO) had one of the most imbalanced sentiment pictures for the week, with fully 50% of its messages on the Bull side vs only 2% for the Bears. The stock finished up 18% on the week.

2.  Group RF: Rising Sentiment + Falling Prices

This group was pretty light given the rebound in overall prices, but still contained some interesting names. At the top of the list is NantKwest, Inc ($NK), an immunotherapy company whose shares continued their previous slide, moving down 15% on the week despite nearly 1 in 5 messages expressing Bullish sentiment.

Biopharma company Omeros ($OMER) dropped into Group RF in this latest scan after first appearing in RR last week.  Sentiment in the stock remained strong despite a pullback in the share price. The stock has fallen around 50% since reaching a high of over $30 on August 18th.

3.  Group FR: Falling Sentiment + Rising Prices

Electronics retailer Best Buy ($BBY) sits near the top of this group, gaining 23% despite nearly 15% of its message volume expressing a Bearish sentiment. The stock is up after reporting its earnings earlier in the week.

Clothing retailer Abercrombie & Fitch ($ANF) had one of the most Bearish sentiment scores for the week, with 1 out of 5 messages expressing a downside bias vs around 1 out of 20 on the upside. The stock bounced back 11%, erasing last week’s sharp decline in the process.

4.  Group FF: Falling Sentiment + Falling Prices

This group had the thinnest representation in this week’s stock screener. It is topped by two ETFs, $SQQQ and $BIS. $SQQQ tracks the triple inverse performance of the Nasdaq 100 ($QQQ), while $BIS is the double inverse of the Nasdaq Biotechnology Index. Both saw their prices tumble this week as the market whipsawed.

This market environment is detrimental to leveraged ETF’s (LETF’s). These funds have a decay factor which is proportionate to realized volatility. In other words, the higher volatility goes the quicker a LETF decays. While this may seem like a tiny mathematical quirk, this optionality has profound implications for how LETF’s trade. For a technical breakdown of the phenomenon, be sure to check out this paper on the subject from NYU.
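A small simulation makes the decay visible. The sketch below uses illustrative parameters, a flat (zero-drift) index with 2% daily volatility over one trading year, and a daily-rebalanced -3x fund:

    import numpy as np

    rng = np.random.default_rng(0)
    days, leverage, daily_vol = 252, -3, 0.02   # illustrative parameters

    # A flat (zero-drift) index path with 2% daily volatility.
    idx_rets = rng.normal(0.0, daily_vol, days)
    index_level = np.prod(1 + idx_rets)

    # A daily-rebalanced LETF compounds leverage times each day's return.
    letf_level = np.prod(1 + leverage * idx_rets)

    # Even with the index roughly flat, compounding the amplified swings
    # erodes the LETF; the drag grows with realized variance.
    print(f"index x{index_level:.3f}, -3x LETF x{letf_level:.3f}")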

You can get access to data like this using the StockTwits API.

Contributor(s)

MKTSTK for The StockTwits Blog

RESEARCH UPDATE: IS SMART BETA DUMB?

Editor’s note: this paper was originally published at the Social Science Research Network and an electronic copy is available at SSRN. It is republished here with permission.

Abstract: The question of beta vs. smart beta has evoked a lot of debate. Investment businesses have thrived on both sides of the question. Smart beta has carved out a new business for itself by calling beta dumb, justifying the need for smart beta solutions and advocating a move away from the popular benchmarks that are constructed on market capitalization methodology. Smart beta is built on the premise that popular cap-weighted benchmarks are wrong and inefficient. Even smart beta itself has come under attack. But there has been limited work explaining smart beta and beta together. The debate has added to the list of other debates like efficient vs. inefficient markets, random vs. nonrandom behavior, and bell curve vs. power law. On one side researchers cite proofs that beta is an inefficient measure of risk and on the other side there are proofs that beta is not redundant. The current paper uses the ‘Mean Reversion Framework’ (Pal, 2015) to address both sides of the debate and explain why beta is neither dead nor dumb, and that it is smart beta thinking that is inaccurate and needs to evolve.

Pareto and Galton

Vilfredo Pareto, the father of microeconomics, created the Pareto curve explaining wealth distribution in Italy. The popular rebranded “80-20” law suggests that “80% of effects come from 20% of the causes.” In stock market terms it means, “winners and losers persist,” or “momentum is a natural continuum.” Pareto’s law is also referred to as the Power law. The behavior is observed in stock markets and is also used to make a case against the normally distributed bell curve.

Though some of his work has come under attack, Sir Francis Galton created the statistical concept of “regression to the mean” through his work on genetic studies, and he worked extensively on the measurement of normal distributions. In stock market terms this would mean, “past winners tend to lose,” “past losers tend to win,” or “momentum fails and reversion occurs.” Behavioral finance acknowledges reversion in many popular research papers, including Does the stock market overreact? (De Bondt and Thaler, 1985).

These two diametrically opposed observations may each seem incomplete, but they often occur at the same time.

Winners don’t always win
Indexing in capital markets has proven to be a popular method to gain broad market participation in a low cost vehicle that tends to outperform active managers net of fees and expenses over the intermediate and long term.

Index investments worldwide totaled over $9 trillion in 2014, up from $6.1 trillion two years prior. The majority of these indices are market-capitalization weighted.

This growth stems from cap-weighted indexes providing low-cost, low-fee broad market exposure that historically has outperformed the majority of active managers over five-year periods. The cap-weighted approach links the price of a security to its portfolio weight. Therefore, all overpriced securities become overweighted in the portfolio relative to their future return; conversely, underpriced securities become underweighted relative to their future performance. In simple terms, a cap-weighted approach has the investor keep buying more of the winners. The result is a performance drag for cap-weighted portfolios.
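
A toy calculation, with invented market caps, makes the mechanics plain:

    # Hypothetical three-stock universe; market caps in $bn
    caps = {"A": 100, "B": 50, "C": 50}
    total = sum(caps.values())
    print({s: c / total for s, c in caps.items()})   # A: 0.50, B: 0.25, C: 0.25

    # If A's price doubles while B and C are unchanged, the
    # cap-weighted index mechanically holds more of the winner:
    caps["A"] = 200
    total = sum(caps.values())
    print({s: round(c / total, 2) for s, c in caps.items()})   # A: 0.67, B: 0.17, C: 0.17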

The problem: a performance shortfall for cap-weighted indices, which means that if you invest in a capitalization-weighted index portfolio, your investments will not deliver an optimal return.

Is not buying the winner a solution?

Hence the emergence of Smart Beta indices. These indices use price-indifferent weightings: they break the link between price and weight, declining to buy more of something simply because it keeps getting more expensive. But any movement away from cap-weighting (buying winners) can produce consequential style biases relative to the cap-weighted benchmark, such as a negative relative momentum load. In other words, if you don’t run after winners, you automatically acquire a value bias. Choosing not to chase growth pushes investment selections to the other extreme, namely value.

Smart Beta indices have a tendency to load on value and small cap, with a negative momentum load relative to the cap-weighted benchmark. While they tend to outperform cap-weighted benchmarks over time, they tend to underperform the market during momentum phases, which may persist for long stretches.

Neither buying the winner nor buying the loser helps

So this leaves investors with a problem. The popular cap-weighted indices tend to underperform in value style markets, while price-indifferent Smart Beta indices tend to underperform in growth style markets. This means buying winners or losers independently fails to deliver.

Buy both winners and losers

This is where the ‘Mean Reversion Framework’ comes in. Markets move from a growth phase through transition to a value phase, through transition back to a growth phase, and so forth. This pattern may be repeatable, but the timing and duration of the change in market leadership between growth and value is difficult to predict. Further complicating the market cycles is the fact that, within markets, individual securities may move in the same pattern as the market or contrary to its overall movement. The Index based on the ‘Mean Reversion Framework’ participates in both the market and security style biases, and does so in a dynamic process that more fully captures the continuing momentum of appreciating stocks and the bottoming foundation of stocks and markets poised for reversion. The framework suggests that managers and investors should buy both winners and losers, because both deliver remarkable profits, but with different holding durations.

Momentum and Reversion

There is a universal, persistent relationship between momentum and reversion: any selected asset or portfolio is driven by both. In the paper ‘Momentum and Reversion’ (Pal, 2015) the author explained how momentum and reversion are connected and transform into each other. This means value and growth are connected and indeed transform into each other.

Value and Growth

Value and Growth are risk factors that reward investors over different cycles of variable time frames, but within style-driven markets there are securities of the contrary style that may outperform. Growth and Value have different return patterns. Growth stocks benefit from momentum, which is faster but fleeting, so their most efficient holding period is relatively short. Value stocks benefit from mean reversion, a price reversal whose bottoming process tends to develop more slowly. This means value is longer term, while growth is shorter term.

Why one rebalancing?
Hence the key question: if growth and value stocks move in distinctly different price patterns, why use a single portfolio rebalancing schedule? Should we not hold value longer than growth?

Ranking Process
This process is designed to effectively capture the benefits of momentum (growth) and reversion (value) in the equity markets in a single liquid, cost-effectively managed portfolio.

Cap-weighted benchmark constituents are ranked by their relative price movement, measured around the dynamic mean of each quintile of holdings. The holdings are ranked into five quintile groups, from high momentum on one end to high mean reversion on the other. This is done through ‘Mean Reversion Framework’ rankings of each quintile relative to its own mean rather than the average mean of the whole index.

The framework ranking captures seasonal patterns of growth (strength) or decay (weakness) in variables (assets). Rankings are expressed as percentiles from 1 to 100: 80-100 classifies performance as Growth, 0-20 classifies performance as Value, and 20-80 is the middle transition bin (Core). A minimal sketch of this binning appears below, followed by a case study on an American stock to show how the framework ranking anticipates turns.
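
Here is that sketch of the binning step, assuming the 1-100 percentile ranks have already been produced by the framework; the tickers and rank values are invented:

    def classify(rank: float) -> str:
        """Map a 1-100 framework percentile rank to a style bin."""
        if rank >= 80:
            return "Growth (momentum)"
        if rank <= 20:
            return "Value (reversion)"
        return "Core (transition)"

    # Invented example ranks, for illustration only
    for ticker, rank in {"V": 85, "XYZ": 15, "ABC": 55}.items():
        print(ticker, rank, "->", classify(rank))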

Visa was in reversion from early 2011, when the framework ranking stood at 20; this was a bottoming price trend. Visa entered momentum from mid-2012, as the ranking rose from its bottom of 20 to 80; this was a high-momentum selection expected to exhibit continued price appreciation, an example of a winner continuing to persist. Visa returned to reversion from mid-2014, as the ranking fell below 80, suggesting a period of underperformance, stagnation in this case.

Smarter Beta Portfolio Construction

Our portfolio construction process captures the difference between value and growth. The two extreme quintiles, momentum and reversion, are given higher weights than the middle three transition quintiles. All the securities of the cap-weighted benchmark are included in the Value-Core-Growth Index. Next, the holding periods are established to most effectively extract the different contributions of growth and value holdings. Momentum (high growth) is held for a shorter time frame, to better capture the momentum of growth stocks before they revert through lower price action. Reversion (high value) is held considerably longer, to allow such deep value stocks to appreciate.
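
The exact quintile weights and holding periods are not disclosed, so the tilt below is purely an assumption to illustrate the overweighting of the two extremes:

    # Assumed weight multipliers per quintile: extremes overweighted
    tilt = {
        "Q1 momentum": 1.5,   # high growth, held for the shorter period
        "Q2 core": 1.0,
        "Q3 core": 1.0,
        "Q4 core": 1.0,
        "Q5 reversion": 1.5,  # deep value, held considerably longer
    }
    total = sum(tilt.values())
    for q, t in tilt.items():
        print(f"{q}: {t / total:.1%}")   # extremes get 25%, core quintiles ~16.7% each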

Contribution Analysis

To further substantiate how the Index delivers more than the popular cap-weighted benchmark, we have illustrated the contribution of the value and growth components to the returns of the Value-Core-Growth Index and to those of the popular benchmark. As expected, the Value-Core-Growth Index dynamically selects between Value and Growth, while the benchmark is static in its selections; this inability to move dynamically between Value and Growth is the reason the benchmark cannot perform optimally. The illustrations appear below.

Need for Smarter Beta

The popular smart beta of today has taken the value end of the investment spectrum to claim that growth is dead. Yet growth continues to be popular and continues to thrive; it is hard to challenge growth and consider it dead. There are times when high risk will deliver higher returns and times when low risk will deliver low returns, and no rule can change that. Calling beta dumb may make a case against growth and popular benchmarks, and a case for value, but that is only part of the story. The smarter beta is the one that understands that growth and value are connected, just like momentum and reversion. It knows that value is longer term and growth shorter term, and that value and growth are not just fundamentally driven. Above all, we should not forget that the market is made of risk preferences: there will always be people who love the S&P 500 and the market-cap growth approach, there will always be value selectors, and there will be new innovators who push the boundaries further.

Bibliography

Mean Reversion Indexing, Pal, 2012

Mean Reversion Framework, Pal, 2015

Momentum and Reversion, Pal, 2015

Regression towards mediocrity in Hereditary Stature, Galton, 1886

Power laws, Pareto distributions and Zipf’s law, M. E. J. Newman, 2006

CAPM is CRAP (or, the Dead Parrot Lives!), Montier, 2013

Why CAPM is not CRAP, Pal, 2014

Contributor(s)

Mukul Pal

Mukul Pal, a technical analyst who holds the Chartered Market Technician (CMT) designation, is the founder of AlphaBlock, a technology group focused on bringing the predictive mapping characteristics of AI to the market mechanisms that use blockchain to create an adaptive “intelligent...

ETHICS CORNER: EVALUATING INVESTMENT MANAGERS

Editor’s note: this case study is adapted from Lawton, Ethics in Practice. The full text is available at the CFA Institute web site. This and all ethics study material has been licensed for use by the MTA and is used here under the terms of that license. It is presented here as an example of the ethics challenges MTA members could face. Ethics extend beyond the markets to all aspects of the investment profession.

Case Study: River City Pension Fund

Case Facts

Jack Aldred was reviewing his notes in preparation for an Investment Commission meeting when his manager, River City Treasurer Barbara Castel, stepped into his office. “The meeting is in two hours,” she said. “I want to know what you’re planning to say about Northwest Capital.”

Aldred is the Chief Investment Officer of the River City Pension Fund, a mature defined benefit pension plan for municipal employees. He accepted this appointment six months ago after taking early retirement when his previous employer, an insurance company, was reorganized. While he worried that he was still a novice in navigating city politics, Aldred had already initiated significant improvements in the management of the city’s pension assets. In particular, he recommended changes to the investment policy statement, designed more informative performance reports for the Investment Commission, and gained the Commission’s approval to engage in securities lending through the plan’s custodian. Although he has only one person on his staff, Aldred sees further opportunities to improve the investment program. For example, he believes that the pension plan has more active managers than it needs, that their mandates overlap, and that the plan is consequently paying higher investment advisory fees than the investment program requires. His immediate problem, however, is deciding what to do about one of the pension plan’s external managers in particular.  

Northwest Capital Advisors has managed a small cap value equity portfolio for the River City Pension Fund since the firm was founded eleven years ago. Under the leadership of President and CEO Roger Gray, Northwest has emerged as one of the area’s foremost small businesses. Among many other highly visible contributions to the community, Northwest donated financial expertise to a low-income housing program developed by the River City Interfaith Coalition, and the firm was credited with winning the Coalition a substantial grant from the state government. Northwest’s employees also contributed large amounts in their own names to the election campaigns of local politicians with progressive policies, including the Treasurer, as Aldred unintentionally discovered when he noticed some personal checks on her assistant’s desk. Aldred was aware that the state legislature had enacted a law several years ago making it illegal for officers of firms doing business with municipalities to make campaign contributions to elected officials or candidates who might be in a position to influence the selection of vendors.

Although Northwest Capital Advisors is well regarded in the community, in the last three years the firm’s always mediocre investment performance has declined substantially. Aldred looked at the results as reported by the manager and the custodial bank’s performance measurement group:

When he asked Roger Gray about the discrepancies between the returns calculated by Northwest and by the custodian, Gray explained that the custodian’s standard asset pricing sources did not properly value the portfolio’s small cap holdings. “Those pricing services are geared to highly liquid, frequently traded stocks with large market capitalizations,” he said. “We’re the experts in the small cap market. We’re out there transacting every day!” However, the custodian stood by its valuations after reaffirming the prices of some of the securities specifically challenged by Northwest.

No less troubling was preliminary evidence that Northwest had strayed from its small cap value mandate. Value investing had fallen out of favor in the marketplace, with growth stocks achieving substantially better rates of return for five of the last six quarters. Aldred wondered if Northwest was tactically introducing a growth tilt in an attempt to improve reported results in comparison with the benchmark and other small cap value managers. At Aldred’s request, the custodian had provided the holdings-based portfolio characteristics shown below. They were computed as of the end of the most recent quarter and as of the same time a year ago.

Finally, Aldred was concerned by the fact that Northwest’s small cap value portfolio manager, one of the three original principals, had suddenly left the firm. When Aldred learned of this from an acquaintance, he called Gray, who declared that the portfolio manager had left amicably to accept a better-paying position at a larger corporation. Northwest was actively recruiting a replacement, but in the meantime Gray himself had assumed responsibility for the pension plan’s portfolio. “And don’t forget,” he said, “we have a whole team of experienced people here.”

Aldred looked at his boss. “I think the Investment Commission has to take some action,” he said, “and I have a few ideas about how to proceed. But you understand the politics, and I’ll do whatever you say. What are your instructions?”

Case Discussion – Jack Aldred

By indicating that he will do whatever his supervisor directs him to do, Aldred is at risk of violating Standard III(A) by disregarding his obligation to act for the benefit of the client and comply with applicable fiduciary duty. He may also fail to exercise independent professional judgment in violation of the Code and Standard I(B).

Aldred is an investment professional employed by a municipal pension plan sponsor, River City, with a fiduciary obligation to act on behalf of the plan participants and beneficiaries. He must monitor the performance of the plan’s external managers and take appropriate corrective action when their investment results are unfavorable. This primary responsibility overrides any other concerns, however worthy, such as supporting a local small business with a strong record of community involvement.

In addition, Aldred is obligated under the Code and Standards to exercise independent professional judgment. In this case, rather than simply assuring his supervisor that he will do whatever she tells him to do, he should set forth the alternative courses of action and give the Treasurer his specific recommendations.

Suggested Actions. Jack Aldred should:

  • determine which set of performance figures to use;
  • determine whether the portfolio is being managed in conformity with its mandate;
  • determine whether the departure of the portfolio manager adversely affects the firm’s ability to produce acceptable investment results; and
  • advise the Treasurer how he thinks they should proceed with an evaluation of Northwest Capital Advisors’ continuing eligibility to manage assets on behalf of the pension plan participants and beneficiaries.

Case Discussion – Roger Gray

Roger Gray may be in violation of Standard III(E), which requires the fair, accurate, and complete presentation of investment performance, and Standard III(C.2), which requires investment managers to make recommendations and take actions that are consistent with the portfolio mandate. He may also be in violation of Standard I(A), Knowledge of the Law, and/or Standard IV(C), Responsibilities of Supervisors.

There is disturbing evidence in the case facts that Gray may be overstating asset values in order to improve the firm’s reported performance. If so, this would constitute a violation of Standard III(E), which obligates Members and Candidates to make reasonable efforts to ensure that investment performance information is fair, accurate, and complete.

Standard III(C.2) reads, “When Members and Candidates are responsible for managing a portfolio to a specific mandate, strategy, or style, they must only make investment recommendations or take investment actions that are consistent with the stated objectives and constraints of the portfolio.” If Gray is indeed introducing growth securities to a value portfolio, he is taking investment actions at variance with the portfolio’s mandate, in breach of the Standard.

Officers of Northwest Capital Advisors appear to be making illegal campaign contributions. If Gray himself is making such contributions, he is personally in violation of Standard I(A), which requires covered persons to understand and comply with all applicable laws, rules, and regulations. In any event, as President and CEO of the firm, he has supervisory responsibility under Standard IV(C) to make reasonable efforts to detect and prevent violations of applicable laws, rules, and regulations by those subject to his supervision or authority.

Suggested Actions. Roger Gray should:

  • critically review the firm’s pricing sources and practices to ensure that portfolio valuations are fair and accurate;
  • re-examine portfolio holdings to ensure that they are consistent with the mandate to invest in small cap value securities; and
  • discontinue, and instruct his employees to discontinue, the practice of making illegal campaign contributions.

Contributor(s)

PREFERRED STOCKS IN A RISING INTEREST RATE ENVIRONMENT

Editor’s note: this article was originally published at the Global Financial Data blog and is reprinted here with permission.

Even though the Fed is not due to raise interest rates until 2016 (per the latest numbers I have seen), people are already looking ahead and preparing for it to happen. Which asset classes will be affected positively, and which negatively? In the attached graph, I wanted to show the inverse correlation between preferred stocks and interest rates: with the recent move in rates, the price performance of preferred stocks has been negative. Please look at this long-term graph and see for yourself.
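
For readers who want to quantify the relationship rather than eyeball the graph, a sketch like the one below, using placeholder file names rather than Global Financial Data series codes, correlates preferred-stock returns with changes in rates:

    import pandas as pd

    # Placeholder CSVs (columns: date, value) exported from your data vendor
    pref = pd.read_csv("preferred_index.csv", index_col="date", parse_dates=True)["value"]
    rates = pd.read_csv("long_rates.csv", index_col="date", parse_dates=True)["value"]

    # Monthly preferred-stock returns vs. changes in the rate series;
    # a persistently negative correlation is the inverse relationship in the chart
    df = pd.DataFrame({"pref_ret": pref.pct_change(),
                       "rate_chg": rates.diff()}).dropna()
    print("full-sample correlation:", round(df["pref_ret"].corr(df["rate_chg"]), 2))
    print(df["pref_ret"].rolling(60).corr(df["rate_chg"]).dropna().tail())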

Contributor(s)

Pierre Gendreau

Pierre Gendreau is the Sr. VP of Sales at Global Financial Data. He helps some of Wall Street’s top analysts put together investment models that require high-quality, reliable, and comprehensive data for a more complete analysis. Pierre has a strong background...

PREFERRED PHASES & CYCLES®: THE RELIEF RALLY WAS WELCOME, BUT LATE-AUGUST LOWS ARE LIKELY TO BE RE-TESTED

Editor’s note: This was originally published on September 21, 2015 and is republished here with permission.

After a multi-year advance the breakdown of the prolonged trading range in the S&P 500 and the volatile plunge into the late-August low changed the “feel” of the markets for most participants. As we move past Labor Day – and enter the historically important September/October period – three facts stand out.

First, as our recent Market Comments detailed, the interpretation of the August 24th low as a “selling climax” is strongly supported by numerous technical and sentiment indicators that reached extreme levels as the stampede out of stocks accelerated. As further confirmation, the Investors Intelligence survey of investment adviser sentiment shows that the percentage of bullish advisers has now sunk to its lowest level (25.7%) since the bull market started.

Second, the current corrective move is best interpreted as “Leg 4” of an ongoing bull market. Substantial technical damage has been done in many sectors and individual stocks. But a final “Leg 5” advance should follow, albeit with narrowing participation.

Third, the old trading range is gone but not forgotten. If and when the markets get their Leg 5 rallies underway, the bulls will have to overcome the S&P 500’s 2,040 to 2,135 trading range in its new guise – as a block of overhead resistance to further advance.

The key question remains whether August 24th was the end of the decline, or whether the sellers have the strength to keep the markets under further pressure. Our view, as stated in the previous Market Comments and Ron’s Briefs, is that the final end to the decline could be weeks away, likely in October, and that the August low will likely be re-tested. The crucial levels for the S&P 500 are the recent 1,867 low and the previous major low of 1,821 made in October 2014. A bullish outlook for the S&P 500 can tolerate a decline to, or marginally below, the 1,825 level; any lower and the bull’s case starts to unravel. As for Toronto, it is well along in its correction (Leg 4), but likely to move in sync with New York.

Two important matters need to be watched as Leg 4 proceeds. First, are there any signs of capitulation by weak holders of stocks, as evidenced by high volume selling days with poor advance/decline figures? Second, do positive divergences appear as the S&P 500 re-approaches the 1,867 level? If these conditions appear, chances are good that the final low of Leg 4 is looming.

In sum, despite the recent battering, the bull market remains in force until key support levels are erased. If our “Leg 4 correction” scenario is correct, then the bulk of the price damage has already been done, but risk control remains absolutely essential. This is a time to be cautious and let the bull regain strength. We believe that the “end game” of the correction still has a number of weeks to run, likely ending in mid-October.

The S&P 500 corrected 12.6% (268 points) from May to late August. This pullback was a one-third correction of the long advance from September 2012 to May 2015.

Below the recent 1,867 low the S&P 500 has support at the previous 1,821 low and important trend line support near 1,800.

The low volume rally from the 1,867 low has now re-traced 50% of the previous decline. It is likely that there will be further movement between the 2,000 level on the upside and the 1,867 low, with the next few weeks seeing repeated pressure for another test of this low. Gap openings (up and down) have been an S&P 500 characteristic recently, and when this changes it will be a sign that the volatility is settling down.
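
The retracement arithmetic behind these levels is easy to verify; a quick check using the figures quoted above:

    high, low = 2135, 1867            # May peak and late-August low, S&P 500
    decline = high - low              # 268 points, the 12.6% correction
    print(decline, decline / high)    # 268, ~0.126
    print(low + 0.5 * decline)        # 2001.0: the "2,000 level" 50% retracement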

The S&P 500’s correction should end somewhere between the recent 1,867 low and the October, 2014 low at 1,821.

The Toronto market’s correction is much further advanced than New York’s. From September 2014 the S&P/TSX Composite Index has traced an “A-B-C” or down-up-down pattern, with the “C” wave recently hitting a new low at 12,705.  Importantly, this recent low reached the 1/2 correction target and generated a dramatically oversold condition.

The correction is still underway in Toronto but there is a good chance that the price low has already been hit. Toronto needs to break the pattern of “lower highs and lower lows” that has persisted since April and it is likely that the Banks will be leaders in any future turnaround. If the market rallies, the 13,900 to 14,000 area is the first zone of upside resistance.

We expect that the 12,705 to 13,000 zone will be tested again and will contain any further bearish pressure.

The Dow Industrials declined nearly 3,000 points from its mid-May peak to its late-August low. The quick bounce back rally from the low gained back almost 50% of this decline, a normal retracement.

It is easy to be negative about the Dow Industrials. The Index broke down through its major support in the low 17,000s, the major bull market trend line has been pierced, the 200-day Moving Average is curling down, and the Transports continue to be weak. But the correction has now erased about 50% of the entire advance from November 2012 to May 2015, which suggests that the worst of the selling could be over.

We expect that the August low of 15,370 will be re-tested in the weeks ahead. If successful, any recovery rally will encounter resistance in the 16,600 to 17,100 area. 

The FTSE has been in an identifiable correction since its April high. In mid-August the FTSE broke down through its 6,430 support level indicating, as we suggested a month ago, that the correction had more room to run.

Critically, the late-August collapse also took out the major bull market trend line, the October 2014 low at 6,073 and the major breakout level of 6,000 dating from early 2013. The 6,000 to 7,100 zone is now a multi-year block of overhead resistance. And the longer the FTSE spends in the low 6,000s and high 5,000s, the more the 200-day Moving Average will tip over and slope down.

A case can be made that the FTSE has declined in a down-up-down pattern into an extreme late-August low. A successful re-test of this low could then see the FTSE re-capture 50% of its decline from the April top, targeting 6,500.

The FTSE should re-test its late August low at 5898.87. If this level holds, a good rally is possible. Should this level fail to hold, a resumption of the decline is likely.

Phases & Cycles is proud to celebrate its 25th year of providing independent research for clients. The team of Nancy Lydon (Vice-President – 24 years), David Tippin (contributor – 23 years), Monica Rizk (Senior Analyst – 15 years), Carolyne Mignault (editor and publisher – 8 years), Angelina Palermo (marketing – 18 months) and yours truly, wishes to thank you for your trust and support through the Bulls and Bears, ups and downs, rallies and declines, tops and bottoms.

Contributor(s)

David Tippin, PhD

David Tippin, PhD, has been a contributor to Phases & Cycles since 1995.  He has over 20 years of stock market experience and provides a monthly Market Comment.

Ron Meisels

Ron Meisels is Founder and President of Phases & Cycles Inc. with over 50 years of stock market experience.  He specializes in the independent research of Canadian and U.S. securities and market using Behavior Analysis.  Institutions ranked him among the top three analysts...