Technically Speaking, March 2017

What's Inside...

ALPHANUMERIC FINANCIAL CHARTS

Editor’s note: Richard Brath is among the presenters at the Annual Symposium in April. Below is a reprint of his...

NO TIME FOR SKEPTICISM

As stocks remain comfortably near yet another record high, we find it interesting just how suspiciously the market is still...

HOW DO STOP-LOSS ORDERS AFFECT TRADING STRATEGY PERFORMANCE?

Editor’s note: Tucker Balch is among the presenters at the Annual Symposium in April. Below is a reprint of his...

MIFID II SOLUTIONS

Editor’s note: The Markets in Financial Instruments Directive (MiFID) is the EU legislation that regulates firms who provide services to...

VISUALIZING THE ANXIETY OF ACTIVE STRATEGIES

Editor’s note: Corey Hoffstein is among the presenters at the Annual Symposium in April. This post was originally published at...

WHY MULTIPLY BY SQRT(252) TO COMPUTE THE SHARPE RATIO?

Editor’s note: this article was originally posted at AugmentedTrader.com.

This question comes up every time I teach Computational Investing....

THE TOP 5 INVESTOR BIASES

Editor’s note: This was originally published at EducatedTrader.com, the website of the Independent Investor Institute, an organization dedicated to...

THE JANUS FACTOR

Editor’s note: Gary Anderson is among the presenters at the Annual Symposium in April. Below is a reprint of his...

TACTICAL REPORT: EUR/USD: PARITY TARGET, A CLEAR AND PRESENT REALITY

Editor’s note: This report was originally published on March 6, 2017. All data and opinions are current as of that...

ALPHANUMERIC FINANCIAL CHARTS

Editor’s note: Richard Brath is among the presenters at the Annual Symposium in April. Below is a reprint of his work from his blog at RichardBrath.wordpress.com.

Financial charting has long used alphanumerics as point indicators in charts. One of the oldest examples I can find is Hoyle’s Figure Chart (from The Game in Wall Street and How to Play it Successfully: 1898), which essentially plots individual security prices in a matrix organized by time (horizontally) and price (vertically).

An early figure chart (from Hoyle: 1898). Time is implied horizontally, price vertically. A numeric “figure” is recorded for each price that occurs for each day.

This textual representation evolved over the decades. By 1910, Wyckoff (Studies in Tape Reading: 1910) was creating charts where x and y are still time and price, but he was writing down volumes instead of prices, and connecting together subsequent observations with a line.

Wyckoff’s figure chart records rising and falling prices in adjacent columns. For each price level he records the volume figures and connects together the sequence with a line.

By the 1930s these had evolved into early point and figure charts, such as those seen in DeVilliers and Taylor (DeVilliers and Taylor on Point and Figure Charting: 1933). Columns use X’s to plot prices and other characters to denote particular price thresholds.

DeVilliers and Taylor’s Point and Figure chart (1933).

These charts look pretty close to modern financial point and figure charts. Now we typically use X’s for a column of rising prices and O’s for a column of falling prices, and other characters may be used to denote particular time thresholds (e.g. 1-9, A-C to indicate the start of each month).

Modern Point and Figure chart, via Wikipedia.

Other alphanumeric charts evolved along the way as well. Here’s an interesting Depression-era chart plotting a histogram of states based on state unemployment rates. Like Wyckoff, the author seems interested in keeping the alphanumerics inside circles. Also note that standardized two-letter codes for states did not yet exist – states are numbered instead. (From W. C. Cope’s book Graphic Presentation: 1939.)

Distribution Chart made of stacked characters. Note additional information encoded in shading and added markers.

Fast forward to the 1980s, and we have Peter Steidlmayer’s Market Profile (R) charts, which appear reminiscent of the alphanumeric distributions seen in the Depression-era chart. In these distributions, the alphanumeric value represents times when a security traded at a specific price. Depending on the timeframe of the chart, different mappings may be used. One common intraday convention is to use characters A-X and a-x to represent half-hour intervals throughout the day, with a split from uppercase to lowercase at noon.

Very basic Market Profile chart
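To make the convention concrete, here is a minimal, hypothetical sketch of how such a letter-coded profile could be assembled from intraday samples. The half-hour letter mapping follows the convention described above; the tick size, input format, and function name are illustrative assumptions, not any vendor’s actual implementation.

```python
from collections import defaultdict

def build_market_profile(samples, tick=0.25):
    """Build a basic text Market Profile from (half_hour_index, price) samples.

    Each half-hour interval of the session maps to a letter (A-X for the
    first 24 intervals, a-x for the next 24), and that letter is recorded
    once at every price level traded during the interval.  A simplified
    sketch only, not a vendor algorithm.
    """
    letters = [chr(ord('A') + i) for i in range(24)] + \
              [chr(ord('a') + i) for i in range(24)]
    profile = defaultdict(list)            # price level -> letters traded there
    for interval, price in samples:
        level = round(price / tick) * tick
        letter = letters[interval]
        if letter not in profile[level]:   # one letter per level per interval
            profile[level].append(letter)
    for level in sorted(profile, reverse=True):   # highest price on top
        print(f"{level:8.2f}  {''.join(profile[level])}")

# Hypothetical samples: (half-hour interval index, traded price)
build_market_profile([(0, 100.00), (0, 100.25), (1, 100.25),
                      (1, 100.50), (2, 100.25), (2, 100.00)])
```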

There are now many, many variants of market profile charts, e.g. sierrachart.com, windotrader.com, bluewatertradingsolutions.com, prorealtime.com, cqg.com, and so on. Given the many possible data attributes and analytics that one might associate with a character in a chart, it can become a challenge to encode them all. As a result, one can find interesting variants. Beyond position, letters, and case:

  • color: of the foreground letter or background square
  • bold: to indicate a row or potentially as a highlight to one time interval, e.g. MarketDelta
  • superscripts: e.g. eSignal.
  • added symbols: asterisks, less than, greater than, etc.
  • added shapes: circles and diamonds

Many variants of Market Profile (R) charts by various vendors. Note all the additional information added via foreground/background color, bold, superscript, etc.

Jesse Livermore (How to Trade in Stocks: 1940) created his own variant of alphanumeric charts, stripped down to tracking only the minimums and maximums, discarding the intervening levels and using color and underlines to indicate information.

Livermore strips down charts to a simple table recording only the local minimums and maximums, using different colored text and different colored underlines.

One interesting discussion point is the actual use of these charts. Whenever I show these charts to the visualization research community, people are aghast and suspicious. There is so much going on in these charts, so many different things being shown simultaneously, that they doubt people actually use them, or they assume the charts cannot be perceptually efficient.

On the other hand, I’ve talked to people who’ve traded off these charts their entire career. They see patterns and pick out things immediately at very different scales: individual outliers, columns of a particular letter, the shape of a distribution, and so on. Much like an expert chess player, these market participants have learned these charts, know how to interpret them, and use them to make trading decisions.

To be fair, not everyone in the visualization community is shocked: some are genuinely curious. Instead of reducing visualizations down to just one or two attributes, here’s something heavily loaded with a lot of visual attributes. And it’s not a static poster where you have no interaction: these are on computer screens packed with interactive features. In spite of all the computational ability to filter and reduce, here’s a community that still relies on these densely-packed charts. People are actually using them to see macro patterns (shapes of distributions) and micro readings (individual characters), but they are also able to attend to intermediate patterns such as particular letters within a distribution. Perhaps they aren’t seeing patterns as fast as preattentive recognition, but they are still seeing patterns quickly with this external cognitive aid. There’s still more that the visualization community needs to understand about expert users.

Contributor(s)

Richard Brath

Richard Brath is a long-time innovator in data visualization in capital markets at Uncharted Software. His firm has provided new visualizations to hundreds of thousands of financial users, in commercial market data systems, in-house buy-side portals, exchanges, regulators and independent investment research...

NO TIME FOR SKEPTICISM

As stocks remain comfortably near yet another record high, we find it interesting just how suspiciously the market is still viewed by so many.  Their caution and anxiety are based on a variety of factors, including company valuations, bullish sentiment readings, political uncertainty and the record high prices themselves.  However, as we’ve explained for some time through a variety of approaches, this is no time for skepticism.  To further strengthen our bullish stance, we’ll review the findings of one of our key market breadth measures and its trends in order to lend perspective and allay some of those investor fears.

One of the indicators Lowry Research monitors is the Percent of Operating Company Only (OCO) Stocks 20% or More Below 52-Week Highs.  The concept is to gain a greater understanding of trends in the percentage of common stocks in their own individual bear markets (traditionally defined as down more than 20%) as a gauge of overall market health.  We use these indicators much as an economist might track trends in the home foreclosure rate as an indicator of the housing market’s relative and absolute strength within the cycle.
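For readers who want to experiment with the concept on their own data, here is a minimal sketch of how such a breadth measure might be computed from daily closing prices. The 252-day window, column layout, and function name are illustrative assumptions; this is a generic version of the idea, not Lowry’s proprietary OCO calculation.

```python
import pandas as pd

def pct_stocks_20pct_below_high(closes: pd.DataFrame) -> pd.Series:
    """Percent of stocks trading 20% or more below their 52-week highs.

    `closes` is assumed to be daily closing prices, one column per stock.
    A generic sketch of the breadth concept, not Lowry's OCO measure.
    """
    rolling_high = closes.rolling(window=252, min_periods=252).max()
    drawdown_from_high = closes / rolling_high - 1.0
    in_own_bear_market = drawdown_from_high <= -0.20
    return 100.0 * in_own_bear_market.mean(axis=1)

# Hypothetical usage with a DataFrame of closes named `prices`:
# breadth = pct_stocks_20pct_below_high(prices)
# breadth.plot(title="% of stocks 20% or more below 52-week highs")
```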

In the example in the chart below, we’ve used the Segmented Percent of OCO Stocks 20% or More Below 52-Week Highs to gain another level of understanding of market health by differentiating between market cap sizes. 

We see that starting in April 2015, for the Small- and Mid-Caps, and late May 2015 for the Large-Caps, the % of OCO Stocks 20% or More Below 52-week Highs began to trend higher.  This followed a long period of declines in this measure and came at a time when stock price indexes were still making new record highs.  However, for those paying attention, under the surface of price we saw continued deterioration in breadth, with more stocks in bear markets.  This trend continued until peaking in February 2016.  Just as Lowry seeks trends of deterioration, we observed a confirmed trend of improvement by early to mid-March 2016, when the uptrend lines were broken, as identified by the green arrows.

The trend of improvement in the % of OCO Stocks Down 20% or More Below 52-Week Highs has continued from March 2016 to this day.  And, until this trend reverses in a sustained way along with other cues, the market likely still has room to run.  Simply put, the current conditions of investor Demand and market breadth are not historically congruent with how bear markets, or even appreciable market corrections like the 2015 correction, are born.

Contributor(s)

Vincent M. Randazzo, CMT

Vincent Randazzo, is the Head of Technical Research at CFRA and Chief Market Strategist at Lowry Research, a CFRA business. Vincent produces written and recorded stock market research based on Lowry’s proprietary, statistically driven measures of equity market demand/supply and breadth. Prior...

HOW DO STOP-LOSS ORDERS AFFECT TRADING STRATEGY PERFORMANCE?

Editor’s note: Tucker Balch is among the presenters at the Annual Symposium in April. Below is a reprint of his work from his web site at AugmentedTrader.com.

“A stop order is an order placed with a broker to sell a security when it reaches a certain price. A stop-loss order is designed to limit an investor’s loss on a position in a security” —investopedia.

In this article, we investigate how the addition of stop-loss orders affects a generic trading strategy.

When investors enter a new position in a stock, they often simultaneously put in an order to exit that position if the price dips to a certain level.  The intent is to prevent a substantial loss on that stock if a significant, unanticipated negative event occurs.

As an example, if we bought a fictitious stock XYZ at $100.00, we might put in a 5% stop-loss order (at $95.00).  If the price of XYZ continues upward as we hope, we accrue additional value, but if the price suddenly drops 15% to $85.00 we’d exit with a loss of only 5%. So a stop-loss order limits downside risk while enabling upside gains.

In many cases this plan works as intended.

Sounds great, how can it go wrong?

The price of AAPL between May 4 and September 12, 2012. The stock accumulated 12.85% in value over that time.

A key problem with stop-loss orders is that the price might dip before it goes up more significantly.  Consider the chart at right of AAPL’s price during a few months in 2012. If we purchased AAPL at $86.14 on April 27 (all the way to the left) and simultaneously put in a 5% stop-loss order, we’d exit on or about May 4th with a 5% loss.

On the other hand, if we avoided the stop-loss order and held AAPL until September 12, we would have made 12.85% instead of losing 5%. In this case, the stop-loss order effectively cost us 17.85%.  This sort of outcome is more likely with volatile stocks because they’re more likely to bounce around and “tag” the stop-loss price along their way to a higher price.

There are other risks as well.  As the price is on its way down and the stop-loss order is triggered, it is not necessarily the case that you’ll get the price you wanted.  The price may continue past the stop-loss level to a significantly lower price before your order executes. There are more complexities with stop-loss exits that we could go into, but suffice it to say that there’s no guarantee you’ll get the stop-loss price you set.

We can also enter stop-gain orders

There’s another sort of order that is symmetric to the stop-loss.  Stop-gain orders enable you to “lock in” gains when the price reaches a certain target level.  The idea is that once the stock meets your target price, you take the profits and avoid the risk of the stock later losing value.

An experiment: How stop-loss and stop-gain orders affect strategy performance

In order to evaluate the utility of stop-loss and stop-gain orders we created a notional strategy. We then tested it first with no stop orders, and then with stop orders at different levels.  Our strategy works as follows:

  • Each month, compose a portfolio of the Dow Jones Industrial 30 stocks
  • Require a minimum 2% allocation to each stock
  • Optimize the remaining funds to stocks for maximum Sharpe Ratio
  • Enter individual stop orders (if any) for each position
  • Exit positions as appropriate over the next month

Note that for this experiment we utilized the members of the Dow as of today (January 2016) over the entire simulation, so our backtests are not survivor bias free. That doesn’t matter much though because what we’re investigating here is how stop orders affect the strategy in a relative manner.
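As a rough illustration of the stop mechanics used in each monthly cycle, here is a simplified, hypothetical sketch of a single position’s exit logic. The function name, fill assumptions (exits at the exact stop level, ignoring gaps and slippage), and example prices are ours, not Lucena’s actual backtest code.

```python
import numpy as np

def position_return_with_stops(month_prices, entry_price,
                               stop_loss=0.05, stop_gain=None):
    """Realized return of one position over one month: exit on the first
    day the stop-loss (or optional symmetric stop-gain) level is touched,
    otherwise hold to month end.  Fills are assumed at the stop level
    itself, which ignores gaps and slippage."""
    lower = entry_price * (1.0 - stop_loss)
    upper = entry_price * (1.0 + stop_gain) if stop_gain else np.inf
    for price in month_prices:
        if price <= lower:
            return -stop_loss
        if price >= upper:
            return stop_gain
    return month_prices[-1] / entry_price - 1.0

# Hypothetical month: the stock dips through the 5% stop before rallying.
prices = np.array([100.0, 97.0, 94.5, 99.0, 104.0])
print(position_return_with_stops(prices, entry_price=100.0))  # -0.05
```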

The performance of our baseline strategy from January 2001 to early 2016 is illustrated below:

Figure 1: Our baseline strategy of investing in the Dow 30, and optimizing each month for maximum Sharpe (orange). Performance of the Dow itself is illustrated in blue. This baseline strategy does not include any stop-loss or stop-gain orders.

As you can see, this strategy provides great cumulative returns (410%), but it is more volatile than the Dow and also subject to significant drawdown (-55%).  Can stop-loss orders help?  Let’s see.  Here’s a chart of the same strategy, but now with a 5% stop-loss order applied at the beginning of each monthly trading cycle:

Figure 2: Same strategy, but with a 5% stop-loss order entered for each stock at the beginning of each month. The addition of stop-loss significantly reduces cumulative return.

As you can see, the addition of the 5% stop-loss significantly reduces cumulative return (from 410% to 101%).  There are some benefits, however: it limits drawdown to only -40.31%, and it reduces daily volatility by about 40%.

It might be that 5% is too small, causing us to exit too early.  So, let’s try more values.

Results varying stop-loss and stop-gain settings

We repeated our experiment while varying the stop-loss value from 1% to 20%.  We also tested the strategy with symmetric stop-gain orders (in other words, if we have a 2% stop loss, we also add a 2% stop gain).  While varying the stop-loss we measured: Cumulative return, Sharpe Ratio, and max drawdown.  Let’s look at each of these metrics separately:

Figure 3: The cumulative return of our strategy varies as we change stop-loss from 1% to 20% (blue), stop-loss with stop-gain (red), and no stop-loss (green).

In the figure at right you can see that as we increase the stop-loss level from 1% to 20%, cumulative returns increase significantly (blue line). We see very similar results when we additionally add stop-gain orders (red). The highest return is provided when no stop-loss orders are applied at all (green line).

A reasonable conclusion to draw here is that to maximize cumulative return it is best not to exit with stop-loss or stop-gain orders.  That approach, however, does expose the strategy to drawdown risk.  So let’s take a look at drawdown.

Figure 4: How max drawdown is affected by various stop-loss levels (blue), and additionally with stop-gain (red). Drawdown with no stop orders is shown in green.

Drawdown is a measure of peak-to-trough loss (remember the -55% drawdown during the Great Recession?).  Smaller negative numbers are better.  Figure 4 illustrates how drawdown is affected by increasing stop-loss order levels.  As you can see, drawdown increases significantly as we increase the stop-loss (and corresponding stop-loss/stop-gain pairs).  So stop-loss orders do serve their intended purpose of protecting against significant drawdown.  This protection, though, comes at the price of overall returns.

Clearly there is a tension between the protection afforded by stop-loss orders and the potential return for our strategy without them.  We can look at one more metric to seek some resolution.

Figure 5: How Sharpe Ratio varies as we increase stop-loss size.

Sharpe ratio is a measure of risk-adjusted return.  It considers volatility of the portfolio as well as return.  Higher Sharpe ratios are better.  Figure 5 shows us how Sharpe Ratio is affected as stop-loss size is increased. Notice that with stop-loss only (blue), Sharpe Ratio is fairly constant at about 0.50.  So changing the stop-loss level has little effect.  However, when we add stop-gain as well, we see especially poor Sharpe ratios at low stop-gain levels from 1% to 6% (red).  As we increase stop-loss/stop-gain past 8%, though, we see a fairly constant Sharpe Ratio.

Some take home conclusions

Remember that the results here are for a particular strategy and that they may not necessarily generalize to your trading strategy.  Given that caveat, here are some of the conclusions we can draw from our experiments using stop-loss orders with this strategy:

  • Stop-loss orders do effectively protect against drawdown, but at the cost of cumulative return.
  • The combination of stop-loss with stop-gain orders is more effective at limiting drawdown than stop-loss orders only.
  • Low stop-gain orders (1% to 5%) significantly negatively impact cumulative return and Sharpe ratio.
  • For this strategy, the “sweet spot” where Sharpe ratio is maximized and drawdown is somewhat limited seems to be in the 8% to 10% range with symmetric stop-loss and stop-gain orders.

Note again, that these conclusions are specific to this strategy.  Please do not consider this to be investment or trading advice.

This information has been prepared by Lucena Research Inc. and is intended for informational purposes only. This information should not be construed as investment, legal and/or tax advice. Additionally, this content is not intended as an offer to sell or a solicitation of any investment product or service. Please note: Lucena is a technology company and not a certified investment advisor. Do not take the opinions expressed explicitly or implicitly in this communication as investment advice. The opinions expressed are of the author and are based on statistical forecasting based on historical data analysis. Past performance does not guarantee future success. In addition, the assumptions and the historical data based on which an opinion is made could be faulty. All results and analyses expressed are hypothetical and are NOT guaranteed. All Trading involves substantial risk. Leverage Trading has large potential reward but also large potential risk. Never trade with money you cannot afford to lose. If you are neither a registered nor a certified investment professional this information is not intended for you. Please consult a registered or a certified investment advisor before risking any capital.

Contributor(s)

Tucker Balch, Ph.D.

Tucker Balch, Ph.D. is a former F-15 pilot, professor at Georgia Tech, and co-founder and CTO of Lucena Research, an investment software startup. His research focuses on topics that range from understanding social animal behavior to the challenges of applying Machine Learning...

Anderson Trimm, Ph.D.

MIFID II SOLUTIONS

Editor’s note: The Markets in Financial Instruments Directive (MiFID) is the EU legislation that regulates firms who provide services to clients linked to ‘financial instruments’ (shares, bonds, units in collective investment schemes and derivatives), and the venues where those instruments are traded. MiFID will result in significant changes for the research community. IHS Markit has prepared a white paper explaining the requirements of MiFID and solutions firms can consider to meet those requirements. MiFID II Solutions can be downloaded from their web site for free. Below are extracts from the paper highlighting the new requirements.

A wide-ranging piece of legislation, MiFID II aims to create fairer, safer and more efficient markets through improving investor protection, increasing transparency in OTC markets and changing market structure to encourage more competition.

Taken together, the measures of MiFID II affect every part of the securities trading value chain.

Investor Protection

Under the reforms, new legislation establishes strict rules around conflicts of interest, commissions and inducements to improve investor protection by increasing transparency around the use of client money to pay for research.

Historically, research payments have been linked to trading volumes with few firms using formal research budgets. Unbundling the payment for research from the payment for execution has been acknowledged as one way to address these potential conflicts.

New requirements state that research is not considered an inducement if the Investment Firm

(IF) pays directly out of their P&L or from a research payment account (RPA).

Transparency

MiFID II expands pre-trade and post-trade transparency regimes to equity-like instruments, bonds, derivatives and structured products, among other financial instruments. For OTC derivatives, there are two layers of trade reporting to enhance price transparency and help regulators monitor risk and market activity.

Post-trade public reporting: MiFIR real-time public reporting is required to be sent to an Approved Publication Arrangement (APA). Post-trade transaction reporting: MiFIR transaction reporting is required to be sent to an Approved Reporting Mechanism (ARM) by T+1, and transaction reports must include transaction data, legal entity data and personal data about the trader (non-public personal information, NPPI).

Market Structure

MiFID II aims to increase competition in OTC derivatives markets through mandating the use of electronic trading venues for certain instruments.

Trading requirement: MiFIR requires trading of certain liquid instruments on a trading venue: a Multilateral Trading Facility (MTF) or Organised Trading Facility (OTF).

Repapering: MiFID II will require firms to establish and implement an order execution policy. This will mean that some firms will need to undertake a substantial repapering process when updating their terms of business and obtaining consent from clients. Additionally, firms will need to confirm legal entity identifiers (LEIs) and reaffirm client categorizations as professional or retail.

Planning ahead and working to comply

Despite MiFID II being delayed to 2018, the urgency around implementing solutions to comply with the new requirements has not subsided.

These and other regulatory pressures are mounting across the major regions, and as electronification and market structure changes expand globally, firms have to commit to building a basic foundation to ensure regulatory compliance.

Looking beyond simple compliance, forward-looking buy-side and sell-side firms are exploring future-proof frameworks as a potential competitive differentiator. The tools that were once ‘nice to have’ are becoming ‘must have’ as change is afoot.

To learn more about MiFID and IHS Markit’s solutions, please visit their web site.

Contributor(s)

IHS MARKIT

VISUALIZING THE ANXIETY OF ACTIVE STRATEGIES

Editor’s note: Corey Hoffstein is among the presenters at the Annual Symposium in April. This post was originally published at ThinkNewfound.com and is available as a PDF here.

Summary

  • Prospect theory states that the pain of losses exceeds the pleasure of equivalent gains. An oft-quoted ratio for this pain-to-pleasure experience is 2-to-1.
  • Evidence suggests a similar emotional experience is true for relative performance when investors compare their performance to common reference benchmarks.
  • The anxiety of underperforming can cause investors to abandon approaches before they benefit from the long-term outperformance opportunity.
  • We plot the “emotional” experience investors might have based upon the active approach they are employing as well as the frequency with which they review results. The more volatile the approach, the greater the emotional drag.
  • Not surprisingly, diversifying across multiple active approaches can help significantly reduce anxiety.

Last week, Longboard Asset Management published a blog post titled A Watched Portfolio Never Performs. What we particularly enjoyed about this post was a graphic found in the middle, which applied prospect theory to demonstrate actual results versus perceived investor results based upon emotional experience.

In prospect theory, investors tend to feel the pain of losses more than the pleasure of equivalent gains.  Investors that check their portfolio more frequently compound those negative emotions faster than those that check less frequently. As a result, they may perceive their experience as being riskier than it really is.

This is made worse by the fact that investors that check their portfolios more frequently are mathematically more likely to see periods of losses than those that check less frequently.

When prospect theory and mathematics are tied together, we get the following result:

Source: Longboard Asset Management. http://www.longboardfunds.com/articles/watched-portfolio-never-performs

While in actuality, the investors checking their portfolios daily, weekly, and monthly all had the same long-term performance result (assuming, of course, they were able to stick with their investment), the anxiety caused by checking performance more frequently caused the daily investor to feel like their long-term performance was much worse than it really was.

While prospect theory is most often applied to absolute gains and losses, we believe it also applies to relative portfolio performance.  Investors constantly compare their results to standard benchmarks.

In the remainder of this commentary, we want to extend Longboard’s example to explore how typical active strategies – expressed as factor tilts – feel to investors based upon how frequently they evaluate their portfolio.

Methodology & Data

To explore the idea of anxiety caused by relative performance in active strategies, we will look at the performance of long/short factor portfolios.

The idea here is that a long-only factor portfolio (e.g. a long-only value portfolio) can be made by overlaying a market portfolio with a long/short value portfolio.  Therefore, relative performance to the benchmark will be governed entirely by the size of the long/short portfolio overlay.

There are a variety of reasons why this framework is not true in practice, but we feel it adequately captures the concept we are looking to explore in this commentary.

The long/short factor portfolios we employ come from AQR’s factor library.  Specifically, we leverage their Size (“SMB”), Value (“HML Devil”), Momentum (“UMD”), Quality (“QMJ”), and anti-beta (“BAB”) factor data.

Factor portfolio returns are only available on a monthly basis, so we will recreate the above Longboard graphic for investors that review their portfolio on a monthly, quarterly, and annual basis. Using monthly data allows us to go back as far as 1927 to evaluate performance for several factors.

To create “experience” returns, the return of the long/short portfolio is calculated over the investor’s evaluation period.  If the return over the period is negative, then the loss is doubled, to account for the fact that investors are reported to experience the pain of a loss twice as much as the pleasure of an equivalent gain.
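As a quick illustration of this transformation, here is a small, hypothetical sketch in code. The function and variable names are ours; the doubling of losses simply follows the 2-to-1 pain-to-pleasure convention stated above.

```python
import numpy as np

def experience_returns(period_returns, pain_multiplier=2.0):
    """Convert actual evaluation-period returns into 'experienced' returns:
    losses are doubled to reflect prospect theory's asymmetric response
    to losses versus equivalent gains."""
    r = np.asarray(period_returns, dtype=float)
    return np.where(r < 0, pain_multiplier * r, r)

# Hypothetical relative returns of +5%, -3%, +4% feel like +5%, -6%, +4%,
# so the compounded "emotional" path lags the actual path.
actual = np.array([0.05, -0.03, 0.04])
felt = experience_returns(actual)
print(np.cumprod(1 + actual) - 1)   # actual cumulative relative performance
print(np.cumprod(1 + felt) - 1)     # perceived (emotional) performance
```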

Size Factor

The size factor is the relative performance between small capitalization stocks and large capitalization stocks, with the idea being that small should outperform large over the long run.

Source: AQR. Calculations performed by Newfound Research.  Past performance is not a guarantee of future results.

What we can see is that while size has been a positive premium over the long run, even investors that only evaluate their portfolios on an annual basis have had a negative emotional experience.

Due to the asymmetric response to gains versus losses, we can see the pain of “volatility drag” in periods like the 1950s, where the size factor was largely flat in return, but the experience for investors was largely negative.

Value Factor

The value factor captures the relative performance of cheap stocks versus expensive ones.  Our anecdotal experience is that this is, by far and away, the most actively employed portfolio tilt for investors.

Source: AQR. Calculations performed by Newfound Research.  Past performance is not a guarantee of future results.

Unlike the size premium, we see that the long-term performance of the value factor is strong enough, and the historical frequency of underperformance limited enough, that an investor who checks their relative performance annually will feel like they ultimately ended up in the same place as the broad market.

At first review, this may seem disheartening.  After all, over the long run value has delivered significant outperformance.

However, what this tells us is that for investors that review their portfolios at most annually, a value tilt can be employed without creating too much long-term relative anxiety.  The investor will still feel like they are keeping up with the market benchmark, despite the emotional drags of prospect theory, and can in reality harvest long-term outperformance opportunities.

Momentum Factor

The momentum factor captures the relative performance of prior winners versus prior losers: investing in those stocks that have relatively outperformed their peers and shorting those that have underperformed.

Source: AQR.  Calculations performed by Newfound Research.  Past performance is not a guarantee of future results.

While the value factor ended up nearly in the same place as the market for annual reviewers, the momentum factor ends up significantly positive.

Furthermore, the consistency of the momentum factor is so strong from the 1940s to 2009 that even a monthly reviewer feels like they are treading water.

The trade-off appears in the dreaded momentum crashes (e.g. 1932 and 2009) when winners dramatically underperform losers.  The crashes have historically tended to occur during strong market rebounds.  From an emotional experience, this might as well be the apocalypse.

Even for an annual reviewer, we see that the emotional drawdown from 3/2009 to 11/2009 is almost 80%.

Quality Factor

The quality factor captures the relative performance of “high quality” stocks versus “junk stocks,” as measured by a variety of financial and performance metrics.

Source: AQR.  Calculations performed by Newfound Research.  Past performance is not a guarantee of future results.

While the absolute return of the quality factor is nowhere near the absolute return of the momentum factor (over the same period, momentum returned nearly 90x while quality returned nearly 10x), it is one of the few factors where a quarterly reviewer has close to a net neutral emotional experience.  This is likely due to the factor’s low volatility, which reduces the emotional drag caused by investors’ asymmetric response to positive and negative returns.

Anti-Beta (“Low Volatility”) Factor

Anti-beta (often referred to as “low volatility”) captures the relative outperformance of lower beta stocks versus higher beta stocks.  Beta, in this case, is a measure of sensitivity to the overall market.  It quantifies a stock’s exposure to systematic market risk.

Source: AQR.  Calculations performed by Newfound Research.  Past performance is not a guarantee of future results.

Anti-beta has the distinction of being the only factor where even a quarterly reviewer has had a net positive experience.

This is due to two effects: a strong absolute return level (with the actual performance trumping even the momentum factor) and limited drag from volatility (as can be seen by how closely the annual review tracks the actual performance from 1945 to 1998).

Conclusion

At Newfound, we often say that the optimal portfolio is first and foremost the one investors can stick with.  All too often, when it comes to active investing, we see investors go all in on a given approach without considering the emotional anxiety caused by relative underperformance.

The ability and discipline to stick with a strategy is just as important as the strategy itself when it comes to unlocking the potential of evidence-based active strategies.

What we find is that for each active approach, the strength of the anomaly versus its volatility and the frequency with which performance is reviewed will ultimately dictate the investor’s emotional experience.  Less volatile premia may cause less of an emotional drag.

Yet perhaps the most powerful takeaway can be found in the following graph.

Source: AQR.  Calculations performed by Newfound Research.  Past performance is not a guarantee of future results.

In the above chart, we construct a portfolio that holds an equal amount of each of the five factors, rebalanced monthly.

Not surprisingly, the benefits of diversification are so powerful that even an investor that evaluates their relative performance on a monthly basis is left with a positive emotional experience.

Once again, we find that diversification is hard to beat.

Contributor(s)

Corey Hoffstein

Corey Hoffstein is co-founder and Chief Investment Officer of Newfound Research. Investing at the intersection of quantitative and behavioral finance, Newfound Research is dedicated to helping clients achieve their long-term goals with research-driven, quantitatively-managed portfolios, while simultaneously acknowledging that the quality of...

WHY MULTIPLY BY SQRT(252) TO COMPUTE THE SHARPE RATIO?

Editor’s note: this article was originally posted at AugmentedTrader.com.

This question comes up every time I teach Computational Investing.  Here’s my attempt to create the best, (final?) answer to this question.

In my courses I give the students the following equation to use when computing the Sharpe Ratio of a portfolio:

Sharpe Ratio = K * (average return – risk free rate) / standard deviation of return

Controversy emerges around the value of K. As originally formulated, the Sharpe Ratio is an annual value.  We use K as a scaling factor to adjust for the cases when our data is sampled more frequently than annually.  So, K = SQRT(12) if we sample monthly, or K = SQRT(252) if we sample the portfolio on every trading day.
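As a concrete illustration, here is a minimal sketch of the computation with K = SQRT(252) for daily data. The synthetic returns, the zero risk-free rate, and the function name are assumptions for illustration only.

```python
import numpy as np

def sharpe_ratio(period_returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio: K * mean(excess return) / std(return),
    with K = sqrt(number of samples per year)."""
    excess = np.asarray(period_returns) - risk_free
    k = np.sqrt(periods_per_year)
    return k * excess.mean() / excess.std(ddof=1)

# Hypothetical daily portfolio returns: ~8 bps/day mean, 1% daily volatility.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0008, 0.01, size=252)
print(sharpe_ratio(daily_returns))   # annualized using K = sqrt(252)
```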

How did we come up with these values for K?  Are they correct?  Let’s start with the original 1994 paper by William Sharpe.  Here’s how he defines his ratio: for a time period t, the differential return Dt is the return on the fund minus the return on a benchmark over that period.

Dt = Rft – Rbt

We want to assess the ratio over many periods, say t = 1 to T.  Note that these periods could be years, months, days, etc.  Now let’s define two factors:

Davg = The mean value of Dt for t = 1 to T
Dstdev = The standard deviation of Dt for t = 1 to T

Using those two factors, Sharpe defines his ratio as

Sharpe Ratio = Davg / Dstdev

That’s it.  Note that there is no “K” involved in this equation; it is just the ratio of those two numbers.  As long as we’re comparing results for two funds sampled at the same frequency (say, annually) the comparison is valid. Sharpe points out that when comparing the ratio for cases where the measurement frequencies do not match, there will be problems.  He does not seek to address that problem in his paper.

Here’s where “K” comes in: Suppose we’re interested in comparing the performance of two funds, one for which we have monthly data and another for which we have daily data.  The introduction of K enables us to appropriately scale the results according to the measurement frequency.  Our formula for this approximation is

K = SQRT(number of samples per year)

This will scale Sharpe Ratios for the various funds as if they were sampled annually. Unfortunately, if you dig more deeply into the math you will discover a flaw.  Namely that if you take a single portfolio value time series and compute the Sharpe Ratio for it using different sample periods, say, weekly, monthly and annually, the resulting computed Sharpe Ratios are not guaranteed to be related exactly as predicted by our K.

There is no simple way to find a conversion factor that will solve this correctly.  K is just an approximation that works pretty well.

Why?  The reason is that Sharpe uses the arithmetic mean in his ratio.  In order for the “K Method” to work precisely it must be the case that annual return = 12 x average monthly return.  But it’s not.  One way to solve the problem is to reformulate Sharpe’s original equation in terms of log returns.  It is then feasible to work out the relationships in a consistent way.  This is the reason why many analysts use log returns in their work. But if we used log returns, we wouldn’t be using the Sharpe Ratio.
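A tiny numeric illustration of this last point may help; the numbers below are made up. Simple returns do not sum to the true multi-period return, but log returns do, which is why a time-scaling factor can be applied to them consistently.

```python
import numpy as np

# Two hypothetical monthly simple returns.
monthly = np.array([0.10, -0.05])

# With simple returns, the true two-month return is NOT the sum of the parts.
true_total = np.prod(1 + monthly) - 1      # 0.0450
sum_of_parts = monthly.sum()               # 0.0500

# With log returns, periods add exactly, so aggregation across
# sampling frequencies is consistent.
log_monthly = np.log(1 + monthly)
print(true_total, sum_of_parts)                    # 0.045 vs 0.05
print(np.log(1 + true_total), log_monthly.sum())   # identical values
```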

Contributor(s)

Tucker Balch, Ph.D.

Tucker Balch, Ph.D. is a former F-15 pilot, professor at Georgia Tech, and co-founder and CTO of Lucena Research, an investment software startup. His research focuses on topics that range from understanding social animal behavior to the challenges of applying Machine Learning...

THE TOP 5 INVESTOR BIASES

Editor’s note: This was originally published at EducatedTrader.com, the website of the Independent Investor Institute, an organization dedicated to providing unbiased education to Canadian investors that Larry M. Berman, CMT, CTA, CFA cofounded. Larry is among the presenters at the Annual Symposium in April.

If someone were to pick you up in a helicopter, put blinders over your eyes, then drop you into the middle of a jungle, it’s likely you’d have a tough time lasting for any length of time. Undoubtedly, it’s hard enough surviving in the jungle, let alone with blinders on. The stock market is a lot like the jungle—it’s a dangerous place for those who don’t know what they’re doing, or even those who think they know what they’re doing. It isn’t a far stretch to see how biases (a.k.a. “blinders”) can compound the challenges posed by the wilderness of the markets.

When it comes to investing, we all have the same biases. This is because our brains have all evolved the same way. The trick is to be able to recognize our biases in our decision-making processes and work on countering them with the right behaviors.

Bias #1: I Know Enough, Therefore I Know Better

We all like to think that we are not as influenced by biases as other people are, which is our first and biggest investing mistake. Even professionals in the financial industry have biases that lead them to make less-than-optimal decisions. In fact, the more we know about a subject, the more confident we are that our forecasts will be correct. The reality is that information quantity is no match for quality; it’s not about how much you know, but what you do with the information you have.

So let’s get one thing straight: you know less than you think you do when it comes to investing, and that’s not a bad thing. You will never know all there is to know about the markets, and you will never gather enough information to give you certainty that an investment will perform the way you want it to. What really matters is separating the facts from the stories. Check your sources when you research an investment, and make sure they’re credible and reliable. Also, don’t take information at face value. Instead, think carefully about how it was presented to you.

Bias #2: I See What I Want to See

Another big problem of ours as humans is that we tend to seek out information that confirms our beliefs rather than challenges them. It makes us feel good to listen to people who share our views, which means that we are likely to dismiss negative information on an investment that we favor. What’s even more interesting is that we tend to view information that contradicts our beliefs as biased itself.

To counter this thought process, we need to constantly seek out information and people that disagree with us. This is not because we want them to try and change our minds, but because we have to be able to understand and deconstruct the logic of the argument. If we can’t see the argument’s flaws, we should seriously reconsider our viewpoint.

Bias #3: Numbers are My Anchor

Anchoring is a term used to describe the tendency for us to stick closely to numbers that are presented to us. The most common example of this is anchoring to share prices. When we see a share price of, say $10, we tend to immediately believe that it reflects the underlying value of the company. Because today’s markets are highly liquid, company values don’t tend to stray significantly from their share prices. However, it is nevertheless important to come to your own conclusions about the value of a company. If there is a significant deviation between your evaluation and the share price, you may have a trading opportunity on your hands.

Bias #4: Good Performance Follows Good Performance

You’ve likely heard the old adage, “past performance is not an indicator of future performance.” So why do so many investors—even analysts—make the mistake of assuming that a good company with solid earnings over the past several years will continue to perform well? This belief stems from a type of bias known as representativeness, which is a phenomenon where we use a company’s past and current performance to predict its future likelihood of success. However, this bias can lead you down the wrong path, because future performance relies heavily on events and circumstances that will likely be quite different than those that exist today.

The key here is to determine a company’s competitive advantage, which is the strongest predictor of future success. Most companies out there are or will eventually become quite average, and over time, their performance will revert to the average. So you need to answer the question, “what qualities does the company have today that substantially distinguish it from its competitors so that it will continue to perform well in the future?”

Bias #5: A Loss Isn’t a Loss Until I Take It

The tendency to hold on to investments as they drop in price is all too common. So why do we cling to losers? There are several biases at play here. First, we tend to value the things we own more than the things that we don’t. Whether it’s a coffee mug or 1000 shares of our favorite company, we prefer to sell the things we own for more than many people are willing to pay for them.

The second and biggest reason we don’t sell investments as quickly as we should is because of our aversion to taking losses. In general, we dislike incurring losses about 2.5 times more than we like making gains. We therefore tend to keep our losers longer than we should, and cut our winners sooner than we should. We rationalize the decision to keep declining investments by telling ourselves that the price will bounce back. Unfortunately, they tend to underperform the winners we have already sold.

The trick to avoiding this mental trap is to set up an investment strategy that requires you to buy and sell at pre-determined prices. To augment this strategy, you can “taper in” and “taper out” of investments, depending on their price movements. For example, if you plan to buy 1000 shares of a stock, start by purchasing 500 shares, then as the price moves in the direction you want it to go, buy 25% more. Continue to do so until you’ve reached your maximum number of shares. This same strategy applies to selling stocks.

Easy to Learn, Hard to Do

Although it is easy to understand our biases, it is much more difficult to know when they are influencing our decisions. This is why planning your investment strategy is so important. Planning allows you to predict scenarios before they arise and work out how to properly handle them. Don’t underestimate your brain’s ability to trip you up. If it happens to professional money managers and analysts, it will happen to you.

Contributor(s)

THE JANUS FACTOR

Editor’s note: Gary Anderson is among the presenters at the Annual Symposium in April. Below is a reprint of his Charles H. Dow Award winning paper.

Traders alternate between two modes. At times traders exhibit trend-following behavior. Relatively strong stocks are favored, while laggards are sold or ignored. At other times, the reverse is true. Traders-in-the-aggregate turn contrarian. Profits are taken in stocks that have been strong, and proceeds are redirected into relative-strength laggards. This paper presents the market as a system of capital flows reducible to the effects of traders’ Janus-like behavior.

Arriving at a systematic view of a process may begin with a series of inferences or with one or two analogical leaps. Every model is ultimately the expression of one thing we hope to understand in terms of other things we do understand, and analogies, like pictures, are useful devices that simplify and clarify, particularly early on. In the end, understanding must be grounded on primitive notions, each of which pictures some part of the whole and which we agree to accept on intuitive merit.

As a foundation for method, two pictures are offered. First, we will look at feedback loops. Next, I will introduce a new approach to relative-strength. Then, the concepts of feedback and relative-strength will be fused to portray the market as a system of capital flows.

But the market is a hard taskmaster and demands that insights provided by analogical thinking be translated into explicit method. So, finally, I will offer two demonstrations of the power of the methods outlined in this paper.

Feedback Loops

Feedback is commonplace. Businesses routinely solicit feedback from customers, and that information is returned to the marketplace in the form of improved products and services. The best companies seek feedback continuously, and in the process, convert information into long-term success. To a large extent such feedback determines winners and losers and, more generally, helps move the economy forward. In a free-market society feedback is pervasive, so it should come as no surprise that feedback is at work in the equities market as well.

There are two sorts of feedback–positive and negative.

A common example of positive feedback is the audio screech that occurs when a microphone gets too close to a speaker. Sound from the speaker is picked up by the microphone, then amplified and sent back through the speaker. Sound continues to loop through the system, and with each pass the volume increases until the limit of the amplifier is reached. All of this happens quickly, and the result is both loud and annoying.

Another, less common example of positive feedback is the nuclear “chain reaction”, in which particles released from one area of nuclear material release a greater number of particles from areas nearby. The process accelerates rapidly until the whole mass is involved. The result is explosive.

A spreading fire is another example. A discarded match ignites the carpet. The fire spreads to the curtain, then up the wall. Quickly the whole room is in flames, and soon the entire house is burning.

In each of these cases an accelerating trend continues until some limit of the system is reached. The amplifier peaks out, the nuclear material is spent, or all nearby fuel in the house is burned up. Positive-feedback systems exhibit accelerating trends.

A good example of negative feedback is the thermostat, which cools a room as ambient temperature rises and heats as temperature falls. The thermostat stabilizes room temperature within a comfortable zone. Another example of negative feedback is the engine governor, commonly used to stabilize the output of industrial engines.

An interesting example of negative feedback is the predator-prey relationship. An increase in the predator population tends to put pressure on the prey population. However, a fall in the number of available prey reduces the number of predators who may feed successfully, and so the predator population declines. A decline in predators, in turn, boosts the prey population, and so on. The interaction of predator and prey tends to stabilize both populations. Negative-feedback systems are stable systems, with values fluctuating within a narrow range.

Feedback in the Market

When traders respond to market events, they are closing a feedback loop. The actions of individual traders collect to produce changes in the market, and those actions prompt a collective response. Sometimes traders’ aggregate behavior is amplified through positive feedback. In the case of positive feedback during a rising market, rising prices trigger net buying on the part of the aggregate trader. Net buying lifts prices, and higher prices, in turn, generate more buying. An accelerating advance results. Positive feedback in a falling market, on the other hand, develops when lower prices induce traders to sell. Net selling pushes prices down, and lower prices, in turn, encourage additional net selling, and so on, producing an accelerating decline. Positive feedback, when it occurs, produces a trend. Traders’ aggregate behavior during these periods may be characterized as ‘trend-following’ (see Figure 3).

At other times feedback between market inputs and traders’ aggregate response is negative. When negative feedback prevails, the composite trader reacts to rising prices by taking profits. That net selling puts pressure on prices. However, falling prices encourage traders to hunt for bargains among depressed issues. A strong bid for weakened stocks pushes prices higher again, and the cycle repeats (Figure 4).

When negative feedback drives traders’ response to price change, price action tends to be choppy or corrective. Traders’ behavior during these periods may be characterized as ‘contrarian’.

A New Model of Relative Strength

Markets are risky. And risk, everyone knows, involves loss, or the possibility of loss. The connection we all make between risk and loss is intuitive and powerful. Because the probability of equity loss is greatest when markets are falling, a stock’s ability to defend against loss is most critically tested, and therefore best measured, during periods of general market decline.

But rising markets are risky, too. Regardless of how well a stock defends against loss during falling markets, if it does not score gains as the market rises, the trader is subjected to another risk, lost opportunity. Because the probability of opportunity loss is greatest when the broad list advances, a stock’s offensive qualities are best measured when the market is rising.

Picturing Offense and Defense

Webster’s Dictionary defines a benchmark as a “standard or point of reference in measuring or judging quality, value, etc.” A benchmark may be a published market index or the average performance of a universe of targets (stocks, groups, etc) under analysis. For our purposes, two benchmarks are required, one to measure offensive performance and the other to measure defensive performance. To accomplish this, the average daily performance of a universe of stocks is separated into two sets of returns. The first set includes only those days when average performance was either positive or flat. That set of returns makes up the offensive benchmark. The defensive benchmark is built from the balance of the daily returns, those during which average performance was negative.

Each target within the universe is compared separately to both offensive and defensive benchmarks. To produce an offensive score, the sum of the offensive benchmark’s daily returns (flat-to-rising days) over some period–say, 100 days–is divided into the sum of the target’s returns for the same days and over the same period. If the result of that calculation is 110, then the target is ten percent stronger than the benchmark on those days when benchmark returns are flat-to-rising. The target has an offensive score of 110.

A similar calculation is made to determine defensive relative strength. The sum of the defensive benchmark’s returns on negative-return days is divided into the sum of the target’s returns for the same days. A result of 110 in this case indicates that the target is ten percent weaker than the defensive benchmark.
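A compact sketch of this offensive/defensive scoring might look like the following. The 100-day window and the 100-equals-benchmark scaling come from the text; the data layout, function name, and example universe are illustrative assumptions.

```python
import numpy as np

def offense_defense_scores(target_returns, universe_returns, window=100):
    """Offensive and defensive relative-strength scores, scaled so that
    100 equals the benchmark (the average stock in the universe).

    Days in the window are split by the sign of the benchmark return;
    the target's summed returns on each subset are divided by the
    benchmark's summed returns on the same days."""
    t = np.asarray(target_returns)[-window:]
    bench = np.asarray(universe_returns).mean(axis=1)[-window:]  # average stock
    up_days = bench >= 0            # flat-to-rising benchmark days
    down_days = ~up_days            # negative benchmark days
    offense = 100.0 * t[up_days].sum() / bench[up_days].sum()
    defense = 100.0 * t[down_days].sum() / bench[down_days].sum()
    return offense, defense         # e.g. (110, 95): strong offense and defense

# Hypothetical data: 250 days of daily returns for a 200-stock universe.
rng = np.random.default_rng(1)
universe = rng.normal(0.0003, 0.01, size=(250, 200))
target = 1.1 * universe[:, 0]       # a somewhat more volatile member
print(offense_defense_scores(target, universe))
```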

Offensive and defensive performances of a target are pictured graphically in Figure 5. The vertical axis displays offensive performance. The offensive benchmark is indicated by a horizontal line that divides the vertical axis equally. A score above 100 indicates that the target’s cumulative return during positive-return days exceeds the offensive benchmark’s. A weak offense under-performs the benchmark and earns a score below 100.

The horizontal axis shows defensive performance. A vertical line bisecting the matrix designates the defensive benchmark. A strong defensive score of less than 100 places the target to the left of the vertical benchmark. A weak defense generates a defensive score above 100 and locates the target to the right of the vertical benchmark.

A target in the position marked with an asterisk (Figure 5) has an offensive score of 110 and a defensive score of 95. This target has outperformed the benchmark both offensively and defensively.

The Benchmark Equivalence Line (BEL)

Notionally, there are infinite combinations of offensive and defensive performance that match the overall performance of the average stock (benchmark). These combinations range from very weak offense plus very strong defense to the other extreme of excellent offense together with very poor defense. All possible combinations of offense and defense that tie the universe’s average performance comprise the Benchmark Equivalence Line (BEL).

A target with an offensive/defensive score of, say, 110/110 has rallied ten percent more than the offensive benchmark during rising periods. The target has also fallen ten percent more than the defensive benchmark during declining periods. When offensive and defensive performances are combined, overall performance of the target matches the average performance of the universe. The target is simply more volatile than the benchmark. Similarly, a score of 90/90 matches average performance, but in this case the target is less volatile than the benchmark. The original benchmark (100/100) at all volatilities comprises the BEL. The BEL is shown in Figure 5 (above) and forms a straight line that runs diagonally through the matrix.

A target’s location anywhere northwest of the BEL indicates that combined offensive-defensive performance is better-than-benchmark, while a location to the southeast of the BEL marks worse-than-benchmark performance. The further NW of the BEL, the more a target’s performance has exceeded benchmark performance. The further to the SE, the more a target has fallen short of the benchmark.

The next chart (Figure 6) pictures a universe consisting of the Standard & Poor’s 100 plus the NASDAQ 100 as of mid-December, 1998. The market has suffered through a sharp summer decline, and confidence in the new advance is still weak. Traders are risk-averse and contrarian. Relative strength differences (NW-SE) are small and eclipsed by differences based on volatility (SW-NE). As a result, stocks hug the benchmark and arrange themselves along the BEL.

How Positive Feedback Expands the Universe

During periods of positive feedback, traders buy into strength and sell into weakness. Whether the overall market is rising or falling, capital flows from weaker to stronger issues. As the process continues, relatively strong stocks become even stronger and relatively weak stocks become still weaker. The period from December 1998 through March 2000 marks a period during which traders’ aggregate behavior was dominated by trend following. Traders engaged in a virtuous positive-feedback cycle that drove the strongest stocks to new extremes of relative strength. Laggards rallied, but not as well as the average stock, and so continued to drift below the BEL as their relative strength declined. Figure 7 shows the 200-stock universe in March 2000, near the end of that expansion phase, and pictures the flow of capital from weak targets SE of the BEL to stronger targets NW of the BEL.

When feedback is positive, capital is pumped into strong targets NW of the BEL, and so the relative strength of those targets tends to improve. As relative strength improves, strong targets migrate toward the NW. On the other hand, relatively weak targets are drained of capital and so become relatively weaker. Weak stocks move to the SE and further away from the BEL. Positive feedback in both rising and falling markets produces a northwesterly flow of capital and causes the universe to expand.

As the universe expands, the strongest stocks push well into the NW quadrant. Movement toward the NW indicates that relatively strong stocks are not only outpacing the benchmark during advances but also finding exceptional support during weak market periods. Improvements in both offensive and defensive scores provide evidence that these stocks are under active sponsorship.

How Negative Feedback Contracts the Universe

During periods of negative feedback, capital flow across the BEL is reversed. In the aggregate, traders have turned from trend-following behavior to contrarian behavior. Traders buy only once stocks are considered cheap, and profits, when they come, are taken quickly on rallies. As a result, trends are not durable, and price action is range-locked or corrective.

Driven by negative feedback, capital flows out of stronger issues NW of the BEL and into weaker stocks to the SE. Stocks that have been strong lose relative strength and fall back toward the BEL. On the other hand, stocks with a recent history of weakness, pumped by an infusion of capital, migrate in a northwesterly direction toward the BEL as relative strength improves. Negative-feedback periods produce a southeasterly flow of capital and cause the universe to contract. Figure 8 shows the universe in November 2002, near the end of a long contraction phase, and pictures the flow of capital under negative-feedback conditions.

Confidence

The current of capital alternates back and forth in a cycle repeated over and over as the universe of stocks expands then contracts. But what is it that prompts traders, as if with one mind, to push stocks to relative-strength extremes before pulling them back toward the benchmark?

It is confidence in the trend.

It takes confidence to buy into strength and to let profits ride. When traders, for whatever reasons, become confident of a bullish trend, they defer profits and chase strong stocks into new high ground. Stocks that do not participate in the trend are ignored or sold. Trends accelerate, and profits, for those trading with the trend, come easily.

On the other hand, when traders are confident of a bearish trend, the weakest stocks are liquidated or shorted aggressively, and proceeds are held in cash or shifted to stronger stocks that defend well in a falling market. Trends are durable, albeit negative, and traders willing to sell into the trend are rewarded.

In either case, confidence in the trend leads to trend-following behavior. The controlling dynamic is positive feedback. Relatively strong stocks outperform weaker issues, and the universe expands.

The dynamic is quite different once traders lose confidence in the trend. Risk-averse and contrarian, traders respond negatively to price change. Buying is focused on oversold “bargains”, and profits are taken in stocks that have rallied. Trends are short-lived and unreliable, and profits are elusive. Stocks with a recent history of relative strength fall back toward the BEL while laggards improve, and the universe contracts.

Red Shift

There is a shift of color toward the red end of the spectrum in the light emitted by the most distant galaxies. Astronomers cite this as evidence that these galaxies are moving away from us at the fastest speeds as the universe expands.

Something like that happens in a universe of stocks. During bullish expansions, the strongest stocks, those furthest from the BEL, book the strongest forward gains. Perhaps stronger relative strength attracts greater demand from trend-following traders. In any case, the best immediate gains during such periods are most likely to come from targets near the furthest extreme of relative strength.

Similarly, during bearish expansions the best short profits are likely to come from the weakest stocks and groups. Even during contracting markets, the best opportunities on the long side are consistently provided by the most laggard issues. Generalizing, the most profitable opportunities consistently come from targets furthest from the BEL.

The Spread

The spread in performance between relatively strong and relatively weak targets offers a running picture of expansion and contraction. The Spread is calculated as the difference in forward performance of relatively strong vs. relatively weak targets. One may choose to compare the average forward performance of all targets NW of the BEL with that of all targets SE of the BEL. To make the comparison, all targets NW of the BEL on day d are identified, as well as all targets SE of the BEL. Then the average performance for each set of stocks on the following day (d+1) is calculated, and the difference between the two averages is determined. The resulting number is the daily performance spread between all strong and all weak targets. Daily spreads are cumulated to create The Spread.
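
A minimal sketch of that daily calculation follows, assuming date-by-target matrices of offensive and defensive scores (for example, produced by applying the earlier scoring sketch on a rolling basis) and a matching matrix of daily returns; NW/SE membership is taken as offense score above/below defense score, and all names are illustrative rather than the author's own.

    import pandas as pd

    def the_spread(offense: pd.DataFrame, defense: pd.DataFrame,
                   returns: pd.DataFrame) -> pd.Series:
        """Cumulate the daily forward-performance spread between strong
        (NW of BEL) and weak (SE of BEL) targets."""
        nw = offense > defense              # targets NW of the BEL on day d
        se = offense < defense              # targets SE of the BEL on day d
        fwd = returns.shift(-1)             # day d+1 returns, aligned back to day d

        strong_avg = fwd.where(nw).mean(axis=1)   # average forward return, strong set
        weak_avg = fwd.where(se).mean(axis=1)     # average forward return, weak set

        daily = strong_avg - weak_avg       # the daily performance spread
        return daily.cumsum()               # cumulated daily spreads = The Spread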

The next chart (Figure 9) shows both the average performance as well as The Spread of the 200-stock universe from January 1999 to April 2003. Periods during which The Spread rises indicate an expanding universe driven by positive feedback. Traders are confident in the trend and their behavior is characteristically trend-following. Trends develop momentum and persist. Periods during which The Spread rises are shaded.

Unshaded areas bracket periods during which The Spread fell, the universe contracted, and traders were risk-averse and contrarian. During such periods, market action is turbulent and long-lasting trends are hard to find; in this whipsaw-prone environment, even tight risk control may not save the trader from accumulating outsized losses.

There is, however, one notable exception to this dreary contrarian outcome: after a significant decline, oversold, volatile laggards rise fastest during the initial phase of a new advance. During these periods, contrarian long positions in laggard issues are likely to produce superior short-term profits. But for this one exception, a falling Spread is a signal for caution.

The generally rising trend of the Spread from the spring of 1999 through March 2000 (shaded area 1, Figure 9) indicates that the universe of stocks was expanding throughout a long positive-feedback cycle. Traders favored relative-strength leaders, and the most profitable strategy was to own the strongest stocks and groups.

Despite the continuation of a bull market in prices, the Spread’s sharp decline in March of 2000 (2) warned that traders had lost confidence in the rising price trend. The fact that prices continued to advance during this contrarian period suggests that traders attempted to reduce risk, not by moving to cash, but by replacing bulled-up leaders with laggard issues.

During period 3 The Spread recovered as prices continued to rise, but by period 4, during which the average fell as The Spread rose, it was clear that momentum had tipped to the downside. Traders were gaining confidence in the declining trend.

Period 5 shows a typical contrarian pattern. Price moves irregularly within a trading range.

Period 6 offers traders the first good opportunity to trade the short side in synch with the trend. The average stock fell as The Spread rose, our indication that positive feedback was operating in a declining trend. Under these conditions, weak stocks fall faster and further than stronger issues, and the best strategy is to sell or to sell-short relative-strength laggards.

Another big wave of selling is supported by a rising Spread in period 7. Momentum, as measured by the trend of The Spread, is now quite strong, and prices tumble to new lows.

A solid contrarian rally featuring oversold laggards (8) returns the average to long-term resistance. Early in a contrarian rally, as The Spread begins to dip and the average stock begins to advance, the best strategy is to buy volatile laggards in the expectation of good, though likely short-term, profits.

After that corrective rally, the average declines again in three consecutive waves of selling under increasing momentum (9, 10 and 11). Since mid-2000, periods of downside momentum have been progressively longer, and prices have fallen further with each event.

The Spread discloses the direction of capital flow within a universe of targets and offers a new and precise definition of ‘momentum’. Traders may use The Spread not only to identify profitable trending periods but to avoid difficult markets as well. Indeed, these indications are consistent enough to support reliable trading rules. Those rules are listed below:

  1. When The Spread is rising, and relative-strength leaders are advancing, buy the strongest stocks and groups;
  2. When The Spread is rising, and relative-strength laggards are declining, sell or sell short the weakest stocks and groups;
  3. After a decline, if The Spread is falling and relative-strength laggards are advancing, buy the weakest stocks and groups.
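
As a toy illustration only, the three rules can be encoded as a single lookup; the boolean inputs stand in for the trader's own reading of The Spread and of relative-strength leaders and laggards, and none of the names come from the article.

    def janus_rule(spread_rising: bool, leaders_advancing: bool,
                   laggards_advancing: bool, after_decline: bool) -> str:
        """Map the three trading rules onto a single directional call.

        spread_rising=False is read here as a falling Spread; a flat Spread is not handled.
        """
        if spread_rising and leaders_advancing:
            return "Rule 1: buy the strongest stocks and groups"
        if spread_rising and not laggards_advancing:    # laggards declining
            return "Rule 2: sell or sell short the weakest stocks and groups"
        if (not spread_rising) and after_decline and laggards_advancing:
            return "Rule 3: buy the weakest stocks and groups"
        return "No rule triggered: stand aside"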

Testing The Spread

A protocol was devised to back-test the efficacy of The Spread. To isolate the effect of The Spread, simultaneous long-short trades were assumed in order to neutralize the impact of market direction. The sole pre-condition for trades was the immediate direction of The Spread.

Figure 10 summarizes five separate computer back-tests of a market-neutral strategy based on the direction of The Spread. The method employed is simple, direct and free of any attempt to optimize outcomes. The Spread is used to determine whether the universe of 200 stocks is expanding or contracting. If The Spread rises (universe expands), long positions are selected from relatively strong stocks and short positions are selected from relatively weak stocks. Positions are reversed when The Spread falls (universe contracts). The net percentage change for the following day (close to close) resulting from long and short positions is cumulated. No leverage is assumed.

No allowance is made for commissions or other costs. As with any back-test, results are theoretical and are intended only as a demonstration of the validity and power of the methods developed in this paper.

The back-test was made assuming stock-sets of varying size. “10%” tags the overall performance that results from trading only the strongest and the weakest ten percent of the universe. That set posted a gain of 404% with a maximum drawdown of 14%. Over the same period (4.3 years), the 200-stock average gained 69%, with a maximum drawdown of 39%.

Set size was increased in ten-percent increments until the relatively strong half of all stocks was positioned on one side of trades and the relatively weak half on the other (“50%”). Each set tested scored a higher net gain and a smaller maximum drawdown than the 200-stock average.
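
The protocol can be sketched as below; the ranking of targets by the gap between their offensive and defensive scores, the per-day re-selection, and the variable names are assumptions made to keep the sketch self-contained, and its output would be theoretical rather than a reproduction of the figures quoted above.

    import pandas as pd

    def spread_backtest(offense: pd.DataFrame, defense: pd.DataFrame,
                        returns: pd.DataFrame, spread: pd.Series,
                        pct: float = 0.10) -> pd.Series:
        """Market-neutral back-test driven only by the direction of The Spread.

        pct = 0.10 trades the strongest and weakest ten percent of the universe;
        no leverage, no commissions or other costs.
        """
        edge = offense - defense            # distance-from-BEL ranking on day d
        fwd = returns.shift(-1)             # next-day close-to-close returns
        expanding = spread.diff() > 0       # Spread rising -> universe expanding

        daily_pnl = []
        for d in edge.index:
            ranks = edge.loc[d].dropna()
            k = max(1, int(len(ranks) * pct))
            strong = ranks.nlargest(k).index    # relative-strength leaders
            weak = ranks.nsmallest(k).index     # relative-strength laggards
            # Long strength / short weakness while expanding; reversed while contracting.
            longs, shorts = (strong, weak) if expanding.loc[d] else (weak, strong)
            daily_pnl.append(fwd.loc[d, longs].mean() - fwd.loc[d, shorts].mean())

        return pd.Series(daily_pnl, index=edge.index).cumsum()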

The best overall performance came from the set of stocks (10%) nearest the two relative-strength extremes of the universe. This result is consistent with the “Red Shift” phenomenon discussed above.

Postscript

Markets make sense. Price series are not chaotic, but are carried along on currents of underlying capital flow. As we have seen, those currents may be observed through their effect on price. Moreover, a proper reading of capital flows can lead to consistent trading success.

Skeptics hold that operations based only on observed price changes cannot succeed. Markets are moved by news, they argue, and since, by definition, news cannot be predicted (or it would not be news), price movement cannot be anticipated. It is a short step to conclude that price data are not linked and that price series follow a random walk.

Skeptics fail to take into account that price activity is also news. As we have noted, traders respond to news of price change, just as they respond to other sorts of news. By their collective response traders forge causal links between past price data and current price movement. Price data are linked because traders link them.

Granted, markets are the free and spontaneous creation of buyers and sellers motivated only by insular self-interest. Yet the whole of their activities assumes a shape and flow beyond the intent of any individual trader. Out of the chaos of daily trading, something new, orderly and recognizably human emerges. At bottom, it is hope and fear, measured by the rhythms of expansion and contraction in a process as relentless and as natural as breathing or the beating of a heart.

Contributor(s)

Gary Anderson

Gary Anderson has been a principal of Anderson & Loe since 1990. Over that period, Gary has provided stock market consulting and advisory services to an international clientele of professional asset managers, including banks, mutual funds, hedge funds and financial advisors. Gary’s...

TACTICAL REPORT: EUR/USD: PARITY TARGET, A CLEAR AND PRESENT REALITY

Editor’s note: This report was originally published on March 6, 2017. All data and opinions are current as of that date but may have changed since publication.

Executive Summary

The latest up-swing in the USD index reactivated a historic 31-year trend breakout signal. Its final impulsive move offers a paradigm shift for the market’s collective investor psychology, capital flow trends and perception of key events ahead. Such a positive technical backdrop, coupled with a hawkish long-term policy shift, helped amplify price reactions to the Fed’s sequel 0.25% hike in Dec 2016, relative to the prior year. History had a positive market rhyme, but to a much larger extent.

USD gains are likely to extend sharply higher, as part of a 5-wave impulsive cycle, coupled with positive speculative flows, which signals further upside scope into 120. Long-term cycles project an average 8-year cycle, extending into 2019. Expect a non-linear move, supported by positive seasonality in the month of January (led by traditional repatriation flows and the presidential inauguration). Stay alert for key cycle windows during mid-July, into H2 2017, when market volatility is expected to spike. Our timing models signal a panic-rogue cycle ahead.

Growing divergences of interest rate policy between the US and Europe are weighing on EUR/USD. The spread between US/German 2-year government yields, recently at a historic widening of -238bps, is leading EUR/USD lower. The higher interest rate environment is also supported by a rising trend in long-term yields, both before and after President-elect Trump’s victory. Potential remains for a UST yield rise to 3%.

EUR/USD triggered a new low breakout under 1.0462, which serves as the lowest level since 2003. The move is part of a much larger historical price symmetry last seen between 1985-2001. A sustained weekly close beneath 1.0462 would make the parity target a clear and present reality, into a 31-year trend support. A perfect storm of asymmetric risk, including political instability in the Euroland ahead of the coming elections is likely to remain EUR/USD negative.

Why does history rhyme, but not repeat?

  • The latest up-swing in the USD index is having a temporary respite, after breaking out of a 2-year trading range, which has reactivated a historic 31-year trend breakout signal (see Fig 4 which is explained in more detail later).

  • Technically speaking, this renewed bullish USD sentiment is part of a 5-wave cycle that started in 2011 (see Fig 3 which is also explained in more detail later). The final impulsive move offers a paradigm shift for the market’s collective investor psychology, capital flow trends and perception of key events ahead.

  • Such a positive technical backdrop helped amplify price reactions to the Fed’s sequel 0.25% hike in Dec 2016, relative to the prior year. History had a positive market rhyme, but to a much larger extent. The USD index made a higher intraday gain of 1.1%, dwarfing the previous lacklustre move of 0.02% (Fig 1).

  • From a macro perspective, the difference of market reaction was also driven by the Fed’s more hawkish long-term stance, which raised their “dot-plot” projection and reversed a strong 2-yr decline (Fig 2).

  • Markets had become so conditioned to dovish Fed surprises that even a mildly hawkish shift was enough to trigger significant market reactions. Speculative flows in net long USD positions also reflected greater confidence in this latest rise (Fig 2 & 3).

USD breakout signals long-term gains into 120.

  • USD gains are likely to extend sharply higher, as part of a 5-wave impulsive cycle, after breaking out of a 2-year trading range (Fig 3). The move, which started in 2011, represents a paradigm shift within investor psychology and capital flow trends.
  • CFTC large speculative net long USD positions have already triggered a bullish trend reversal and reflect growing confidence. Although these liquidity indicators do not offer precise market timing, they can still provide valuable directional confirmation. Watch for a test of the old 2015 high to re-fuel USD.
  • In terms of the big picture, USD’s latest upswing has also reactivated a 31-year trend breakout which signals further upside scope into 110 and 120. The latter price target equates to a 50% retracement of the decline from 1985 and +2 STD of the USD’s historical value zone (Fig 4); a rough calculation sketch follows this list.
  • A closer study of the USD’s long-term cycles (measured from peak-trough since 1985), projects an average 8-year cycle that would extend into 2019. Expect a non-linear move, supported by positive seasonality in the month of January (led by traditional repatriation flows and the presidential inauguration). Stay alert for key cycle windows during mid-July, into H2 2017, when market volatility is expected to spike and renew safe-haven flows into the USD and related haven assets.
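
As a rough, illustrative calculation only (the series, the dates, and the reading of the “value zone” as a full-history mean and standard-deviation band are assumptions, not the author's definitions), the two reference levels behind the 120 target could be computed along these lines.

    import pandas as pd

    def usd_reference_levels(usd_index: pd.Series, peak_date: str, trough_date: str):
        """50% retracement of the decline from the 1985 peak, plus a +/-2 STD band
        around the long-run mean as a stand-in for the historical value zone."""
        peak = usd_index.loc[peak_date]
        trough = usd_index.loc[trough_date]
        half_retracement = trough + 0.5 * (peak - trough)

        mean = usd_index.mean()
        std = usd_index.std()
        return half_retracement, (mean - 2 * std, mean + 2 * std)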

Rate divergences push EUR/USD lower

  • The growing divergence of interest rate policy, particularly between the US and Europe, is weighing on EUR/USD. Fig 5 illustrates the spread between US/German 2-year government yields at a historic widening of -238bps, leading EUR/USD lower. 30-day rolling correlations remain very strong and stable at around 0.90 (a rolling-correlation sketch appears at the end of this section).

  • Here, the bond market is likely acting as a proxy driver for currencies, with the bond ‘dog’ wagging the FX market ‘tail’, e.g. by offering a greater yield return on the USD.
  • On the surface, both sets of government yields are steepening, but for very different reasons. The US curve is being led by the Fed’s renewed hawkish stance, coupled with rising long-term yields. Such a scenario, marked by rising short- and long-end yields with the latter outperforming, is termed a ‘bear steepener’. In contrast, the European short end of the curve is being pinned down by the ECB’s altered QE policy (reducing the market’s desire to hold EUR in the medium term), producing a ‘bull steepener’ curve, with the short end falling even as long-term yields rise.
  • The higher interest rate environment is also supported by a rising trend in l-t yields, pre-and-post President-elect Trump’s victory (Fig 6a).

This marks an important end of the 35-year bear trend in yields, after a two-stage bottom process in 2012 and 2016. Potential remains for a UST yield rise to 3%, which equates to +1 STD of the historical mean (Fig 6b, shown above).

This psychological level could indicate a pain threshold for rising inflation, weighed down by toxic debt and/or future tail risk (Fig 8b, which is explained in more detail below).
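
For reference, the 30-day rolling correlation cited earlier between the US/German 2-year yield spread and EUR/USD could be reproduced along the lines below; the series names, the assumption that yields are quoted in percent, and the choice to correlate levels rather than changes are all illustrative rather than taken from the report.

    import pandas as pd

    def spread_eurusd_correlation(us_2y: pd.Series, de_2y: pd.Series,
                                  eurusd: pd.Series) -> pd.Series:
        """30-day rolling correlation between the US-German 2-year yield spread and EUR/USD."""
        spread_bps = (us_2y - de_2y) * 100          # percentage-point gap -> basis points
        df = pd.concat({"spread": spread_bps, "eurusd": eurusd}, axis=1).dropna()
        return df["spread"].rolling(30).corr(df["eurusd"])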

Parity target, a clear and present reality.

  • EUR/USD is once again nearing its extreme low at 1.0341, the lowest level since 2003. The move is part of a much larger historical price symmetry last seen between 1985 and 2001, which exhibited a two-stage impulsive rise and a volatile corrective fall. Both price analogs developed strong uptrends lasting 91-93 months that were short-circuited by crisis events (Fig 7).

  • A sustained weekly close beneath 1.0341 would make the parity target a clear and present reality, into a 31-year trend support. Of note, the EUR/USD parity target was projected over two years ago, following a major breakdown from a 12-year accumulation pattern, supported by a long-term bear cycle skew (refer to Fig 7).
  • Using more traditional methods such as Point & Figure charting also signals further downside scope into 0.9900 and 0.9700 (Fig 8a). The latter target was activated in October 2014. Only a close back above 1.1000 (value zone) would neutralize this scenario.

  • There is a perfect storm of asymmetric risk, marked by the latest technical breakdown and growing interest rate divergences, compounded by geopolitical event risk into 2017 (Fig 8b). The highest-probability tail risk is signalled by our timing models, which predict a panic/rogue cycle this year. Political instability in the Euroland is a plausible trigger, not least ahead of the upcoming elections. Both factors are EUR/USD negative.

Contributor(s)

Ron William, CMT, CFTe

Ron William is a market strategist, educator/mentor and performance coach with more than 20 years of experience working for leading economic research and institutional firms, producing tactical research and trading strategies. He specializes in a global, multi-asset, top-down framework, grounded in behavioural technical analysis, driven by...