The Growth and Inflation Sector Timing Model
“big forces to worry about: growth and inflation. Each could either be rising or falling, so I saw that by finding four different investment strategies—each one of which would do well in a particular environment (rising growth with rising inflation, rising growth with falling inflation, and so on)—I could construct an asset-allocation mix that was balanced to do well over time while being protected against unacceptable losses. Since that strategy would never change, practically anyone could implement it.”
― Ray Dalio, Principles: Life and Work
Note: a big thank you to the always excellent Allocate Smartly for providing sector data for this post.
In the Business Cycle Sector Timing Model, I presented a simple sector rotation model that used the S&P500 as a proxy for expected economic growth and the business cycle to rotate between early and late stage economically sensitive sectors when the cycle is rising or expanding, and to defensive sectors when growth is falling or in a recession. Putting theory and backtests aside, there is empirical justification for using trends or momentum in the S&P500 for expected growth. Nicolas Rabener demonstrated that the stock market is a valid forward-looking economic indicator using long-term data:
“We extended our analysis back to 1900 using annual data from MacroHistory Lab. Since the stock market is forward-looking and tends to anticipate economic news flows, we instituted a one year lag. So for 2000, we compared that year’s GDP numbers with the performance of the S&P 500 in 1999… All of which suggests the S&P 500 was a good proxy for the US economy for much of the last 120 years.” (Rabener, CFA Institute)

While this single-factor model is useful, it can be extended to a more robust two-factor model that also accounts for inflation. The traditional Growth/Inflation matrix using real GDP and CPI was popularized by Ray Dalio of Bridgewater Associates to create an “All-Weather Portfolio”, and also referenced by Harry Browne (Permanent Portfolio), Geoffrey Moore at NBER, and Sam Stovall of Standard and Poor’s. The trends and/or momentum in these two variables using a two-state model (Up/Down or Rising/Falling) significantly explain cross-asset class returns (Brazen Capital). The Growth and Inflation Matrix is shown below along with four separate macroeconomic regimes:

“Goldilocks” refers to an economic regime characterized by low inflation and strong economic growth, with classic examples including the tech boom of the late 1990s and much of the 2010s. Reflation, on the other hand, is typically marked by an overheating economy with both high inflation and high growth, as seen in 2006-2007 during the housing boom when oil prices and other commodity prices began to rise significantly, and in 2009-2010 after the Global Financial Crisis in 2008. Stagflation combines high inflation with low growth (stagnation), with the 1970s serving as a prime example. Deflation or disinflation, which is less common, involves falling growth and declining prices, with the most prominent examples being the Great Depression and the 2008 Global Financial Crisis. A good paper by Flexible Plan Investments called “The Role of Gold in Investment Portfolios” (I was a co-author on earlier versions) shows the regime performance for various asset classes using this Growth Inflation Matrix along with several other regime scenarios. Another good summary of using the Growth Inflation Matrix to invest in different asset classes was written by Resolve Asset Management.
The Role of Inflation
To understand how these macroeconomic regimes affect various sectors and the broader economy, it is essential to first examine the role of inflation, as it is a key driver behind many of these economic shifts. Inflation plays a central role in the financial system. Without it, there would be much less need to put your hard-earned savings at risk. Inflation affects everything from monetary policy to consumer purchasing power and the returns of all asset classes including stocks, bonds, commodities and currencies. The late Ronald Reagan once said, “Inflation is as violent as a mugger, as frightening as an armed robber, and as deadly as a hit man.” While this is true of hyperinflation, as seen in Weimar Germany, Venezuela, and Zimbabwe, or during the stagflation of the 1970s, moderate inflation still plays a crucial role in a healthy economy. When kept in check, it encourages spending and investment, and fuels economic growth even as it gradually erodes purchasing power.
The stock market tends to perform best when inflation is moderate, stable, and predictable. Equity returns tend to be negatively correlated with unexpected inflation because it increases the discount rate on future cash flows. At the business level, inflation increases costs and erodes corporate earnings and purchasing power. Uncertainty about future costs makes long-term business planning difficult. On the other hand, deflation—seen during the Great Depression and Japan’s stagnation in the 1990s—can be far more concerning: once triggered, it creates a self-reinforcing downturn that is difficult for policymakers to reverse.
The chart below from O’Shaughnessy Asset Management shows average equity market returns in different inflation regimes.

O’SHAUGHNESSY ASSET MANAGEMENT, L.L.C.
Clearly moderate inflation is the best regime for the equity market, while really high inflation is the worst. In addition, really low inflation is also undesirable as this is correlated with a stagnant or deflating economy. If this chart included the Great Depression, presumably this low inflation quintile might show even worse overall performance.
So how would inflation be used for sector rotation? Sectors are well known to have distinct characteristics based on the long-term nature of their businesses, market dynamics, regulatory environment, and other factors. These unique traits can lead to predictable patterns in how sectors perform in both the short and long term in response to changes in macroeconomic variables like growth and inflation. For example, Morningstar divides the major sectors into three separate super sectors – Defensive, Cyclical, and Sensitive – based on their respective macro sensitivity. However, this categorization is based primarily on the business cycle and the growth dimension; it fails to capture the inflation dimension directly.
Inflation can impact sectors in distinct ways, with some proving more resilient than others. Classic fundamental analysis of their business models and basic economics can reveal how they respond to inflation. In the short term, “bond-like” sectors with high operating leverage, such as Consumer Staples, Healthcare, and Utilities, are more immediately impacted by inflation because they face delays in passing on price increases, which can lead to short-term margin compression. Their variable costs, such as energy, food, and labor, are highly exposed to inflation. Since these sectors have high dividend yields and low growth, they are also more sensitive to rising interest rates. However, in the long run, they tend to benefit from pricing power due to contracts that allow price hikes in line with rising prices and their role in providing essential goods and services. This makes the defensive sectors more resilient to sustained inflation. At the opposite end, consumers cut back on discretionary purchases such as automobiles and electronics during sustained inflation as wages fail to keep up with rising prices. A lack of pricing power and falling demand, among other factors, can cause Consumer Discretionary and Technology companies to perform very poorly.
The chart below from Investors.com and Schroders Investment Management shows the historical performance of different stock sectors in sample (measuring contemporaneous returns during inflation) from 1973-2021 in excess of CPI when inflation is high and rising:

source: Schroders and Investors.com
As expected, Energy is a standout sector during rising inflation along with Precious Metals and Real Estate. Less intuitive is the below-average performance of the Materials sector, which can occur when energy costs rise faster than revenues, which are tied to other commodities over the long term. Energy is a key input for materials production, and if energy prices increase disproportionately to the prices of the materials themselves, it can lead to margin compression and weaker sector performance. It is also interesting to note that Utilities performs the best of the defensive sectors during inflation, but this long-term result masks the sector’s poor performance during the Stagflation of the 1970s. Besides high interest rates, what has changed in the Utility sector over time is that companies now have more flexibility to raise prices post deregulation and their power sources are less reliant on oil and gas given the push towards clean energy. Another predictable result we see in the table is that long-duration/high-growth sectors like Technology and Consumer Discretionary are at the bottom of the list. These sectors suffer from a lack of pricing power in the long term and falling demand, as well as being sensitive to interest rates that can rise to combat inflation.
The real question is how do we measure inflation or better yet how do we find a leading indicator for inflation?
The problem with using CPI is that not only is it slow and backward-looking, but it is also unreliable due to restatements, methodology changes, and weighting issues. It is also a monthly indicator and cannot be used on a daily basis. This is why traders and economists often prefer market-based inflation measures (like breakevens or commodity trends) for real-time insights. The chart below from State Street – the issuer of the Sector SPDRs – shows each primary sector’s forward-looking or expected inflation beta over the 10-year period from 2014-2024.

Source: State Street Advisors
What this table is showing is whether certain sectors have a positive, negative, or neutral relationship with inflation expectations as proxied by breakeven rates. When inflation expectations are rising, Energy, Industrials, Financials and Materials tend to rise. Real Estate, Consumer Discretionary and Technology are relatively neutral. Defensive sectors such as Utilities, Health Care and Staples tend to fall as inflation expectations rise. The unusual disparity between the long-term performance of sectors during rising inflation and the initial response to expected inflation (Financials and Materials, for example) can be explained in the table below:

Constructing an Expected Inflation Indicator
Using this information about how sectors react to inflation expectations, we can create an expected inflation indicator by tracking trends in various sectors. There is a benefit to this approach versus using breakeven rates: equity markets can potentially react faster than TIPS breakevens since equity markets are much larger and more liquid and hence can easily absorb changes in expectations. In contrast, breakevens can be noisier because they are more affected by shifts in liquidity, monetary policy, and stale pricing. This expected inflation indicator can also be superior to CPI due to forward-looking pricing, while CPI is lagged and subject to revisions. We can create this indicator by first constructing portfolios with positive and negative expected inflation betas. The relative strength or trend between these portfolios acts as a market-implied inflation signal, potentially leading traditional measures like CPI and breakevens in real time.
The goal here is to employ a simple approach that generally captures trends in expected inflation and can easily be used in trading systems. The relative strength between two portfolios is an easy way to accomplish this, so we need to create portfolios with positive and negative betas to expected inflation. Referencing the previous table, we will build simple portfolios using common sense rather than optimization. For the positive expected inflation beta portfolio we will overweight Energy, which has a high positive expected inflation beta, and equally weight Industrials, Financials and Materials (1/6 in each), which all have positive and very similar expected inflation betas.

Next we will create a negative expected inflation beta portfolio by equally weighting the defensive sectors (Communications was omitted due to a lack of long-term data) since their betas are all negative and fairly similar.

To create this relative time series we will simply divide the Positive by the Negative Expected Inflation Beta Portfolio. This simple ratio-based indicator is shown below:

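For readers who want to experiment, here is a minimal sketch of the indicator construction in Python, assuming daily sector total-return series. The ETF tickers and the exact Energy overweight are illustrative assumptions (the post itself relies on longer-history sector data from Allocate Smartly):

```python
import pandas as pd

# Assumed weights for illustration: Energy overweighted, the remaining positive-beta
# sectors at 1/6 each; defensives equally weighted in the negative-beta portfolio.
POSITIVE_BETA = {"XLE": 1/2, "XLI": 1/6, "XLF": 1/6, "XLB": 1/6}
NEGATIVE_BETA = {"XLP": 1/3, "XLV": 1/3, "XLU": 1/3}

def expected_inflation_indicator(prices: pd.DataFrame) -> pd.Series:
    """Ratio of the positive to the negative expected-inflation-beta portfolio.

    `prices` holds daily total-return indexes, one column per sector ticker.
    Portfolios are rebalanced daily to fixed weights for simplicity.
    """
    returns = prices.pct_change().dropna()

    def growth_index(weights: dict) -> pd.Series:
        port_returns = (returns[list(weights)] * pd.Series(weights)).sum(axis=1)
        return (1 + port_returns).cumprod()

    # A rising ratio implies rising expected inflation, and vice versa.
    return growth_index(POSITIVE_BETA) / growth_index(NEGATIVE_BETA)
```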
In this chart of the sector-implied expected inflation indicator, we can clearly see that the tech boom in the 1990s was a stable inflation regime, which is historically best for equities. After the bear market in the early 2000s, the indicator begins to climb during the bull market that follows and peaks in late 2007 during the Housing Boom before tanking during the Credit Crisis in 2008. In 2009 to 2010 the indicator climbs substantially as reflation follows the previous deflationary bear market. From late 2010 onward the indicator falls substantially and bottoms out during the COVID bear market in 2020 before rallying until the end of 2024 as the market reflated. All of these movements align with what we would expect to see given the major changes that have occurred with inflation over time. Since the betas used to construct this measure come from 2014-2024 data, it is interesting that it generalizes well going backward to 1990. This is because sectors have a fairly stable response to inflation expectations given their business models.
To test how well this tracks performance during periods with high CPI, we can backtest how well different sectors perform when sector-implied expected inflation is above the median. We can then compare how well this aligns with long-term in-sample performance during inflation going back to 1973. For sectors we will use the Sector SPDRs from State Street and the Vanguard REIT ETF (VNQ).

Next we will backtest how each sector performed when the indicator was above the 200-day median going back to 1990. The assumption is that even a lagged value should be a leading indicator of CPI and inflation. If that is the case then we should see a strong agreement between backtested sector rankings and long-term sector performance rankings when CPI was high.

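A hedged sketch of this conditional backtest follows; `indicator` would be the ratio series from the earlier sketch and `sector_prices` a DataFrame of daily sector prices (the function name and the rough annualization are my own assumptions, not the exact methodology behind the table above):

```python
import pandas as pd

def performance_when_inflation_expected(indicator: pd.Series,
                                        sector_prices: pd.DataFrame,
                                        window: int = 200) -> pd.Series:
    """Annualized sector returns on days when the indicator sits above its
    rolling 200-day median; the signal is lagged one day to avoid look-ahead."""
    signal = (indicator > indicator.rolling(window).median()).shift(1)
    returns = sector_prices.pct_change()
    mask = signal.reindex(returns.index).fillna(False).astype(bool)
    high_expected_inflation = returns.loc[mask]
    # Rough annualization of the average daily return during those days.
    return (1 + high_expected_inflation.mean()) ** 252 - 1
```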
The rank agreement between the historical performance of sectors during high inflation and actual performance during predicted high inflation is very high, with a rank correlation of 0.92, which is quite remarkable. This means that the sector-implied expected inflation is a good real-time indicator for forecasting trends in inflation. The biggest relative disparity is for the Materials sector, which performed better during inflation in the past than in the testing period. Deeper investigation reveals that the composition of the sector has shifted over time from precious metals and mining toward industrial chemicals, gases, and specialty materials, reducing the weight of traditional miners due to drift as well as GICS changes that made precious metals a smaller sub-industry. Another disparity was that Utilities performed better during expected inflation than in the past. Perhaps the simplest explanation is that the missing third variable here is interest rates. The rate on a 10-year bond was over 8% at the beginning of the testing period vs 4% at the end of the testing period. Since Utilities are highly interest sensitive, this was a positive tailwind which exaggerated results. As we mentioned earlier, there were also structural changes in the industry such as deregulation, which allows companies to raise prices in accordance with inflation while simultaneously having less exposure to high input costs from oil and gas and more clean energy. All other ranks were either the same or off by one.
Now let's revisit the Growth and Inflation Matrix and see how each sector performs during periods where expected economic growth is high or low versus when expected inflation is high or low. To do that we will use the 200-day simple moving average for the S&P500 (SPY) to capture the expected growth dimension (above the trend/SMA means Growth is UP, below means DOWN), and as before we will use the 200-day median of the expected inflation time series so that the periodicity of each indicator is aligned for the sake of consistency. First let's look at broad stock market performance and median sector performance in each macro regime:

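As a rough illustration of the regime classification just described (a sketch, assuming `spy` is a daily price series and `inflation_ind` is the indicator constructed earlier):

```python
import pandas as pd

def classify_regimes(spy: pd.Series, inflation_ind: pd.Series,
                     window: int = 200) -> pd.Series:
    """Label each day with one of the four growth/inflation regimes:
    growth is UP when SPY is above its 200-day SMA, and expected inflation is UP
    when the indicator is above its 200-day rolling median."""
    growth_up = spy > spy.rolling(window).mean()
    inflation_up = inflation_ind > inflation_ind.rolling(window).median()
    growth_up, inflation_up = growth_up.align(inflation_up, join="inner")

    regimes = pd.Series("Deflation", index=growth_up.index)   # growth down, inflation down
    regimes[growth_up & ~inflation_up] = "Goldilocks"          # growth up, inflation down
    regimes[growth_up & inflation_up] = "Reflation"            # growth up, inflation up
    regimes[~growth_up & inflation_up] = "Stagflation"         # growth down, inflation up
    return regimes
```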
The broad stock market, using the S&P500 as a proxy, performs best in the “Goldilocks” regime, which is characterized by expected growth being above trend (rising) and expected inflation being below trend (falling). However, when we look at the median sector, performance is best in the Reflation regime, which is characterized by expected growth and inflation both being above trend. What this means is that there is likely a larger disparity in sector performance in the Goldilocks regime, where some sectors do much better than others. In contrast, during Reflation most sectors perform well as this regime typically occurs coming out of corrections or bear markets or when the market overheats at the end of a bull market. In Deflation, which is characterized by both expected growth and inflation being below trend, the broad stock market and the median sector perform poorly and almost exactly the same. This indicates that there is less disparity in sector performance during this regime. This result is predictable as deflationary or disinflationary periods tend to occur during bear markets or corrections. However, the worst regime was Stagflation, where expected growth was below trend while expected inflation was above trend. This result was surprising since Deflation is historically the worst regime for equities, but the difference in performance is small. Bear in mind that we are using expected growth and inflation, and expectations can change frequently. This means that expected deflation might contain multiple false signals versus looking at in-sample periods where we know exactly what happened historically.
Overall we can conclude that the expected growth component is the most important dimension for market and sector performance- equities perform better in bull vs bear markets regardless of inflation expectations. The picture below shows how expected growth and inflation macro regimes have evolved over time:

Notice that most of the 1990s and 2010s were Goldilocks conditions. The major bear markets such as 2001-2002, 2008 and 2022 showed a combination of Deflation and Stagflation. Typically Stagflation deteriorated into Deflation in bear markets. Most corrections, in contrast, were associated with Deflation. This helps to reconcile why Stagflation showed lower returns than Deflation.
Now let's get more granular and take a look at sector performance by macro regime:

As we might expect, the Technology sector performs best during Goldilocks conditions followed by Consumer Discretionary. Both are growth-oriented sectors that perform well when consumers have more purchasing power. During Reflationary periods, as expected, the Energy sector is the best performer followed by Real Estate; this again aligns with expectations. During Stagflationary periods the defensive but less inflation-sensitive Health Care and Utilities perform the best, which also makes logical sense. Curiously, during Deflationary periods Materials and Consumer Discretionary perform the best, with Consumer Staples showing solid performance as well. Why would Materials perform the best? Perhaps it is because input costs from inflation will fall faster than their revenues, which are tied to other commodities and have longer-term contracts. This may also capture a mean-reversion effect as Materials is also a good early-cycle performer. As for Consumer Discretionary, this might also be due to a mean-reversion effect where the market expects a return to consumer spending (Consumer Discretionary is historically a very good early-cycle performer). In either case the excess performance of Materials and Consumer Discretionary is not significantly different from Consumer Staples performance, which historically has been the best sector to hold in market downturns and makes more logical sense as a safe haven.
The table below shows a more detailed breakdown of sector performance by regimes and also by sub-regimes along with S&P500 performance for comparison:

As we might expect, the Technology sector is the most sensitive to growth expectations and performs the best and the worst when expected growth is up or down. Materials surprisingly shows the least sensitivity to growth expectations, perhaps reflecting the shift in composition to becoming more like a defensive industrial sector. As expected, Consumer Discretionary is the most sensitive to inflation expectations and performs the best and the worst when expected inflation is up or down. Technology is not far behind and is also highly sensitive to expected inflation. Consumer Staples shows the least sensitivity overall to expected inflation given its stable demand and pricing power. When expected inflation is up we see that Energy and Real Estate both do very well. But the standout performer when expected inflation is up is Utilities, which is even more unusual because it is in the Negative Expected Inflation Beta Portfolio – meaning it is likely falling relative to the Positive Expected Inflation Beta Portfolio, for which Energy is the largest component. Many of the reasons for this we have already discussed. That said, historically Utilities have done fairly well during inflation relative to other sectors (4th best historically), but this outperformance is likely anomalous.
Creating a Simple Growth and Inflation Sector Timing Model

Now we are going to create a simple strategy to capture performance across regimes. Using a combination of empirical data and economic logic I chose the following sectors for each regime:
The first two regimes (the green regimes) are the most obvious because they align with both data and logic. They are the major return generators in the strategy: Technology is an overwhelming statistical favorite during Goldilocks conditions, and logically it makes sense that it performs best in that regime. Energy is clearly the odds-on favorite when inflation is expected to rise and especially when the market is rising during Reflation. The defensive regimes (red and orange) are meant to reduce risk and preserve capital as returns are difficult to generate when the broad market or expected growth is down. The choices here are less critical because there is much less disparity in sector performance. Holding Consumer Staples in both regimes (Stagflation and Deflation) is a simple solution and doesn't detract much from model performance historically. During Deflation, from a logic perspective it is most prudent to choose Consumer Staples (3rd best) even if it wasn't the best performer in that regime. We know from the previous post on business cycles that it is the best performer during either slowdowns or recessions. Furthermore, the sector shows stable demand for its products with good pricing power and has the lowest sensitivity to inflation. For these reasons it is a much easier choice than Materials or Consumer Discretionary, which performed better but do not have the same defensive characteristics nor strong logic for inclusion. In Stagflation we will choose the Healthcare sector because it is historically a top performer in that regime and also during expected inflation. From a logic perspective, Healthcare offers essential products and services that are often paid for by insurers. Furthermore, the Healthcare sector has historically been the second best overall performer in slowdowns and recessions and has lower interest rate sensitivity than Utilities, which was also a top performer. A long-term study by MSCI using global sectors from 1975-2021 concluded: “Among sectors, health care has done favorably in stagflation.”
Now that we have our choices for each regime we can backtest the historical performance of using this simple sector timing model:


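A minimal sketch of the rotation logic behind this backtest, reusing the hypothetical `classify_regimes` helper above; the regime-to-ETF map reflects the choices described in this section, and the one-day lag is my assumption to avoid look-ahead bias:

```python
import pandas as pd

REGIME_SECTOR = {"Goldilocks": "XLK",   # Technology
                 "Reflation": "XLE",    # Energy
                 "Stagflation": "XLV",  # Healthcare
                 "Deflation": "XLP"}    # Consumer Staples

def regime_strategy_returns(regimes: pd.Series,
                            sector_prices: pd.DataFrame) -> pd.Series:
    """Daily returns from holding the sector mapped to the prior day's regime."""
    returns = sector_prices.pct_change()
    held = regimes.map(REGIME_SECTOR).shift(1).reindex(returns.index)
    strategy = pd.Series(0.0, index=returns.index)
    for ticker in set(REGIME_SECTOR.values()):
        in_sector = held == ticker
        strategy[in_sector] = returns.loc[in_sector, ticker]
    return strategy.fillna(0.0)
```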
The strategy substantially outperforms the S&P500 with slightly higher volatility but lower drawdown risk. The returns of the strategy are much more asymmetric: relatively higher in bull markets with relatively smaller losses in bear markets. Surprisingly, this simple strategy isn't that sensitive to parameters (it isn't optimal either), but for a production strategy more work still needs to be done to make it robust and to reduce taxes and trading costs.
Overall the strategy produces very high returns especially with equity sectors that have historically been quite noisy. After employing a two factor growth/inflation matrix and seeing the sector performance disparity by regime, it is easier now to understand why simple relative strength strategies do not perform very well with sectors. Macroeconomic regimes change in a nonlinear way- they do not occur in a set or highly predictable order and they can change very quickly. Using a static time-based lookback will gather data that often straddle multiple regimes at once and can only perform well if a regime lasts a long time.
There is a good case for using the Growth and Inflation Sector Timing Model even if it has a higher risk than most are comfortable with: the strategy has the advantage of always being in the market, and therefore faces less timing risk than a tactical model. Even if you choose to get defensive and the market goes up, the defensive sectors will still likely outperform a cash or bond allocation. Overall, the strategy can be considered a return enhancer as part of investor portfolios without creating the typical drag of risk management strategies. That said, we have had a really good 15-year run in equity markets, leaving the strategy vulnerable to market downturns and hence more muted expected performance. Furthermore, there are much better assets to hold in various macro regimes if we expand the opportunity set.
Iterative PSD Shrinkage (IPS)

UPDATE: It was recently brought to my attention by Roman Rubsamen of Portfolio Optimizer, who does an excellent job of curating the large body of research in optimization and mathematics, that the same general methodology was created and extensively tested by Higham in 2016 (the shrinkage targets in IPS and the correlation thresholds are different). Higham's paper demonstrates the usefulness of the general approach on large matrices and the speed of convergence for different variations. While I was not aware of this paper when I created IPS, I am very pleased that there is strong validation for this approach in the literature. Roman's excellent Portfolio Optimizer is a free Web API that provides access to advanced portfolio optimization algorithms and is really worth checking out.
In the previous post, the framework for Generalized Downside Implied Correlations was introduced. You can use this correlation matrix derived from joint risk metrics to replace or augment/blend with traditional correlations for use in analysis or optimization. The challenge is that the resulting matrix may not be positive semi-definite (PSD) or well-conditioned. The goal is therefore a reasonably fast and simple method that can gradually adjust or shrink the correlation matrix while still ensuring it is PSD.
Shrinkage methods are important because they produce a well-conditioned, invertible matrix, which is crucial for applications like optimization. Traditional shrinkage methods such as Ledoit-Wolf seek to improve covariance matrix estimates. Linear versions primarily reduce eigenvalue variance by shrinking the sample covariance matrix towards a target (like the identity matrix). Nonlinear versions use a spectral transformation, often involving the Hilbert transform, to adjust the magnitude of each eigenvalue individually. This data-driven approach seeks to minimize both variance and bias in the eigenvalue estimates, leading to more accurate covariance matrices. Imagine the sample covariance matrix as a noisy picture. Linear Ledoit-Wolf tries to ‘smooth’ this picture by blending it with a simpler, clearer picture (like a blurry version of the original). Nonlinear Ledoit-Wolf, on the other hand, uses a more sophisticated technique. It analyzes the noise in different parts of the picture and adjusts the smoothing level accordingly. This targeted approach allows for a clearer and more accurate representation of the underlying reality. While the theory generally supports PSD results, numerical issues in the implementation can sometimes lead to very small negative eigenvalues. While both methods reduce the chance of negative eigenvalues, they fail to guarantee PSD.
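For reference, the linear Ledoit-Wolf estimator is available off the shelf; a minimal scikit-learn example (the random return matrix is just a stand-in for real data):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 10))   # placeholder T x N matrix of asset returns

lw = LedoitWolf().fit(returns)
shrunk_cov = lw.covariance_            # shrunk covariance estimate
intensity = lw.shrinkage_              # estimated shrinkage intensity toward the scaled identity
```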
Iterative PSD Shrinkage (IPS) combines gradual shrinkage with explicit PSD enforcement using a post-shrinkage check such as the Cholesky decomposition.
The primary goal of this method is to adjust the correlation matrix as little as possible by shrinking toward the target matrix, preserving the original relationships while still ensuring PSD. The framework is quite simple, flexible and computationally efficient. As a result it is well-suited to specialized correlation structures such as reverse-implied downside risk correlations. Iterative PSD shrinkage is fast but generally not the fastest method compared to either shrinkage methods that use analytical solutions or other methods for ensuring a matrix is PSD. However, it does not require the distribution assumptions that analytical methods make, and it converges quickly while offering more control and precision.
Iterative PSD Shrinkage/IPS Steps
1. Start with Your Data:
- Gather the sample covariance or correlation matrix (S) from your data.
2. Define a Target Matrix:
- Create a target matrix (T) to shrink toward, or automatically determine the target within the algorithm based on the average correlation in the universe.
- Examples:
- Constant correlation matrix: A simple, average correlation structure. This is ideal when the average correlation of all assets in the universe is homogeneous or > 0.5
- Identity matrix: Assume no correlation between assets, so correlations = 0. This is ideal when the average correlation of all assets in the universe is heterogeneous or < 0.5

3. Set the Initial Shrinkage Level λ (lambda):
- Calculate an initial shrinkage parameter using a simple heuristic for shrinkage intensity that captures variance or noise/instability in the matrix scaled by the strength of correlations:

4. Combine Matrices:
- Shrink S toward T using a weighted blend:

5. Check for Positive Semi-Definiteness (PSD):
- Test whether the matrix is PSD using the Cholesky decomposition. If the decomposition fails due to negative eigenvalues, the matrix isn't PSD.
6. Adjust λ (lambda):
- Increase lambda by a small amount (alpha) and repeat the blending process. This shrinks S closer to T.


7. Repeat Until PSD:
- Continue adjusting Lambda iteratively until the adjusted matrix is PSD.
8. Finalize the Matrix:
- Once PSD is achieved, the resulting matrix is your shrinkage-enhanced matrix. This is the final matrix used for optimization.
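Putting the steps above together, here is a minimal sketch of IPS in Python. The target selection follows the average-correlation rule from step 2; the starting lambda is set to zero for simplicity rather than the heuristic from step 3, which can be substituted in:

```python
import numpy as np

def is_psd(matrix: np.ndarray) -> bool:
    """Check positive (semi-)definiteness via a Cholesky decomposition."""
    try:
        np.linalg.cholesky(matrix)
        return True
    except np.linalg.LinAlgError:
        return False

def ips_shrink(corr: np.ndarray, lam: float = 0.0, alpha: float = 0.01,
               max_iter: int = 1000) -> np.ndarray:
    """Iteratively blend the correlation matrix toward a target until PSD."""
    n = corr.shape[0]
    avg_corr = corr[~np.eye(n, dtype=bool)].mean()

    # Step 2: constant-correlation target when the universe is homogeneous
    # (average correlation > 0.5), otherwise the identity matrix.
    if avg_corr > 0.5:
        target = np.full((n, n), avg_corr)
        np.fill_diagonal(target, 1.0)
    else:
        target = np.eye(n)

    # Steps 4-7: blend, test with Cholesky, increase lambda by alpha, repeat.
    for _ in range(max_iter):
        blended = (1 - lam) * corr + lam * target
        if is_psd(blended):
            return blended
        lam = min(1.0, lam + alpha)
    return target   # full shrinkage fallback if no PSD blend was found
```

Note that Cholesky strictly tests positive definiteness, so in practice you might add a small tolerance or use an eigenvalue check if an exactly singular but PSD result is acceptable.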
You can build more complex methods within IPS, such as LW shrinkage or even projections onto the nearest PSD correlation matrix as used in Higham's algorithm. You can also change the shrinkage intensity heuristic to use the difference between observed and historical correlation matrices. The general idea of IPS is to constantly test and adjust iteratively to ensure PSD. Using cross-validation or walk-forward testing can help determine the best choice of alpha and also the best methods/targets to use within the IPS framework. However, this simple approach provides the crucial advantage of being easy to understand and implement.
In the previous post I introduced a Drawdown Implied Correlation (DIC) that is a joint time-series measurement which converts maximum drawdowns into a correlation coefficient using a simple formula derived from portfolio math. The DIC had some unique features such as a “point-in-time” reference to the exact point of maximum drawdown, and a “triple reference” which averages the calculation over three separate reference points to improve stability. The DIC also used drawdowns from all-time highs using values outside of the reference window. This makes the metric more idiosyncratic and more difficult to interpret within the traditional framework used by analysts and portfolio managers. Fortunately the framework introduced can be generalized to any well-established downside risk metric. The Generalized Downside Implied Correlation (GDIC) captures joint time-series dynamics which is important for managing portfolio tail risk. The GDIC can be formulated as follows:

The final result of the GDIC is then bounded between 1 and -1.
*Note that the threshold reference point for returns/drawdowns in this case is tied to portfolio data. For example, if the CDaR is 90%, then to find the CDaR for Asset A and Asset B you would use the drawdowns for each asset referenced to the dates when the drawdowns for Portfolio AB exceeded the threshold. For a triple reference you would average the results of using all three time series separately as a reference.
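The figure above shows the exact formulation; as a hedged reconstruction based on the DIC derivation in Part 1 (equal-weight portfolio math, with the chosen downside metric converted into an implied volatility for A, B, and the joint series AB), the GDIC takes a form along these lines:

```latex
% Reconstruction under stated assumptions, not the exact formula from the figure:
% \hat{\sigma}_X denotes the implied volatility backed out of the chosen downside
% metric (CDaR, CVaR, ERoD, Ulcer Index) for X in {A, B, AB}, with w_A = w_B = 1/2.
\rho_{\mathrm{GDIC}} \;=\; \operatorname{clip}\!\left(
  \frac{4\,\hat{\sigma}_{AB}^{2} \;-\; \hat{\sigma}_{A}^{2} \;-\; \hat{\sigma}_{B}^{2}}
       {2\,\hat{\sigma}_{A}\,\hat{\sigma}_{B}},\; -1,\; 1 \right)
```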
This is a very compact and simple framework that allows you to create a custom correlation matrix that reflects downside risk using joint time-series dynamics. The resulting matrix can be used for research and analysis as well as stress-testing. Here are several metrics worth considering:
CDaR or Conditional Drawdown-at-Risk / CVaR or Conditional Value-at-Risk:
Chekhlov, Uryasev and Zabarankin, 2003: Portfolio Optimization with Drawdown Constraints
Krokhmal, Palmquist and Uryasev, 2001: Portfolio Optimization with Conditional Value-at-Risk Objective and Constraints
A former colleague Enn Kuutan wrote an excellent primer on CDaR and CVaR in his master’s thesis.
Michael Kapler of Systematic Investor previously wrote a post on his excellent blog on using CDaR and CVaR tools in R to implement these metrics in the context of portfolio management.
ERoD or Expected Regret of Drawdown:
Ding and Uryasev, 2021: Drawdown Beta and Portfolio Optimization
UI or Ulcer Index:
Martin and McCann, 1987: The Investor’s Guide to Fidelity Funds
All of these metrics are excellent candidates for the GDIC. But there is no need to make a single choice: you can also combine the downside implied correlations from multiple different metrics to form one cohesive matrix that reflects all of the loss dynamics you wish to capture. What is lost in precision is gained in flexibility, allowing the integration of diverse information into a cohesive and adaptable framework. In a subsequent post, I will compare and contrast these different metrics and their advantages/disadvantages.
A Simplified Downside Correlation Estimate
It is important to recognize that the GDIC is an approximation or a heuristic method to derive correlation. Unlike volatility, there is no ironclad mathematical proof of a linkage between these nonlinear metrics and how they combine to form a correlation. However, we showed in the previous post, using the Drawdown Implied Correlation, that there is a clear mathematical link and that this approximation is still useful. While not mathematically equivalent to the Pearson correlation, these implied correlations are interpretable and internally consistent for use in analysis or risk-based optimization. Extreme downside scenarios are often driven by nonlinear relationships (e.g., tail dependence) that traditional correlation fails to capture. Downside-metric-implied correlations can help approximate this tail dependence without requiring complex copula modeling, providing a simplified but effective representation of how assets behave under stress. As a result this heuristic provides a very useful and unique insight into joint risk dynamics.
Applications
In addition, the generalized downside implied correlation matrix can be used along with the corresponding downside risk measure (i.e., if correlations are derived using CDaR, use CDaR in place of volatility) or matched with standard volatility to create a variance/covariance matrix. The inputs can then be used very conveniently in a “Pseudo-Optimization” framework (or approximation) using standard Markowitz/Mean-Variance mathematics and optimization methods. The benefit of this approach over standard linear programming is that it is much faster and more scalable, can handle multiple objectives, and is potentially more robust because it is less overfit to past data. The process of deriving downside implied correlations and volatilities inherently smooths extreme risk patterns by distilling them into a quadratic framework. This smoothing can reduce overfitting and provide a more robust solution under different market conditions. In a sense, this is a more generalized solution that also preserves the integrity of downside metric data (the implied correlations use the original time series sequence of returns and baseline) without having to approximate it with a piecewise linear approach, while preserving the relative rankings. You can then apply random matrices and resampling/Monte Carlo simulation to make the optimization more robust. One caveat is that for the purpose of optimization you still need to ensure the Generalized Downside Correlation Matrix is Positive Semi-Definite or PSD. In the interest of brevity I will present my algorithm for fast PSD adjustment in part 3 of this series.
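As a small illustration of the pseudo-optimization idea (a sketch with names of my own choosing): assemble a covariance-like matrix as Sigma = D · rho · D, where rho is the downside implied correlation matrix and D is a diagonal matrix of the matching downside risk measures, then feed it to any standard mean-variance routine.

```python
import numpy as np

def downside_covariance(corr_gdic: np.ndarray, downside_risk: np.ndarray) -> np.ndarray:
    """Covariance-like matrix built from downside implied correlations and a
    vector of matching downside risk measures (e.g., CDaR) used in place of
    volatility: Sigma = D * rho * D."""
    d = np.diag(downside_risk)
    return d @ corr_gdic @ d

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Unconstrained minimum-variance weights proportional to Sigma^-1 * 1."""
    inverse = np.linalg.pinv(cov)
    raw = inverse @ np.ones(cov.shape[0])
    return raw / raw.sum()
```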
Drawdown Implied Correlations (Part 1)

Diversification is a concept that is critical to most asset managers and traders. The foundation of this body of research is built upon the Pearson correlation coefficient, which is the most popular metric to determine whether adding an asset to a portfolio might enhance diversification. Despite its widespread use, most investment practitioners recognize its limitations. Some of the flawed assumptions include: 1) the assumption of linearity: correlations assume a simplistic linear relationship between assets, overlooking the complexity of real-world market dynamics; 2) the assumption of normality: correlation assumes that returns are normally distributed despite the evidence that asset returns have “fat tails”; and 3) the failure to capture tail risk: correlations fail to adequately capture tail risk during times of financial crisis or market stress.
But the greatest danger that the correlation metric presents is a false sense of security- leading investors to believe their portfolios are diversified (and thus insulated from systematic risk) when in reality they are not. For example, a negative correlation between two assets might suggest effective diversification, but this metric ignores the direction and magnitude of returns. Both assets can lose money simultaneously, even while maintaining a negative correlation, leaving the portfolio exposed to significant losses. This highlights a critical flaw in relying solely on correlation: it focuses on the relationship between returns rather than their actual behavior or contribution to portfolio risk, leading to decisions that fail to achieve real diversification.
After analyzing some basic asset class data, I noticed that 2022 was a very unusual year. Both stocks and bonds had simultaneous declines while inflation hit multi-decade highs. Traditional asset allocation failed, and for the most part classic tactical asset allocation also suffered losses. I noticed a chart posted by Callum Thomas of the Weekly S&P500 Chart Storm that showed just how unusual market conditions were:

Many people will look at this chart and think that this is just an anomaly, but to me this is yet another clear example of why you can't just blindly assume that bonds will protect your equities in a crisis. It also shows that backtesting and model creation can be biased by certain data samples to provide a false sense of security (imagine only using data from 1970 to 2021 to create your model!). Here is a chart showing how the drawdowns in stocks vs bonds evolved during this period along with major Federal Reserve rate hikes/cuts highlighted (note in 2018 there were 4 rate hikes but only the start is on the chart):

Note that most of the drawdowns for both assets seem to be driven by rising rates, which makes sense. During COVID the Fed cut rates and bonds managed to provide diversification during the large stock market drawdown. This suggests that the diversification benefit of holding bonds to balance the risk of stocks has become conditional on expectations for future Fed policy and inflation. But in the absence of complex macroeconomic models we typically rely on a dynamic or rolling correlation to measure diversification. I took a look at the correlations to see if they signalled this diversification failure and to my surprise they were actually negative to barely positive through most of 2022!

So much for traditional Diversification!
The problem is obvious, but what is the alternative? The inspiration for this post is actually based on a conversation from nearly 12 years ago that I had with a former colleague, Dave Abrams, who used to work at CSSA. He once asked me if it was possible to derive a better correlation coefficient using drawdowns. The general idea was that what traders and investors are really interested in is whether the drawdowns from one asset are offset by gains in the other asset or vice versa. We didn't end up creating anything based on this discussion but it was a great insight.
Drawdowns are definitely the key to a practical solution. They represent the true historical reality of investment risk rather than estimates from a normal distribution. Drawdowns focus specifically on the risk of capital erosion and the investor's capacity to weather those losses. They provide crucial information that traditional measures often overlook: 1) the potential for catastrophic loss, 2) a quantifiable measure that captures both tail risk and the emotional toll of losing money, and 3) the “sequence of returns risk,” or the difficulty of making a financial recovery after large losses. This makes drawdowns an essential component of risk analysis and portfolio management, particularly for long-term investors concerned with both the return and preservation of capital. In contrast to correlations and volatility, drawdowns are nonlinear and path-dependent, making them complementary for risk analysis.
The next question is obvious: how do we convert this valuable information into a correlation? Without pausing to cover the existing research and how this is unique (we will cover that in part 2), let's first dive in and demonstrate how we can derive a simple equation that anyone can use to calculate the drawdown correlation from only a single reference or data point. But first let's “correct” a standard volatility measure by reconciling empirical versus theoretical drawdowns. It is well understood that the theoretical/normal distribution estimate for drawdowns can be very different from the empirical or actual maximum drawdown. By capturing this difference we can get a different estimate for “true” volatility. A very basic model for expected maximum drawdowns (EMD) would use the mean or drift minus some multiplier times volatility times the square root of the time measurement. If we use a multiplier of 2, then we are saying that the loss we expect is a 2 standard deviation event, which is quite rare. Regardless of the multiplier used, here is the formula from which we will “reverse imply” (in this case use basic algebra) the volatility or “Drawdown Implied Volatility” (DIV) by solving for it after replacing the EMD with the empirical maximum drawdown.


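The figures above show the exact formulas; as a hedged reconstruction from the verbal description (drift minus a multiplier times volatility times the square root of time), the expected maximum drawdown model and the implied volatility that falls out of it would look roughly like this:

```latex
% Reconstruction from the description above; sign conventions may differ from
% the author's figures (drawdowns here are treated as negative numbers).
EMD \;=\; \mu\,T \;-\; k\,\sigma\sqrt{T}
\qquad\Longrightarrow\qquad
DIV \;=\; \frac{\mu\,T \;-\; MDD_{\mathrm{empirical}}}{k\,\sqrt{T}}
```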
This measurement can be useful on its own in the context of portfolio inputs or used within a trading indicator. But that isn't the goal of this post; instead we will now use this volatility as the basis for solving for correlation by again “reverse-implying” it. To do that we will start with portfolio math since the Drawdown Implied Correlation (DIC) is in fact a “joint” or “portfolio”-derived measurement, which differentiates it from other metrics. After all, it is common sense that in diversification we want to see whether two assets combined in a portfolio reduce drawdowns by more than just the average of their individual drawdowns. This is in essence what the DIC is all about.
To do that we need to create a new portfolio time series (AB) that is the equal weight of two assets being compared (A and B). With this simplification we can now provide weights and use standard portfolio math. Let’s dive into the simple derivation by isolating the correlation from the portfolio volatility formula:

This formula is still relatively involved since it has nested formulas for DIV. But fortunately it can be simplified:

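Again the figures show the exact presentation; the underlying algebra is standard portfolio math with equal weights, and substituting DIV for volatility gives a simplified form along these lines:

```latex
% Equal weights w_A = w_B = 1/2; DIV substituted for volatility throughout.
\sigma_{AB}^{2} = w_A^{2}\sigma_A^{2} + w_B^{2}\sigma_B^{2}
                 + 2\,w_A w_B\,\rho\,\sigma_A \sigma_B
\;\;\Longrightarrow\;\;
\rho_{\mathrm{DIC}} = \frac{4\,DIV_{AB}^{2} - DIV_{A}^{2} - DIV_{B}^{2}}
                           {2\,DIV_{A}\,DIV_{B}}
```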
Before showing some examples using drawdowns let’s verify how this implied formula works by substituting daily historical volatility (HV) and comparing it to the daily correlation coefficient:

As you can see they are exactly the same. The only minor difference when calculating a dynamic DIC is that if you use drawdowns from all-time highs versus only drawdowns contained within some lookback window, then it isn't exactly the same because you are using a cumulative measure for drawdowns that contains information outside of the lookback window. You can certainly use drawdowns entirely within a lookback window to keep the measure mathematically consistent, but it is recommended that you use a much bigger window for calculation to avoid a lot of noise. Regardless, using drawdowns from all-time highs will slightly change the final values in such a way that they can be more negative than -1, which is why you need to bound the DIC between 1 and -1 to provide a practical correlation measure.
Another detail to mention is that volatility is an average metric (the average of squared deviations from the mean) and as such in its basic form it is calculated at the most recent date in the lookback window being measured. In contrast, a maximum drawdown is inherently tied to a “point in time” at which the maximum occurs (the trough), so for accuracy you need to find the date when the maximum occurs within the lookback window as the point of reference. The goal of “point in time” is to align the measurement of diversification at the trough, which is what investors actually experienced at the point of maximum stress, rather than a period-based measure that looks at the maximum drawdown on the most recent date in the lookback window. The graph below depicts the point-in-time reference:

Now let’s elaborate on these details:
Point-in-Time Reference:
- Step 1: Drawdown Calculation for Each Asset:
- For Asset A and Asset B, calculate the drawdowns from their respective all-time highs over a rolling window (e.g., 60 days).
- For the joint time series (AB), calculate the combined drawdown from all-time highs over the same 60-day window.
- Step 2: Find Maximum Drawdown for AB:
- Identify the maximum drawdown for the joint time series (AB) over the 60-day rolling window.
- Retrieve the corresponding drawdown values for Asset A and Asset B on the same day that the maximum drawdown for AB occurs.
- Step 3: Compute the DIC:
- Calculate the implied correlation between the drawdowns of A, B, and AB on the specific day.
- This gives the DIC for the pair of assets based on the maximum drawdown for the joint time series (AB).
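Here is a minimal sketch of the point-in-time calculation, assuming the drawdown series (negative fractions from all-time highs) have already been computed over the rolling window. For simplicity the drift term from the DIV formula is ignored so that the multiplier and square-root-of-time factors cancel; the post's exact simplified formula is shown in the figures above:

```python
import numpy as np
import pandas as pd

def dic_at_reference(dd_a: pd.Series, dd_b: pd.Series, dd_ab: pd.Series,
                     reference: pd.Series) -> float:
    """DIC evaluated at the date of the reference series' deepest drawdown.

    For the point-in-time method the reference is the joint series AB, so the
    drawdowns of A and B are read off on the day AB hits its maximum drawdown."""
    t = reference.idxmin()                        # point-in-time reference date
    a, b, ab = abs(dd_a[t]), abs(dd_b[t]), abs(dd_ab[t])
    if a == 0 or b == 0:
        return float("nan")                       # no drawdown to compare against
    rho = (4 * ab ** 2 - a ** 2 - b ** 2) / (2 * a * b)
    return float(np.clip(rho, -1.0, 1.0))         # bound between -1 and 1
```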
The Key Point: Portfolio Drawdown vs. Individual Asset Drawdowns
The DIC uses portfolio drawdowns which capture the path dependent dynamics from combining two assets. When constructing a portfolio with multiple assets, the portfolio’s drawdown series (the peak-to-trough losses of the combined portfolio) behaves differently than the individual drawdown series of the constituent assets. This difference arises from how the assets interact in a portfolio.
Why Portfolio Drawdown Is Different:
The drawdown of the portfolio is not simply the sum or average of individual asset drawdowns; instead, it reflects the combined behavior of the assets as they interact over time. Two or more assets in the portfolio may experience drawdowns at different times or to different extents, and their drawdown implied correlations will directly influence how the portfolio’s total drawdown evolves.
For example:
- If two assets experience drawdowns simultaneously, their joint drawdown will be greater than what you would expect from either asset alone; this will lead to a measurement of high correlation.
- If the portfolio drawdown is moderate to low compared to the individual assets' drawdowns, this will lead to a measurement of low correlation.
Therefore, the drawdown of the portfolio can reflect behavior and interactions between assets that individual asset drawdowns and returns cannot capture. This is the key reason correlating individual asset drawdowns will not fully explain the portfolio drawdowns.
This can be demonstrated using a simple example. Using the method described above we will compute the DIC using large drawdowns over the past 5 years to demonstrate the calculated values versus the inputs from Step 1 and Step 2. The only difference is that we will use the market (S&P500) instead of the portfolio as the reference point and isolate significant drawdowns (>10%) over the past 5 years. Note that using a threshold is one of many different ways to compute the DIC. The standard approach, which uses a rolling lookback period similar to the Pearson correlation, requires additional calculations that I will discuss shortly. In this example it is interesting to compare the Pearson correlation between drawdowns against the DIC. The table below shows the difference between the Pearson correlation of drawdowns for stocks and bonds versus an average of the DIC values over the same 3 periods. The DIC is a joint measurement and reflects the daily compounding and sequence of returns of the portfolio of both assets; the DIC is calculated for each date and averaged. The two metrics measure different relationships:
- Regular correlation: Pure A-to-B relationship.
- Drawdown Implied correlation: Portfolio-centric A-to-B dynamics.
Both are valuable, but their interpretations diverge. Using both metrics together often provides richer insights into asset behavior.

Notice that the correlation of drawdowns is much more negative than the Drawdown Implied Correlation, suggesting strong diversification benefits that don't match actual investor experience over the same time period. The DIC (the average of the DIC values on each date) shows that in the last two market drawdowns bonds have been positively correlated with stocks and only provided diversification to equity investors during the COVID drawdown in 2020 when the Fed cut rates. Clearly using the DIC can help provide alternative or complementary analysis to computing standard correlations. Another interesting advantage is that the DIC can be calculated using only one drawdown point, while you need a minimum of 3 data points to compute a correlation between drawdowns.
Next, let's compute the “standard” version of the DIC, which uses the max drawdown over some window, and compare it to the rolling Pearson correlation of returns over the same window length. Note that you can certainly use the top percentage of drawdowns or drawdowns above a threshold as well. But because we are only looking at maximum drawdowns with this variation, in order to create a rolling daily measurement I suggest a slight modification to the original calculation by using a “triple point” reference. This means we are going to look at three reference points which represent the maximum drawdown for each asset and the portfolio. The purpose of a triple reference is to get as much information as possible from a shorter window and increase accuracy while reducing indicator volatility. The graph below depicts a visual of the triple point reference:

The triple point reference introduces a layer of robustness by also calculating DIC at the point of maximum drawdown for A and B, in addition to the joint series AB and averaging the three results. This helps capture correlations more comprehensively and reduces bias by considering all assets’ drawdown behavior. If only the joint series AB is considered, the correlation will reflect the behavior of both assets combined during the maximum drawdown period. However, this doesn’t capture how A and B behave individually during drawdowns. By referencing the max drawdowns of A and B separately, you gain insights into how each asset is contributing to the joint drawdown and whether they are truly behaving as diversifiers or correlated assets during stress periods. By considering the drawdowns of A and B individually in addition to AB, you reduce the bias that might be introduced when only looking at the joint series. The maximum drawdown for AB could be influenced by an unusually strong movement in one asset, which might not reflect the risk dynamics between A and B themselves. By averaging the DIC from the three scenarios (max drawdown of AB, A, and B), you smooth out this potential bias and get a more robust measure of correlation.
Triple Point Reference:
- Step 1: Find the point of Max Drawdown for Asset A:
- Now, repeat the process but for Asset A as the reference. Find the maximum drawdown for A over the same 60-day rolling window.
- Retrieve the corresponding drawdown values for Asset B and AB on the same day that the maximum drawdown for A occurs. Calculate the DIC using the exact same formula.
- Step 2: Find the point of Max Drawdown for Asset B:
- Similarly, find the maximum drawdown for Asset B over the same 60-day window.
- Retrieve the corresponding drawdown values for A and AB on the same day as the maximum drawdown for B. Calculate the DIC using the same formula.
- Step 3: Calculate and Average DICs:
- You now have three DICs: one from the maximum drawdown for AB, one from the maximum drawdown for A, and one from the maximum drawdown for B.
- The final DIC is the average of these three DICs, providing a comprehensive view of the correlation during drawdown periods for both individual assets and their joint performance.
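Extending the earlier hypothetical `dic_at_reference` sketch, the triple point reference simply averages the three point-in-time results:

```python
import numpy as np

def dic_triple_reference(dd_a, dd_b, dd_ab) -> float:
    """Average the DIC at the max-drawdown dates of AB, A, and B (Steps 1-3)."""
    values = [dic_at_reference(dd_a, dd_b, dd_ab, ref) for ref in (dd_ab, dd_a, dd_b)]
    return float(np.nanmean(values))
```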
This methodology ensures that DIC reflects correlations during significant drawdown events at the point that they happen during adverse market conditions, which is crucial for portfolio risk management.
Now let’s take a look at what the 60-day DIC looks like versus the 60-day Pearson Correlation for Stocks and Bonds in 2022 to see if we notice an improvement:

What we notice is that the DIC is positive for most of 2022 and rises much faster than the traditional correlation which provides an early warning signal. This is what we want to see in the context of diversification and risk management.
The initial results were encouraging, so I decided to substitute the DIC values manually into a correlation matrix including Stocks, Bonds and Commodities to see if it would help improve minimum variance portfolios during 2022. For volatility I used the standard volatility metric in both cases; however, you could use Drawdown Implied Volatility (DIV) with the DIC in a more drawdown-centric optimization. To stabilize the correlation matrix, I used Ledoit-Wolf Shrinkage to shrink the DIC correlations toward the identity matrix, ensuring that the diagonals remained equal to 1. I verified that the eigenvalues were positive using the Cholesky decomposition. In Part 2, I will present a very fast algorithm to ensure PSD. Here is how the drawdown implied correlation matrix improved performance over using Pearson correlations during a challenging year:

This limited example shows that in the first half of the year, when correlations were negative and the DIC was positive, this provided a substantial advantage, while later in the year when both correlations converged the portfolios showed very similar performance. It is important to keep in mind that the DIC can only help if there is an asset in the universe that is a true diversifier, which in this case was commodities. This highlights the timeless wisdom that universe selection is extremely important. In 2022, bonds and stocks both had deep drawdowns, so in a simple two-asset case there is nowhere to go regardless of what the correlation is indicating. Portfolio math for two assets shows that a more positive correlation will increase the allocation to the lower risk asset (100% allocation as the correlation approaches 1) while a more negative correlation will balance risk between both (risk parity with a -1 correlation), but if both assets have steep drawdowns that are similar and you identify a positive correlation, it mathematically doesn't matter very much for portfolio outcomes.
In terms of practical use, combining the Drawdown Implied Correlation matrix with the regular correlation matrix can be valuable because together they provide more information than either metric provides individually. Using more drawdowns (the top "n" drawdowns, the top percentage of drawdowns, or drawdowns that exceed a percentile threshold), a longer time period for the DIC, and multiple windows for both the drawdowns (maximum drawdowns over the past "n" days versus all-time highs, or only considering drawdowns within the window) and their measurement period could improve the robustness and quality of the information it provides. In Part 2 we will discuss the existing research and where the DIC fits, and also show a fast method to create DIC matrices that are PSD along with some new applications. In the meantime, I want to thank all of my readers for your support and valuable contributions over the years. I wish you all a very Happy Holidays and an awesome New Year in 2025!
Business Cycle Sector Timing
The business cycle is a pattern that captures changes in economic activity over time. These changes occur in a sequential manner, moving through a predictable series of phases. The cycles recur consistently but vary in both duration and intensity. The phases of the business cycle are:
- Expansion: This is the phase where the economy is growing. During an expansion, economic growth is positive, leading to increased production, job creation, and rising prosperity.
- Peak: The peak is the highest point in the business cycle, where the economy is operating at or near its full potential. Economic indicators may start to show signs of slowing growth, but it’s still a time of relative prosperity.
- Contraction (Recession): After the peak, the economy starts to slow down. This phase is known as a contraction or recession. Economic growth becomes negative or significantly slows, leading to reduced consumer spending, lower business investments, job losses, and declining production. This is generally a period of economic hardship.
- Trough: The trough is the lowest point in the business cycle, where the economy hits bottom. Economic indicators may show very weak performance during this phase.

Notice that the Peak and Trough are followed by a Slowdown and Recovery. This part is important because peaks and troughs (tops and bottoms) are only known after the fact. Most of the time in the business cycle is spent in the four phases of Recession, Recovery, Expansion and Slowdown. It is logical to assume that different sectors of the economy will perform better in these four phases since they have different degrees of economic sensitivity. The original post on The Visual Capitalist was an amazing summary of S&P500 sector performance during these phases of the business cycle going all the way back to 1960. The data is summarized below:

Clear and predictable patterns begin to emerge, and it is even more helpful to summarize sector performance using normalized z-scores showing the deviation from average sector performance, scaled from 0 to 100%, during each phase:

Recessions and slowdowns favor holding consumer staples, which makes sense because staples are products that people buy regardless of the state of the economy, such as food and cleaning products. Recoveries (and also expansions) favor real estate, likely because it carries more debt and hence leverage (and therefore earnings leverage) to a rebound in economic activity. As activity rebounds, vacancies begin to shrink, demand causes rents to climb, and the prices of land and buildings also tend to rise. During expansions the fastest-growing companies are naturally technology companies because their business models are the easiest to scale quickly, reaching millions of consumers with the least amount of capital invested.
But how do we capture this in a quantitative strategy? Notice that timing the business cycle with four phases looks a lot like a trend-following strategy since we are following changes in GDP. One of the best ways to capture market expectations for changes in GDP in real time, without using lagging data, is to simply look at the S&P500 index. An expansion is naturally a period when both short-term and long-term trends are rising. A slowdown is when the long-term trend is up but the short-term trend is rolling over. A recession is when both the long-term and short-term trends are down. A recovery is when the long-term trend is down but the short-term trend is up. This simple definition avoids having to define the four phases in economic terms, which matters because it is impossible to identify these phases until after the fact. Every phase of the business cycle must be accompanied by a change in the trend of the market. While false signals are to be expected, this approach will be able to profit the few times that the signals are correct. Here is a simple strategy to capture this below:

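For illustration, here is a minimal sketch of the phase classification logic just described. The moving-average lengths, the price-versus-moving-average definition of "trend", and the sector ETF mapping are assumptions for illustration only; the actual rules are those shown in the strategy table above.

```python
import pandas as pd

# Illustrative parameters; the actual rules are in the strategy table above.
SHORT_WINDOW, LONG_WINDOW = 50, 200

PHASE_SECTORS = {                    # hypothetical mapping loosely based on the z-score table
    "expansion": ["XLK", "XLRE"],    # technology, real estate
    "slowdown":  ["XLP", "XLU"],     # consumer staples, utilities
    "recession": ["XLP", "XLV"],     # consumer staples, health care
    "recovery":  ["XLRE", "XLY"],    # real estate, consumer discretionary
}

def classify_phase(prices: pd.Series) -> str:
    """Map short- and long-term trend direction to a business cycle phase."""
    short_up = prices.iloc[-1] > prices.rolling(SHORT_WINDOW).mean().iloc[-1]
    long_up = prices.iloc[-1] > prices.rolling(LONG_WINDOW).mean().iloc[-1]
    if long_up and short_up:
        return "expansion"
    if long_up and not short_up:
        return "slowdown"
    if not long_up and not short_up:
        return "recession"
    return "recovery"

# Usage: phase = classify_phase(spx_close); holdings = PHASE_SECTORS[phase]
```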
Here is the performance of this strategy over the last 23 years. It would be interesting to see how this performs going further back using either daily or monthly data.

Here are the impressive summary stats for this strategy, which is always 100% invested in equities:

The performance of this business cycle sector timing strategy substantially outperforms the S&P500 with lower volatility. Perhaps with tactical overlays the risk-adjusted performance can be further enhanced. Having tested a lot of momentum strategies on sectors with disappointing results, the likely explanation is that they capture a lot of noise and fail to capture economic logic. There are many possible ways to improve this business cycle strategy and also to make it more robust. Next week when I return from vacation I will try to post a follow-up showing different ways to implement this strategy and also present some additional ideas. In the meantime I will not be responding to comments.
Adaptive Momentum on Major Asset Classes
In the last post on Adaptive Momentum I presented a backtest on the S&P500 via SPY. Since this was an exploratory post, I had not tried the methodology on other asset classes. I wasn’t sure how effective it would be on markets such as commodities since the leverage effect was less likely to be present. I was also curious to see how the indicator would perform on bonds and international equity markets. The results showed once again that adaptive momentum was a more profitable approach than traditional methods which was encouraging. The data tables are presented below:



Overall the results are very encouraging. Some experimentation shows that significant improvement can be made with parameter optimization and that a broad range of parameters performs quite well (the default settings were by no means optimal). I like this approach because it allows for a more flexible momentum response. In contrast, using fixed lookbacks or a portfolio of all lookbacks didn't seem to address the basic issue of adapting to market speed. The signals from this methodology will be added to the Investor IQ website at CSS Analytics in the near future for hundreds of tickers. The performance of the existing analytics on the site this year has been outstanding, so this is just another tool to use. If you haven't already joined, please subscribe to this free resource using your email address.
How Should Trend-Followers Adjust to the Modern Environment?: Enter Adaptive Momentum
The premise of using either time-series momentum or "trend-following" with moving averages is the same; only the math differs slightly (see Which Trend Is Your Friend? by AQR): using some fixed lookback, you can time market cycles and capture more upside than downside, thereby improving performance versus buy and hold, or at the very least improving return versus downside risk. The problem lies in the "fixed" portion of the description: markets as we know are non-stationary and business cycles can vary widely.
In 2020, the COVID-19 selloff was unprecedented in its speed and ferocity relative to past corrections. The chart below from “Towards Data Science” shows a comparison of the drawdown depth and number of days it took to get near the bottom.

What you can also gather from the chart above is that the corrections in 2015 and 2018 were also relatively fast selloffs, and that this may be a defining feature of the modern environment. One can speculate rationally as to the cause: monetary-policy-driven asset bubbles via low interest rates that inflate assets like balloons which deflate quickly when the air is removed, and/or computerized trading that takes advantage of sellers by moving prices rapidly away as pressure increases. Regardless of the reason, the reality is that markets seem to take the escalator up and the elevator down these days. Traditional methods that rely on linear and static time-based lookbacks have been doing quite poorly, which is not surprising. An article chronicling the struggles of trend-followers was posted on Bloomberg:

As you can see, CTAs have not had an easy ride as of late, and their struggles seemed to start around 2015. I know what some of you are thinking: tactical long-only trend-following with ETFs is not the same as long/short hedge funds that trade futures. You are right and wrong: right in the sense that long-only ETF strategies tend to profit from the positive drift of the underlying asset classes, whereas long/short strategies have no such favorable tailwind, especially with low interest rates. But wrong in the sense that this does not discount the timing component of returns, which has been unfavorable for equity indices. The problem is more straightforward when you consider how trend-following generates excess returns. In a fantastic post, Philosophical Economics illustrates exactly what is responsible for trend-following P/L:

The bottom line is that if markets are moving faster than the moving average oscillation period (roughly half the lookback), you will lose money via whipsaw. This is made worse if the oscillation is asymmetric, such as when it takes longer to go up than to come down. Most trend-followers or tactical managers employ a one-year or six-month lookback, which can be too long if the drawdown materializes within less than a quarter. Furthermore, using monthly holding periods rather than daily signal inspection introduces more luck into the equation, since drawdowns can happen at any time rather than on someone's desired rebalancing date. Savvy portfolio managers use multiple lookbacks and holding periods in order to reduce the variance associated with not knowing what the oscillation period will look like. However, this does not address the core problem, which requires a more dynamic or nonlinear approach.
In a recent paper called “Momentum Turning Points”, Garg et al. explore the nature of dynamic trend-following or time-series momentum strategies. They call situations where short-term and long-term trends disagree “turning points”, and the number of these turning points determines trend-following performance. Allocate Smartly provides a fantastic post reviewing the paper and shows the following chart:

A greater frequency of turning points is consistent with a faster oscillation as described by Philosophical Economics. As you can see, when the number of turning points increases, performance tends to decrease for traditional trend-following strategies, which tend to rely on longer-term, static lookbacks.
In the paper, Garg et al. classify different market states that result from short-term versus long-term trend-following signals:
- Bull: ST UP, LT UP
- Correction: ST DOWN, LT UP
- Bear: ST DOWN, LT DOWN
- Rebound: ST UP, LT DOWN
In reviewing optimal trend-following lookbacks as a function of market state the paper came up with an interesting conclusion:
“The conclusion from our state-dependent speed analysis: elect slower-speed momentum after Correction months and faster-speed momentum after Rebound months“
Unfortunately their solution for making such adjustments relies on longer-term optimization based on previous data. Even if this is done walk-forward, there is a considerable lag in the adjustment period. There is a simpler way to account for oscillations that may occur more rapidly and potentially in a non-linear fashion. To state the obvious, a "drawdown" is itself a nonlinear variable that is independent of time and is the measure most directly tied to investor profits, which makes it a good candidate for making adjustments. Furthermore, small drawdowns (corrections), based on the analysis in the paper above, call for longer lookbacks, while large drawdowns (which precede rebounds) call for shorter/faster lookbacks. If we can use a relative measure of historical drawdowns within an adaptive framework, this should solve the problem more directly. In this case we don't care about how many turning points there are (which is the assumption made by choosing a static lookback) but rather how to adjust to them in a logical way.
The Simple Solution
The first step is to create a series of drawdowns from all-time highs (it isn't critical to use all-time highs; one-year highs work well too). Then find the empirical distribution of these drawdowns using a percentile ranking over the past six months (again, the window is not critical; it can be longer, shorter, or a combination). I use the square of this value for the simple reason that we want to ignore small drawdowns and focus on larger drawdowns when adjusting the lookback length (remember: corrections call for long-term lookbacks, while rebounds after large drawdowns call for short-term lookbacks). Next, choose a short-term trend-following lookback and a long-term lookback. In this case I chose 50 and 200 days, which are widely followed by market participants via their respective moving averages, but again the parameter choice is not critical. One practical point is that it is inefficient to use a very short lookback such as 20 days for tactical trading. We can calculate the optimal alpha of a moving average within the exponential moving average framework as follows:
P = percentile ranking of drawdowns, squared
ST = short-term alpha
LT = long-term alpha
Optimal alpha = P*ST + (1-P)*LT
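For concreteness, here is a minimal sketch of the calculation above, assuming daily closes in a pandas Series. The 126-day (~6 month) percentile window, the strict-inequality tie handling (so that new highs rank at zero), the alpha = 2/(lookback+1) conversion, and the seeding of the recursion are all implementation assumptions rather than the exact settings used in the backtest.

```python
import pandas as pd

def adaptive_momentum_ma(close: pd.Series,
                         short_len: int = 50,
                         long_len: int = 200,
                         rank_window: int = 126) -> pd.Series:
    """Adaptive EMA whose alpha blends fast and slow speeds using the squared
    percentile rank of drawdowns from all-time highs, per the formula above."""
    depth = 1.0 - close / close.cummax()          # drawdown magnitude from all-time highs
    # Percentile rank of today's drawdown within the past ~6 months; strict inequality
    # keeps P at zero while the market sits at new highs. Then square it.
    p = depth.rolling(rank_window).apply(lambda x: (x < x[-1]).mean(), raw=True) ** 2

    st_alpha = 2.0 / (short_len + 1)              # fast alpha from the 50-day lookback
    lt_alpha = 2.0 / (long_len + 1)               # slow alpha from the 200-day lookback
    alpha = (p * st_alpha + (1.0 - p) * lt_alpha).fillna(lt_alpha)

    ama = close.copy()                            # EMA recursion with a time-varying alpha
    for i in range(1, len(close)):
        ama.iloc[i] = alpha.iloc[i] * close.iloc[i] + (1.0 - alpha.iloc[i]) * ama.iloc[i - 1]
    return ama

# Adaptive Momentum signal: long when the 10-day filtered price is above the adaptive MA.
# signal = (close.rolling(10).mean() > adaptive_momentum_ma(close)).astype(int)
```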
We then calculate an adaptive moving average using the optimal alpha which looks like this:

Notice that the adaptive moving average gets slower as the market makes new highs and faster after large drawdowns, exactly as we would expect. The result is that it permits both earlier exits from and earlier entries into the market. The latter is far more important given the market's tendency to make "V-shaped" recoveries. But to test this theory we need to backtest over a long time period and compare to traditional static lookbacks. Instead of using fixed holding periods we generate signals daily at the close. Since an adaptive moving average is not as effective a low-pass filter as a simple moving average, it is important to filter the price when using a traditional price-versus-moving-average strategy. To filter the price I take its 10-day moving average (it could be 3 or 15; it doesn't matter that much). Note that comparing price (or filtered price) to a moving average in an exponential moving average framework is mathematically equivalent to momentum, so using price versus an EMA in this context is essentially an adaptive momentum calculation. Here is what Adaptive Momentum looks like versus typical static lookbacks:

And for the quants out there here is the performance table:

Adaptive Momentum is the best performer in terms of CAGR, but the biggest difference is in risk-adjusted returns (Sharpe) and higher moments (skew and kurtosis). Adaptive Momentum has more positive skew and lower kurtosis, indicating higher upside/downside capture and lower tail risk. What is most impressive is that it does so with nearly the same number of trades as 12-month time-series momentum. If you look carefully at the line chart you can see that Adaptive Momentum does much better in recent years than the static lookbacks, which is what we would expect. Overall performance is impressive, and this can be considered a practical approach for tactical asset allocation. An interesting note is that if you dispense with the percentile ranking and simply set the lookback to the maximum when the market is making new highs and to the minimum when it is not, you get fairly similar performance. Overall, the strategy doesn't exhibit much parameter sensitivity at all.
While this simple solution is not a perfect approach, it certainly does make intuitive sense and produces worthwhile results. Given a choice between static and adaptive/dynamic I would personally take this type of approach for real-life trading.
Mean-Reversion Trading Strategies in Python Course

This post contains affiliate links. An affiliate link means CSSA may receive compensation if you make a purchase through the link, without any extra cost to you. CSSA strives to promote only products and services which provide value to my business and those which I believe could help you, the reader.
In the last post I interviewed Dr. Ernest Chan who is the author of the Mean-Reversion Trading Strategies in Python Course that I will be reviewing in this post. Readers interested in enrolling in the course can follow this link and receive an additional 5% off by using the coupon code: CSSA5
The course is put together by Quantra/QuantInsti which provides algorithmic trading courses in a slick e-learning format for a wide variety of different topics including Momentum Trading Strategies which I covered in a previous post.
Review of Mean-Reversion Trading Strategies in Python

A long time ago, when I first started trading using quantitative methods, I tried implementing statistical arbitrage strategies with a friend of mine using only an Excel notebook with a data feed. While I had a reasonable understanding of what to do, I didn't have the technical knowledge, practical experience, or programming/operational skills to do things properly. While the backtests looked great, real-life trading made them look like a mirage in an oasis. Not surprisingly, this "half-assed" operation was a failure. If only I could have gone back in time and taken this course, I probably would have had a much better chance of succeeding. The course teaches you both beginner and advanced stat arb techniques and shows you how to work with Python code and connect directly to Interactive Brokers.
The course teacher, Dr. Chan, is not some ivory tower academic or financial economist discussing arbitrage opportunities on paper or the classroom chalkboard; he is an actual hedge fund manager with a successful track record. As a result, the course incorporates all the important reality checks: from the obvious, such as transaction costs, to the often overlooked, such as short-selling availability and borrow costs, to the arcane, such as the importance of using non-negative weights in index arbitrage to avoid added exposure to stock-specific risk. In my opinion this is why quants should strongly consider taking courses on Quantra/QuantInsti if they want to, at the very least, avoid beginner mistakes that cost a lot more money than the cost of taking a course.
The course starts out by defining stationarity and then relates this to a mean-reversion trading strategy. Various statistical tests such as the Augmented Dickey-Fuller (ADF) test are presented, along with both the requisite mathematics and the Python code for calculation. Then basic mean-reversion strategies are presented, such as those that use Bollinger Bands or z-scores applied to a stationary time series. This is extended to creating an actual portfolio of positions, managing it using signals, and calculating P/L. I liked the fact that an explanation was provided for why you would test for stationarity prior to backtesting a trading strategy, since the majority of traders move immediately to backtesting.
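As a generic illustration of that workflow (not taken from the course materials), here is a minimal sketch of an ADF stationarity check followed by a z-score style mean-reversion signal on a spread. The entry/exit thresholds and the rolling window are arbitrary choices for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def is_stationary(series: pd.Series, alpha: float = 0.05) -> bool:
    """Augmented Dickey-Fuller test: reject the unit-root null if p-value < alpha."""
    pvalue = adfuller(series.dropna())[1]
    return pvalue < alpha

def zscore_signal(spread: pd.Series, window: int = 20,
                  entry: float = 2.0, exit_: float = 0.5) -> pd.Series:
    """Bollinger/z-score style mean-reversion on a (stationary) spread:
    short above +entry, long below -entry, flat once |z| falls under exit_."""
    z = (spread - spread.rolling(window).mean()) / spread.rolling(window).std()
    position = pd.Series(0.0, index=spread.index)
    pos = 0.0
    for t, zt in z.items():
        if np.isnan(zt):
            pass                                   # warm-up period: stay flat
        elif pos == 0.0:
            pos = -1.0 if zt > entry else (1.0 if zt < -entry else 0.0)
        elif abs(zt) < exit_:
            pos = 0.0                              # exit once the spread normalizes
        position.loc[t] = pos
    return position
```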
Dr. Chan then discusses the Johansen test as a more versatile alternative to the ADF test and shows how to use the calculated eigenvectors in a wide variety of applications. The cool stuff ranged from running arbitrage on triplets (three stocks at a time) to basket arbitrage. Half-life calculations and their applications were also an important part of the content (this is often explained poorly by many sources, but not in this case). I liked that the course covered risk management and also how to deal with broken pairs, something that everyone needs to know. Guidance was also provided on the best markets for pairs trading. The course concludes with cross-sectional mean-reversion strategies, which have been covered a lot on this blog. Python code is provided for everything in this course, as well as turnkey approaches you can apply immediately in Interactive Brokers.
Overall, if you are looking to get into stat arb, whether at the firm level or for yourself, this is a great primer. While it doesn't show you any secret sauce, it gives you all the technical and practical coding knowledge as a foundation for developing your own. So if you are already very experienced, this course probably isn't for you. But if you are experienced yet lacking on the technical/practical side, you should definitely consider taking it. There is also a Quantra community to help answer questions and build networks. Hats off to Dr. Chan and the very talented team at Quantra/QuantInsti for putting this course together.
An Interview with Dr. Ernest Chan

In the last post I reviewed the Momentum Trading Strategies Course by Quantra (a division of QuantInsti), part of a recent educational journey to improve my quantitative skill set. The next course that I will be reviewing is Mean-Reversion Strategies in Python, which is taught by Dr. Ernest Chan. I have personally read Ernie's book "Machine Trading", which is very well written and full of interesting and practical ideas. I have also been a follower of his very popular blog, which was a pioneer in revealing statistical arbitrage strategies such as pairs trading. Dr. Chan is a thought leader and industry expert, and anyone in the quantitative field has inevitably come across his work in one form or another. I reached out to interview him to get a few insights into how to think about quantitative models in the modern era. For those unfamiliar with Dr. Chan's work, I have provided his very impressive (and extensive) industry and educational credentials below.
Industry Background: Dr. Chan is the Founder of PredictNow.ai, a financial machine learning SaaS, and also the Managing Member of QTS Capital Management, LLC., a commodity pool operator and trading advisor. His primary focus has been on the development of statistical models and advanced computer algorithms to find patterns and trends in large quantities of data. He has applied his expertise in statistical pattern recognition to projects ranging from textual retrieval at IBM Research, mining customer relationship data at Morgan Stanley, and statistical arbitrage trading strategy research at Credit Suisse, Mapleridge Capital Management, and other hedge funds.
Educational Background
Dr. Chan is an industry expert on 'Algorithmic Options Trading' and has conducted seminars and lectures in many international forums. Besides being a faculty member at QuantInsti, his courses are available on Quantra and on major web portals. Dr. Chan is also an adjunct faculty member at Northwestern University's Master's in Data Science program. His courses and publications on finance and machine learning can be found at www.epchan.com. Ernie is the author of "Quantitative Trading: How to Build Your Own Algorithmic Trading Business", "Algorithmic Trading: Winning Strategies and Their Rationale", and "Machine Trading", all published by John Wiley & Sons. He maintains a popular blog, "Quantitative Trading", at epchan.blogspot.com. Ernie received his Ph.D. in Physics from Cornell University.
Interview with Dr. Ernest Chan
1) What do you think about traditional factor investing and trading strategies that use technical indicators? Can they be profitable in the modern environment?
We don’t use machine learning to generate trading signals, but rather to determine the probability of profit of the existing trading signals generated by a basic, traditional quantitative strategy. This strategy can be a factor model or one based on simple technical indicators. This probability of profit can then be used to determine the order size, which can be zero if the probability is too low.
Factors and technical indicators are still crucial for the basic strategy. I don’t believe that machine learning can replace human intuition and understanding. In fact, it should be used to enhance such understanding and risk management. The input to a machine learning algorithm is nothing but factors and technical indicators.
2) If you could choose between a Momentum and Mean-Reversion approach to trading which would you choose and why?
I would trade both. Otherwise the portfolio would not be market neutral since momentum strategies are typically short beta while mean reversion strategies are long. Also, momentum strategies are long “gamma” and “vega”, while mean reversion strategies are short. Note that I put quotation marks around such options Greeks because we are not really trading options nor implied volatility. I am using these terms loosely to indicate an increase in tail movements and realized volatility.
3) Why should traders strongly consider using machine learning in their trading versus hand-coding their own quantitative systems or using more simple statistical tools? For traders that aren’t familiar with coding what do you think is the best way to get started?
Traditional quant strategies are too easily replicated by other equally intelligent traders, hence they suffer more rapid alpha decay. ML strategies have so many parameters and nuances that no two traders can possibly have the same strategy. Traders who are not experts in machine learning or programming can start with a no-code machine learning service such as predictnow.ai.
4) What do you think is the biggest challenge for newbies trying to design their own machine-learning models?
Machine learning requires abundant and correctly engineered features as input. I have seen many newbies trying to use 4 or 5 inputs to a ML algorithm. They should instead be using at least 100 inputs.
5) Do you have a preference in terms of the type of machine learning model you use such as Neural Networks vs KNN or Decision Trees? If so why?
Decision trees, or the more advanced version called random forest, is the preferred ML method for trading. That’s because it doesn’t have as many parameters to fit as a neural network, thus reducing the danger of data snooping bias. Also, the output of a decision tree is a bunch of conditional decision rules, which are much easier to interpret than the nonlinear functions that neural networks use. On the other hand, KNN or logistic regression are too simple – they don’t capture a lot of the nonlinear dependence between different input features and the output return.
6) Many traders and market commentators have noticed that markets seem quite a bit different than in the past. The market seems to move much more quickly and reacts to news in ways that are counterintuitive. Given your vast experience with algorithmic trading what new trends or insights have you gathered in the last few years? Have you made any specific adjustments or recalibrated your models accordingly?
Market patterns often deviate from the “norm” over a short period (e.g. 6 months-1 year), but they often revert to the norm. One needs to diversify so that some strategies are enhanced during such periods, even though others are hurt. Such regime changes can also be detected or predicted to some extent by machine learning.
Thanks Ernie for the interview!
Momentum Trading Strategies Course
This post contains affiliate links. An affiliate link means CSSA may receive compensation if you make a purchase through the link, without any extra cost to you. CSSA strives to promote only products and services which provide value to my business and those which I believe could help you, the reader.
One of the biggest barriers to creating a quantitative strategy is knowing how to code. The other barrier is having sufficient theoretical and empirical knowledge. Getting a degree in finance can help with the latter, and a computer science degree can help with the former but if you want to be able to do both you often have to start from scratch which can be very intimidating. I recently took the Momentum Trading Strategies Course by Quantra which is unique because it teaches you both the background theory and empirical research as well as how to code examples in Python– currently the most popular language for algorithmic traders. Given that my coding skills are limited to using Microsoft Excel, this course was especially useful and I even learned a few new things on the research side. Readers interested in enrolling in the course can follow this link and receive an additional 5% off by using the coupon code: CSSA5
Note: This course is currently priced at $179 but will return to its normal price of $499 on November 2.
Before getting to my review of the course below, it is important for readers to know a little bit more about the service and the players behind the scenes:
Quantra is a learning platform for algorithmic trading courses which, through advanced interactive and hands-on learning technology, offers content curated by some of the top thought leaders in the domain of algorithmic trading, including:
1) Dr Ernest P. Chan
2) Laurent Bernut
3) Dr Terry Benzschawel
4) National Stock Exchange (World’s Biggest Derivatives Exchange)
5) Multi Commodity Exchange (India’s Leading Commodity Exchange)
6) Interactive Brokers
7) Forex Capital Markets (FXCM)
The parent company of Quantra is QuantInsti, which was founded by one of India's biggest HFT firms, iRage, and is today one of the world's most prominent algorithmic and quantitative trading research institutes, with a user base in 180+ countries.
Review of the Momentum Trading Strategies Course
First, I have to say that this is a really comprehensive course with very slick technology for the e-learning community. The course took me a couple of days to complete, which was longer than I expected, but it also went into far greater depth than I expected. To get the most out of the course you should also read the recommended research articles and work on coding the examples.
It starts off very basic, almost too basic for those familiar with momentum, but gradually builds and gets more advanced with each segment. The topics covered early on include what momentum is and why it exists as an anomaly. By the time you get to the fifth section you are being introduced to Python and how to work with commands and load in data for analysis. You then cover more advanced topics, ranging from the Hurst Exponent to cross-sectional arbitrage strategies in futures that exploit roll returns.
In this comprehensive curriculum it seems like every major popular paper on momentum is neatly summarized, and the course also covers important topics like Momentum Crashes and risk management. Each segment has linked examples using Python. There are also multiple-choice questions to test your memory and comprehension of the material. As you reach the end of the course you are introduced to even more practical topics like how to automate trading strategies and link to broker APIs.
Overall I was very impressed and I think this is exactly the kind of e-learning alternative that both students and traders/investors need to make their dreams of having their own automated strategy a reality. In subsequent posts I plan to continue to share my learning journey by trying new courses and will provide readers again with a review. Hats off to the team at QuantInsti for being an innovator in this space.


