Details of a Noise-Value Strategy

PDF Version: P123 Strategy Design Topic 2B – Details of a Noise-Value Strategy

Topic 2A spelled out an idea for a strategy based on the notion that in the stock market, Price = Value + Noise, or P = V+N, as well as a plan for how we articulate the strategy. We’ll do the details here.

The screen and ranking system that will be created can be found in the Group section of the forum or via these direct links:


Ranking System:

If you wish to use them or revise them according to your own preferences, save copies of each in your account:

Tests and variations will be discussed in Topic 2C.

Brief Restatement of the Idea

Recognizing that noise usually accounts for some portion of a stock’s price, we’re going to try to identify stocks whose prices currently reflect abnormally low levels of noise and for which we see potential catalysts that could cause noise to increase in the near future. That adjustment should serve as a source of extra return.

Housekeeping Items

I like to get some busy-work out of the way up front.

Backtest Settings

I’m going to do some changing later, but I’ll start with some default preferences:

Max Period

0.25% slippage

Risk statistics Period – Monthly

Rebalance Frequency – 4 Weeks

Max. No. Stocks – 20

Probably the most significant choices are for 20 stocks and 4-week rebalancing. These are my starting choices because . . .

  • 4 weeks is a good interval between 1 week (which uses the freshest data but may not allow enough time for ideas to play out in the market) and three months (a nice opportunity for ideas to play out, but a choice that can allow new data suggesting we should sell to sit there for a while).
  • 20 stocks is a good number that provides reasonable diversification (I’m not interested in statistical correlations, which take care of themselves; I’m interested in diversifying away the potential for oddball data items, which we can limit but not eliminate with 100% certainty) while keeping turnover under control (though I personally trade with FolioInvesting in a tax-sheltered account, so for myself, I really don’t care about turnover per se).

If you see a model from me with other parameters, it means I made some thoughtful choices after having started with these defaults.

It almost doesn’t make a difference what interval I start with; I’ll look at sub-periods as I go along.

Main Settings

  • Universe – PRussell 2000

This is not simply a universe option; it’s an inherent element of the strategy. Noise is more likely among stocks that receive less attention from the media and the Street, so I want to target an area where low noise is a more substantial aberration.

Note: Since I’m not using either the Rank or Ratings function in the screen, the results would not change if I left the universe at “Company Fundamentals – All” and used Universe(PRussell2000) in a screening rule. The choice of universe will impact the numeric rank, but not the sequence.

  • Ranking System – “R2000 low noise ranking method”

This is the name of the ranking system I created for this model. Details will be presented below.

  • Max No. of Stocks – 20

I could have set this in the backtest area. I can do it here. What’s important is that it be done.

  • Benchmark – iShares Russell 2000 ETF

I want a Russell 2000 benchmark in order to see if taking the trouble to execute this strategy makes sense considering I could easily get small-cap exposure simply by owning the ETF. And I choose the ETF, in lieu of the index itself, because I want an apples-to-apples comparison with the strategy, performance of which will reflect the impact of dividends. Nowadays, we have the choice of Russell 2000 w/Div. So I suppose my continued use of the ETF is a matter of old habit.


Notice I’m working on this before I show you the ranking system. That’s because this is the way I do it in real life. As explained in Topic 2A, I tend to have much more conviction regarding the screens. If I see that a screen isn’t working, I abandon the idea. I don’t even think about ranking until after I see that the screen is OK.

Now, let’s get to the rules:

  • country("usa")
  • gics(40)=false // no finance
  • IncTaxExpTTM>0
  • OpIncAftDeprTTM>0
  • ComEqQ>0
  • PfdEquityQ>=0
  • DbtTotQ>=0

The above rules are designed to eliminate situations involving data that may not be compatible with my ideas. This is an important part of any strategy. Mr. Market awards no brownie points to strategies that consider every possible stock. The risk of oddball data items pushing companies that violate the spirit of the law into our results can never be reduced to zero. But we should still try to do what we can, within reason, to cut the probability (and hope diversification can carry us the rest of the way, or at least as far as we need to go).

Next, from Topic 1D, you know that I’m going to calculate PctV with a formula that uses cost of capital. It’s not a pre-packaged formula. In fact, it’s not even a formula at all. It’s like DDM in that it’s a fabulous piece of theory that’s nightmarish in the real world. But unlike DDM, we really need it, or at least some rational approximation, in order to make ideas work. (Part of our reason for backtesting will be to show us if our application of spit and chewing gum was tolerable.)

I’m going to handle WACC through the following sequence of ShowVar rules:

  • ShowVar(@DbtCost,(close(0,#tnx)/10)+3)
  • ShowVar(@PfdCost,@DbtCost+1)
  • ShowVar(@CostEq,@DbtCost+5)

As you can see, everything starts with the 10-year Treasury. I assume the cost of debt is 3 percentage points above the treasury, that the cost of preferred is 1 percentage point above the cost of debt, and that the cost of equity is 5 percentage points above the cost of debt.
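To make the arithmetic concrete, here’s a minimal Python sketch of those three spreads (the function name and the sample #tnx quote are my own illustration, not Portfolio123 code; #tnx quotes the 10-year yield times 10, hence the division by 10):

```python
def capital_costs(tnx_close):
    """Derive assumed costs (in %) for each capital item from the #tnx close.

    #tnx is quoted at ten times the 10-year Treasury yield, so divide by 10.
    """
    dbt_cost = tnx_close / 10 + 3   # debt: Treasury yield + 3 points
    pfd_cost = dbt_cost + 1         # preferred: cost of debt + 1 point
    eq_cost = dbt_cost + 5          # equity: cost of debt + 5 points
    return {"debt": dbt_cost, "preferred": pfd_cost, "equity": eq_cost}

# With #tnx closing at 45.0 (a 4.5% ten-year yield):
print(capital_costs(45.0))  # {'debt': 7.5, 'preferred': 8.5, 'equity': 12.5}
```

You can tweak the three spreads (3, 1, and 5 points) to reflect your own view of credit conditions; nothing about the strategy depends on these exact numbers.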

  • ShowVar(@Capital,DbtTotQ+PfdEquityQ+ComEqQ)

The above rule defines total capital as the sum of debt, preferred and common equity.

Next, we’ll define the capital-structure weights. These should be self-evident:

  • ShowVar(@PfdWt,PfdEquityQ/@Capital)
  • ShowVar(@DbtWt,DbtTotQ/@Capital)
  • ShowVar(@EqWt,ComEqQ/@Capital)

Finally, we’ll define our WACC, or @CostCap. In plain English, it would be:

WACC = (Debt % * Debt) + (Preferred % * Preferred) + (Common % * Common)

In Portfolio123 language, it’s:

  • ShowVar(@CostCap,(@DbtWt*@DbtCost)+(@PfdWt*@PfdCost)+(@EqWt*@CostEq))

All that just to get cost of capital! Phew. And that’s a simple approach: We could have, but didn’t, try to define each company’s cost of each capital item. Then again, we could have gone the other way and simply plugged in a single seat-of-the-pants number for WACC. Generally, my limited testing to date suggests the market is a bit sensitive to whether or not we adjust WACC by a company’s individual capital structure, but much less so regarding the cost of each capital item for each company. But it’s not as if the simplest approach would be a disaster. Give it a try.
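To see the weighting play out, here’s a hedged Python sketch (my own illustrative function and made-up balance-sheet figures, not Portfolio123 syntax):

```python
def wacc(dbt, pfd, com_eq, dbt_cost, pfd_cost, eq_cost):
    """Weighted average cost of capital, mirroring the @CostCap rule."""
    capital = dbt + pfd + com_eq            # @Capital
    return ((dbt / capital) * dbt_cost      # @DbtWt * @DbtCost
            + (pfd / capital) * pfd_cost    # @PfdWt * @PfdCost
            + (com_eq / capital) * eq_cost) # @EqWt  * @CostEq

# $200M debt, no preferred, $800M common equity, with the 7.5/8.5/12.5 costs:
print(wacc(200, 0, 800, 7.5, 8.5, 12.5))
```

With 80% of capital in equity, the blend lands near the 12.5% cost of equity (about 11.5%), which is exactly the capital-structure sensitivity the @CostCap rule is meant to capture.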

Here’s a quick refresher on how we’re going to calculate PctN:

  1. I’ll define NOPAT as Operating Income * (1 – Tax Rate).
  2. I’ll define Value as NOPAT / WACC (income/rate, analogous to naively valuing a bond simply as interest/rate). This is a very very very rough and conservative estimate of the company’s value. It does not include an allowance for growth, so I have nicknamed it “standstill value.” (I’m assuming all growth expectations are part of the noise. You can change this assumption on your own.)
  3. I’ll define Noise as MktCap – Value, or: MktCap – (NOPAT/WACC)
  4. Then, not surprisingly, I’ll define PctN as Noise/MktCap

Let’s translate the above to Portfolio123 language:

  • ShowVar(@NOPAT, OpIncAftDeprTTM*(1-(TaxRate%TTM/100)))
  • ShowVar(@FGV,MktCap-(@NOPAT/(@CostCap/100)))
  • ShowVar(@NoisePct,(@FGV/MktCap)*100)
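As a sanity check on the arithmetic, here’s the same chain in plain Python (an illustrative function with figures of my own, not Portfolio123 code):

```python
def noise_pct(op_inc_ttm, tax_rate_pct, cost_cap_pct, mkt_cap):
    """PctN: the share of market cap not explained by standstill value."""
    nopat = op_inc_ttm * (1 - tax_rate_pct / 100)  # @NOPAT
    value = nopat / (cost_cap_pct / 100)           # standstill value, NOPAT/WACC
    noise = mkt_cap - value                        # @FGV
    return noise / mkt_cap * 100                   # @NoisePct

# $100M TTM operating income, 25% tax rate, 10% WACC, $1B market cap:
# NOPAT = 75, standstill value = 750, noise = 250, so PctN comes out near 25
print(noise_pct(100, 25, 10, 1000))
```

A stock like this one sits right at the 25% boundary, so it would just miss the @NoisePct&lt;25 cut in the rule that follows.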

And now, finally, the last screening rule: the one that expresses our decision to limit consideration to stocks for which noise accounts for less than 25% of market cap.

  • @NoisePct<25

Here, for your convenience, is a re-presentation of the screening rules (with interspersed comment lines to make it easy to follow what different parts of the screen do — a handy device for staying on top of complex screens):

  • // refine universe
  • country("usa")
  • gics(40)=false
  • // eliminate potentially troublesome data items
  • IncTaxExpTTM>0
  • OpIncAftDeprTTM>0
  • // define components of capital
  • ComEqQ>0
  • PfdEquityQ>=0
  • DbtTotQ>=0
  • //define costs of each capital item
  • ShowVar(@DbtCost,(close(0,#tnx)/10)+3)
  • ShowVar(@PfdCost,@DbtCost+1)
  • ShowVar(@CostEq,@DbtCost+5)
  • // define capital
  • ShowVar(@Capital,DbtTotQ+PfdEquityQ+ComEqQ)
  • // define capital structure-weights
  • ShowVar(@PfdWt,PfdEquityQ/@Capital)
  • ShowVar(@DbtWt,DbtTotQ/@Capital)
  • ShowVar(@EqWt,ComEqQ/@Capital)
  • // compute cost of capital
  • ShowVar(@CostCap,(@DbtWt*@DbtCost)+(@PfdWt*@PfdCost)+(@EqWt*@CostEq))
  • // define NOPAT
  • ShowVar(@NOPAT, OpIncAftDeprTTM*(1-(TaxRate%TTM/100)))
  • // define Noise
  • ShowVar(@FGV,MktCap-(@NOPAT/(@CostCap/100)))
  • // define PctN
  • ShowVar(@NoisePct,(@FGV/MktCap)*100)
  • // eliminate stocks for which PctN is above 25
  • @NoisePct<25

The screen is available in the Group area or through this direct link:

After having thusly established my screen (the portfolio size is around 300 stocks), I test to satisfy myself that the basic idea worked. If this portion of the idea does not show potential to pan out, there’s no point going further. But for now, I’ll skip this step to keep the model specs in one place separate from the testing. (Top secret: I already know it works.)

Creating the Ranking System

The idea for this ranking system is to find reason to believe there’s a catalyst that might, sooner rather than later, inflate the stock price with a more typical level of noise than the sub-25% we see for the stocks we’re going to be ranking. So how might we express this in terms of rules?

This is one of those times I wish I were more proficient in technical analysis than I am. If I were good at it, I’d go for it. But I’m not. I need a Plan B.

Analyst data is not technical analysis, but it is a reasonable first cousin. And I have experience using it. So I’ll go that way.

  • The first thing that comes to mind along these lines is a set of datapoints indicating analyst bullishness. So we’re talking about low scores for AvgRec (remember, 1 is most bullish and 5 is most bearish), and estimate revision.
  • But what about estimate revision? Yeah, it does indicate bullishness. But how much bullishness? Are analysts really falling in love, or like, with the stocks – really? Or are they raising estimates because they have to; because the company raised guidance and all the other analysts are conforming so they have to also? There’s nothing necessarily wrong with that. But how about this: working with the long-term EPS growth-rate projections.
  • Nobody really cares about or pays much attention to long-term growth estimates. They’re a step-child of Sell-Side work product. So, when analysts take the trouble to notice they are there, and take additional trouble to revise them, that says something.

So here are four rank factors expressed in Portfolio123 language:

  • LTGrthMean – higher is better, 25%
  • Chg LTG ((LTGrthMean - LTGrth13WkAgo)/abs(LTGrth13WkAgo)) – higher is better, 25%
  • AvgRec – lower is better, 25%
  • Chg Avg Rec (AvgRec/AvgRec13WkAgo) – lower is better, 25%

Interesting. That expresses bullishness. But it seems so . . . blah, ordinary.

Actually, I’m trying to do more than discern bullishness. I’m looking for stocks likely to transition from the dust-bin to the spotlight. I expressed half of that, the spotlight. But what about the dustbin?

Here’s an idea. We won’t just look for stocks that are loved now. We want those same stocks to have been scorned not so long ago. Check this out:

  • AvgRec13WkAgo – higher is better, 50%
  • LTGrth13WkAgo – lower is better, 50%

If I had factors that would let me measure data from 13 weeks ago versus 26 weeks ago, I’d use them to add some revision factors. But life is what it is. I’ll use what I have rather than cry about what I don’t have. Beyond that, isn’t the idea intriguing? We’re going to mix and match two things in our ranking system: a stock that’s well regarded now, but which was not revered 13 weeks ago. That sounds like the sort of scenario we’re looking for as we sort our list of low-noise stocks that have the potential to get noisier.

I’ll combine them all into a single ranking system and adjust the weights accordingly:

  • LTGrthMean – higher is better, 12.5%
  • Chg LTG ((LTGrthMean - LTGrth13WkAgo)/abs(LTGrth13WkAgo)) – higher is better, 12.5%
  • AvgRec – lower is better, 12.5%
  • Chg Avg Rec (AvgRec/AvgRec13WkAgo) – lower is better, 12.5%
  • AvgRec13WkAgo – higher is better, 25%
  • LTGrth13WkAgo – lower is better, 25%
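For intuition only, here’s a toy Python sketch of how weighted factor ranks can combine (percentile_ranks and composite are my own simplified stand-ins; Portfolio123’s actual rank engine handles ties, N/A values, and weighting mechanics differently):

```python
def percentile_ranks(values, higher_is_better=True):
    """0-100 rank for each value, where 100 = best in the stated direction."""
    n = len(values)
    ranks = []
    for v in values:
        if higher_is_better:
            beaten = sum(1 for u in values if v > u)
        else:
            beaten = sum(1 for u in values if v < u)
        ranks.append(100.0 * beaten / (n - 1))
    return ranks

def composite(factor_ranks, weights):
    """Blend per-factor ranks into one weighted score per stock."""
    n = len(factor_ranks[0])
    return [sum(w * r[i] for w, r in zip(weights, factor_ranks))
            for i in range(n)]

# Three stocks: current long-term growth estimates (higher is better)
# and current analyst ratings (lower is better), equally weighted.
ltg = percentile_ranks([20, 10, 15], higher_is_better=True)     # [100, 0, 50]
rec = percentile_ranks([1.0, 3.0, 2.0], higher_is_better=False) # [100, 0, 50]
print(composite([ltg, rec], [0.5, 0.5]))  # [100.0, 0.0, 50.0]
```

The point of the sketch: each factor only contributes its rank times its weight, so halving the weights when I doubled the factor count keeps any one factor from dominating the composite.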

This ranking system is available to you in the Group section of the Community, or through this direct link:

Moving On

Now, I’m ready to test the model.

What the . . .

Test the model? What about the ranking system? Am I not going to test that before settling on it? And what about those ho-hum default-ish weights? Assuming the ranking system works at all, which it may or may not, might it perform better if I varied the weights?

I’m not interested.

This is not about trying to find the best possible ranking system. I’m not trying to come up with an impressive performance chart. I want a noise-value strategy. So rather than tinker with the ranking system to maximize something I don’t care about maximizing, I’ll just move on to evaluate it as an inextricable part of a complete strategy.

I’ll cover that in Topic 2C. In one sense, you don’t have to read it. You can copy the screen and rank into your account and test as well as I can. But I suggest you go on to Topic 2C, which is being posted together with this and Topic 2A. My goal there is not so much to conduct the tests as to show how I evaluate what I see and decide what to do about rebalancing period and number of positions.

