Working With Analyst-Related Data

PDF Version: P123 Strategy Design Topic 5 – Working With Analyst-Related Data

Who Are Analysts, What Do They Do, and How We Got Where We Are

If you’re going to use analyst-related data intelligently in your models, it’s important that you understand what it is and what it isn’t. Toward that end, it would be useful, and possibly even interesting, to look at some history.

In the beginning . . .

At one time, the kind of people we now think of as Wall Street analysts were little more than semi-free-lancing, catch-as-catch-can operators, as some brokers or customers’ men (the early names for what are now known as registered reps, or account execs) took greater interest in, and spent more time on, in-depth company analysis than others. Eventually, these extra-geeky brokers managed to shed their customer accounts and get paid by brokerage firms to do “research” full time.


These people, analysts, studied financial statements, read industry trade publications, talked to industry experts and eventually started interviewing corporate executives. In the early days, companies didn’t quite know what to make of this sort of thing. Back in the early ‘80s, the CFO of one company told me that the whole thing was ridiculous – there was no way I could know what the company was going to earn. At another firm, the secretary to the CFO responded to my request that she mail me a 10-Q (that being the fastest way I could get it, assuming they sent it by first-class mail as per my usual begging) by saying she’d first have to check with Mr. So-and-so to see if she was allowed to release it to the public. When companies did talk, more and more as time went on, their egos were on full display and they tried to talk the company’s prospects as far upward as possible.

Getting companies to comment on earnings estimates was a very iffy proposition. Early on, they wanted no part of it. “That’s your job,” they’d say.

As the market became increasingly institutionalized (i.e., more money moving more quickly at the same time in response to news), companies came around to the view that they ought not completely ignore analyst estimates. They would not talk numbers, or even give a clue to their own thinking. Instead, if you said “I’m looking for such-and-such,” they’d answer “You’re in line with the others,” or “You’re higher/lower than others.” They didn’t care whether the “consensus” (an ad-hoc, undefined word) was too high or too low: They were mainly interested in trying to keep analysts in more or less the same ballpark. That led later on to companies hinting subtly at where that ballpark was actually located. That led to the invention of guidance. And 2000’s Regulation FD, which once and for all outlawed the giving of information to individual analysts that wasn’t disclosed publicly, led to where we are now – the formalization of the guidance process. Even now, no company is required to give guidance. But the SEC has made it clear that it is in favor of the process.

So far, I haven’t discussed how estimates get made. That’s because there is no process. Everybody does his or her own thing. Before personal computers, people would graph sales against GDP, things like that. But as companies became diverse and complex, that fell away. One time, the Investor Relations VP (a profession that grew during the 1980s and 1990s) of a company that had just brought in a new CFO called and asked me, on behalf of the new guy, to send a copy of my model. I froze. I had no idea what to send him. Eventually, I said “Marty, my model is I ask you!” Then I panicked. Was I the only one who had no model? Yikes. So I asked what the other analysts were doing. His answer: “They just laughed.”

The moral of all this: Analysts have no idea what any company can earn . . . ever. The early generation cared about digging into the business dynamics. As institutional investors became important, analysts stopped caring about stock recommendations; the institutions did that on their own. The reason earnings surprise and revision matter today is not because analysts were wrong (that’s to be expected) but because of the way they serve as windows into a company’s understanding and assessment of its own business, the accuracy thereof, and the need for changes in thinking (and change is what moves stocks).

Wall Street Research Gets Real

As Research Departments grew over time, their budgets became a meaningful brokerage-firm expense item, raising an important existential question of sorts: What the heck is a Research Department? Is it a cost center? Is it a profit center? If the latter, how can revenues be attributed to it?

The firms did not come up with immediate clear-cut answers but stumbled and bumbled their way along a both-of-the-above-maybe response. Wall Street firms are not known for charitable sensibilities, so a pure cost center seemed out of the question. But practically speaking, it was likely to prove impossible to actually get anybody to purchase research reports or consultation time. So the Street evolved toward seeing analysts as indirect revenue contributors, in that the more and better research a firm offered, the easier it would be for their reps to attract and retain large, profitable customers. Hence the marriage of Wall Street research and Sales, and the moniker that holds to this very day, the “Sell Side” (this being in contrast to what became known as the “Buy Side,” which refers to analysts who work for and in support of portfolio managers who do the actual buying and selling).

As time passed, Wall Street firms discovered that research as a competitive differentiator wasn’t just strategic boilerplate. It was real. The best analysts were clearly and demonstrably able to attract more clients to trade at their firms. There was no requirement that a client who cherished a Paine Webber bank analyst trade bank stocks at Paine Webber instead of Merrill Lynch, but it would have been a huge breach of etiquette to bolt like that, and doing so would likely have caused the Paine Webber sales force to stop sending bank-stock reports to that client and to stop allowing their superstar bank analyst to consult with him. The bank analyst, meanwhile, was happy to fall into line because he and his peers discovered that their compensation was largely becoming tied (through discretionary annual bonuses that could dwarf their salaries many times over) to what the Sales department said they were worth to the firm.

By the way, all this was analyst specific, not firm specific. It was perfectly fine for the client hooked on Paine Webber bank research to trade airline stocks at E.F. Hutton because of that firm’s superstar airline analyst. The only legitimate response available to Paine Webber would be to try to lure the airline guy to it, or to find its own competitive airline superstar. This led to a Wall Street Research arms race. Bonuses grew and eventually got so big that the Street came to realize the skyrocketing expenses weren’t going to be covered by commissions earned from Aunt Tillie and Uncle Fred trading 100 shares of U.S. Steel once every five years. Exacerbating that was the fact that Aunt Tillie and Uncle Fred were being pitched by newly created discount brokers like Charles Schwab, who slashed commissions in exchange for the absence of advice from brokers who couldn’t remember what they looked like, and of research reports they threw away because they really didn’t understand them anyway. So the sales effort focused on where the money was: the institutional clients. Not only did they get reports mailed to them, they also got phone calls from reps letting them know when analysts said something worthwhile at morning meetings, phone consultations with analysts, in-person meetings with analysts who would usually go to them, whether by taxi or plane, and invitations to occasional events hosted by analysts at rented hotel ballrooms, accompanied by powdered eggs and bacon grease (these were breakfast meetings; everybody needed to be back at their desks when the market opened) and an analyst speech in which he or she discussed their favorite stocks and then took Q&A.

For the Street, the good part of serving institutions was that they had a lot of money to spend. The bad part of serving institutions turned out to be that they had a lot of money to spend – which, when the institutions came to realize it, led them to be much more demanding in terms of which analysts they’d talk to (meaning that an analyst’s ranking under various poll-based beauty contests, such as the Institutional Investor All-Stars, became huge), how much service (sucking up) they insisted upon, and how much, or rather how little, they’d want to pay (the institutions discovered that well-heeled customers had pricing power). That required the Street to amp up the spending even further (luring All-Stars was expensive, and rendering the service needed for an analyst to get and keep a ranking was likewise expensive; we’re now beyond just big bonuses – they need staffs of assistants too).

Financing the arms race

Eventually, the spending got so frenzied that the Street figured out it wasn’t going to work on trading alone. Enter the Investment Bankers, who had competitive issues and arms races of their own. How the heck can Mr. Banker persuade General Bombast to let his firm underwrite its new equity issue when it’s all going to trade through the same specialist on the same exchange and when his firm’s underwriting fee has already been cut to the bone? “I’ve got it,” he thinks. “I’ll get my First Team Institutional Investor All-Star Defense Analyst to issue a bullish report and hold a bunch of meetings pumping the stock!”

The head of Investment Banking says “Great idea!”

The analyst says “F*** off, I think the stock is a piece of crap. There’s no way I’m going to pump it.”

Head of IB to the Research Director: “Have you explained to your Defense asshole where the money for his bonus is likely to come from?”

Research Director to Analyst: “Issue the #$*@ report!”

Analyst to Research Director: “Like hell I will. Goldman is looking for a Defense Analyst. I’m II First Team All Star. I’ll just go there.”

Research Director to Analyst: “Good luck with that. Goldman’s bankers are competing with our bankers for that underwriting so they’ll demand that you do the same thing. So, too, will every other firm.”

Analyst to Investment Community: “General Bombast is being raised to Strong Buy and the Price Target has been increased to 350.”

Two weeks later, after General Bombast reached 400:

Analyst to Investment Bankers: “I need to recommend profit taking.”

Investment Banker to Analyst: “Then you need to sell your Fifth Avenue luxury co-op and see if you can move into a rent stabilized apartment somewhere, perhaps in East Flatbush.”

Analyst to Investment Community: “Strong Buy reiterated for General Bombast. Price target raised to 600.”

Fast forward to eight months after the successful General Bombast offering:

Analyst to Research Director: “The last two quarters for General Bombast really sucked and the next one isn’t looking so great. The stock is down to 275 and I think it’s going lower. I really need to put a “Sell” on this piece of sh**.”

Research Director to Analyst: “Like hell you will. Banking needs to be able to pitch them in the future.”

Analyst: “But I can’t stay at 600 and Strong Buy. How am I going to stay First Team II All Star in the next survey?”

Research Director: “Aw hell. OK, how about you go from Strong Buy to Buy and maybe, if banking thinks we can live with it, we go to Hold in three months. Leave the target alone for now. It’s not as if anybody who’s not a moron looks at it. Maybe in three months you can go to 550-575.”

Analyst: “OK. That’s a plan. Also, I’ve got to cut my estimate for the next quarter.”

Research Director: “What the @#%! I thought we solved this. Banking won’t let you cut your estimate.”

Analyst: “They have no choice. Nobody really does estimates from scratch any more. The company is calling us and telling us to cut the numbers. We have to do it and if IB doesn’t like it, they can call the CFO and whine to him.”

Research Director: “All right. Cut the damn estimate. But make sure the report is still bullish about the long term.”

Enter The Outside World

And then, CNBC happened. And then, the Internet happened. And then the newly created financial media, starving for content that would help them in their own arms races (i.e., to lure eyeballs), discovered that analysts would be a great solution. It was content for which they didn’t have to pay, since publicity-hungry firms and their analysts would (and did) trip over themselves seeking exposure. Even an analyst who wasn’t an II All-Star could still attract business by being telegenic! Another arms race gets under way.

Aunt Tillie and Uncle Fred may not recognize that the Defense Analyst whose General Bombast report they used to wrap fish three years ago is the same one Maria Bartiromo is flattering on CNBC (actually, Uncle Fred couldn’t care less about the analyst; he’s too busy focusing on Maria), but they are smart enough to know that if intelligent people are saying nice things about SpareJetParts.com stock on CNBC, it has to be worth buying. So they go to CheapTradesRUs.com (they ditched Schwab long ago) and do their thing.

And then 2000 came along, and Uncle Fred and Aunt Tillie discovered that stocks were allowed to go down. And then Eliot Spitzer came along and told Aunt Tillie and Uncle Fred that they weren’t at fault for losing money. It was the telegenic analyst whose last name they couldn’t pronounce or spell and whose firm they hadn’t traded with in years.

And we know the rest: Research can’t work with Investment Banking any more, and the latter can no longer influence analyst compensation. Companies can no longer feed information to individual analysts (a boon to the conference-call industry). Analysts can’t pump stocks they don’t believe in, or at least they have to work harder to conceal doing so. As a result, Research has taken on more of a cost-center flavor than it has had in many years, meaning the firms need to get more hard-nosed about budgets, meaning in turn that employment on the Sell Side has been decimated. The number of analysts is down, as is the quality, while the workload per analyst has gone up as staffs have likewise been slashed or eliminated. Many telegenic former superstars are now investment advisors, teaching, working at hedge funds, working on the buy side, sitting in their basements pumping out Seeking Alpha articles for $30 a pop, going to med school, going to law school, doing venture capital, etc. The new generation of analysts is a lot less telegenic, and often they rely on outsourced number crunchers for estimates and “models.” Meanwhile, every Tom, Dick, Harry and Quant is looking at and/or fiddling around with the estimate, recommendation, etc. data being peddled by sophisticated data firms, especially given the way some folks, like the Zacks brothers, discovered they could make money in the markets by doing so. And so have some Portfolio123 users – and hopefully you will too, if you haven’t already been doing it.

The Current Impact of Analyst Research 

Clearly, the glory days of the sell side are long gone. Even so, data based on analyst output can continue to help us identify instances of security mispricing.

As usual, let’s start with our core, the Dividend Discount Model; for now, I’ll substitute E (earnings) for D (dividends). As a refresher:

  • P = E / (R – G)
    • P = Price
    • E = Earnings
    • R = Required rate of return
    • G = Expected rate of earnings Growth
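
To make the arithmetic concrete, here’s a minimal Python sketch with made-up inputs of my own (none of these numbers come from this topic):

```python
# Minimal sketch of the P = E / (R - G) formulation.
# The inputs below are illustrative assumptions, not data from the text.

def ddm_price(e: float, r: float, g: float) -> float:
    """Implied price per P = E / (R - G); only defined when r > g."""
    if r <= g:
        raise ValueError("R must exceed G for the formula to be defined")
    return e / (r - g)

# $5.00 of earnings, 10% required return, 4% expected growth:
print(round(ddm_price(5.00, 0.10, 0.04), 2))   # 83.33

# Nudge G up to 6% and the implied price jumps to 125.00 -- a reminder of
# how sensitive the formulation is to the growth input analysts influence:
print(round(ddm_price(5.00, 0.10, 0.06), 2))   # 125.0
```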

So where do analysts fit here? There are three possibilities.

  1. Ratings (Recommendations) can serve as a shortcut to the entire formulation. This assumes that the more bullish the rating, the more appealing and potentially profitable the misalignment between P on the one hand and E, R and G on the other. For this to work, we’d have to be seeing that shares of more bullishly rated companies outperform shares of less bullishly or bearishly rated companies. Whether or not this works is an ever-appealing topic for research, conversation or debate. At the very least, though, we might expect this data to suggest the ebb and flow of noise.
  2. Analyst data can serve as inputs for a full-fledged valuation model, or some approximation thereof; i.e., in the real world, some sort of discounted cash flow (DCF) approach. The Current or Next Year estimate may serve as E, the starting point for the multi-year projections. The LTGrthMean number can be used as an input for G.

This, however, is not done often. DCF models are incredibly cumbersome and unreliable (and for those reasons, I’m not a big fan). Investors who are sufficiently determined to slog through them are likely to have strong enough conviction and belief in their own personal wisdom to make them reluctant to plug in analyst numbers. Such individuals are more likely to want to develop assumptions on their own.

  3. Analyst data can serve as a sentiment indicator – used not so much as a DDM input but as a proxy for whether G, and/or the whole relationship between the two sides of the equation, is probably aligned in a way that justifies bullishness or bearishness on the stock.

In terms of logic, this is a close sibling to what we can do with AvgRec, where we’re using analyst bullishness as a proxy for a favorable, albeit unspecified, number in the DDM. The difference, however, is in how we’re relating it to the DDM. Under the first approach, we accept analyst ratings at face value, as they are labeled. This approach drills deeper. We might, for example, use a bullish analyst rating score as a proxy for a favorable G figure.

It makes a difference in how we build our models. Under the first approach, we’d do all we want or can by saying something like AvgRec<2 or ranking by AvgRec, lower is better. Under this drill-down approach, we’d use those factors only in places where we’re thinking about the DDM formulation and finding ourselves needing to address G. So it’s not about AvgRec<2 as a stand-alone rule. It’s part of a complete set of rules that also addresses, directly and/or through proxies, P/E and relationships involving R (i.e., company quality). And as long as we’re thinking in terms of proxies, we can get creative. Maybe AvgRec isn’t a good proxy. Maybe the relationship between AvgRec and AvgRec1WkAgo or AvgRec13WkAgo is better. Or perhaps changes over time in CurrQEPSMean, LTGrthMean, etc.

Assuming, as I do, that the third approach is the one we’ll be focusing on, you’ll find that the rules and formulations are probably familiar to you already. I’ve seen enough in the forums and in publicly visible models to know that, in general, the p123 community knows how to construct good analyst-related rules and factors. Our focus will be on how to include those factors in logically and economically sensible ways – ways that support a DDM-inspired model.

An Important Caveat

Remember from the history above that the Sell Side has undergone considerable trauma over time. While most of the biggest events took place before our 1/2/99 backtest/sim base date, the impacts they had unfolded more gradually, in many cases after the 2008 crisis, the aftermath of which put additional pressure on research budgets. I can’t say for sure whether or to what extent this impacted the quality of analyst output and its relationship to stock movements (and, hence, its efficacy as a set of DDM proxies). But common sense tells me we can’t rely on a default assumption that all things are as they were. If ever there is a need to test separately on smaller samples, and potentially give more credence to a 5Y or 3Y test over what is seen in a MAX test, this is it. Remember, you can’t assume the biggest sample is best. It’s the most relevant representative sample that should be considered.

Data Based on Analyst Work Product

Nowadays, and thanks largely to consolidation in the data industry, data items relating to analyst work product are typically collected, formatted and licensed by the same firms that collect, format and license fundamental data. The structures of the databases differ, however, given the regularity with which estimates are revised and eventually superseded by the release of the results actually achieved by the companies. Fortunately for investors, the complexities relating to the differing database structures and license terms are addressed behind the scenes – by the data vendors, by platforms such as Portfolio123 and by web sites that display financial information. When developing strategies, we can enjoy the luxury of working seamlessly with analyst data, fundamentals and price-related data, using whichever we want in whatever combinations we want.

The most readily available estimates items, the ones available on Portfolio123, are as follows:

  • EPS Estimates: These items are typically submitted by analysts through various technology interfaces to the data firms that collect and license the information. The numbers we see are typically the mean (the “consensus” among all analysts publishing estimates), the high estimate and the low estimate, as well as the standard deviation of estimates. For annual estimates, we also have median figures. The data firms have various automated and manual protocols to assure that the estimates are “apples-to-apples,” i.e., that they are uniform in the way they treat unusual items (these are typically excluded), and so forth. The data firms also work to assure that the estimates they have are all bona fide, and to identify and eliminate those that are stale. Note, though, that vendors can vary in terms of which analysts are polled and in how they define the staleness that warrants removal of an estimate from the aggregation.
  • Ideally, each data firm obtains, with respect to each company and from each analyst, estimates for the current (in-progress) fiscal year, the next fiscal year, the current (in-progress) fiscal quarter and the next fiscal quarter. Classifying this way is most practical, since companies have differing fiscal years; what for one company might be a second quarter would be a fourth quarter for another.
  • Because revisions are such important pieces of information, old pre-revision numbers are not eliminated from the database, nor are they downgraded in importance after revisions occur. Hence an EPS estimate for a particular period will be presented as it currently stands and as it stood one week ago, four weeks ago, eight weeks ago and 13 weeks ago.
  • Sales Estimates: We have a dataset for estimated Sales similar to the one we have for EPS, except that the Sales estimates are not for quarters; they cover only the Current Year and Next Year.
  • Long-Term Growth Rate Projections: This, by far, is the most difficult item with which we deal. For one thing, it’s challenging (to put it politely) to see into the long-term future. Even beyond that obvious problem, it’s tempting to assume that if the consensus growth-rate projection is 15 percent, then the company’s EPS is expected to grow 15 percent. But how do we define “long term”? That, actually, is the easy question: It’s generally assumed that long term means a three- to five-year horizon. But what is the base upon which the 15 percent is calculated? Is it the EPS reported in the last fiscal year, the current fiscal year, or the next fiscal year? Getting the right answer to that question matters in and of itself if we want to use the 15 percent annual growth assumption to calculate an estimate of future EPS – for a discounted cash flow model, for example. Suppose a company’s normal earning power is in the vicinity of $2.25 a share, but unusual conditions (say, a bad business environment rather than non-recurring losses) cause earnings to come in at $0.03 per share. A 15 percent growth rate computed from a starting point of $2.25 a share will produce a very different result than a computation that uses $0.03 as a base. Analysts aren’t always as careful and attentive to this as we might hope – a reminder of the “soft” nature of the data points relating to estimates in general and long-term projections in particular.
  • With regard to long-term growth forecasts, data firms supply current mean, high and low growth-rate assumptions, plus the standard deviation, and do so for the present, a week ago, four weeks ago, eight weeks ago and 13 weeks ago.
  • Breadth of coverage is definitely an issue when it comes to long-term projections since there will be many cases in which analysts estimate EPS for the current and next fiscal years but fail to supply any long-term projections.
  • Price Target: This item, recently added per requests from some subscribers, is by far the fluffiest data item we provide. Nobody knows when that price is supposed to be reached. Answers are all over the place. And the numbers do not incorporate any market expectation, a major defect considering that overall stock-market movement accounts for a large portion of each stock’s movement. So as genuine, legitimate future price expectations, these data points have absolutely zero value (a reason why I opposed use of this data for such a long time). But there are always opportunities to live and learn: I’ve seen some potential for this information to be used in ways that tease out Street sentiment (parallel to what we get with estimate revision). The number of analysts who issue target prices may also be of use in measuring potential noise.
  • Historical Quarterly EPS: These represent the EPS figures that were actually reported by the companies. They often differ from the EPS figures you see in company reports, in that the latter tend to contain many unusual items. Historical EPS are figures that have been re-stated by data vendors to be on the same (apples-to-apples) basis as the estimates for the period.
  • Quarterly Earnings Surprises: These are the differences between the Historical Quarterly EPS and the mean EPS figure that prevailed as of the day the company reported earnings for the period. Given the tendency of many companies to issue interim guidance that prompts analysts to modify estimates as the reporting date approaches, the magnitude of surprise is often no longer as great as it was before the issuance of guidance became prevalent. Nevertheless, surprises still occur regularly, and these data points can still be potent in the analysis of noise.
  • Number of Analysts: For each quarterly or annual EPS estimate and for each long-term growth-rate projection, the data firms typically supply the number of analysts whose submissions are included in the calculation of the means. This, obviously, is a direct and important measure of the extent of coverage.
  • Number of Revisions: Estimate-revision factors are often expressed in strategies in terms of whether current estimates are above or below estimates as of a certain number of weeks ago, or as numerical or percentage differences between the current estimate and the estimate as of a specified time in the past. In addition, the data firms supply the number of analysts revising a particular estimate up or down within the past week or the past four weeks.
  • Average Recommendation: This data point bears no relationship at all to earnings. It’s a score that measures sentiment among sell-side analysts regarding a stock. We’re not so much concerned with whether they advise clients to actually buy or sell the stock, assuming the clients would even follow such advice, which is often not the case (institutions make their own decisions based on their own criteria and use analysts as sources of deep-level, detailed company information – more convenient than reading 10-Ks, etc. and dozing off on the horrifically monotonous conference calls). We can, however, work with where an analyst rates a stock on a five-part best-to-worst scale. And realistically, given the prevailing Wall Street cultural setting, analysts do not often use the two worst categories. So for much of the time and for many stocks, we are dealing with a de facto three-part best-to-worst scale. (This is something the financial media and new-era gurus did not know when they discovered analyst ratings back in the mid 1990s and later went nuts about the dearth of Sell ratings.) Depending on how an analyst ranks a stock on the formal five-part scale his or her firm uses (regardless of the verbal labels assigned to each rating), the data vendors assign a score of one (best) to five (worst), with most stocks winding up with a rating between one and three. Each score is then multiplied by the number of analysts issuing a rating at that level. These figures are then summed and divided by the total number of analysts issuing ratings. The result is the AvgRec data point we see on Portfolio123 and elsewhere. Here’s a simple example:

Table 5-1

Stock A

Score        Number of Analysts   Weighted Score
1 (Best)              2                  2
2                     8                 16
3                    13                 39
4                     0                  0
5 (Worst)             0                  0
Totals               23                 57

Average Score                          2.48

Table 5-2

Stock B

Score        Number of Analysts   Weighted Score
1 (Best)              8                  8
2                    12                 24
3                     6                 18
4                     0                  0
5 (Worst)             0                  0
Totals               26                 50

Average Score                          1.92

We can conclude from Tables 5-1 and 5-2 that the sell-side analyst community is, on the whole, considerably less enthusiastic about Stock A than about Stock B. And we do so notwithstanding the absence of any Sell recommendations. More importantly, we can use these AvgRec figures – as of the present and also as of one, four, eight and thirteen weeks earlier – to directly incorporate analyst sentiment (i.e., noise) into our quantitative strategies.
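
Here’s a minimal Python sketch of that aggregation; the analyst counts are the ones from Tables 5-1 and 5-2:

```python
# Reproduces the AvgRec arithmetic behind Tables 5-1 and 5-2:
# score (1 = best ... 5 = worst) times analyst count, summed, then
# divided by the total number of analysts issuing ratings.

def avg_rec(counts: dict) -> float:
    total_analysts = sum(counts.values())
    weighted_total = sum(score * n for score, n in counts.items())
    return weighted_total / total_analysts

stock_a = {1: 2, 2: 8, 3: 13, 4: 0, 5: 0}   # 23 analysts
stock_b = {1: 8, 2: 12, 3: 6, 4: 0, 5: 0}   # 26 analysts

print(round(avg_rec(stock_a), 2))   # 2.48 -- less enthusiasm
print(round(avg_rec(stock_b), 2))   # 1.92 -- more enthusiasm (lower is better)
```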

Using Analyst Work Product

Given the nature of and the substantial structural upheaval in the environment that produces analyst data, it’s important to be especially careful about historical backtest or simulation time periods. The tables to be presented here will draw on five-year tests that began around mid-April 2011.

Analyst Recommendations

As shown above in Tables 5-1 and 5-2, weighted numerical scores supplied by data vendors make it easy to discern analyst opinion about a stock and differences of opinion, notwithstanding the dearth of Sell recommendations and regardless of nomenclature used by analysts to describe their ratings (Strong Buy, Outperform, and so forth). The question, however, is whether and to what extent these opinions matter.

The big issue is whether, or to what extent, analyst recommendations are worth considering. Early research suggested that recommendations (e.g., AvgRec) per se were not very useful, but that changes in recommendations might be helpful as a measure of sentiment. We need to update this.

Table 5-3 starts with the basics. Sorting a PRussell3000 universe that excludes companies for which AvgRec is NA, I tested two portfolios: a bullishly rated group defined by the rule FRank(“AvgRec”)<25 and a bearishly rated group: FRank(“AvgRec”)>75. (Remember, analyst rating scores are lower-is-better.) The table summarizes the overall 5-year test performance.

Table 5-3

                         Ann’l Ret %   Stan Dev %   Ann’l Alpha %
Bullish Ratings               6.51        19.48          -7.29
Bearish Ratings               7.04        16.48          -5.47
Benchmark – R3000 ETF        11.26        13.36            – –

That is consistent with older research suggesting that such scores are not useful. Moving on, we’ll consider changes in ratings over various time intervals. Each model will compare a current AvgRec to a past AvgRec. Bullish portfolios will be those in which the change in recommendation is in the bottom 10% of our ascending sort, while bearish portfolios will consist of situations where the change ranks in the top 10%. Specifically (a small numeric illustration follows the rules):

  • Bullish: FRank(“AvgRec/AvgRecXWkAgo”,#previous)<10
  • Bearish: FRank(“AvgRec/AvgRecXWkAgo”,#previous)>90
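
Since AvgRec is lower-is-better, a current-to-past ratio below 1 signals improving sentiment, which is why the bullish bucket sits at the bottom of the sort. A quick sketch with hypothetical scores:

```python
# Hypothetical AvgRec values, just to show which direction the
# AvgRec/AvgRecXWkAgo ratio moves when sentiment changes.

def rec_change_ratio(current: float, past: float) -> float:
    return current / past

# Upgrade: score improved (fell) from 2.4 to 1.8.
print(rec_change_ratio(1.8, 2.4))   # 0.75 < 1 -> lands near the bottom
                                    # of the sort: the bullish bucket

# Downgrade: score worsened (rose) from 2.0 to 2.6.
print(rec_change_ratio(2.6, 2.0))   # 1.3 > 1 -> near the top: bearish
```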

Test results are shown in Table 5-4.

Table 5-4

                          Ann’l Ret %   Stan Dev %   Ann’l Alpha %
Bullish Changes – 1 wk         2.59        17.51         -10.06
Bearish Changes – 1 wk         7.14        17.28          -5.94
Bullish Changes – 4 wk         2.36        17.90         -10.25
Bearish Changes – 4 wk         0.18        18.27         -12.29
Bullish Changes – 13 wk        5.33        18.23          -7.57
Bearish Changes – 13 wk        1.12        18.74         -12.09
Benchmark – R3000 ETF         11.26        13.36            – –

First, we note that contrary to what the contemporary financial-media culture may lead one to believe, there is still something to be said for patiently allowing events to unfold and have their impacts on share prices. While it’s possible the motivating factor(s) behind a well-conceived rating can play out in as little as a week, it seems much more likely than not it will require a month or more to work its way into the stock. The sense of immediacy suggested by the media (move quickly in response to the change) is pure nonsense.

Speaking of motivating factors, there is the potential for analyst sentiment to correlate, possibly highly, with other factors that more directly impact stock prices. If you’d like to pursue this topic in detail, you may want to check out Narasimhan Jegadeesh, Joonghyuk Kim, Susan D. Krische and Charles M.C. Lee, “Analyzing the Analysts: When Do Recommendations Add Value?”, 59 The Journal of Finance 1083 (No. 3, June 2004).

Here’s another important observation concerning Table 5-4. Even when analyst recommendations appear to succeed in differentiating between stocks more or less likely to do well in the future, it does not appear that the strength of the factor is worthy of use on its own. The best AvgRec-change variation presented in the Table still trails the benchmark by a wide margin, especially on a risk-adjusted basis.

Meanwhile, enough time has passed since the aforementioned Jegadeesh-Kim study that we would need to be careful about taking its conclusions at face value. But the main insight – that data relating to recommendations can be useful when used in conjunction with other factors that are known to influence stock performance – remains valid.

Here’s an example.

I created a very simple model that screens for PRussell3000 stocks that are not NA for analyst data and that are ranked above 80 under the Portfolio123 Comprehensive: QVG (Quality-Value-Growth) ranking system. From among those, I select the 15 stocks that rank highest under the Basic: Value ranking system. I’ll be coming back to this a few more times in this topic, so I’ll call it the Control Model.
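
For those who think better in code, here’s a rough Python sketch of the Control Model’s selection logic. It assumes a hypothetical pandas DataFrame with one row per PRussell3000 stock and made-up column names (avg_rec, qvg_rank, value_rank); it is not the actual Portfolio123 implementation:

```python
import pandas as pd

def control_model(df: pd.DataFrame, n: int = 15) -> pd.DataFrame:
    """Sketch of the Control Model using hypothetical columns, not P123 syntax."""
    covered = df.dropna(subset=["avg_rec"])       # require analyst coverage
    quality = covered[covered["qvg_rank"] > 80]   # QVG rank above 80
    return quality.nlargest(n, "value_rank")      # top 15 by Value rank
```

Table 5-5 summarizes the five-year backtest results.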

Table 5-5

                         Ann’l Ret %   Stan Dev %   Ann’l Alpha %
Control Model                10.38        26.06          -6.28
Benchmark – R3000 ETF        11.26        13.36            – –

As you can see, the Control Model is not suitable for prime time. Its performance slightly trails the benchmark, and with substantially higher volatility. It addresses all three elements of the Dividend-Earnings Discount Model (the relationship between price and earnings, growth, and quality-risk). So in this regard, it’s fine. But it’s not good enough to use. There is considerable room for improvement in specification. We learned before that Value, if overused, can push us toward genuine dogs. And maybe use of the Growth aspects of the ranking system is not a satisfactory way, by itself, to address the growth component of a prudent model – a component that needs to be looking forward. So let’s add a forward-looking sentiment component to our screen:

  • FRank(“AvgRec-AvgRec13WkAgo”,#previous)<10

A historical growth rank alone is deficient in that it doesn’t alert us to the possibility that an unfavorable change in trend could be lurking. But that scenario becomes less worrisome if we limit consideration to companies that rank most bullishly in terms of the 13-week change in analyst recommendations.

Table 5-6 shows the 5-year test of my new Control + Rating Upgrade model.

Table 5-6

                             Ann’l Ret %   Stan Dev %   Ann’l Alpha %
Bullish Upgrades 13W Only         5.33        18.23          -7.57
Control Model Only               10.38        26.06          -6.28
Control + Upgrades 13W           12.83        23.25          -1.44
Benchmark – R3000 ETF            11.26        13.36            – –

We’re not all the way home. But use of recommendation upgrades definitely enhanced the Control model, and paves the way for further, probably minor, improvements that could carry us the rest of the way.

Estimate Revisions

A naïve starting assumption, based upon observation of the investing world, is that the direction of estimate revisions is likely to be a useful indicator of future share-price performance. Besides the extreme degree of attention revisions get in the media (much more so than ratings per se), we also are aware of Zacks, a company that pretty much built its business and its brand through modeling based on estimate revision.

So let’s check it out. Table 5-7 summarizes 5-year backtests of a series of models that start with the PRussell3000 universe, eliminate companies for which estimates are NA, and then rank estimate changes over various time frames.

  • Bullish Revisions: FRank(“(CurFYEPSMean-CurFYEPSXWkAgo)/abs(CurFYEPSXWkAgo)”,#previous)>90
  • Bearish Revisions: FRank(“(CurFYEPSMean-CurFYEPSXWkAgo)/abs(CurFYEPSXWkAgo)”,#previous)<10

Notice that I’m measuring the percent revision based on the (a-b)/abs(b) formula rather than just a/b. I do that because negative signs might throw the results off if I used the latter, simpler formulation.
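
Here’s a quick sketch (with made-up estimate values) of the problem the abs() solves:

```python
# Why (a - b) / abs(b) beats plain a / b for percent revision:
# with negative estimates, simple division loses the direction of the change.

def pct_revision(current: float, past: float) -> float:
    return (current - past) / abs(past)

# A cut (0.50 -> 0.10) and an improvement (-0.50 -> -0.10) are
# indistinguishable under plain division:
print(0.10 / 0.50)     # 0.2
print(-0.10 / -0.50)   # 0.2  -- same value, opposite meaning

# The (a - b)/abs(b) form preserves the sign of the change:
print(pct_revision(0.10, 0.50))     # -0.8  (estimate cut)
print(pct_revision(-0.10, -0.50))   #  0.8  (estimate raised)
```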

Table 5-7

                            Ann’l Ret %   Stan Dev %   Ann’l Alpha %
Bullish Revisions – 1 wk         4.84        20.73          -8.32
Bearish Revisions – 1 wk        -0.76        17.82         -15.16
Bullish Revisions – 4 wk         4.82        18.97          -9.37
Bearish Revisions – 4 wk        -2.25        21.53         -16.59
Bullish Revisions – 13 wk        6.73        18.86          -7.22
Bearish Revisions – 13 wk       -3.74        22.14         -17.83
Benchmark – R3000 ETF           11.26        13.36            – –

There’s good news and there’s bad news.

The good news is that estimate revision, even crunched as simply as was done here and even when viewed over a 1-week time horizon, succeeds in differentiating stocks likely to do well from those more likely to falter. The bad news is that it’s not worth using in our modeling, at least not done this simply and standing by itself. Even the better Alphas are terrible, and even the best variation (13 weeks) still gets pummeled by the benchmark.

Knowing what we know, however, about the Dividend-Earnings Discount Model and intuitive appeal of using upward estimate revision as a proxy for the expectation of good things from the model’s G (growth) component, we won’t give up. Revision has to be able to contribute. Logic says so.

We’ll give it a try by creating a new Control+ model. This time, we’ll limit consideration to Control-model selections that also rank in the top 10% based on 4-week Current-Year EPS estimate revisions. Table 5-8 shows the five-year backtest results.

Table 5-8

                              Ann’l Ret %   Stan Dev %   Ann’l Alpha %
Bullish Revisions 13W Only         6.73        18.86          -7.22
Control Model Only                10.38        26.06          -6.28
Control + Revisions 4W            14.94        22.10          +1.47
Benchmark – R3000 ETF             11.26        13.36            – –

Voila. It should have worked. And it did work. Might we still be able to improve on it? Yes, of course. But again, we’re clearly going in a good direction.

Surprise

This has got to be huge. This has to be incredibly meaningful. If it isn’t a silver bullet, then the financial media, which presents earnings news in terms of whether reported results beat or missed expectations, is clueless. We can’t go around assuming that! (Wink, wink.)

So here are the tests we’ll run. You know the drill. Start with PRussell3000 and eliminate companies for which Surprise is NA. Then create bullish and bearish test portfolios by ranking all surprises and picking stocks that fall in the top 10% or bottom 10% respectively.

  • Bullish EPS Surprises: FRank(“Surprise%Q1”,#previous)>90
  • Bearish EPS Surprises: FRank(“Surprise%Q1”,#previous)<10
  • Bullish Sales Surprises: FRank(“SalesSurp%Q1”,#previous)>90
  • Bearish Sales Surprises: FRank(“SalesSurp%Q1”,#previous)<10

Table 5-9 shows the five-year backtests.

Table 5-9

                          Ann’l Ret %   Stan Dev %   Ann’l Alpha %
Bullish EPS Surprise           5.25        18.83          -8.48
Bearish EPS Surprise          -1.22        21.01         -15.31
Bullish Sales Surprise         4.29        19.50          -9.54
Bearish Sales Surprise        -0.77        20.57         -14.40
Benchmark – R3000 ETF         11.26        13.36            – –

Here we go again. Surprise helps (Hooray for the media!) but not nearly as much as it would take to make it worthwhile relative to a bland Russell 3000 ETF (Boo, hiss . . .).

As usual, let’s add strongest surprise to our Control Model (recalling, again, the logic for doing so – the assumption that strong surprise has a bearing on whether expected future G, growth, is likely to help the stock). Table 5-10 shows the five-year backtest results.

Table 5-10

                              Ann’l Ret %   Stan Dev %   Ann’l Alpha %
Bullish EPS Surprises Only         5.25        18.83          -8.48
Bullish Sales Surprises Only       4.29        19.50          -9.54
Control Model Only                10.38        26.06          -6.28
Control + EPS Surprise            11.80        21.52          -1.54
Control + Sales Surprise          16.14        20.52          +0.74
Benchmark – R3000 ETF             11.26        13.36            – –

Not surprisingly, combining Surprise with the Control Model was useful. We had reason to expect this going in.

Here, however, there are a couple more wrinkles.

The bump we got from EPS Surprise was less than what we saw elsewhere. Bear in mind that surprise is a very soft data point. Analysts are guided to revise estimates as warranted during the course of the quarter. Surprise is the difference between the reported number and the most recent version of the estimate – the one that has already been guided up or down. So surprise doesn’t mean nearly the same thing it meant decades ago, when it first started to be tabulated (long before companies started giving detailed guidance, back when many gave no guidance at all). Today, Surprise is just a watered-down version of Estimate Revision. That’s reflected in its impact on the Control Model. Why use the watered-down version when you can use the real thing, revision itself?

That said, we see that Sales Surprise made a much more meaningful contribution than did EPS Surprise. Realistically, though, this may reflect the relative newness of the item. Data providers have been collecting and disseminating Sales estimates for a long time but, until recently, access was limited to institutions willing to pay up for it. Its availability in the wider investment world is a novelty at this time.

While it’s possible there may turn out to be genuinely unique sentiment information in sales surprises as opposed to EPS surprises, we have to be on our guard for the possibility that this factor may lose some power in a few years, as its use becomes more widespread and as trades it inspires get more crowded. But that’s no reason to refrain from using it now if it works for you. There’s no rule that says you have to keep the same models going forever and ever.

Conclusion

I haven’t addressed each and every item in the analyst-data pantheon. But I think the message is clear. What analysts do, and the data that reflects it, is relevant. But it can’t be used naively. Analysts see the same fundamentals we see. They see the same economic factors, market climate, industry conditions, etc. So rather than seeing analyst data as a standalone thing, it’s more constructive to see it as a piece of a larger DDM-based puzzle – the part that addresses G (growth) in a way that historic data alone cannot, and that addresses it not literally but as a sentiment barometer that can serve as a proxy for a favorable, albeit unquantifiable, input into the G term of the E/(R-G) formulation.

This is an important topic, whether you choose to use this data or not, because of the way it drives home the proxy nature of our application of the DDM. I said before that it’s not a formula we can set up in a spreadsheet but, rather, a set of ideas. Getting comfortable with the way something like Surprise or Estimate Revision can fit in is a big step toward using the DDM, not as a literal computation but as a logical underpinning for ideas that can be as creative as you want to make them.

Finally, let’s not forget the N in P=V+N. We know anecdotally and from the tables presented here that stocks do react to what analysts do. While that alone can’t necessarily make for a strategy, it can serve as a proxy for the future direction of Noise. You may want to refresh your memory on this by reviewing Topic 2B – particularly the ranking system I created based on LTGrthMean and AvgRec to rank stocks according to the potential for increased bullish noise in the future.

Next, we’ll tackle momentum, another important non-fundamental proxy for the G in DDM and raw material for modeling based on noise.

 
