Choosing Portfolio123 Designer Models: Topic E – Risk
Previous topics gave you a sense of how to use the information presented on the platform to differentiate among Designer Models. For the most part, return and risk were considered together, since that is how we assume you will – hopefully, at least – approach these models, or any other strategies you consider. This topic will confine itself to risk alone.
Defining Risk – In Human Terms
Risk is the potential for you to be disappointed in a future outcome.
Is it the same as uncertainty, or the possibility you will be surprised at the outcome (which could be much better than expected)? Well yes and no. When we speak of risk and worry about it, we’re not worried about making too much money. Upside volatility – aw who cares, we’ll just wink and pocket the windfall. What we really fear is downside volatility.
The challenge, though, is that almost anything we do that paves the way for us to be pleasantly surprised also opens the door to disappointments. This is so in finance. It’s also so in every other walk of life. Do you want to avoid the risk of being hurt in a relationship? Stay a hermit and take no risk. What’s a foolproof way to avoid business failure? Don’t start a business. Most of us realize we can’t experience anything good unless we expose ourselves to the potential for bad things.
So risk evolves from the potential for disappointment, the purest human definition, to the potential for surprise, even pleasant surprise. It’s not that we don’t enjoy upside volatility. But realistically, it’s hard to imagine getting chances to experience it unless we expose ourselves to the bad stuff. That, the analysis of uncertainty in a symmetrical way, is the foundation upon which traditional financial risk is built.
Defining Risk – In Financial Terms
In finance, i.e. in investing, risk is the potential for an actual return to vary from an expected return. That paves the way for defining risk in terms of volatility: the more volatile a stock, the greater the potential for it to vary more widely from what we expect, and therefore the greater the risk.
- So R = V, or Risk = Volatility.
- You can also say R = SD (Risk = Standard Deviation); standard deviation is a more convenient way of restating volatility in terms that are comparable, in magnitude, to average percent returns.
- You can also say R = B (Risk = Beta), which uses, instead of raw volatility, a number that tells us how volatile something is compared to the market.
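If you like to see the mechanics, here’s a minimal sketch in Python of what those three measures boil down to. The return numbers and the 252-trading-day annualization are illustrative assumptions, not anything specific to Portfolio123:

```python
import numpy as np

# Illustrative daily returns (decimal form) for a stock and a market benchmark.
# These arrays are placeholders; in practice you would load real return series.
stock_returns = np.array([0.012, -0.008, 0.005, -0.015, 0.020, 0.003, -0.006])
market_returns = np.array([0.007, -0.004, 0.002, -0.010, 0.011, 0.001, -0.003])

# Volatility: standard deviation of daily returns, annualized with ~252 trading days.
annualized_sd = stock_returns.std(ddof=1) * np.sqrt(252)

# Beta: covariance of the stock with the market divided by the market's variance.
cov = np.cov(stock_returns, market_returns)
beta = cov[0, 1] / cov[1, 1]

print(f"Annualized standard deviation: {annualized_sd:.1%}")
print(f"Beta vs. benchmark: {beta:.2f}")
```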
In recent years, though, there has been discussion of measuring downside volatility only. We have DDev (downside deviation), the Sortino ratio, Value at Risk, MaxDD (maximum drawdown), etc.
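Again for the mechanically inclined, here’s a minimal sketch of what those downside-only measures compute. The daily returns and the 0% target are made-up assumptions chosen only to keep the illustration simple:

```python
import numpy as np

# Illustrative daily returns; a real analysis would use an actual return history.
returns = np.array([0.010, -0.020, 0.015, -0.030, 0.025, 0.005, -0.010, 0.008])
target = 0.0  # minimum acceptable return; 0% per day is a common simplifying choice

# Downside deviation: like standard deviation, but only shortfalls below the target count.
shortfalls = np.minimum(returns - target, 0.0)
downside_dev = np.sqrt(np.mean(shortfalls ** 2))

# Sortino ratio: average excess return over the target, scaled by downside deviation.
sortino = (returns.mean() - target) / downside_dev

# Maximum drawdown: worst peak-to-trough decline of the cumulative growth curve.
growth = np.cumprod(1.0 + returns)
running_peak = np.maximum.accumulate(growth)
max_drawdown = ((growth - running_peak) / running_peak).min()

print(f"Downside deviation: {downside_dev:.4f}")
print(f"Sortino ratio (daily): {sortino:.2f}")
print(f"Maximum drawdown: {max_drawdown:.1%}")
```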
Gerstein’s Gospel on Risk #1: Forget about asymmetrical measurements that look only at the downside.
That’s naïve. It’s statistics divorced from the essential reality of that which is supposedly being measured. It feels good. It’s emotionally satisfying. But objectively, it’s pablum.
That’s because the very factors that cause extremely bad outcomes can and do just as easily cause extreme gains when circumstances break well and vice versa.
This isn’t to say you should chase after bad Sortino ratios. Actually, it tells you the opposite. For asymmetrical analysis to work, we’d need mirror-image upside Sortinos, or Maximum Melt-Ups, and we’d need to fear those stocks every bit as much as we fear those with bad MaxDDs.
In fact, we need to worry even more about upside volatility. We already know about baggage and are on high alert when we see big drawdowns. On the other hand, we’re likely to cherish the super-gains and allow ourselves to be lulled into not realizing that the stocks posting them could just as easily post mega losses.
The solution: be leery of the fancy asymmetric risk measures and focus on the basic two-way, volatility-based approaches.
Gerstein’s Gospel on Risk #2: Risk is not about historical stock return data or analytics based thereon. Risk is about the underlying characteristics that caused the stock to behave as it did, and which can plausibly be expected to impact its performance in the future.
Beta, standard deviation, etc. are nothing more than exogenous summaries (report cards) that tell us how the student, oops, the stock, did in a specific “marking period.”
Assume a stock moves up and down, a little or a lot, based on movements in company earnings (a very reasonable assumption). Data showing what Beta was in the past (volatility relative to the market) is, then, probably showing us how volatile earnings (or at least earnings expectations) were relative to average earnings among companies included in the market benchmark. (Similar factors will also influence the volatility of market sentiment, and hence P/E, as well as of sales, and hence P/S, etc.)
So the way to manage risk going forward is to manage the potential volatility of earnings, sentiment, etc. This sounds like a daunting task. Actually, though, it isn’t. It’s very manageable. It’s what fundamental analysis, particularly Quality-related metrics, is all about.
Imagine two companies, A and B (I am creative with my naming). A is in a very hard-to-predict business, say gold mining or air transport, while B makes toothpaste, a very stable business. Suppose A adds fuel to the fire by having a lot of debt on its balance sheet. That means A has less ability to reduce costs as and when sales slump, which they tend to do, often. So as the economy ebbs and flows, A’s EPS is more likely to bounce up and down wildly while B’s is likely to remain pretty stable. Do I really need Beta to tell me that B is a much less risky stock?
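To put toy numbers on that intuition, here’s a quick sketch. The EPS figures are invented, and the variability measure (standard deviation relative to average EPS) is just one simple way to quantify earnings stability:

```python
import numpy as np

# Hypothetical annual EPS histories; these figures are invented for illustration.
eps_a = np.array([2.10, -0.50, 3.40, 0.20, -1.10, 4.00])  # cyclical, heavily leveraged "A"
eps_b = np.array([1.95, 2.05, 2.10, 2.20, 2.28, 2.35])    # stable toothpaste maker "B"

def eps_variability(eps):
    # A crude stability proxy: EPS standard deviation relative to the average EPS level.
    return eps.std(ddof=1) / abs(eps.mean())

print(f"Company A EPS variability: {eps_variability(eps_a):.2f}")
print(f"Company B EPS variability: {eps_variability(eps_b):.2f}")
```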
Actually, Beta might mess up and give me a wrong answer.
Assume A finds a lot of great gold reserves that can be mined and processed at low cost. The day A happens to announce this turns out to be a day on which a Fed governor says something suggesting a heretofore undiscussed strategy to push interest rates upward; the result is that the market tanks. Quants notice that A had a negative correlation to the market. Add in a bunch more episodes like this (and even a bunch on the flip side, such as A taking a dive when it turns out the new mine needs a billion or so in environmental remediation, announced at the same time the Fed governor issues a press release saying rates will not be raised, thus sparking a big market rally) and voila, A shows up as having a very low, or even negative, Beta, leading quants to tout it as ideal for widows and orphans. But notwithstanding the very conservative-looking Beta, we know A is really a hyper-speculative dumpster fire.
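A toy simulation makes the point. The numbers below are fabricated so that A’s huge company-specific swings happen to land opposite the market’s moves, which is exactly the kind of coincidence described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy daily returns for a broad market benchmark: small, fairly calm moves.
market = rng.normal(0.0003, 0.008, 500)

# Company A: hugely volatile company-specific swings that, purely by construction
# here, tend to land opposite the market's moves (good news on market-tank days,
# bad news on market-rally days).
stock_a = -1.5 * market + rng.normal(0.0, 0.04, 500)

cov = np.cov(stock_a, market)
beta_a = cov[0, 1] / cov[1, 1]

vol_a = stock_a.std(ddof=1) * np.sqrt(252)
vol_mkt = market.std(ddof=1) * np.sqrt(252)

print(f"Annualized volatility of A: {vol_a:.0%} vs. market: {vol_mkt:.0%}")
print(f"Beta of A: {beta_a:.2f}  (looks 'safe' despite the far bigger swings)")
```

The construction is contrived, of course; that’s the point. A Beta estimate can’t distinguish between genuine defensiveness and a volatile stock whose shocks happened to land on the wrong (or right) days.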
The relative riskiness of A and B needs to be assessed, not based on historical share-return-based analytics, but on the nature and inherent characteristics of their respective businesses.
Assessing Risk in Designer Models
When it comes to measuring risk, the Designer Models platform does two things for you:
- It gives you some conventional risk measures. Speaking for myself, as you now see, I’m not a fan of these, at least not when they are used conventionally. But old habits die hard, so it would not be realistic for us to eliminate them. And besides, there are times when we really do want report cards explaining the past, and those metrics do that well.
- It gives you new metrics you can use to measure risk in a more fundamental way.
So here’s what we’ve got:
- The Risk Score, which consists of . . .
- 90-Day volatility
- 90-Day Max Drawdown
- Average No. of Holdings
Volatility and Drawdown are conventional old-habit risk metrics.
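If you’re curious about the mechanics, here’s a sketch of the standard way trailing-window figures like these are computed. Portfolio123’s exact conventions aren’t spelled out here, so treat the details (trading-day windows, annualization) as assumptions:

```python
import numpy as np

def trailing_risk_metrics(daily_returns, window=90):
    """Trailing-window report card: annualized volatility and max drawdown.

    Mirrors the general idea behind 90-day volatility and 90-day max drawdown;
    the platform's exact conventions may differ.
    """
    recent = np.asarray(daily_returns)[-window:]

    # Annualized volatility over the trailing window.
    volatility = recent.std(ddof=1) * np.sqrt(252)

    # Max drawdown: worst peak-to-trough decline within the window.
    growth = np.cumprod(1.0 + recent)
    peak = np.maximum.accumulate(growth)
    max_dd = ((growth - peak) / peak).min()

    return volatility, max_dd

# Usage with made-up returns:
rng = np.random.default_rng(1)
vol, dd = trailing_risk_metrics(rng.normal(0.0005, 0.01, 250))
print(f"90-day volatility: {vol:.1%}, 90-day max drawdown: {dd:.1%}")
```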
Average number of holdings is a different kind of measure, one about which we haven’t talked and one that is not usually discussed at all except in abstract statistical terms.
The correct number of stocks for your portfolio depends on your confidence in what you’re doing (I’m assuming you’re already confident enough in stocks to warrant being in the market at all). If you have 100% confidence in your stock-picking approach, there is no reason for you to do anything other than put 100% of your assets into your favorite stock.
Diversification is a function of one thing and one thing only: the fact that we don’t have 100% confidence in anything, as we shouldn’t, considering that we’re dealing with the unknown.
The fewer stocks you own, the more confidence in yourself you’re expressing. That’s all there is to it. We also associate smaller numbers of stocks with increases in risk through basic logic: The further out on a limb you go in your embrace of the unknown future, the more open you become to disappointment. Any time something does go wrong, and we know absolutely positively that things do go wrong, it will impact a bigger portion of the portfolio.
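A little back-of-the-envelope arithmetic, assuming equal weighting, shows how concentration magnifies any single disappointment:

```python
# Back-of-the-envelope arithmetic, assuming equal weighting: the fewer positions
# you hold, the more a single blow-up hurts the portfolio as a whole.
single_stock_loss = -0.50  # one holding loses 50%

for holdings in (1, 5, 10, 25, 50):
    portfolio_hit = single_stock_loss / holdings
    print(f"{holdings:>3} holdings: one 50% loser costs the portfolio {portfolio_hit:.1%}")
```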
- In Statistics . . .
- Max Drawdown
- Standard Deviation
- Sharpe ratio
- Sortino Ratio
- Correlation with S&P 500
- Beta
- Alpha (annualized)
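For reference, here’s a minimal sketch of how report-card statistics like these are typically computed from daily return data. The return series and the 2% risk-free rate are made-up assumptions, and Portfolio123’s own calculations may differ in the details:

```python
import numpy as np

# Illustrative daily return series for a model and the S&P 500 benchmark.
rng = np.random.default_rng(2)
benchmark = rng.normal(0.0004, 0.010, 252)
model = 0.9 * benchmark + rng.normal(0.0003, 0.006, 252)

risk_free_daily = 0.02 / 252  # assumed 2% annual risk-free rate

# Sharpe ratio: average excess return over the risk-free rate per unit of volatility.
excess = model - risk_free_daily
sharpe = excess.mean() / model.std(ddof=1) * np.sqrt(252)

# Beta and correlation with the benchmark.
cov = np.cov(model, benchmark)
beta = cov[0, 1] / cov[1, 1]
correlation = np.corrcoef(model, benchmark)[0, 1]

# Annualized alpha (CAPM-style): return beyond what the beta exposure explains.
alpha_daily = model.mean() - (risk_free_daily + beta * (benchmark.mean() - risk_free_daily))
alpha_annualized = alpha_daily * 252

print(f"Sharpe: {sharpe:.2f}  Beta: {beta:.2f}  Corr: {correlation:.2f}  Alpha: {alpha_annualized:.1%}")
```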
Again, these are old standards. As report cards go, this is a pretty good collection. But as you consider these metrics, do so in connection with the ones mentioned below, which can tune you in to the whys and wherefores of what you saw and, thus, help you develop rational expectations about the future.
- Capitalization
As discussed in Topic C, Size is a very substantive factor. Larger stocks tend, all else being equal, to be less risky than smaller stocks because of the way large companies are built and because of their differing fundamental profiles. That’s why you often hear big-cap outperformance during tough times described as “a flight to quality” or “a risk-off period.” Conversely, smaller stocks tend, all else being equal, to outperform when the market is in “risk on” mode.
You can, if you wish, filter by Size. If you’re willing to take on this risk, you can at least see which models have delivered better returns relative to others that have taken on comparable Size risk.
- Liquidity
This is similar to Size to the extent that Size and Liquidity are often correlated. But Liquidity adds one important piece of information that may not be fully captured by Size: trading-execution risk. However good you or a strategy designer may be at figuring out what to buy and sell and when to trade, that’s all for naught if you can’t reliably execute the trades at prices that are reasonably in line with what you expect.
During the initial versions of Designer Models (when they were called Ready-to-go), there was much Forum discussion on liquidity and problems being experienced by those trading low-liquidity models, and many proposals were bandied about. Several designers addressed this by limiting the number of subscribers a model can have (i.e. the number of people that are likely to execute the same trades on Monday mornings).
Realistically, there’s nothing we can do to rescue a subscriber from getting burned by liquidity should that be what Mr. Market wants to do. No model designer can control what other designers do (i.e., how many other sets of subscribers to different closed models are chasing the same stocks) and, more importantly, nobody at Portfolio123 can protect you from the impact of what others outside of our platform do. We’re not the only ones who model, and others seeing the same data can and do chase the same stocks.
So illiquidity is not a problem we can solve. It’s a risk factor you can choose to expose yourself to or shield yourself from. The risk is that your results will vary meaningfully from expectations due to the actions of market makers, of Portfolio123 investors other than yourself (often trading other models), and of traders outside Portfolio123 who see the same public data that our models use. If you take on this risk, the best you can do is tolerate performance deviations by being more flexible as to when you execute trades and even whether you execute all trades.
- Themes
This was covered in Topic D. We’re not interested in traditional industry or sector data simply for the sake of knowing. What we’re interested in is the aspect of this content that is captured by Themes, mainly the potential stability or volatility of the earnings stream.
- Style Ratings
These were discussed in Topics B1 through B4. If you want to reduce risk, probably the single most important thing you can do is emphasize models with relatively high Quality scores, as discussed in Topic B3. The highest-risk style is Momentum. So before subscribing to a model that has produced high returns, check it against other models with generally comparable Momentum scores. If you are concerned about risk, it’s important that you know the extent to which the model achieved good returns as a result of a momentum bet, something that can be dangerous as the market moves from one regime to another, versus other factors.
Special Note on Hedging
In theory, this is supposed to be a risk-control technique. But in practice, it has often failed to show itself to be effective in this way.
The idea is to reduce or eliminate exposure to stocks when the market is bad. As an idea, it’s perfect. The problem is in execution.
It’s easy to come up with simulation results that are inflated by models that were out of the market at the right times but based on timing rules that were created with 20-20 hindsight and which have not yet really been proven as having potential for success in the future, given the variety of market crises to which we’re exposed.
In principle, we’re very much in favor of hedging. Our only concern is that you not be misled by views on its efficacy that are distorted by models designed after the fact. So if you want hedged models, it’s fair game for you to rely on designers’ descriptive statements and/or answers to questions you pose to satisfy yourself that their hedging ideas stand a good chance of working in the future.