Recently a former IDC researcher made headlines for revealing “how the sausage gets made.” This particular revelation came in response to a recent inaccurate report from IDC and Gartner that said Mac sales were falling, when Apple’s official numbers actually showed Mac sales increasing by double digits.
> So, the mantra became, preserve the growth rates; to hell with the actual numbers. Even the growth rates are fiction.
I am picking on the most controversial statement first. It would appear that the numbers have no credibility at all, that they are just made-up malarkey. This statement has caused a mild uproar and raises two questions:
- What systematic biases do analysts have?
- Can we learn anything useful from analysts?
Psychological research has shown that fields with arbitrary or delayed performance feedback are more likely to exhibit overconfidence bias. Those most affected are clinical psychologists, doctors and nurses, engineers, entrepreneurs, lawyers, and investment bankers.1 Financial analysts, like investment bankers, are charged with evaluating companies or industries and making predictions about them. Contributing to analysts’ overconfidence is the fact that predicting the future is hard.2 Past predictions are frequently never checked to see whether they actually came to pass, and predictions that end up being incorrect are easily attributed to unknown forces. It is tempting for an analyst to think, “My predictions were not wrong because I made an error. My predictions were wrong because unknowable forces acted on them.” This type of thought process can absolve the analyst (at least in their own mind) of any blame.
Analysts are consistently overoptimistic about market and firm prospects. To put this in perspective, by mid-2000 buy recommendations were approximately 75 percent of all recommendations whereas sell recommendations were just 2 percent.3 Most academics, and I would imagine most practitioners, do not care about the level of a rating, focusing instead on changes in rating. Analyst overoptimism seems to be priced into market expectations, as stock downgrades accompany the largest market movements.
Analysts are also prone to a well-known psychological bias known as the representativeness heuristic, which leads people to make decisions based on stereotypes or to apply a small subset of data to the broader whole. Analysts treat all stocks perceived as growth stocks the same. They likewise continue to treat a winning stock as a winner even after evidence suggests the situation has changed. Attempts to address these biases through regulation have had only limited impact.4
So can we gain anything useful from these analysts? The short answer is yes, but it comes with a caveat. I alluded to it above: you have to know how to put their buy/sell recommendations, earnings forecasts, product sales forecasts, and any other forecasts in context. Understand the limitations that analysts face. For example, Regulation Fair Disclosure, which went into effect in 2000, prevents analysts from being fed information by company insiders. Consider this point in the context of the numbers IDC publishes. IDC cannot legally get information directly from publicly traded companies about their sales numbers. How does IDC, or any similar firm, draw its conclusions?5
> In most quarters, the team starts with OEM guidance and, depending on the country, does some by-country cross-checking. However, for the US team, we just did some systematic adjustments to the vendor guidance and called it a day. For example, we knew that lots of Macs were transshipped from Miami to Latin America. So, we took some percentage of Macs (Apple, of course, never helped; in fact, even objected, saying it wasn’t so) and reallocated them from the US to a smattering of Latin countries, effectively modeling the market but with no low-level data.
This process should not surprise anyone. They can’t get information directly from firms, so they use a convoluted process to guess. An alternative strategy would be to send someone to sit in a store and count sales, but that approach has been scorned too. So even though “in the end, the process was political,” that doesn’t mean there weren’t parts of the numbers people could trust.
> I used to tell customers which parts of the data they could trust, essentially the major vendors by form factor and region. The rest was garbage.
> The industry itself was aware of these issues, but agreed to maintain the fiction because it was convenient. Most vendors kept their own numbers, but referred to IDC for public purposes.
How does this IDC analyst’s experience mesh with what broad-based studies have shown?
First of all, let’s compare analyst performance against company managers. This is really the reason analysts exist at all: to tell us something beyond the calculating disclosures of a firm. Pitting analysts against managers sets up a clear asymmetric-information problem. Managers and insiders should know the most about the firm and its prospects, which puts analysts at an informational disadvantage.
Research has shown that analyst forecasts are more accurate than manager forecasts when macroeconomic factors move with the fortunes of the firm. Managers are more accurate when a firm is experiencing “unusual” circumstances or situations that make management’s internal knowledge of the firm’s operations more important. Overall, analysts are more accurate than managers about 50 percent of the time. So, perhaps analysts are not at an informational disadvantage relative to managers after all. The two groups obviously have different information sets, but both appear to be useful in forecasting the firm’s prospects.6
We can cross-check these academic results against what the researcher revealed above. The researcher was essentially claiming that IDC uses broad economic conditions to adjust its numbers. This is exactly the type of process that leads analysts to get the numbers right, and it is the lens through which we should view analyst forecasts.
Analyst recommendations are also particularly informative when they run against the common trend. Analysts tend to herd in their recommendations.7 Breaking from the crowd to issue a contrary recommendation carries a lot of reputational and career risk for the analyst involved. This is why all-star analysts are more likely to be contrarian than other analysts. And, yes, analysts get rated by their peers and classified as all-stars.8
I am not defending IDC or any other group of analysts or forecasters. They paint themselves into corners with marketing materials and other statements that claim their numbers are more accurate than they really are. When analysts are inaccurate they should own it.
The point of this post is to suggest that analysts are not being paid to do nothing or to simply make up numbers. Sure, there are instances, maybe many instances, where analysts (expand that list to researchers, managers, journalists, and so on) have a hypothesis beforehand and set out to find facts in support of that hypothesis.
Analysts can add value. You just have to know what kind of filters you have to use on any information they generate.
NOTE: Stay tuned for Part II where I discuss financial analysts and why they don’t seem to understand Apple.
Barber, B. M., & Odean, T. (2001). Boys Will Be Boys: Gender, Overconfidence, and Common Stock Investment. Quarterly Journal of Economics, 116(1), 261–292. doi:10.1162/003355301556400↩
Understatement of the year?↩
Barber, B. M., Lehavy, R., McNichols, M., & Trueman, B. (2006). Buys, holds, and sells: The distribution of investment banks’ stock ratings and the implications for the profitability of analysts’ recommendations. Journal of Accounting and Economics, 41(1–2), 87–117. doi:10.1016/j.jacceco.2005.10.001↩
Mokoteli, T. M., & Taffler, R. J. (2009). Behavioural bias and conflicts of interest in analyst stock recommendations. Journal of Business Finance and Accounting, 36(3), 384–418. doi:10.1111/j.1468-5957.2009.02125.x↩
If a company did directly disclose its sales figures to IDC, it would have to simultaneously release those numbers to the public. That doesn’t sound like a great way to maintain any kind of competitive advantage.↩
Hutton, A. P., Lee, L. F., & Shu, S. Z. (2012). Do Managers Always Know Better? The Relative Accuracy of Management and Analyst Forecasts. Journal of Accounting Research, 50(5), 1217–1244. doi:10.1111/j.1475-679X.2012.00461.x↩
Herding is a particularly bad problem in a field like finance where everything is compared against some, possibly arbitrary, benchmark. In many areas of finance absolute performance does not matter; only relative performance matters. This, of course, is a topic for another post.↩
Bradley, D., Liu, X., & Pantzalis, C. (2014). Bucking the Trend: The Informativeness of Analyst Contrarian Recommendations. Financial Management, 43(2), 391–414. doi:10.1111/fima.12037↩