Archive for Rating Systems

Investments 101: What we can learn from Chess Grandmasters

Posted in Behavioral Finance, Manager Selection, Uncategorized on March 17, 2009 by evd101

By Erik L. van Dijk

 

Two years ago I had the pleasure of sponsoring a chess team. Not just that: the pleasure even extended to becoming Dutch champion with the team. Even some of the big teams in Russia saw us as a serious contender for the European Club title. Unfortunately we didn't get the funding right to compete with the Russians in that competition. But becoming Dutch champion was nice. And to a certain extent not so complicated: analyzing my budget (the cost side), I explored what good grandmasters would cost per game. And then we hired some of the strongest players in the world, using the FIDE (= World Chess Federation) rating list.

This rating list was based on a methodology developed by the Hungarian professor Arpad Elo. Basically, the rating system gives players points for victories and penalties for losses, while at the same time incorporating differences in playing strength based on earlier achievements. The latter is important: if I join a chess club as a new youngster and happen to be in the same club as a famous grandmaster, my draw against that guy is obviously not really a draw when calculating my rating score. It is a sensation that should be rewarded with an increase in rating points. By the same token, the grandmaster should be penalized for this unexpectedly lousy draw. Et cetera. If you then continue with that performance calculation system for many years, rating all the players in the game, you get a very nice system that predicts tournament results with quite some accuracy. Result: Grandmasters really are Grandmasters!

The nice thing was that – both as a sponsor and as a representative of the board of the Dutch Chess Federation – I met quite a few of those geniuses. And geniuses they are. Nothing like King Kong beating the professionals here. Top grandmasters can play blindfold chess against you, the amateur with a board and full sight of the pieces, and not in just one game but in dozens of them simultaneously! Yep… the incredible quality of a top specialist.
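For readers who like to see the mechanics, here is a minimal sketch of the standard Elo update described above. The ratings and the K-factor of 20 are illustrative choices; FIDE uses several K-factors depending on the player.

```python
def expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))


def elo_update(rating_a, rating_b, score_a, k=20.0):
    """Return both players' new ratings after one game.

    score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss.
    k is the update factor; 20 is an illustrative choice.
    """
    delta = k * (score_a - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta


# Example: a 1600-rated newcomer draws against a 2600-rated grandmaster.
new_amateur, new_gm = elo_update(1600, 2600, 0.5)
print(new_amateur, new_gm)
```

The unexpected draw moves roughly ten points from the grandmaster to the newcomer: exactly the "sensation rewarded, lousy draw penalized" mechanism described above.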

Now back to investments. It looks so simple. All investments are about return (the positive variable), stability of returns (also positive), risk (negative to the extent that it is bigger than what an investor is willing to take), risk preference (who is the investor, what can he afford, what is his investment style, et cetera), investment type, correlation with the rest of the investments in the portfolio (the lower the better), sensitivity to outliers, et cetera. Yep, everything in there can be measured to quite some extent.
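As a minimal illustration of how those characteristics can be measured (the return numbers below are made up, and the statistics are only the simplest possible proxies):

```python
import numpy as np

# Hypothetical monthly returns for a fund and for the rest of the portfolio
# (purely illustrative numbers).
fund = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04])
rest_of_portfolio = np.array([0.01, 0.00, 0.02, -0.01, 0.01, 0.02])

mean_return = fund.mean()                                  # "return"
volatility = fund.std(ddof=1)                              # "stability of returns" / risk
correlation = np.corrcoef(fund, rest_of_portfolio)[0, 1]   # the lower, the better for the portfolio
worst_month = fund.min()                                   # a crude look at sensitivity to outliers
```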

Now, how come we find it so difficult to distinguish between good and bad investors or investment opportunities? If you talk to the average professional investor and tell him about the chess grandmaster, his first thought is that chess is a far simpler activity than investing and that it is played by nerdish, mono-focused guys. Most of the time they tell you so without even really knowing the game. And in the end that cannot really be true. Take the following example: if we simply ignore the factors in the "model" that you are using (either an implicit model when you are a fundamental-style investor, or an explicit one when you are a quant), then at the end of the year you either beat the benchmark (be it some kind of index, or your required rate of return if you want to define the game in absolute instead of relative terms), are about equal (i.e. a draw), or you lose (underperformance).

The average chess player, when you talk to him about investing, never assumes that his game is more complicated, but it is striking that brilliant amateur investors (read: chess players) often have very smart things to say about the problems facing investors. Reason: they recognize it as another kind of game. And chess grandmasters are game specialists. But investments, defined the way we did before, are a kind of game too. It is you out there against the competition, i.e. the other investors / the market. There are certain rules, there is a (hopefully level) playing field, there are various competitions (Dutch equities, US bonds, Asian Real Estate and so forth), so play if you wanna play.

The main difference is not the decision models used either. Sure, there are a lot of important factors to take into consideration when analyzing your investments. But top-level chess is multi-factor as well; it is just that the factors are different ones. And the number of opportunities in investing is not necessarily bigger than in chess. Equity strategies or hedge fund strategies can be compared with playing 'open' positions: not too many pieces left on the board and/or pawn structures where nothing or not too much is blocked. Blocked, closed structures are more like fixed income strategies, et cetera.

No, the real difference is that somehow big investors can make far more money than top grandmasters. And investors work in firms/structures led by shrewd managers that are indeed all about making a lot of money, not just for the investor but for themselves as well. The average chess grandmaster is far less money savvy, and often more interested in playing a nice game, enjoying a nice location when playing a tournament, et cetera. How does that translate into differences between investing and chess? There is more at stake when being 'bad'. Clients will withdraw money from your portfolio, bad returns translate into lower fees on existing client money, your boss might fire you, et cetera. So basically, whereas chess players didn't really mind Professor Elo and FIDE developing that nice rating system, there is some kind of tendency among average investors (and they are the majority, just like in chess) to avoid being exposed as the bad guy. Result: smokescreens and a lack of robust performance measurement.

Parties like Morningstar, Lipper and some of the better institutional manager selectors like Bfinance, Russell and ourselves (albeit all with slightly different decision models and goals) try to change that. But professional investors are most often dealing with money from third-party clients. Chess players, on the other hand, play for themselves. Sure, sometimes they participate in team matches, but even there you can see that this 'collectivity' aspect doesn't really change things, because in the end it is about the impact on the rating. And the rating in turn decides your position in the world rankings.

The confusion about performance in investments is further compounded by the fact that clients and their advisors are sometimes not really certain of what they want and how they want it. Indeed, that is related to what we said earlier about "Good Reasons". They often assume that naive, simple performance graphs over the last 2-5 years tell the story about how good or bad a manager is. And the traditional overconfidence aspect is also there: when we ourselves go through a nice series of good performance as investors, we often assume that we are great investors. But as we saw in an earlier presentation, you need about 8 years before 100 naive King Kong investors flipping coins can be de-masked as charlatans.
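A back-of-the-envelope check of that claim, assuming each coin-flipping "investor" has a 50/50 chance of beating the market in any given year:

```python
# How many of 100 pure-luck investors do we expect to still have beaten
# the market in every single year, as the years go by?
investors = 100
for years in range(1, 11):
    expected_survivors = investors * 0.5 ** years  # luck halves the field each year on average
    print(f"after {years:2d} year(s): {expected_survivors:5.1f} lucky 'outperformers' expected")
```

Only after roughly seven to eight years does the expected number of pure-luck outperformers drop below one, which is why a 2-5 year performance graph says very little about skill.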

So we truly need more objective, Elo-like analysts of markets. That is a role we at Lodewijk Meijer try to fulfil. Not to be the standard criticaster standing on the sidelines, blaming the professional asset managers for doing a bad job. It is a fantastic, difficult, impressive thing to outperform the market. So we are full of admiration for the truly strong ones, but only if they have the investment philosophy and detailed analysis to show that it was skill and not luck. In other words, just as in chess: the strong guys will like me, the not-so-strong guys who want to learn might still like me (maybe I know a place where they can improve their skills in the area that led to the defeat they just experienced!), and some of the bad guys who are sincere about it might for the same reason still like me.

But… who really won't like me is the big investor trying to look bigger and better than he is. We will do all we can to de-mask him and warn potential clients. Not just in the interest of the client, but also in the interest of the asset manager. If there is one thing to be learned from markets, it is that being in there for the quick buck will in the end always hurt you. Tulip mania in the 17th century, day trading without either a structured model to exploit small inefficiencies or a position as a broker so that your costs are lower, insider trading, stock market fraud, short-term decisions based on short-term factors whereas you are a long-term investor… they have all led to disaster for the investors following that kind of strategy.

And even if you work hard, avoid overconfidence and specialize, you still need to make sure that you do not put all your eggs in one basket. Diversification – one of our core themes, as incorporated in the Markowitz-Van Dijk approach to asset allocation – is part of the story as well. Even the strongest of grandmasters, be they Fischer, Karpov, Kasparov, Kramnik or Anand, all have a so-called repertoire in which they play more than one opening, so as to make sure that a) they are less predictable; and b) in case one opening somehow stops functioning, the results in the other openings are hopefully uncorrelated.
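A minimal sketch of the same logic in portfolio terms (the 10% volatility and the 50/50 split are illustrative assumptions):

```python
import math

# Two strategies ("openings") with the same stand-alone volatility,
# combined in an equal-weighted mix.
sigma = 0.10  # stand-alone volatility of each strategy (illustrative)

for rho in (1.0, 0.5, 0.0):
    # volatility of a 50/50 mix of two equally volatile strategies
    combined = sigma * math.sqrt((1 + rho) / 2)
    print(f"correlation {rho:.1f}: combined volatility {combined:.3f}")
```

With correlation 1.0 nothing is gained; with correlation 0.0 the combined volatility drops from 10% to roughly 7.1%, which is the same reason a grandmaster keeps more than one opening in his repertoire.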

Of the grandmasters mentioned, the Russian Anatoli Karpov was probably the "laziest", with the narrowest repertoire. So why was he world champion for such a long time? Was it luck, because the rest were bad? No, I do not think so. He was very much aware of the fact that he was not the hardest worker on new ideas at home. But he also knew that his endgame strength was fantastic. So even if his position after the opening and middlegame was somewhat inferior, he might still turn the tide in the endgame. Knowing that, he in a way optimized his strategy by focusing on a simple game plan: exchanging pieces relatively quickly and avoiding hectic, turbulent positions with potential for dangerous attacks and sacrifices, all to get to the endgame quickly. Isn't that like the top specialists in low-beta, low-volatility asset classes, e.g. Fixed Income or Money Markets? On the other hand, guys like Kasparov, who play every game for a spectacular attacking win, have to work harder at home so as to know much more by heart before the game. Similar to what we can expect from an equity or hedge fund manager, for instance. Or a specialist in Emerging Markets, where the available data set is often still too short.

So, as long as we do not have a rating system like the Elo system in investments, it pays to compare asset managers (either at the firm level or at the level of individual products/lead portfolio managers) with chess players. Whether they like that or not, we don't care. It will help us avoid all kinds of pitfalls in what is in the end a similar type of game.

From Chess to Credit Crisis

So why then the credit crisis? Can we still use the metaphor of chess when looking at the crisis? The answer is yes. When alleged specialists turn out not to be specialists – i.e. the players in our championship weren't the best – there is a big chance that organizers and owners/sponsors (us as end-investors) will freak out and put pressure on our non-delivering 'stars' that weren't stars after all. OK, there are excellent specialists out there that couldn't help us avoid the crisis, but at least they will know ex-post that the basis for their giant status was laid in periods like the one at hand. When the going gets tough, the tough get going.

In the meantime we have to go through the motions and make sure that we filter out the bad guys. As long as the ideal rating system is not there, we might use "bonus" and "fee levels" as a proxy. Research has indicated that the best money managers are not the ones charging the highest fees. On the contrary, true specialists want to attract a large portfolio so that they can earn their excess-return fee over a larger Assets Under Management base. Attracting a large portfolio is easier when fees are not too high. Lousy players pretending to be good know that the likelihood of earning that performance bonus is small. Therefore it is better for them to charge a high ex-ante fee.
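A hypothetical piece of arithmetic behind that proxy (all numbers invented for illustration):

```python
# Skilled manager: a modest management fee on a large asset base, with the
# excess-return fee coming on top only if outperformance is delivered.
large_aum, low_fee = 10_000, 0.005    # e.g. 10 billion at 0.5%
small_aum, high_fee = 1_000, 0.020    # e.g. 1 billion at 2.0%

print(large_aum * low_fee)   # 50.0 (millions): more base revenue despite the lower fee
print(small_aum * high_fee)  # 20.0 (millions): the pretender's revenue, earned up front,
                             # regardless of whether any outperformance is ever delivered
```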

And what about performance bonuses in banks? There is nothing wrong with them if the bonus is related to TRUE outperformance. But too often we see that bonus structures in big financial institutions are not related to a definition of performance that is in the end in the best interest of their clientele. In many cases they are not even based on benchmark levels that are really difficult to beat. And in some cases there are no high watermarks ensuring that lousy performance in the past has to be compensated for before the manager is entitled to a new bonus.
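For readers unfamiliar with the term, here is a minimal sketch of how a high watermark works (simplified; real bonus schemes add hurdle rates, crystallisation dates, et cetera):

```python
def bonus_due(nav_history, bonus_rate=0.20):
    """Pay a performance bonus only on gains above the highest NAV reached so far."""
    high_watermark = nav_history[0]
    bonus = 0.0
    for nav in nav_history[1:]:
        if nav > high_watermark:
            bonus += bonus_rate * (nav - high_watermark)  # only new ground counts
            high_watermark = nav
        # below the watermark: past losses must be made up first, no new bonus
    return bonus

# Example: 100 -> 120 -> 90 -> 115. The recovery from 90 to 115 earns nothing,
# because 115 is still below the earlier peak of 120.
print(bonus_due([100, 120, 90, 115]))  # 0.20 * 20 = 4.0
```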

Obama was right to be angry at many bankers and to investigate what he could do. What we can do in the meantime is use eagerness to score bonuses or fees as a negative proxy with respect to future performance.

I hope that this chess-based reflection gives you some basic ideas about what you could look for as long as you do not have a full-fledged rating system.

Note:

Within the chess world there are a lot of discussions going on, indicating that even there people are not fully satisfied with the Elo rating system. However, it is far better than nothing, and far better than what we have in the financial world. There is still a long way to go. And it is good to see that there are a lot of initiatives going on that will help you distinguish good from bad much better than you could in the past. I will be more than pleased to brainstorm about this further, if you desire.

In our next entries we will of course go from metaphor back to daily market movements and hands-on analysis of products and markets.