Monday, January 18, 2010

When quants fail

I am sometimes asked why I am so anti-quant. It's because there are many circumstances in which quantitative analysis fails. I can cite two examples off the top of my head. The problems shown in the first example can be managed, but the second highlights a more serious problem with quantitative analysis and modeling in general.

The trees or the forest?
Avner Mandelman wrote a commentary about a company he analyzed. Everything seemed fine at first; from his description, the numbers would have passed any quant screen or model:

It sounded promising, so first I checked out management. None had a criminal record, spats with former investors, or bitter divorces pending.

Next I read the filings. The auditor was reputable, the lawyers good, the footnotes few, revenue recognition plain, inventories slim, patent disputes nil, and debt non-existent.

What of the technology? I asked an engineer I knew to check it out for me - it was fine.

So I called management and arranged to meet the chief executive officer, the chief financial officer and the marketing guy. All seemed smart, hardworking, and honest.

Yet an investment in the stock would have fallen apart because of a flaw in the company’s business model. For the full details of the story, read more about it here.

The moral of this story is that a good fundamental analyst looks at the trees, while a good quantitative analyst is better at looking at the forest. The former will beat the latter on a single stock story virtually every time. That's why quants size their stock bets to diversify away stock-specific risk (residual risk, in geek-speak) so that what is left is largely a model bet. Accordingly, a typical quant stock portfolio will have 150-200 holdings, whereas a fundamentally driven one will have far fewer.

These principles are encapsulated in Grinold’s Law of Active Management. I would warn, however, that the application of Grinold's Law has subtle nuances that good quants should be aware of (see my previous comment here).

What about model assumptions?
The other risk for quants is that their models are just plain wrong. As Kid Dynamite puts it in his post: “It's not a crime to have more information than the guy on the other side of the trade/bet”.

He went on to illustrate his point with an interview he once had with Susquehanna [emphasis mine]:

Anyway, the interviews with Susquehanna were the most mathematically rigorous of any I've ever encountered. While most firms seemed content that as a math major from MIT I probably had some chops, Susquehanna wanted to see them. I'll never forget the first question in the interview, where the interviewer asked "what is the expected value of the number of heads if I flip a coin 1000 times." DYKWTFIA ?!?!? "500," I replied confidently. "And what's the standard deviation?" He handed me a pencil and paper and told me to take my time. I managed to grind out the answer (nope, I couldn't do it right now, 11 years later, but I can look up the methodology online (SQRT (n*p*(1-p)) and find that it's about 16). He then asked me for a 95% confidence interval of the number of heads one could expect in extended repetitions of 1000 flips - easy - 2 standard deviations, or a range of 468 - 532. Finally, he offered me even money on a series of coin flips where he'd bet that the total number of heads would be more than 532. Layup, right? I just did the math and knew it was a 40-1 prop. "Ok, I'll take it," I told him confidently.

The interviewer proceeded to explain to me that I knew the math - and that he KNEW that I knew the math, after all, he'd just watched me derive it. Why then, would I expect him to be offering me such a great wager? "Because you were testing me?" I hoped. No - it was because he had a guy on the floor of the CBOT who had trained himself to flip coins with a much better than 50% success rate for a desired outcome. The moral of the story was that you should always assume that the person on the other side of the trade thinks THEY have an edge too. The interviewer then asked me, and I swear this happened, although not in these exact words, "So let's say you calculate the fair value of an option to be $1.50, and you're in the crowd trying to buy 10,000. The market is relatively thin, and you are buying a few hundred options at a time. Suddenly, Goldman Sachs walks in and offers you 10,000. What do you do?"

"Take 'em!" The young, confident, and soon to be Kid Dynamite in me replied, "I know they're worth more, I've done the math." The interviewer shook his head, and said that GS wouldn't be selling them to me out of their generosity - that GS clearly had a different view, and that I should try to think of where my analysis could be wrong. Did I miss a dividend? Was there an imminent earnings event? Had news come out? This annoyed me greatly. "How can you ever trade then, if every time you trade you think that you might be on the wrong side of the trade or that your counterparty has more information than you do?" I was perplexed. The interviewer explained that it's not every time, and it's not every trade, but you should certainly be wary of eager and smart counterparties willing to put up sizable trades, and you should make darn sure you've triple checked your work.
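The binomial arithmetic in the interview above is easy to verify, and the anecdote about the skilled coin flipper can be quantified too. Here is a minimal Python sketch (the 55% success rate for the biased flipper is an illustrative assumption, not a figure from the story) that checks the mean, standard deviation, and 95% range, and shows how quickly the odds flip once the coin is not fair:

```python
import math

def binom_tail(n, p, k):
    """Exact P(X > k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1, n + 1))

n = 1000
mean = n * 0.5                            # expected heads: 500
std = math.sqrt(n * 0.5 * 0.5)            # sqrt(n*p*(1-p)) ~ 15.8, roughly 16
lo, hi = mean - 2 * std, mean + 2 * std   # ~468 to ~532 heads, 95% of the time

# Fair coin: exceeding 532 heads is a ~2% event -- long odds against.
fair = binom_tail(n, 0.5, 532)

# But a flipper who hits a desired outcome 55% of the time
# turns the same bet into a strong favorite.
biased = binom_tail(n, 0.55, 532)
```

Even a modest bias in the flip probability moves the mean of the distribution far enough (from 500 to 550 heads at 55%) that the "sucker bet" side becomes the likely winner, which is exactly the lesson the interviewer was teaching.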

The typical profile of a “top” fresh quant is a Ph.D. out of a top school. People like that are left-brained and book smart. The problem is that they tend not to be street smart, yet investing and trading are behavioral in nature. Therein lies the mismatch. These misalignments in skill sets can lead to catastrophic failure if there is no adult supervision. I wrote before that:

The greatest quant failure occurred in the 1960s and it was caused by Robert McNamara and the “whiz kids” in their conduct of the Vietnam War. They incorrectly framed the problem and focused on the wrong metrics. The results scarred an entire generation and have altered American foreign policy ever since. As an example, you can find differing analyses of a battle of the Vietnam War here at Fabius Maximus' blog.

To answer the original question of why I am so anti-quant: I'm not. There are, however, circumstances when quantitative analysis fails, and the unfortunate thing is that many in the profession don't recognize those limitations. An article in the New York Times entitled Do you have the 'right stuff' to be a doctor? argues that personality matters in medicine, a profession similar to being a quant in that it requires someone to be not only book-smart but cognitive-smart.

Great investors not only understand models, but they internalize models and know when and when not to use them. Great quants should do that too.


iQuant said...

"All models are wrong, but some are useful" (George Box, statistician). At the end of the day it is the job of the quant to separate the useful models from the useless ones, given the reality of facts. Good quants choose good models as good doctors choose good treatments. As a good friend once told me: "You can eat any mushroom you find in nature. Some of them, only once..."

walt said...

Mandelman link is broken.

Cam Hui, CFA said...

The broken Mandelman link seems to be a problem with the Globe & Mail website. Please try again later.

Don Giovani said...

I think the "failure of the quants" is not due to wrong answers, but to wrong questions. The example of coin flipping is very illustrative. The quant correctly answers all the questions, but fails to question the SENSITIVITY of his answers to his HYPOTHESES. Namely, how would the odds he calculates change if, say, the probability of tails were 51%, 60%, etc.? And even more so, how would the OUTCOME change: decision, P/L, etc.? Obviously, our quants are not trained to ask these essential questions. Correctly trained quants would be able to answer these questions, which after all are maths as well.
As I hate designating a scapegoat, I'll blame the whole chain:
1) Quants should ask themselves questions they are not asked, and - this is less easy than it seems (personal experience!) - force their non-quant colleagues to listen to these.
2) Quant hirers should - for their own sake! - ask them questions about limitations of models, sensitivity of answers to hypotheses and other model risk questions. Ignoring them, then blame the quants is about as stupid as going to the doctor, giving partial info and blaming the doctor for wrong treatment.
3) Quant education in financial engineering programs should especially focus on those questions. These are much more difficult than what is usually taught (there are indeed some lectures - not enough - on model risk and hypothesis testing, but a real assessment of the whole chain from statistical estimation to decision making and P/L is very rarely done).
When people stop expecting the Moon, and stop throwing the baby out with the bath water when it doesn't come, then perhaps quant work will be beneficial and respected for its true value. Otherwise, I will still think that when a trader or a banker hires a quant, he doesn't pay for the result, but rather for a potential scapegoat to blame in case things go wrong...

AM said...

There is no question that quantitative analysis is useful. However, there is a big difference between "quantitative" and "purely mathematical". Too many quants just see the markets through mathematical eyes and fail to understand what is going on. Look at credit derivatives ... an amazing amount of fancy math was spent on them, and everybody failed to see that the sub-prime loans would cease to be diversified the moment interest rates rose ... There are lots of quants who just do not get it, but love their math. When you interview for a job in finance, you should be asked questions about the markets, not about math....

MCKibbinUSA said...

Subjectivism is still a fact of life, and while many classically trained statisticians and mathematicians tend to reject subjectivism, the role of judgment in quantitative analysis is increasingly apparent. As argued by David Vose (2008):

"For the risk analyst, subjectivism is a fact of life. Each model one builds is only an approximation of the real world. Decisions about the structure and acceptable accuracy of the risk analyst's model are very subjective. Added to all this, the risk analyst must very often rely on subjective estimates for many model inputs, frequently without any data to back them up."