Thursday, February 24, 2011

Who programs Watson's children?

The field of artificial intelligence has come a long way from the early days of ELIZA. Now that Watson has bested top humans in the game of Jeopardy, a task in which a computer program had to deal with the vagaries of natural language, we can arguably state that computers can pass the Turing Test. For newbies, the Turing Test was proposed by Alan Turing in the early days of computing:
The Turing Test is a test of a machine's ability to demonstrate intelligence. A human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
This was a great day in the field of artificial intelligence and the lessons learned have huge potential for medical and other commercial applications. But does that mean that the machines are destined to take over?


Computers are idiot savants
This is a topic that is near and dear to my heart, as what is being proposed amounts to a more generalized application of the quantitative analysis used in investing. As I have written in an earlier post, you have to be intelligent about how you apply quantitative techniques to investing. You can't let your systems run fully on autopilot; you have to apply your own market knowledge, experience and intuition to the process.

Computers are idiot savants. They will do exactly what you tell them to do and no more. The responsibility of the human designer is to understand the program's blind spots and to override the program under the right circumstances. Other quants, such as Paul Wilmott, have hammered on this point over and over again.
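To make this concrete, here is a minimal sketch in Python of what a human override on an otherwise blind trading rule might look like. Everything here is hypothetical; it is an illustration of the design principle, not anyone's production system.

# A minimal sketch of the "idiot savant" problem: the model follows its
# rule no matter what, unless a human explicitly overrides it. All names
# and values here are hypothetical.

def model_signal(price, moving_average):
    # A naive momentum rule: long above trend, short below.
    # The model knows nothing outside this rule and follows it blindly.
    return "BUY" if price > moving_average else "SELL"

def trade_decision(price, moving_average, human_override=None):
    # A human operator can veto the model when conditions fall outside
    # anything the model was designed for.
    if human_override is not None:
        return human_override  # experience and intuition take precedence
    return model_signal(price, moving_average)

# Normal conditions: let the model run.
print(trade_decision(105.0, 100.0))                               # BUY

# A dislocation the model has never seen: the human steps in.
print(trade_decision(105.0, 100.0, human_override="STAND ASIDE"))

The point of the design is simply that the model's output is never the final word: a human with market knowledge sits between the signal and the trade.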


How computer models fail
For instance, consider this example from Bronte Capital, which points out that the reported numbers for a Chinese company don't seem to add up. Would a standard multi-factor stock selection model pick up on such a discrepancy? Not likely.
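To see why, consider a toy sketch of a factor model (hypothetical data and weights, purely for illustration). It ranks stocks on ratios computed from reported figures and never asks whether those figures are internally consistent:

# A toy multi-factor scoring sketch (hypothetical data and weights) showing
# why such models miss inconsistent accounts: they take reported figures
# at face value.

def factor_score(stock):
    # Rank on cheapness (earnings yield) and profitability (ROE).
    # Nothing here asks whether the reported numbers are even coherent.
    earnings_yield = stock["earnings"] / stock["market_cap"]
    roe = stock["earnings"] / stock["equity"]
    return 0.5 * earnings_yield + 0.5 * roe

# A company whose pieces don't add up: reported revenue implies an
# implausibly large share of its entire industry's sales.
suspect = {"earnings": 90, "market_cap": 300, "equity": 100,
           "revenue": 120, "industry_revenue": 150}

print("Factor score: %.3f" % factor_score(suspect))  # a high score

# The cross-check a sceptical human would make, which the ranking
# model never does:
implied_share = suspect["revenue"] / suspect["industry_revenue"]
if implied_share > 0.5:
    print("Flag: revenue implies %.0f%% of industry sales" %
          (100 * implied_share))

A human analyst reading the filings would notice the implausible figure; the ranking model, fed only the ratios, would happily score the stock near the top.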

Also consider Jon Danielsson's explanation of why risk models failed when they were needed the most:
Models are least reliable when needed the most
The consequence of these issues is that the stochastic process governing market prices is very different during times of stress compared to normal times. We need different models during crisis and non-crisis periods and need to be careful in drawing conclusions from non-crisis data about what happens in crises and vice versa.

This means that when we most desire reliable risk forecasts, i.e. during market turmoil or crises, the models are least reliable because we cannot get from the failure process during normal times to the failure process during crises. At that time the data sample is very small and the stochastic process different. Hence the models fail.

From a modelling point of view, this suggests that it may be questionable to use fat-tailed procedures, such as extreme value theory, to assess the risk during crises with pre-crisis data. Techniques such as Markov switching with state variables may provide a useful answer in the future. At the moment such models are few and far between.
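A quick simulation makes Danielsson's point vivid. The sketch below (in Python with numpy; all parameters are made up for illustration) fits a 99% Value-at-Risk threshold on calm-period returns and then checks how often that threshold is breached once the market switches to a high-volatility regime:

# A hedged illustration of Danielsson's point: Value-at-Risk fitted on
# calm-period data badly understates losses once the market switches to
# a crisis regime. Regime parameters are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

calm = rng.normal(0.0005, 0.01, 2000)    # low-volatility regime returns
crisis = rng.normal(-0.002, 0.04, 250)   # high-volatility regime returns

# 99% one-day VaR estimated from calm data only...
var_99_calm = -np.percentile(calm, 1)

# ...versus how often that threshold is breached in the crisis regime.
breach_rate = np.mean(crisis < -var_99_calm)

print("99%% VaR from calm data: %.2f%%" % (100 * var_99_calm))
print("Crisis breach rate: %.1f%% (expected 1%%)" % (100 * breach_rate))

In runs like this, the calm-period threshold is breached in the crisis regime far more often than the advertised 1% of days, which is exactly why regime-aware techniques such as the Markov switching models Danielsson mentions are attractive.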
In the human rather than machine realm, Rick Bookstaber showed that the Chinese education system of rote learning, which is based on the Ming Dynasty's tradition of civil service exams, can produce human automata with limited usefulness [emphasis added]:
What is the end result of this vestige of the Ming approach to education? Well, we can look back to the end result in the Ming itself. Those who passed the examinations and entered into the elite offices had the classics down cold. But they didn't know much else. How could they, given the efforts and focus required of these examinations? And while I don't have much to go on, my guess would be that they were not exactly off the charts in terms of what we now popularly call emotional IQ. But the history of the period suggests that for all the laudable screening, those who succeeded to office often did not succeed in the office.

My experience is that this process as it has been retained in the modern era leads to similar failings. That should not be surprising, because as with the Ming, there is little time for anything beyond the task. There is an incredible uniformity in the approach to problem solving, and the sorts of problems that can be solved. When I was a professor, I had two Korean students who handed in identical exam papers. They went so far as to work out the problems in the same steps, put a box around each problem, put identical work in the same place in the box. They both even underlined each of the answers twice. It was clear to me that one of them must have copied in distinctively uncreative fashion from the other. When I called them into my office and confronted them with their identical work, they really had no idea why I thought there was a problem. They had not cheated, they had been trained with painstaking precision to do things in the same way. Thus the form of their work was identical, the process of their solutions was identical, and their mistakes were as well.
Don't get me wrong: I believe that Watson's triumph represents a leap forward, and Watson's metaphorical children will undoubtedly be highly productive. Nevertheless, we need to be careful about the street smarts of those who program Watson's children.
