Ray Dalio’s $165 billion Bridgewater Associates will start a new, artificial-intelligence unit next month with about half a dozen people, according to a person with knowledge of the matter. The team will report to David Ferrucci, who joined Bridgewater at the end of 2012 after leading the International Business Machines Corp. engineers that developed Watson, the computer that beat human players on the television quiz show “Jeopardy!”

The unit will create trading algorithms that make predictions based on historical data and statistical probabilities, said the person, who asked not to be identified because the information is private. The programs will learn as markets change and adapt to new information, as opposed to those that follow static instructions. A spokeswoman for Westport, Connecticut-based Bridgewater declined to comment on the team.

I have much respect for Bridgewater and AI applications, but this latest initiative is unlikely to unleash an army of Terminator-like robots that relentlessly seek out alpha in the investment business.
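For concreteness, "programs that learn as markets change, as opposed to those that follow static instructions" usually points at something like online learning. Here is a minimal sketch of that distinction in Python, using simulated returns and scikit-learn's SGDRegressor; it illustrates the general idea only and is not a description of Bridgewater's undisclosed system.

```python
# Minimal sketch: a static rule vs. a model that keeps updating as data arrives.
# All numbers are simulated; this is not Bridgewater's (undisclosed) approach.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical daily returns with a regime shift halfway through.
returns = np.concatenate([
    rng.normal(0.0005, 0.01, n // 2),   # calm regime
    rng.normal(-0.0005, 0.02, n // 2),  # volatile regime
])

# Static instruction: always forecast the long-run average return.
static_forecast = returns.mean()

# Adaptive model: refit incrementally on each new observation (online learning),
# using yesterday's return as the lone feature.
model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0)
for t in range(1, n):
    x = returns[t - 1:t].reshape(1, -1)   # yesterday's return
    y = returns[t:t + 1]                  # today's return
    model.partial_fit(x, y)               # parameters drift with the data

print("static forecast  :", round(float(static_forecast), 5))
print("adaptive forecast:", round(float(model.predict(returns[-1:].reshape(1, -1))[0]), 5))
```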
Let me explain why. The classic approach to evaluating an investment manager is through the four (or five) Ps:
- People: Who are they? What are their pedigrees? What is their experience, legal and regulatory history, etc.?
- Performance: What are the returns? What is the risk profile, etc.?
- Philosophy: Why do you think you have alpha? What distinguishes you from other managers?
- Process: How do you implement your stated investment philosophy?
- Portfolio (optional): Does your portfolio reflect what you said about your investment philosophy and process?
The problem with AI systems that learn from past history and their own mistakes is that they are notoriously difficult to debug. When something goes haywire, it becomes virtually impossible to walk back the steps that led to the misstep. Was the bad decision a bug or a feature? It is very difficult to tell.
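To make the debugging point concrete, here is a toy comparison (my own illustration with made-up signals, not anything Bridgewater has described). A linear rule can be audited weight by weight; a fitted neural network is a pile of parameters with no obvious path from a single bad prediction back to its cause.

```python
# Toy contrast: an auditable linear rule vs. an opaque learned model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                          # five hypothetical signals
y = X @ np.array([0.4, -0.2, 0.0, 0.1, 0.0]) + rng.normal(0, 0.1, 1000)

linear = LinearRegression().fit(X, y)
black_box = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, y)

# The linear rule is transparent: one weight per signal, easy to second-guess.
print("linear weights:", np.round(linear.coef_, 2))

# The network fits at least as well, but "why did it sell on Tuesday?" now
# lives somewhere in a few thousand parameters.
n_params = sum(w.size for w in black_box.coefs_) + sum(b.size for b in black_box.intercepts_)
print("network parameters:", n_params)
```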
In the framework of the Ps, it becomes very difficult to have confidence in an investment process when you have no idea what is going on inside it, other than the fact that it is a black box. That is why AI systems are unlikely to see widespread use: few institutional sponsors will trust them. In the absence of widespread adoption, we are unlikely to see a future army of AI-bots rooting out alpha.
Maybe I'm just too much of a fuddy duddy, but I have been around far too long to put much faith in a black box that no one can explain.
Chaos Theory: A cautionary tale
Back in the early 1990s, there was similar excitement in the investment community over Chaos Theory, also known as non-linear dynamics. Wikipedia explains Chaos Theory as follows:
Chaos theory is a field of study in mathematics, with applications in several disciplines including meteorology, sociology, physics, engineering, economics, biology, and philosophy. Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions—a response popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as follows:

Chaos: When the present determines the future, but the approximate present does not approximately determine the future.

The idea is that non-linear systems can be highly interconnected, but initial condition dependent. In one moment, a butterfly flapping its wings could conceivably cause weather havoc on the other side of the world; in another, nothing would happen.
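The sensitivity is easy to see in a worked example. The logistic map below is a standard textbook illustration of deterministic chaos (my addition, not part of the Wikipedia entry): the update rule is fixed and fully known, yet nudging the starting value in the sixth decimal place produces a completely different path within a few dozen steps.

```python
# Logistic map: a fully deterministic rule that is chaotic at r = 4.
def logistic_map(x0, r=4.0, steps=30):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.300000)
b = logistic_map(0.300001)   # same rule, initial condition nudged in the 6th decimal

for t in (0, 10, 20, 30):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}   gap = {abs(a[t] - b[t]):.6f}")
```

By the last few steps the two trajectories bear no resemblance to each other, even though not a single random number was involved.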
One example of an approach that is initial condition dependent is Elliott Wave Theory, a charting technique well known to technical analysts. The interpretation of a chart can be highly dependent on where the analyst begins his wave count, and different starting points can lead to very different conclusions (one reason why I've never understood EW very well).
A number of years ago, I spoke to the head of a large quantitative investment firm in Boston whose firm had done extensive work on non-linear systems. They had some success with it, but abandoned the approach after discovering that the risk-adjusted alpha it generated was roughly in line with that of traditional linear factors such as value, growth, and momentum. These systems were just too difficult to control and diagnose, especially if investment results turned negative.
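For readers who have not seen one, the "traditional linear factor" benchmark usually boils down to a regression like the toy sketch below (simulated numbers only; the Boston firm's actual factors and data are not public): regress the strategy's returns on the factor returns and see how much alpha is left over in the intercept.

```python
# Toy linear factor regression: strategy_t = alpha + b . factors_t + noise
import numpy as np

rng = np.random.default_rng(2)
T = 252
factors = rng.normal(0, 0.01, size=(T, 3))   # stand-ins for value, growth, momentum
betas = np.array([0.5, 0.2, 0.3])
true_alpha = 0.0002                          # 2 bps/day of simulated "skill"
strategy = true_alpha + factors @ betas + rng.normal(0, 0.005, T)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, strategy, rcond=None)
print("estimated daily alpha  :", round(coef[0], 5))
print("estimated factor betas :", np.round(coef[1:], 2))
```

Everything in that little model is there to inspect, which is exactly the transparency the non-linear systems lacked.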
The moral of this story: Stick with something simple. (That's how you can live long and prosper.)