Exo-Brain Power: Humans and Machines Learning Together

By Elliot Noma, Managing Director, Garrett Asset Management

Machine learning is changing the way we trade and the skills needed to be successful, but the objectives are unchanged.

Humans will not be replaced by computers. As more evidence accumulates, we see the limitations of computers solving problems on their own. Self-driving cars and the replacement of radiologists, for instance, have been slower to arrive than advertised. The legal and ethical issues have also become clearer over time as we struggle with questions of who (or what) is responsible for auto accidents and misdiagnoses of cancer.

The implications of technological advances extend far beyond questions about the recall, precision, and F1 statistics of classification.
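
For readers who want the definitions behind those statistics, here is a minimal sketch that computes them from a handful of confusion-matrix counts; the counts are invented purely for illustration.

```python
# Hypothetical confusion-matrix counts for a binary classifier.
true_positives, false_positives, false_negatives = 80, 20, 10

precision = true_positives / (true_positives + false_positives)  # 0.80
recall = true_positives / (true_positives + false_negatives)     # ~0.89
f1 = 2 * precision * recall / (precision + recall)               # harmonic mean of the two

print(f"precision={precision:.2f}, recall={recall:.2f}, F1={f1:.2f}")
```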

This is not a new situation. As early as 1954, Paul Meehl demonstrated that methods such as an equally weighted sum of high school grade points and standardized test scores provided a robust predictor of college performance that was superior to a human-powered selection process for college admissions. Despite these findings, machine-dominated selection procedures have not been adopted. There is clearly a gap between the stated goal for assessing outcomes and the true criteria for determining success or failure.
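
To make the flavour of such a mechanical rule concrete, the sketch below ranks a handful of hypothetical applicants by an equally weighted sum of standardized grades and test scores; the data and weights are illustrative only, not Meehl's original procedure.

```python
import numpy as np

# Hypothetical applicants: high school GPA and standardized test score.
gpa = np.array([3.9, 3.2, 3.6, 2.8])
test = np.array([1250, 1400, 1100, 1500])

# Standardize each measure, then combine them with equal weights,
# in the spirit of a simple mechanical prediction rule.
standardize = lambda x: (x - x.mean()) / x.std()
composite = 0.5 * standardize(gpa) + 0.5 * standardize(test)

# Applicants ordered from strongest to weakest predicted performance.
print(np.argsort(-composite))
```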

If the criteria for success are often fuzzy, then how do we make successful applications?

Our use of machine decision-making only makes sense when we consider artificial intelligence (AI) as the latest generation of tools that, as always, revolve around human users. The idea that tools such as language, pencils, and telephones are merely extensions of human intellect leads one to think of humans acquiring “exo-brain” power to accomplish tasks that would otherwise take an excessively long time to complete or tremendous amounts of organization. These tools don’t make the seemingly impossible possible by replacing humans but make our dreams accessible in a human time scale.

AI offers the power to change the way things are done. Humans still need to be involved in all parts of the process, be it determining whether we want to risk the firm on a large, potentially lucrative trading position or deciding whether the value of driving to the store on a snowy night exceeds the value of obtaining the item tomorrow.

Current uses of machine learning emphasize the application of standard models to new data that can provide alpha. There is less emphasis on applying the latest variant of recurrent neural models to marginally improve classification. More efficient ways to tune model parameters continue to appear, along with hardware improvements that accelerate the learning process.

However, the marginal improvements gained by applying more exotic models in place of standard ones increase the fragility of the models, and their application often decreases the number of useful insights the model and its outputs provide.

BEST AREAS OF APPLICATION
The most promising areas of model development are tools and methods to give humans insights when interpreting model outputs. These methods include data visualization and algorithms to restate model results in terms that provide insights to humans.

Data visualization methods range from scatter plots and bar graphs to heat maps and interactive charts that allow one to drill down into the data. Augmented reality might be the next step in this evolution.
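
As a minimal illustration of the heat-map end of that spectrum, the sketch below plots a grid of hypothetical model sensitivities with matplotlib; the data are randomly generated and the labels are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sensitivity scores: rows are assets, columns are model features.
rng = np.random.default_rng(0)
scores = rng.normal(size=(10, 8))

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="coolwarm", aspect="auto")
ax.set_xlabel("model feature")
ax.set_ylabel("asset")
fig.colorbar(im, label="sensitivity")
plt.show()
```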

The path for improving the interpretability of model outputs is less clear. One approach changes the classification of images by obscuring parts of the image, showing how sensitive the classifier is to a region of the inputs.

In image recognition, the image is modified and then reclassified. Images are modified to obscure parts of the input or to change characteristics such as colours and shapes. By testing these modified images, one can infer which factors the model relies on to make its categorization. This can also be thought of as creating a simpler local model, in the same way that the steepness of a particular climb can characterize the entire route traveled.
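
A minimal sketch of that occlusion idea follows, assuming a placeholder classify function that returns the probability of the original class; in practice it would wrap a trained model, and the patch size is arbitrary.

```python
import numpy as np

def classify(image: np.ndarray) -> float:
    """Placeholder classifier: returns the probability of the original class.
    Substitute a real, trained model here."""
    return float(image.mean())

def occlusion_map(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Slide a grey patch across the image and record how much the
    classifier's score drops when each region is obscured."""
    baseline = classify(image)
    heat = np.zeros((image.shape[0] // patch, image.shape[1] // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0.5
            heat[i, j] = baseline - classify(occluded)  # a large drop marks an important region
    return heat

print(occlusion_map(np.random.rand(32, 32)))
```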

LIMITATIONS OF CURRENT TECHNIQUES
I find these methods useful, but they fall short in two respects. First, these techniques make key assumptions about which features are relevant to the model's prediction. For instance, in an image, these methods assume that spatially close pixels make up a neighbourhood. An alternative, however, is that separate wave forms across the image are neighbours, or that some key regions are neighbours to all points. Only by assuming that we already know which characteristics define the neighbourhood can we explore it using this method.

This assumption prevents us from exploiting the true power of these models to discover alternative features. Yet alternative ways of viewing the world are precisely what models exploit to improve classification.

Second, we know little about how humans actually make decisions, which raises the question of what counts as an acceptable explanation. Brain mapping, for instance, indicates that we make decisions and only later consciously create a script for why we made them. Without further insights, we are in the dark about how to construct an explanation that works for both the computer and humans.

Regardless of the current best uses of machine learning, the application of models starts and ends with their ability to solve a trading problem. That problem can be: how do I develop and execute ideas before my competitors? How do I efficiently learn about events and news affecting my holdings? How do I examine terabytes of data to extract key historical patterns?

Just as the use of charting software changed the nature of futures trading from open outcry to technical analysis, machine learning is also changing the way we trade, and the skills needed to be successful. However, the goals remain the same. Our need to control and understand the limitations of the tools is unchanged.
