TCA in an HFT world
Nick Nielsen, Head of Trading at Marshall Wace, and Stuart Baden Powell, RBC Capital Markets, discuss the effects of changing regulation and speed on performance measurement.
FIXGlobal: The industry has seen a multitude of dislocations in recent times. At the structural level, we have all watched the correlations between liquidity dispersion and net executable liquidity with interest. Indeed, as we speak, the largest MTF venue, Chi-X Europe, is reaching what could be described as “incumbency” in selected names, and has been the subject of a possible takeover. The initial principal positions of the owners have been reinforced by equity ‘jump ball’ schemes, and with many of those principal positions approaching a four- to five-year holding period, it is certainly an interesting time.
Dropping a level, the strategic floor has opened up more debate as liquidity characteristics have altered, with differing execution strategies competing for a shortening alpha capture component throughout an increasingly contested trading horizon. A significant subsection of those strategies is regarded as high frequency trading. Many houses, directly or indirectly, interact with high frequency flow in both the lit and the dark; as some brokers cut upfront fees and begin to offer free end-to-end execution, we may see broker rebates for liquidity in the near future. As the sell-side positioning plays out, the buy-side has incorporated a series of checks and balances into the equation as a form of overlay evaluation.
Stuart Baden Powell: Nick, you run a desk that is widely regarded as at the forefront of the buy-side. How do you think both your team and the buy-side as a whole are adapting to interact with what is an increasingly dominant component of European market liquidity?
Nick Nielsen: Over the past couple of years, we have seen a very different liquidity dynamic, with significant liquidity exiting the OTC marketplace in Europe and moving towards a much more competitive electronic market making system. As a consequence, we have moved our execution from mostly manual to largely automated and electronic (mainly leveraging broker algorithms to engage with this electronic market making liquidity, both lit and dark) in an attempt to cut the implicit and explicit costs of trading. Because the process is now more systematic, our team has spent significant time building quantitative evaluation tools to understand the impact of this change in trading strategy and to give us the ability to make informed, incremental changes to our process (consistent with a trend towards much more systematic decision making and evaluation on buy-side trading desks). These tools attempt to determine the effectiveness of the parameters/aggression that we automatically assign to the brokers (an assessment of our ability to choose the right trading strategies), with separate calculations to measure the brokers’ ability to perform to these parameters/benchmarks (for example, fill quality against an instruction like POV 10%).
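As an illustration of the kind of fill-quality check NN describes, the minimal Python sketch below compares a broker’s realised participation rate against a POV instruction. The Fill record, its field names, and the flat-deviation metric are assumptions for illustration, not Marshall Wace’s actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Fill:
    qty: int            # shares filled in this interval
    market_volume: int  # total market volume over the same interval

def pov_deviation(fills: list[Fill], target_rate: float) -> float:
    """Measure how closely a broker tracked a POV instruction.

    Returns the realised participation rate minus the target rate,
    so 0.0 means the broker hit the instruction exactly.
    """
    traded = sum(f.qty for f in fills)
    volume = sum(f.market_volume for f in fills)
    realised_rate = traded / volume if volume else 0.0
    return realised_rate - target_rate

# Hypothetical example: a broker asked to trade 10% of volume
fills = [Fill(qty=1_000, market_volume=12_000),
         Fill(qty=800, market_volume=7_000)]
print(f"Deviation from POV 10%: {pov_deviation(fills, 0.10):+.2%}")
```

In practice a desk would bucket such deviations by interval and test significance across many orders rather than scoring a single order in isolation.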
In addition, the venue values in FIX tag 30 further aid lit vs. dark performance evaluation and decision making. To capture clean, meaningful data for analysis, we have worked hard to standardize our interactions across counterparties (e.g. using POV algos at broker one and broker two, rather than broker-specific flavours of algos that make it hard to determine whether differences in execution quality are due to broker differences or strategy differences). We feed the results from these evaluations automatically into our execution aggression and broker selection optimizations. Along with increased CSA usage, these real-time feedback loops help us to constantly evaluate and reward work on algorithms offered by our sell-side providers in a meritocratic way.
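FIX tag 30 (LastMkt) carries the executing venue’s market identifier on each fill, which is what makes the lit vs. dark breakdown possible. A hedged sketch of that aggregation might look like the following; the MIC-to-venue-type mapping is purely illustrative and would need to reflect a desk’s own venue classification:

```python
from collections import defaultdict

# Illustrative mapping of FIX tag 30 (LastMkt) MIC codes to venue type.
# This classification is an assumption for the example, not a reference list.
VENUE_TYPE = {"XLON": "lit", "CHIX": "lit", "LIQU": "dark", "XUBS": "dark"}

def venue_breakdown(executions: list[dict]) -> dict:
    """Aggregate filled quantity by lit vs. dark venue, using the
    market identifier reported in FIX tag 30 on each execution report."""
    totals = defaultdict(int)
    for ex in executions:
        venue = VENUE_TYPE.get(ex["last_mkt"], "unknown")
        totals[venue] += ex["qty"]
    return dict(totals)

executions = [{"last_mkt": "XLON", "qty": 500},
              {"last_mkt": "LIQU", "qty": 300}]
print(venue_breakdown(executions))  # {'lit': 500, 'dark': 300}
```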
I believe our experience of, and reaction to, the evolving microstructure has been very similar to that of many other buy-side firms. The trend is towards more automated execution, standardization of broker engagements to aid the collection of execution data for analysis, and better analysis and use of that data in strategy selection. The result has been quite beneficial to our bottom line, significantly reducing both our implicit and explicit costs of trading (in absolute terms and relative to expected cost), all a result of increased electronic market making that we can choose to interact with.
SBP: Several pertinent points there, Nick; for me the TCA component is central to both the sell-side and the buy-side in understanding how orders interact with modern day liquidity. With most explicit costs being relatively deterministic in nature, the market impact component of the total cost (the largest proportion and the most difficult to measure) often acts as a differentiator across most TCA products. Structurally, however, most market impact TCA services are based on similar mathematical frameworks, such as Bertsimas and Lo; namely, a level of basic variables (volatility, spread, etc.), with some over-laying more sophisticated variables, such as aggressiveness, on top.
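For reference, the Bertsimas and Lo (1998) set-up makes the “random walk plus impact” structure explicit. A simplified rendering of the model (our notation, not the article’s):

```latex
% Random walk with linear permanent impact, after Bertsimas and Lo (1998).
% P_t: price, S_t: shares traded in period t, \theta > 0: impact coefficient.
P_t = P_{t-1} + \theta S_t + \varepsilon_t, \qquad \mathbb{E}[\varepsilon_t] = 0

% Expected-cost-minimising purchase of \bar{S} shares over T periods:
\min_{S_1, \dots, S_T} \; \mathbb{E}\!\left[ \sum_{t=1}^{T} P_t S_t \right]
\quad \text{subject to} \quad \sum_{t=1}^{T} S_t = \bar{S}

% Under this model the optimum is simply uniform slicing:
S_t^{*} = \bar{S} / T
```

The optimum being strategy-agnostic uniform slicing is consistent with the point above: richer variables such as aggressiveness have to be over-layered, since they do not emerge from the random-walk core itself.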
If we move into the core of the engines, we find that most of the underlying theories rest on the notion that general stock price movement is modelled as a random walk with market impact added as amplification. This random walk core, whilst well accepted for many years, has begun to look almost obsolete with the advent of new trading strategies. The “excess returns” generated by a variety of trading strategies, including, for example, the momentum/contrarian switch and mean reversion, have shown how prices exhibit periods of both positive (momentum or trending) and negative (oscillatory) serial autocorrelation; if prices were genuinely random, that score would be a flat zero. The decline in explicit transaction costs, for example, has made even smaller correlations economically viable and has played a role in producing situations where correlations between equities and commodities have tripled in the last decade, as pricing inefficiencies are squeezed out of the market.
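The “flat zero” observation is directly testable: under a random walk, the lag-k serial autocorrelation of returns should be statistically indistinguishable from zero at every lag. A small illustrative check on simulated (not market) data:

```python
import numpy as np

def serial_autocorrelation(prices: np.ndarray, lag: int = 1) -> float:
    """Lag-k autocorrelation of simple returns.

    Under a pure random walk this is zero in expectation; persistently
    positive values suggest momentum (trending), negative values suggest
    mean reversion (oscillatory behaviour).
    """
    returns = np.diff(prices) / prices[:-1]
    r0, rk = returns[:-lag], returns[lag:]
    return float(np.corrcoef(r0, rk)[0, 1])

# Simulated random-walk prices: the statistic should land near zero.
rng = np.random.default_rng(0)
prices = 100.0 * np.cumprod(1.0 + rng.normal(0.0, 0.01, 1_000))
print(f"lag-1 autocorrelation: {serial_autocorrelation(prices):+.4f}")
```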
At a more granular level, there are “patterns” hidden beneath the theory of the random walk. If the random walk thesis that underpins the foundations of the majority of TCA market impact models is no longer valid, the questions for end users begin to evolve. Asking “Is my TCA dynamic or static, and able to track intra-day patterns in market conditions?” seems almost trivial if the bedrock itself is not stable. Better questions in today’s world revolve around whether the underlying benchmarks and theories that we as an industry have relied on for so long are still reliable and valid. TCA systems are meant to measure the performance of trading strategies and should be tailored to those strategies; strategies should not be tailored to pre-existing TCA systems.
Nick, it seems your team now has some advanced quantitative evaluation tools at its disposal, and, impressively and rarely, these have had a measurable impact on net figures; do you have any advice on how the buy-side should evaluate their execution partners going forward? Is it an educational or a technological card to be played here?

NN: That’s an interesting question, Stuart. I think you’ve identified the two main issues: education, and applying that knowledge effectively through technology. From speaking with other buy-side head traders, most implementation issues involve confusion over whether the problem lies in the technology or in their own inputs to the technology solution; in my view it tends to be a poor setup of what they are trying to record. Many of the third party TCA suppliers have excellent services, but struggle to deliver real information or value add to trading desks, owing to the lack of precision, scope, and diligence/discipline in the data recording policies that many desks have pursued. The most crucial part of delivering a value-add TCA report (to evaluate your own trading strategies and, by extension, how good your provider is) is to understand what you are trying to get out of it and to derive, and stick to, policies for accurate data collection that are consistent with this goal. I’ve had luck approaching the problem with a few principles:
- Separate portfolio construction benchmarking from trading desk decisions. Any creep of portfolio management (PM) tracking into trading adds volatility (and in some cases drift) to your shortfall calculations. For example, measure trading performance from the point at which the PM submits the order to the trading desk (when the PM has delivered the instruction and left the order with the desk). Staging or sitting on an order for a PM in the trading system without the intention to trade should not happen, so that opportunity costs do not factor into the analysis.
- Benchmark the strategy selection decision separately from the broker’s performance. For example, a trader makes a conscious strategy decision to send an order to broker A as a 10% of volume algorithm. The trading desk should be benchmarked against strike, to judge its effectiveness in choosing the strategy. The broker should be rewarded or penalized on its ability to participate in line with the instruction: broker A in this case should be benchmarked against the VWAP over the interval in which the market trades 10x your order size, to measure its ability to perform against your instruction. The broker shouldn’t be given any reward or penalty for good performance against strike, as that decision was made by the trading desk (a minimal decomposition along these lines is sketched after this list).
- Maintain very clear instructions to brokers by standardizing the types of instructions you give all brokers, watching how they react to those instructions, and recording all of the parameters that were chosen. Limiting the set of options and strategies brokers can be given generally helps with this exercise; adding too many variables that the trading desk can change makes generating statistical significance very difficult.
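Taken together, the first two principles suggest decomposing implementation shortfall into a desk component and a broker component. The Python sketch below is a minimal illustration under assumed inputs (the arrival/strike price at order submission, the interval VWAP implied by the instruction, and the average fill price); it is not a full TCA attribution:

```python
def desk_vs_broker_shortfall(arrival_px: float,
                             interval_vwap: float,
                             avg_fill_px: float,
                             side: int) -> tuple[float, float]:
    """Split implementation shortfall per the principles above.

    side: +1 for a buy, -1 for a sell.
    Desk component: interval VWAP vs. arrival (strike) price --
        the cost of the strategy the desk chose (e.g. POV 10%).
    Broker component: average fill vs. the interval VWAP implied by
        the instruction -- the broker's tracking skill.
    Both figures in basis points; positive numbers are costs.
    """
    desk_bps = side * (interval_vwap - arrival_px) / arrival_px * 1e4
    broker_bps = side * (avg_fill_px - interval_vwap) / interval_vwap * 1e4
    return desk_bps, broker_bps

# Buy order: arrival 100.00, interval VWAP 100.10, average fill 100.08
desk, broker = desk_vs_broker_shortfall(100.00, 100.10, 100.08, side=+1)
print(f"desk: {desk:+.1f} bps, broker: {broker:+.1f} bps")
```

In this hypothetical, the desk wears a 10 bps cost for choosing a slow strategy in a rising market, while the broker is credited roughly 2 bps for beating the instruction’s VWAP, keeping each party rewarded or penalized only for the decision it actually made.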
SBP: Thanks Nick, I think much of the buy-side will find those pointers of considerable use. The education piece is, as you say, critical; the more we as an industry can engage through suitable mediums, the better for the ultimate end user. On the technological and, perhaps, regulatory side, an additional piece to agree on and implement is a suitable consolidated tape that effectively balances the adequacy/affordability equation. Industry engagement with the European Securities and Markets Authority (ESMA) is the preferred process here and can assist in feeding a higher quality of input into TCA products. As HFT type flow continues to account for a large proportion of wider market activity, those that provide it, those that interact with it and those that measure it will all have significant adaptations to make to stay ahead tomorrow.