Capital Group’s Brian Lees is driving efforts to ask more questions of brokers and to obtain more data on where an order is shown before it executes, but can the buy-side handle the resulting deluge?
The current work you are doing on venue reporting analysis
Our first push was simply to collect information about ‘where’ we were executing and a little about ‘how’ we were executing, namely, did we post or did we take liquidity. Having done that, the question was where do we go from there? The topic of requesting more data on where we didn’t execute and what order types were used started to be raised by some representatives on the FPL Americas Buy-Side Working Group. Some participants had already started down this road with brokers, asking for post-trade information about where their orders were sprayed out to by the algorithms, what types of orders were placed on exchanges, and which exchanges they were on. So that’s where the conversation began, and that’s why we reached out to Jeff Alexander and Linda Giordano, because Barclays had already spearheaded this conversation.
What we are looking to achieve, either in real time or post-trade, is a standardised format for brokers to tell us how our order interacted with the market: when the order was placed, what order types were used, where it was placed in the markets and whether or not we got hits. The concern is not so much whether we can get the data, because if we sign enough non-disclosure agreements we can get the information from the brokers. Some brokers have concerns about that information getting out and somebody reverse-engineering their algorithms, but from the buy-side perspective, I think the biggest concern is whether we can manage the volume of data that we would get.
The resources to store and analyse data and make good use of it
With the original data we were getting, on where the execution took place, we talked a lot about this with smaller firms who were using TCA vendors to help them analyse the information. If we went a step further with this type of information, the brokers would not want us sending it out to TCA firms, because it shows the methodology for how their algorithms behave.

I was in New York several weeks ago and took the opportunity to meet up with Jeff and Linda. We invited Jeff to join one of our conference calls for the buy-side committee, which he did, and he talked about what they have been proposing. He showed proposals both for the real-time collection of data via FIX messages, actually proposing a whole new FIX message to be created for this purpose, which could then be sent in real time, and, alternatively, for a standardised format for collecting the information post-trade which, as a spreadsheet, would tell us what we want to see.

We’re trying to standardise how you ask for the data and what format it will be in, by creating best practices for how to get the data from the brokers. That way the brokers don’t have to keep coming up with a different format for every client that asks. The best practices do specify that ISO MIC codes would be the standard for identifying the exchange that you executed on, but we said nothing about what you should do with the data once you get it.
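As a minimal sketch of what such a standardised post-trade routing report could look like, the rows below total fills per venue identified by ISO MIC code. The column names and sample values are illustrative assumptions; only the use of MIC codes for venue identification comes from the best practices described above.

```python
import csv
import io

# Hypothetical columns for a standardised post-trade routing report.
# Only the ISO MIC venue identifier is taken from the article; the
# rest of the layout is an illustrative assumption.
sample = """order_id,venue_mic,order_type,posted_or_took,filled_qty
A123,XNAS,LIMIT,POSTED,500
A123,BATS,LIMIT,TOOK,300
"""

def load_report(text):
    """Parse a routing report and total the filled quantity per venue MIC."""
    totals = {}
    for row in csv.DictReader(io.StringIO(text)):
        mic = row["venue_mic"]
        totals[mic] = totals.get(mic, 0) + int(row["filled_qty"])
    return totals
```

A buy-side desk could aggregate such rows per venue before comparing routing behaviour across brokers, without ever forwarding the raw broker files to a third party.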
Exchange involvement in the conversation
We did talk to some exchanges when we were first trying to standardise how to identify them, because when we first standardised on MIC codes, they did not cover all the exchanges; not all of them had registered with the ISO registration authority, and we wanted them to.
We had a little bit of trouble differentiating the dark order books from the lit order books on some of the exchanges that have both. These exchanges consider themselves a hybrid book, and they didn’t want to be known as two different things. We didn’t have a way to differentiate the dark and the lit flow without introducing yet another FIX tag. That back and forth fed into the registration authority’s decision to come out with the new market segment concept, which says you can have an exchange defined with child MIC codes that differentiate different segments of the market. We’re beginning conversations with exchanges about this topic, but that’s the extent of our discussions with them so far.
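The operating/segment MIC split described above can be sketched as a simple lookup from a segment MIC back to its parent operating MIC. The Nasdaq entries below are real examples of the ISO 10383 concept; the OPEX/LITX/DRKX codes are purely hypothetical, standing in for a hybrid exchange with separate lit and dark segments.

```python
# Sketch of the ISO 10383 "market segment" concept: an operating MIC
# (the exchange) can have child segment MICs that differentiate, for
# example, lit and dark books. Mapping is illustrative, not a registry.
SEGMENT_TO_OPERATING = {
    "XNGS": "XNAS",  # Nasdaq Global Select (segment) -> Nasdaq (operating)
    "XNCM": "XNAS",  # Nasdaq Capital Market (segment)
    "LITX": "OPEX",  # hypothetical lit segment of a hypothetical exchange
    "DRKX": "OPEX",  # hypothetical dark segment of the same exchange
}

def operating_mic(mic):
    """Resolve a MIC to its operating MIC; an operating MIC maps to itself."""
    return SEGMENT_TO_OPERATING.get(mic, mic)
```

With child MICs like these, a fill can be attributed to the dark or lit book of a hybrid exchange without introducing an extra FIX tag.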
Broker willingness to participate in the process
For the first half of this, just getting the information about where you executed, the brokers didn’t have any problem, because it’s public record once it executes. When we started talking about the more detailed reporting, they did raise concerns about the information being sent out, and about NDAs, so that you, as a client, are not going to send the data out to a third party. But because other firms had already started down this road, we talked about the purpose of this, which is just to have someone looking over their shoulder to make sure they are acting in the best interest of the client and not potentially favouring rebates over best execution; they can’t really argue with that logic. Somebody should have some oversight as to whether or not the right decisions are being made.
Neal Goldstein, J.P. Morgan, Timothy Furey, Goldman Sachs and Greg Wood elaborate on the FPL Risk Subcommittee’s forthcoming Risk Management Guidelines, including their extension to cover DMA, symbology and futures.
While margin checks do not fit into the typical pre-trade risk check, how can traders integrate the risk limit functionality of FIX with their margin-level risk monitoring?
Neal Goldstein, J.P. Morgan:
Pre-trade risk checks are a key element of the comprehensive risk management strategy applied to business lines like prime brokerage. For electronic trading relationships where a client is offered leverage against some level of collateral, real-time positions for each client are usually calculated from start-of-day positions and intra-day drop copies of execution reports. A typical risk control is to link the post-trade position checks with the pre-trade checks applied at the gateway. If a client’s intra-day position approaches a level that exceeds the pre-arranged leverage or margin agreements, the post-trade system can send a cut-off signal to the pre-trade gateway. The client would then be allowed to liquidate positions, but not to go any further long or short.
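The cut-off mechanism described above can be sketched as a gateway that, once the post-trade monitor signals a breach, only accepts orders that reduce the client’s position. The class and method names, and the signed-quantity convention, are illustrative assumptions, not any firm’s actual implementation.

```python
# Minimal sketch of linking a post-trade position monitor to a
# pre-trade gateway: after a cut-off signal, only risk-reducing
# (liquidate-only) orders pass. All names here are illustrative.

class PreTradeGateway:
    def __init__(self):
        self.position = 0       # signed position: long > 0, short < 0
        self.cut_off = False    # set by the post-trade system

    def on_post_trade_signal(self, breached):
        """Post-trade monitor flips this when margin limits are approached."""
        self.cut_off = breached

    def accept(self, signed_qty):
        """Return True if the order may pass to the market."""
        if not self.cut_off:
            return True
        # In cut-off mode, only allow orders that move the position
        # toward zero and do not overshoot through it.
        reduces = self.position * signed_qty < 0
        no_overshoot = abs(signed_qty) <= abs(self.position)
        return reduces and no_overshoot
```

For example, a client long 1,000 shares in cut-off mode could sell 500, but could neither buy more nor sell 2,000 (which would flip the position short).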
The basic definition of DMA trading is that brokers provide access to a venue in the most efficient and effective way possible. What can brokers do to ensure they do not overlook client risk limits, internal counterparty checks, Rule 15c3-5 requirements, etc., while maintaining speed of access?
Timothy Furey, Goldman Sachs:
Whether using algorithms, smart order routing and/or DMA to access the market, it is important to make sure that the rules are optimized and that automated testing and checkout processes are in place to verify that they are working. Appropriate risk controls are a key part of execution and are baked into the process. With all the advances in technology, development teams have the ability not only to better optimize the execution path for speed and efficiency, but also to provide benefits like automated testing to check that controls are functioning properly.
How important is symbology validation to equity risk controls? Can better technology remove fat finger errors from trading?
Greg Wood: Symbology validation is very important to any type of electronic order flow, since the broker must clearly identify the instrument being traded by the client. An erroneous validation of a symbol could have serious repercussions for how the order is executed in the market, including inadvertent disruption to the market. One of the key rules of engagement when a broker certifies a FIX connection with a client or vendor is for both parties to agree what symbology is being used on the session and then not to deviate from that without a subsequent recertification.
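One way a gateway might enforce the agreed symbology is to reject any symbol that does not match the format negotiated at certification. The patterns below are deliberately crude illustrations, not real symbology definitions, and the function names are assumptions.

```python
import re

# Simplified sketch: validate incoming symbols against the symbology
# agreed at FIX certification for this session. Patterns are
# illustrative stand-ins, not authoritative symbology rules.
SYMBOLOGY_PATTERNS = {
    "RIC":   re.compile(r"^[A-Z0-9]{1,6}\.[A-Z]{1,3}$"),   # e.g. 7203.T
    "LOCAL": re.compile(r"^[A-Z0-9]{1,12}$"),              # e.g. 7203
}

def validate_symbol(symbol, session_symbology):
    """Reject any symbol that does not match the agreed symbology."""
    pattern = SYMBOLOGY_PATTERNS.get(session_symbology)
    return bool(pattern and pattern.match(symbol))
```

A symbol valid under one convention but sent on a session certified for another would be rejected before it could be mis-mapped to the wrong instrument.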
Risk management technology is definitely evolving alongside trading technology to provide better controls for the way people are trading now. A simple fat finger check can prevent an inadvertently large order being sent direct to the market. However, clients are increasingly using algos to trade large orders over a longer duration, or using different types of interaction with the market. In this situation the fat finger limit is deliberately large so that the order can be submitted to the algo. The algo then needs to assess whether the parameters of the order (instrument, aggression, duration, time of day, etc.) are suitable for its size. If a large order has parameters that are too aggressive relative to the average daily volume of the instrument and the desired timeframe for execution, the algo should either reject or pause the order to avoid market impact. If this happens, the broker and client should discuss how to adjust the parameters of the order.
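The algo-level sanity check described above might, as a rough sketch, compare the parent order size against the volume the algo can realistically access given the instrument’s average daily volume (ADV), the participation rate, and the trading horizon. The function, its parameters, and the accept/pause outcomes are illustrative assumptions, not any broker’s actual logic.

```python
# Sketch of a parent-order suitability check: flag an algo order whose
# size exceeds what the requested participation rate can absorb over
# the trading horizon. Names and thresholds are illustrative.

def check_algo_order(qty, adv, participation_rate, horizon_days=1.0):
    """Return 'accept', or 'pause' for broker/client review."""
    # Volume the algo can realistically access over the horizon.
    accessible = adv * participation_rate * horizon_days
    if accessible <= 0:
        return "pause"  # no liquidity data: hold for manual review
    if qty <= accessible:
        return "accept"
    return "pause"      # too aggressive for the ADV and timeframe
```

A paused order would then trigger the broker/client conversation about stretching the duration or lowering the aggression before release.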
RCM’s Head of Asia Pacific Trading, Kent Rossiter, unmasks the Asian trading scene, sharing insights into how RCM navigates the unlit landscape, identifying the effects of dark liquidity and highlighting ways brokers can facilitate better buy-side decision making.
FIXGlobal: What are the main benefits of dark liquidity in Asia?
Kent Rossiter, RCM: One of the major challenges in Asia has always been accessing liquidity without other parties in the market taking advantage of your position and your need to complete the order. Where liquidity is scarce, knowledge that a relatively large order is being worked can expose investors to various risks. In such situations, it is advantageous to keep knowledge of the deal discreet while it is being worked, until the order is filled. In dark pools run by brokers we can get priority on our orders through queue-jumping.
Dark pools support such an approach as they allow large block orders to be worked without showing size. In this way, trading in dark pools allows a trader to access a broker’s own internal order flow without being gamed by the market, which would otherwise risk non-fulfillment or less efficient pricing. As a result, size trading becomes the norm in dark pools, and a trader gets to see blocks that may never have been available otherwise. With no information leakage, we are not disadvantaged by the fading you see on lit venue quotes. From a personal perspective, the challenges that arise from dealing across a number of venues, and the resulting increased use of technology, make the role more exciting and satisfying.
FG: How do you limit information leakage in dark pools?
KR: With the exception of broker internalization engines, the trade sizes found in dark pools are often a multiple of what they are on the exchange. Having fewer, but larger, prints reduces information leakage, and in many cases we can get done in our size right away. Minimizing the number of times a print hits the tape reduces the chance of this footprint being picked up and working against the balance of your order. That said, broker internalization engines do their part well, keeping any spread savings between the broker’s two clients instead of giving it up to the general market.
FG: If you decide to seek dark liquidity, how do you decide between broker internalizers and block crossing networks?
KR: The types of dark venues used for various trades (i.e. block crossing networks versus brokers) are different. As I mentioned, brokers for the most part are matching up little prints that otherwise would have been time-sliced into the general market, and when using these venues the goal is often to save a few basis points along the way while you work an order. You are not often micro-managing each fill, but through the process we are getting spread capture and price improvement. The stocks traded in these internalization engines tend to be larger, more liquid names; the type of orders often worked by algos.
Block crossing networks, on the other hand, while still matching up electronically, are probably more confidential, and take up the function of what brokers still do upstairs, putting blocks together, so size is the real focus here. Both types of dark pools use the primary market for price sourcing, since the vast majority of trades get printed at or within the best bid and offer. If the primary markets become too thin, this can cause price formation problems.
While it is not specific to the consideration of dark pools as an extra execution venue, we have to consider potentially increased book-out costs if we use dark pools (except via aggregators, since we would only be using one counterparty), just as we have had to for years when deciding whether to execute a block with a single broker versus multiple counterparties. As dark pools proliferate, there is an increased chance that we may not have part of our order in a given pool at just the right time to take advantage of flow that may be parked there. Dark pool aggregators aim to provide the buy-side with solutions to this.
In January 2010, the Tokyo Stock Exchange (TSE) will launch its new, high-profile Arrowhead trading system for cash equities. TSE describes Arrowhead as its next-generation trading system, combining low latency with high reliability and scalability. However, to the surprise of many in the global trading community, Arrowhead will not include a FIX gateway. MetaBit Systems’ David Chapel examines the decision and argues that the Japanese exchange should reconsider.
In September 2008, the Tokyo Stock Exchange (TSE) announced its plans for ‘Remote Trading Participant Services’ that would allow offshore firms, with no branch in Japan, direct market participation – a novel proposition in Japan. Initially, during the tender phase, TSE considered offering a FIX gateway to the new Arrowhead system; however, a survey of exchange participants indicated that broker members had minimal interest in the global communication protocol.
The survey results came as a surprise to many, but a closer look at the TSE broker membership shows that it leans heavily towards smaller domestic players. Currently, TSE has 106 broker members, including foreign securities firms that have registered their local entities as ‘General Domestic Trading Participants’, and 11 foreign members. Of these, only around 40 members (split almost evenly between foreign securities firms, including those registered via their Japanese legal entity, and domestic securities firms offering international trades) would be likely to see the need for a FIX gateway.
Given the advantages that a FIX gateway would offer Arrowhead’s future Remote Trading Participants, who would value the protocol’s ability to interact with a local exchange through a standardized API, we would urge the TSE to reconsider its decision, despite the current low demand among its domestic members.
Why not FIX it?
FIX is not a new concept in Japan. At MetaBit, we have offered a FIX-to-native exchange gateway to all major securities exchanges in Japan since 2004. The development of Arrowhead provided our company with the catalyst to re-architect the existing FIX gateway with a focus on low latency and scalability. This need for speed was particularly important given the belief among exchanges that FIX is slow. Our aim was to bring FIX-to-native exchange connectivity below 500 microseconds of additional latency under sustained load, a goal which we have more than achieved. In addition, we have made extensive use of the Orc CameronFIX Universal Server to provide the core FIX connectivity.
Technical challenges and solutions for FIX
During the development of our upgraded FIX gateway, we had to consider the following technical challenges of providing a FIX implementation for direct exchange connectivity:
Performance is paramount; request latency (time to exchange) and request throughput (requests per second) are the key metrics. Larger global members require sub-millisecond latency with throughput exceeding 1,000 orders per second.
For scalability, most Japanese exchange architectures require multiple physical connections (so-called Virtual Servers, or VSs). Each VS is limited by the exchange to a certain throughput; higher throughput can only be achieved by a broker member subscribing to more VSs. Due to cost, smaller and mid-tier brokers typically subscribe to 20 to 40 VSs, whilst it is common for large members to run more than 100 VSs per exchange. Efficiently managing and load balancing across such a large number of connections adds significant complexity.
Each exchange API has a unique message protocol and message structure. Creating a standardized multi-exchange product requires custom FIX mapping for each implementation. The aim is to keep custom FIX tags to the minimum possible whilst adhering to the global FIX Protocol.
The FIX API is a simple asynchronous model, well suited to high-throughput bidirectional messaging. However, Japan’s older exchange APIs use a synchronous delivery model, providing batched order requests (20 orders per batch) to increase throughput. This complicates mapping between the FIX and native exchange APIs and increases implementation complexity.
Members can run many different types of hardware and operating systems; hence a vendor needs to support as many systems as possible.
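The scalability challenge above can be sketched as a pool that spreads orders across VS connections, each capped by the exchange at a fixed per-second throughput. The class, the per-second bookkeeping, and the least-loaded policy are simplified assumptions for illustration.

```python
# Sketch of load balancing across exchange virtual server (VS)
# connections, each capped by the exchange at a fixed throughput.
# The cap handling and bookkeeping are simplified assumptions.

class VsPool:
    def __init__(self, n_vs, cap_per_second):
        self.cap = cap_per_second
        self.inflight = [0] * n_vs  # orders sent this second, per VS

    def route(self):
        """Pick the least-loaded VS with spare capacity, else None."""
        idx = min(range(len(self.inflight)), key=self.inflight.__getitem__)
        if self.inflight[idx] >= self.cap:
            return None  # every VS is at its throughput limit
        self.inflight[idx] += 1
        return idx

    def tick(self):
        """Reset per-second counters at each second boundary."""
        self.inflight = [0] * len(self.inflight)
```

This also shows why aggregate throughput scales with the number of subscribed VSs: once every connection hits its cap, further orders must queue until the next second.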
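The asynchronous-to-synchronous mapping challenge can likewise be sketched as grouping a stream of FIX orders into the fixed-size batches the older native APIs expect. The 20-order batch size is taken from the description above; the function itself is an illustrative assumption.

```python
# Sketch of the FIX-to-native mapping issue: FIX delivers orders one
# by one, while the older native API accepts synchronous batches of
# up to 20 orders. The helper below is illustrative only.

BATCH_SIZE = 20  # per-batch limit cited for the older exchange APIs

def batch_orders(orders, batch_size=BATCH_SIZE):
    """Group a stream of orders into batches for a synchronous API."""
    return [orders[i:i + batch_size] for i in range(0, len(orders), batch_size)]
```

A gateway built this way must also map each synchronous batch response back to the individual asynchronous FIX execution reports, which is where much of the implementation complexity lies.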