At the FIXGlobal Face2Face Forum in Seoul, Korean firms announced the formation of a FIX working group and the Korea Exchange’s intention to build an ultra-low latency trading platform.
The opening speaker at the FIXGlobal Face2Face Forum Korea was keenly anticipated by the 200+ delegates (a quarter of whom came from the buy-side and a third from the sell-side), as he raised many of the issues that surround the HFT arena but are rarely touched on at industry events in Korea. By placing HFT in context, Edgar Perez, author of the recently published “The Speed Traders”, highlighted many of the opportunities and challenges that markets around the world face in the low latency trading environment. Not least, he pointed out the colossal task facing regulators, and the associated technology costs, just to monitor high-frequency trading post trade, let alone in real time.
A recurring theme throughout the day, latency was covered by most of the presentations, especially in the context of FIX. Deutsche Börse’s Hanno Klein and NYSE Technologies Asia Pacific CEO, Daniel Burgin, stressed that FIX standards are quite at home in the low latency environment, with exchanges around the world already using FIX for their low latency systems. As Mr. Burgin pointed out, “FIX is not slow, but through poor implementation, it can be made slow – and this has happened in various markets”. These comments rang true with the attendees, especially as Mr. Kyung Yoon, Division Head of the Financial Investment IT Division of KOSCOM, outlined plans not only to implement the latest version of FIX at the Korea Exchange, but also to make speeds as low as 70 microseconds the benchmark when the new exchange system is rolled out in 2013. As the icing on the cake, Mr. Yoon then expressed KOSCOM’s commitment to helping establish a FIX liaison group in Korea that will ensure a highly ‘standard’ implementation of the FIX Protocol.
MC for the day, FIXGlobal’s Edward Mangles (also FPL Asia Pacific Regional Director), welcomed the announcement, stating that he and the FPL Asia Pacific group looked forward to working more closely with KOSCOM, KRX and the Korean trading community as a whole. With delegates staying put to hear the bilingual presentations and discussions throughout the day (a few afternoon speakers commented that the crowd in the room was unusually large for the final sessions), the updates on algorithmic trading (Josephine Kim, BAML) and TCA (Ofir Geffin, ITG) provoked a number of follow-up questions and discussions, indicating the delegates’ appetite for these issues.
Raymond Russell, of the FIX Inter-Party Latency (FIXIPL) Working Group and Corvil, lays out the use cases for the FIX Inter-Party Latency standard and the functionality of Version 1.0.
Goals for FIXIPL
The principal goal of the Inter-Party Latency Working Group is to ensure interoperability between different latency monitoring vendors. Interoperability is essential because latency monitoring is vital to running a low-latency service, so the people building systems need confidence that they can start with one vendor and still migrate to another. What we have seen through the proliferation of latency monitoring systems across the trading world, whether at DMA providers, market data providers or trading desks, is that the problems in managing latency often fall between the cracks. Most firms have a good handle on latency in their own environment because they have engineered it well, but when they connect to a counterparty, it gets tricky.
A trader who sees a slowdown in response time will want to understand why they have missed trades or why their fill rates are low, but there are multiple places where that latency could have occurred. One place is in the exchange matching engine, which in some respects is unavoidable. If there is considerable interest and activity in a symbol at the same time, those orders will have to queue in the matching engine, purely as a result of market activity. The latency might also have occurred in the exchange gateway. It is common practice for exchanges to load balance across multiple gateways to accommodate high volumes, and you might have hit a slow gateway. The service provider you connect through may have oversubscribed their network, and you could be caught in cross traffic unrelated to trading. We have seen all these things happen, so the ability to see where the latency is occurring requires a consistent set of time stamps across the architecture.
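The diagnosis described above depends on lining up time stamps captured at each hop. As a minimal sketch (the measurement points and values below are hypothetical, not those of any real venue), the per-hop latency is simply the delta between consecutive stamps:

```python
from datetime import datetime

# Hypothetical UTC time stamps captured at each hop for a single order;
# the point names and values are illustrative only.
timestamps = {
    "client_out":       "2011-11-15T09:30:00.000125",
    "provider_ingress": "2011-11-15T09:30:00.000410",
    "exchange_gateway": "2011-11-15T09:30:00.001190",
    "matching_engine":  "2011-11-15T09:30:00.003800",
}

def hop_latencies(stamps):
    """Return microsecond deltas between consecutive measurement points."""
    points = list(stamps.items())
    deltas = {}
    for (a, ta), (b, tb) in zip(points, points[1:]):
        dt = datetime.fromisoformat(tb) - datetime.fromisoformat(ta)
        deltas[f"{a} -> {b}"] = dt.total_seconds() * 1e6
    return deltas

for hop, us in hop_latencies(timestamps).items():
    print(f"{hop}: {us:.0f} us")
```

In practice this only works if all parties stamp against synchronized clocks and agree on what each measurement point means, which is part of what a shared standard has to address.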
Most exchanges already employ latency monitoring in their own environment, and inter-party latency and the sharing of time stamps, while less important within the exchange, enables them to work with their members to identify areas of latency. The benefits unlocked through inter-party latency are somewhat biased towards the end traders, but they also extend to brokers and market data providers, who receive better quality execution feeds and market data speeds, respectively.
For exchanges, the need for latency transparency is becoming a standard requirement as latency has become a competitive differentiator. To the extent that exchanges are comfortable with their own infrastructure and are ready to compete on their latency, they will want to share their latency measurements with members. In my experience, venues and brokers are no longer as reticent to share their latency figures as they were before.
Version 1.0 Rollout
Much of the work we have done on Version 1.0 involved deciding how to produce a standard that, on one hand, is simple enough to be easily implemented, while still covering all the basic use cases. Version 1.0, due out in December 2011, is clean and simple and emphasizes the core capability of publishing time stamps. We have agreed on the technical scope, and it is now going through the formal review procedures required for standardization by FPL, including a public review. The other important step before it becomes real is to produce two independent implementations. A number of features will be ready in a few months’ time, such as distribution through multicast and the ability to automatically group several measurements together across the trade, which we will include in the next version later next year.
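To illustrate the kind of correlation the next version is aiming at, here is a toy sketch of grouping measurements published by different parties against a common order identifier. The record layout is invented for illustration and is not the actual FIXIPL schema:

```python
from collections import defaultdict

# Hypothetical measurements published by two parties for the same orders;
# field names and values are illustrative only, not the FIXIPL format.
published = [
    {"order_id": "A1", "party": "broker",   "point": "gateway_in", "ts_us": 120},
    {"order_id": "A1", "party": "exchange", "point": "match_in",   "ts_us": 910},
    {"order_id": "A1", "party": "exchange", "point": "match_out",  "ts_us": 1450},
    {"order_id": "B7", "party": "broker",   "point": "gateway_in", "ts_us": 300},
]

def group_by_order(records):
    """Correlate measurements from different parties on a shared order ID."""
    grouped = defaultdict(list)
    for rec in records:
        grouped[rec["order_id"]].append((rec["party"], rec["point"], rec["ts_us"]))
    return dict(grouped)
```

Once measurements from both sides of a connection land in the same group, the per-hop deltas across party boundaries fall out the same way they do within a single environment.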
Simo Puhakka, Head of Trading for Pohjola Asset Management, shares his experience trading in the Nordic markets, giving his opinions on interacting with HFT, using TCA and knowing whether you can trust your broker.
The prospects for High Frequency Trading (HFT) are really up to regulators. It will be a free market, but as we all know, regulatory changes affect the whole trading landscape. For example, we can see what is happening in France and the debate that is going on in Sweden; both countries are quite hostile towards HFT.
Personally, I think that HFT is a good thing for the market, as long as you have the proper tools to deal with it. A number of small firms have been suffering from HFT since MiFID I because they lack the proper technology and tools to measure and deal with it. We have not suffered in our dealings with HFT, and I would actually say in many cases it is the opposite. HFT firms seem to add liquidity, and when you have the proper tools to deal with it, you can take advantage of it.
Speaking of tools, we started building our own Smart Order Router (SOR) a year and a half ago. The goal was to create an unconflicted way to interact with the aggregated liquidity. In this process we went quite deep into the data and turned our processes upside-down, with the result that we have full control of how we interact with the market.
On the other hand, I welcome technological innovation from the sell-side; for example, brokers now disclose the venues where they execute trades on an annual basis. The surveillance responsibilities that brokers have are beneficial. Many of the small, local brokers and buy-sides, however, are now finding it challenging to upgrade their technology.
Trusting your Broker
Our approach was to take control of our order flow and only use our brokers for sponsored access. We chose full control because, in some cases, we could not be sure a broker would deliver what we were asking for. These questions first arose a few years ago, and we realized we needed to create a transparent, fully-controlled, non-conflicted path to the market. How you interact with different venues – even lit venues, where you have more transparency – will affect your choice of strategy. In most cases, you are better off without brokers making decisions for you. The root of the problem is: when you send an order to a broker, what happens before it goes to the venue? What control do we have over the broker’s infrastructure, including their proprietary flow, internalization, market making and crossing, not to mention the routing logic?
When we dug into the data, we were quite surprised to see that, although a broker was connected to all the dark liquidity, many of the fills were coming from that particular broker’s dark pool, suggesting there are preferences in the routing logic. Brokers want to internalize flow, which is not a problem, if you are aware of potentially higher opportunity costs. When it comes to dark liquidity, that is an even bigger problem, since our trades were often routed to the broker’s own dark pool or those it has arrangements with.
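Digging into the data in this way can start as simply as tallying executed quantity by reported venue. A minimal sketch, with invented venue names and fill records:

```python
from collections import Counter

# Hypothetical execution records; venue names are invented for illustration.
fills = [
    {"venue": "BROKER_DARK", "qty": 500},
    {"venue": "BROKER_DARK", "qty": 700},
    {"venue": "MTF_DARK_1",  "qty": 200},
    {"venue": "LIT_PRIMARY", "qty": 300},
]

def venue_share(fills):
    """Share of executed quantity per venue, to spot routing preferences."""
    totals = Counter()
    for f in fills:
        totals[f["venue"]] += f["qty"]
    grand = sum(totals.values())
    return {v: q / grand for v, q in totals.items()}
```

A disproportionate share at the broker’s own pool, relative to the liquidity available elsewhere, is exactly the kind of signal described above.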