By Raymond Russell, Julian Ragless, Richard Leung
Raymond Russell, of the FIX Inter-Party Latency (FIXIPL) Working Group and Corvil, lays out the use cases for the FIX Inter-Party Latency standard and the functionality of Version 1.0.
Goals for FIXIPL
The principal goal of the Inter-Party Latency Working Group is to ensure interoperability between different latency monitoring vendors. Interoperability is essential because latency monitoring is vital to running a low-latency service; the people building these systems need confidence that they can start with one vendor and still migrate to another. What we have seen through the proliferation of latency monitoring systems across the trading world, whether DMA providers, market data providers or trading desks, is that the problems in managing latency within an environment often fall between the cracks. Most firms have a good handle on latency in their own environment because they have engineered it well, but when they connect into a counterparty, it gets tricky.
A trader who sees a slowdown in response time will want to understand why they have missed trades or why their fill rates are low, but there are multiple places where that latency could have occurred. One place is in the exchange matching engine, which in some respects is unavoidable. If there is considerable interest and activity in a symbol at the same time, those orders will have to queue in the matching engine, purely as a result of market activity. The latency might also have occurred in the exchange gateway. It is common practice for exchanges to load balance across multiple gateways to accommodate high volumes, and you might have hit a slow gateway. Or the service provider you connect through may have oversubscribed their network, leaving you caught in cross traffic unrelated to trading. We have seen all these things happen, so the ability to see where the latency is occurring requires a consistent set of time stamps across the architecture.
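To illustrate the idea, here is a minimal sketch of how consistent time stamps captured at each hop let you decompose end-to-end latency and locate a slow segment. The measurement-point names and values are purely illustrative assumptions, not part of the FIXIPL specification:

```python
# Hypothetical example: each tuple is (measurement_point, epoch_nanoseconds)
# captured at a hop along an order's path. With clocks synchronised across
# parties, adjacent differences give per-segment latency.

def segment_latencies(timestamps):
    """Return (segment_name, nanoseconds) for each adjacent pair of hops."""
    return [
        (f"{a} -> {b}", t2 - t1)
        for (a, t1), (b, t2) in zip(timestamps, timestamps[1:])
    ]

path = [
    ("trader_gateway_out", 1_000_000_000),
    ("provider_network_in", 1_000_150_000),   # 150 us across provider network
    ("exchange_gateway_in", 1_000_230_000),   # 80 us at the exchange edge
    ("matching_engine_ack", 1_001_030_000),   # 800 us queueing in the engine
]

for segment, ns in segment_latencies(path):
    print(f"{segment}: {ns / 1000:.0f} us")
```

In this made-up trace, the decomposition immediately shows the matching engine, not the network, as the dominant contributor, which is exactly the diagnosis a shared set of time stamps makes possible.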
Most exchanges already employ latency monitoring in their own environment, and inter-party latency and the sharing of time stamps, while less important within the exchange, enables them to work with their members to identify areas of latency. The benefits unlocked through inter-party latency are somewhat biased towards the end traders, but they also extend to brokers and market data providers, who can offer better-quality execution and faster market data feeds, respectively.
For exchanges, the need for latency transparency is becoming a standard requirement as latency has become a competitive differentiator. To the extent that exchanges are comfortable with their own infrastructure and are ready to compete on their latency, they will want to share their latency measurements with members. In my experience, venues and brokers are no longer as reluctant to share their latency figures as they once were.
Version 1.0 Rollout
Much of the work on Version 1.0 involved deciding how to produce a standard that is, on the one hand, simple enough to be easily implemented while still covering all the basic use cases. Version 1.0, due out in December 2011, is clean and simple and emphasizes the core capability to publish time stamps. We have agreed on the technical scope and it is now going through the formal review procedures required to be standardized by FPL, including a public review. The other important step before the standard becomes a reality is to produce two independent implementations. A number of features will be ready in a few months' time, such as distribution through multicast and the ability to automatically group several measurements together across the trade, which we will include in the next version later next year.
Julian Ragless and Richard Leung, Hong Kong Exchanges and Clearing Ltd, discuss the value of latency measurement to an exchange and the ways they are reaching out to latency sensitive traders.
Given the multifaceted nature of the trade lifecycle – some latency sensitive, others not – how important is latency to delivering value as a trading platform as well as across the exchange and clearing business lines?
As a general principle, low latency is an important objective for any exchange. Our primary role as a secondary market operator is to provide an efficient market and the lower the latency, the sooner trading information can be priced in and disseminated to the market. The key term is ‘delivering value.’ To some people low latency is the primary value driver, while for other firms it is not. In recent years, many exchanges, particularly those in competitive, fragmented markets, have deliberately and aggressively pursued lower latency to attract High Frequency Trading (HFT). To some this is a good thing, in particular those providing HFT services, yet others have reservations, believing HFT does not improve their trading and may add social cost to the wider market through increased market data rates and bandwidth costs.
Lower latency is not always better. The peak message rate in the US markets is close to 7 million messages per second, which can be difficult for smaller players to absorb and process. Already, some in the US and Europe are making noises about whether this is improving the efficiency of the market and whether smaller players are being driven out of the market by escalating IT expenditures. Recent proposals for a transaction tax and a minimum time for orders to rest on the order book show that momentum is shifting away from low latency. Furthermore, there is no direct correlation between the number of trades and overall market turnover. Low latency trading increases the number of trades, but they may be of a smaller average order size.
What value does latency measurement give to an exchange and its members?
Latency measurement systems are extremely important tools for an exchange because we must accurately measure the latency in our systems and be able to disseminate it to the market on a real-time basis. Previously, time synchronization protocols like NTP (Network Time Protocol) only gave us resolution of a millisecond, which was of limited use, whereas now we can measure down to nanosecond intervals using newer protocols such as PTP (Precision Time Protocol). For an exchange, an objective measurement of latency lets us pinpoint congestion in the system, take remedial action, and make better-informed system upgrade decisions. As an exchange, there is also a need to measure latency deviation to ensure that we provide a consistent level of latency. It is one thing to be fast, but what the market wants is a consistent, as well as low, level of latency.
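The point about deviation can be made concrete with a small sketch. This is not any exchange's actual tooling, just an illustration (with made-up sample values) of why a latency distribution should be summarised with percentiles and spread rather than the mean alone:

```python
# Illustrative example: two latency sample sets (in microseconds) with the
# same mean but very different consistency. Percentiles and standard
# deviation expose the difference that the mean hides.
import statistics

def latency_profile(samples_us):
    """Summarise latency samples with mean, median, tail, and spread."""
    s = sorted(samples_us)
    pct = lambda p: s[min(len(s) - 1, int(p / 100 * len(s)))]
    return {
        "mean": statistics.mean(s),
        "p50": pct(50),
        "p99": pct(99),
        "stdev": statistics.pstdev(s),
    }

steady = [100, 101, 99, 100, 100, 101, 99, 100]   # consistent venue
bursty = [50, 50, 50, 50, 50, 50, 50, 450]        # fast but erratic venue

print(latency_profile(steady))
print(latency_profile(bursty))
```

Both sample sets average 100 microseconds, but the second has an occasional large outlier, which is exactly the kind of deviation a trader cares about and a mean-only report would conceal.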
Exchange participants benefit from real-time latency measurement systems because an exchange trading platform is a dynamic system. More and more, end users utilise the real-time latency data as an extra piece of information to determine their trading strategy. In markets with multiple venues, the latency may determine where orders are sent. In markets with a single exchange, real-time latency data still informs traders about when to trade or which model to use. Both for exchanges and participants, real-time latency measurement is as important as a low-latency platform.
What is HKEx’s plan to engage with latency sensitive traders?
We operate a central market and have many constituents, some of whom are latency sensitive and some who are not. We try to balance these needs throughout our platform evolution. The most immediate example is AMS 3.8, which will bring our average latency down to 2 milliseconds, excluding the broker network and gateways. If you look at those 2 milliseconds, however, more than half of the latency is taken up synchronizing our primary site with our Disaster Recovery (DR) site, so that every order is stored on the DR site before an acknowledgement is sent back. We have historically done this because if there is ever a disaster that requires us to fail over to our DR site, we guarantee that all the orders will be there. You cannot do that if you are offering ultra-low latency because of the speed of light. Our concern for market integrity keeps us from reducing our latency further.
Another example is our data centre, which is under construction and due for launch in 2012. Our bigger goal is to build an ecosystem of financial market participants, including connectivity and other service providers that can add value to participants that choose to host within that facility. A final example is our market data system, which we plan to roll out in 2013. Here again, we provide a range of market data products, some of which will be tick-by-tick feeds aimed at algo traders and some of which will be conflated feeds for screen-based traders. Our strategy is to address the needs of latency sensitive traders while balancing the needs of all participants, rather than focussing on one segment of the market.