Bob Caisley, CIO of the Singapore Stock Exchange (SGX), recounts the reasons why SGX is adopting FIX 5.0
In June 2010, SGX unveiled a plan to launch the next generation of exchange trading in Asia through its SGX REACH initiative. Positioned as providing the fastest access to Asia, the SGX REACH initiative begins in the first quarter of 2011 with the roll-out of co-location services at a Singapore Tier-4 compliant data centre. By the third quarter of 2011, SGX will implement a new trading engine (REACH ST) for the securities market. This will be delivered through NASDAQ OMX’s Genium INET platform, providing an average order response time of 90 microseconds door-to-door using the native trading API. The last part of the REACH initiative is to establish points of presence in major liquidity venues around the world, including New York, Chicago, London and Tokyo, thus enabling local connectivity for global customers and radically lowering cross-border connectivity costs.
Connectivity over FIX 5.0 is an integral part of the REACH initiative. Since March 2001, SGX has successfully offered FIX 4.2 connectivity to our securities market. However, like many older FIX gateways, SGX gateways are layered over an external trading engine API gateway. Such an architecture is not ideal: it introduces translation delays that can add more than a millisecond on top of the door-to-door latency of the trading engine. Unfortunately, in today’s context, an additional millisecond is one thousand microseconds too many.
With the new trading engine, SGX will deliver faster FIX connectivity through NASDAQ OMX’s Genium INET FIX, which is integrated directly into the internal message stream of the trading engine, removing the need for additional translation. This new architecture enables FIX connectivity to perform very close to the native API. We chose FIX 5.0, rather than continuing with FIX 4.2 or adopting an intermediate version, because we wish to provide a rich FIX interface delivering both order entry and market data feeds. Analysis of the differences between FIX 4.2 and FIX 5.0 for order entry has shown them to be marginal, and we believe the move to FIX 5.0 will be a simple step for our current and new FIX customers.
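One way to see why the order-entry differences are marginal is to compare the headers. FIX 5.0 separates the session layer from the application layer: BeginString(8) names the FIXT.1.1 transport and the application version moves to ApplVerID(1128), while the body of a NewOrderSingle is essentially unchanged. The sketch below is illustrative only; the symbol and order fields are hypothetical values, not SGX messages, and '|' stands in for the SOH delimiter.

```python
# Illustrative comparison of a FIX 4.2 vs FIX 5.0 NewOrderSingle (35=D).
# Only the session-level header differs; the order body is the same.

def make_msg(fields):
    """Join (tag, value) pairs into a pipe-delimited FIX string."""
    return "|".join(f"{tag}={val}" for tag, val in fields)

# Hypothetical order body shared by both versions.
body = [(35, "D"), (55, "C6L"), (54, "1"), (38, "1000"), (40, "2"), (44, "11.50")]

# FIX 4.2: BeginString(8) carries the application version directly.
fix42 = make_msg([(8, "FIX.4.2")] + body)

# FIX 5.0: BeginString names the FIXT.1.1 transport; the application
# version is signalled via ApplVerID(1128), where 7 = FIX 5.0
# (or via DefaultApplVerID(1137) agreed at logon).
fix50 = make_msg([(8, "FIXT.1.1"), (1128, "7")] + body)

print(fix42)
print(fix50)
```

In practice the real messages also carry BodyLength(9), MsgSeqNum(34), timestamps and CheckSum(10), which are omitted here for clarity; the point is that the application-level order fields survive the 4.2-to-5.0 move intact.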
Quod Financial CEO Ali Pichvai advocates a re-examination of speed relative to risk.
The oversimplified debate on latency, which holds that ‘trading is all about speed’, does not represent the true situation. Latency is primarily a consequence of the market participant’s business model and goals. A liquidity provider sees latency competitiveness as vital, whilst a price taker considers it of less importance in the overall list of success factors. Since distance and co-location have long been debated, this article focuses on processing efficiency.
The processing efficiency is determined by:
*Number of processes:
The number of processes, and the time an instruction spends in each, gives a good measure of latency. As a general rule of thumb, the fewer the processes, the lower the latency. An arbitrage system will most likely consist of as few processes as possible, with a limited objective. For instance, a single-instrument arbitrage between two exchanges can be built around three processes: two market gateways and one arbitrage calculator/order generator. An agency broker system will host more processes, with pre-trade risk management, order management and routing, multi-listing intelligence and the gateway as the minimum set. Latency reduction has sometimes come at the expense of critical processing; for instance, in the pursuit of attracting HFT houses, some brokerage houses provide naked direct market access, which removes pre-trade risk management from the processing chain. An initial conclusion is that it is very hard to reconcile a simple, limited-in-scope liquidity taker system with the more onerous requirements of a price taker system.
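The three-process arbitrage layout described above can be sketched as follows. This is a toy model, not a production design: the venue names, prices and spread threshold are hypothetical, and threads stand in for what would be separate processes connected by shared memory or a network.

```python
# Minimal sketch: two market gateways feeding quotes into one
# arbitrage calculator/order generator via a shared queue.
from queue import Queue
from threading import Thread

quotes = Queue()  # gateways -> calculator

def gateway(venue, prices):
    """Market gateway: publish (venue, price) ticks, then an end marker."""
    for p in prices:
        quotes.put((venue, p))
    quotes.put((venue, None))

def arbitrage_calculator(threshold=0.05):
    """Order generator: emit a buy/sell pair when venue prices diverge."""
    last = {}      # latest price seen per venue
    orders = []
    feeds_done = 0
    while feeds_done < 2:
        venue, price = quotes.get()
        if price is None:
            feeds_done += 1
            continue
        last[venue] = price
        if len(last) == 2:
            (v1, p1), (v2, p2) = last.items()
            if abs(p1 - p2) > threshold:
                cheap, rich = (v1, v2) if p1 < p2 else (v2, v1)
                orders.append(("BUY", cheap, "SELL", rich))
    return orders

# Hypothetical feeds: VENUE_A drifts up while VENUE_B stays flat.
g1 = Thread(target=gateway, args=("VENUE_A", [10.00, 10.02, 10.10]))
g2 = Thread(target=gateway, args=("VENUE_B", [10.01, 10.01, 10.01]))
g1.start(); g2.start()
orders = arbitrage_calculator()
g1.join(); g2.join()
print(orders)
```

The broker system described next would wrap this core with further processes (risk checks, order management, routing), which is exactly where the latency/processing trade-off appears.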
*Process flow efficiency:
This is where the process flow between different processing points is made as efficient as possible, with minimal loops between processes, minimal waiting time and no bottlenecks. It also takes a comprehensive view of the architecture, from the network up to the application.
*Single process efficiency:
Two important areas must be reviewed: the software implementation and hardware acceleration.
There is an on-going debate on what the best language for trading applications is. On one side are the Java/.NET proponents, who cite the ease of coding and maintaining a high-level language (at the expense of having to re-engineer large parts of the Java JVMs). On the other side are the C++ evangelists, who point to finer control of resources, such as persistence, I/O and physical access to the different hardware devices, as the route to better performance. The migration of major exchanges and trading applications away from Java to C++ seems to indicate that the second church is in the ascendancy. Beyond the coding language, building good parallelism into the processing of information within a single component, also called multithreading, has been a critical element in increasing capacity and reducing overall system latency (but not unit latency).
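The capacity-versus-unit-latency point can be demonstrated with a small simulation. The sketch below is illustrative, not a benchmark: each message costs a fixed 5 ms of simulated (I/O-like) work, and a thread pool cuts the wall-clock time to handle a batch without making any single message cheaper.

```python
# Multithreading raises capacity (messages per second) but does not
# reduce the unit latency of handling one message.
import time
from concurrent.futures import ThreadPoolExecutor

UNIT = 0.005  # seconds of simulated work per message (hypothetical)

def handle(msg):
    """Handle one message; return its individual service time."""
    start = time.perf_counter()
    time.sleep(UNIT)               # each message still costs UNIT
    return time.perf_counter() - start

msgs = range(20)

# Serial: one message at a time.
t0 = time.perf_counter()
serial_latencies = [handle(m) for m in msgs]
serial_wall = time.perf_counter() - t0

# Multithreaded: four messages in flight at once.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_latencies = list(pool.map(handle, msgs))
parallel_wall = time.perf_counter() - t0

# Wall-clock (capacity) improves; per-message latency does not.
print(f"serial wall {serial_wall:.3f}s, parallel wall {parallel_wall:.3f}s")
print(f"unit latency stays >= {UNIT}s in both runs")
```

This is why multithreading helps a busy gateway absorb bursts, yet does nothing for the door-to-door time of an individual order, which is bounded by the per-message processing cost.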
Finally, there are attempts at putting trading applications, or components of them, onto hardware, often referred to as hardware acceleration. The current technology can be very useful for latency-sensitive firms in accelerating the most commoditised components of the overall architecture. For instance, vendors provide dedicated solutions for market data feedhandlers (with single-digit microsecond latencies), which can bring market-data-to-arbitrage-signal detection down to tens of microseconds. Yet trading logic itself is not standard enough to be easily implemented on such architecture. Another approach is to accelerate parts of the messaging flow, through faster middleware and network-level content management. This goes hand in hand with the attempts of leading network providers to push more application programming lower into the network stack.